You know how there are people whose enthusiasm for things is contagious? Alex Capece is one of those people. We Love DC’s Editor man, Tom, stumbled across Alex’s blog, Raising Ladders, a detailed account of his experience as an EMT/firefighter in the DC Fire Department. His eye for photography, paired with his knack for great storytelling, makes for a compelling blog read, but also exposes a side of DC that most are unfamiliar with. We sat down at Commonwealth recently, and I grilled him all about his job, his love of DC, and his favorite places to take photographs. Katie: What’s your favorite place in DC? Alex: The National Portrait Gallery. The first time I went, I was lost for six hours – it was the first time I had been out on my own exploring the city. What would you change about DC if you could? Don’t say traffic. Everyone says traffic. Traffic. No, the humidity. There are days here where you feel like you’re stuck to the sidewalk. So you write all about your experiences as a rookie firefighter. What do you enjoy about blogging? When I first started Raising Ladders, I was writing it for friends and family, just so I could tell them about what I was doing. Now it’s more of a challenge to come up with interesting things. I try to write a post for every shift. I never really thought it would be so popular; now I try to think of what people would want to read. It’s like my dad says: if you do good work, someone will notice. I’m always impressed when people email me now, mostly asking about the DCFD. As an EMT and firefighter, what is the craziest call you’ve ever been on? In DC, it was probably my experience with the crackhead, which I wrote about on RL. (Katie: and just for you, dear reader, I copy/pasted from the original post, ’cause it’s just that funny) The members of Engine 15 and a truck company were standing around the scene of a gas leak; we had just shut the supply off when our attention was drawn to the shirtless man quickly walking up to us. 
“You know why? You know why? Because I’m about to go smoke this rock right here.” He thrust his clenched left hand proudly in the air, pumping his fist like he had just won the lottery. “…and if I smoke too much, and I need y’all… Imm’a call you on my phone right here.” In mirror image, he reached deep into his pocket and switched his dramatic pose; now wildly brandishing a cell phone with his right arm, he stared and waited for some reaction. Indifferent to the man’s statements (and probably growing bored), one of the guys from the truck company turned to our newfound friend and extended a pudgy finger in my direction. “Well, I’ll tell you what. The man you need to talk to… is right there.” Dammit. (“Probationary Manual, Chapter Eight: Talking to Excited Crack Heads for the Laughter and Enjoyment of Older Firefighters.”) Mr. Rock Addict began sauntering over to me, when he stopped short. His eyes looked me up and down for only a second, but it was enough to make him spin in place and hightail it back the way he came. “Naw, f*** that guy. He a rookie… I ain’t talkin’ to no rookie.” Great. Even the southeast crackheads know I’m the new guy. Conversely, what is your typical call? Chest pain, strokes, trouble breathing. You would not believe the amount of people who call us for “trouble breathing”. What’s the thing that nobody ever asks you about your job that you like talking about? Nothing. When people find out what I do, they wanna know everything. It’s not common, how many DC firefighters do you know? What are some of the codes that the fire department use to describe situations? There are no ten codes. We just call it as it is. If there’s a fire, we’ll say there’s a fire. If you could work in any house in the District, where would you work? We are assigned locations, and I’d be happy to work anywhere in the city. But given a choice, I’d probably work in South East. 
It’s nice, you feel a bit removed by the river, and it’s interesting to the point where I could never make up the stuff that happens there. How do you spend your days off in the city? Reading, blogging, taking pictures in random parts of the city. Right now I like taking pictures of Adams Morgan at night, lots of motion, good lights. I’m also starting to attend more theater shows, and I’ve been taking pictures of my friend who is a local comedian. It’s a challenge for me, shooting in a small space with poor lighting. Your pictures are great; have you taken classes? Nope, no formal training, I’m self-taught. I’d take classes if I had the money and a reliable schedule, but my shifts rotate. For more of Alex, and all his adventures as a DC firefighter and EMT, head over to Raising Ladders. Also, the photos you see here are just a taste of the great pictures Alex has taken on the job. See more on smugmug. A large percentage of the proceeds from photo sales will be donated to the Burn Foundation. Fantastic photos! This is my new favorite local blog! Now it is even better because I learned that it is being written by a smoking hot firefighter (swoon!) But his “About Me” page says he wants to remain anonymous. Won’t that be hard after you have published his name and photo? right? but alas – taken. i also had the pleasure of meeting his totally adorable, funny and lovable girlfriend. i’m now in the process of convincing him i need to meet all of his hot single firefighter friends. you can come. and re: identity – i’ll let him speak for himself when he sees the comments here, but his blog/identity was outed to the DCFD way before i got to him. Well, the original idea of the blog was to remain anonymous, yes (thus the “About Me” page that I set up long ago and simply forgot to ever change… oops!) However, it didn’t take readers in the Department very long at all to figure out who I was, by virtue of some simple detective work. 
I continue to keep everyone I write about anonymous, but I suppose it was only a matter of time before a bunch of people knew anyways. Side note: I’m actually a firefighter/paramedic, not a firefighter/EMT. The certifications are different in that mine allows me to perform more advanced medical procedures. Thanks for the awesome interview! RE: sidenote – thanks to the paramedic part we spend most of the day and night running all over! Seriously, the blog and pics are great. keep up the good work. Pingback: A Firefighter’s View of This Storm » We Love DC
Autism behavioral therapy benefits schools, too, experts say Updated Mar 06, 2019; Posted May 03, 2017 By Trisha Powell Crain | [email protected] Special education experts contend that behavior therapy for children with autism, the subject of a political fight in Montgomery, could have beneficial effects beyond the individual student and could help Alabama catch up with other Southern states. "Kids with autism in Alabama enter kindergarten at a disadvantage when compared to neighboring states including Georgia, Florida, and Mississippi," said Dr. Bama Hager, program director for the Autism Society of Alabama and parent of a child with autism. Hager said that is due to the lack of accessibility to Applied Behavior Analysis, or ABA, therapy for children with autism. A bill stalled in the Legislature would require private insurers to cover the costs of behavioral therapy for children with autism. Opponents of the bill include Blue Cross and Blue Shield of Alabama and the Business Council of Alabama. When asked whether public schools should be shouldering the cost of ABA therapy, Blue Cross and Blue Shield of Alabama spokesperson Koko Mackin, in a statement to AL.com, said, "The public school system is appropriately supported by the taxes paid by individuals and employers, such as Blue Cross. Whether or not public schools have sufficient revenue is a policy issue best determined by state government." Forty-five other states mandate insurance coverage for ABA therapy, according to Autism Speaks. Alabama, along with Idaho, Wyoming, North Dakota, and Tennessee, have no mandate. James Gallini, an attorney specializing in special education and civil rights cases, said correcting problem behaviors of a child with autism has a "trickling over" effect, positively impacting not only the child with autism, but also the child's classmates and teachers. If the behaviors of one child are improved, "that could impact fifteen students on a daily basis," Gallini said. 
Gallini estimates he has filed hundreds of complaints against public schools on behalf of students since starting his law practice in 2009. In his Alabama practice, he said it's likely he has filed complaints in three out of four of Alabama's 137 school districts. In about 80 percent of those cases, ABA becomes a part of the solution reached in the settlement with school officials, he said. In many cases, schools agree to pay the cost of attorney's fees as part of a settlement. Dr. Eric Mackey, executive director of the School Superintendents of Alabama, said his organization supports HB284. "As educators, our first priority is always to help create better outcomes for children," Mackey said. "If ABA can lead to better outcomes for children in school and, thus, to better achievement as adults, then it has value as an investment for private insurers." Another problem school budgets face is that Alabama's Medicaid agency isn't currently covering ABA therapy, even though federal law requires it, leaving schools to cover the cost for those children as well. Alabama officials are in settlement negotiations right now to fend off a lawsuit from two advocacy groups demanding Alabama Medicaid cover ABA therapy. Mackey said he doesn't know how much money schools are spending on ABA therapy. As for the individual child, the timing of the intervention is important, as problem behaviors can more easily be corrected when children are younger, said Ashlie Walker. Walker is a behavioral analyst whose company, Milestone Behavior Group, contracts with 32 school districts across the state to provide services, including ABA therapy, for students with autism. However, unless the child attends a public preschool, they likely don't have access to behavior assessment and ABA services until they are identified for special education, which can happen in kindergarten, particularly for children with autism. 
"Our work is really cut out for us" when children are older and maladaptive behaviors such as hitting or self-injuring are more entrenched, said Walker. "It takes us longer to be more successful with older kids, and older kids take more money," costing schools even more, she said. Walker said that schools are footing the cost, and she estimates that school-based services make up 70 percent of her business statewide. She said more than 50 percent of her clients come to her as a result of settling parental complaints. The process of filing a complaint often starts out in a negative way, said Gallini, but he added: "The outcomes are overwhelmingly positive, not only for the education systems, but also for the child. Because we go in and we problem solve." Gallini said "some of the systems simply don't have the resources" to afford ABA and other services to properly educate a child with autism. Mackey said schools are struggling with the cost of special education, as costs are increasing exponentially. Gallini, Walker, and Hager have all been active in the fight to win passage for HB284, a bill requiring private insurers to cover ABA. HB284 passed out of the House with a 100 to 0 vote, but initially stalled in the Senate. On Tuesday, Sen. Trip Pittman, R-Montrose, moved the public hearing for the bill to May 4 because of interest in the bill, he said. Pittman said the vote will be held on May 10. The lack of reliable data on current costs within the public schools is a problem. Without reliable cost information, it's unclear how much school budgets stand to gain if the cost to provide ABA therapy shifts to insurance providers and on to those paying premiums. The cost of providing insurance coverage for all is a concern for the Business Council of Alabama, which opposes HB284. In a statement to AL.com, BCA president and CEO William J. 
Canary said, "The BCA believes this bill remains an expensive mandate on both public and private health plans and it lacks accurate information on the costs imposed on Alabama businesses and their employees who will ultimately be the ones to pay for this new government-mandated benefit." Responding to a question about why this particular benefit is one Blue Cross has chosen to oppose, Mackin at Blue Cross said, "Opposition to legislated mandated benefits has been a longstanding position of Blue Cross on behalf of our customers. We certainly are not opposed to the treatment of autism, but we are definitely opposed to the legislature requiring that all consumers be required to include benefits that they don't want, may not be able to afford and can't access." Gallini said ABA therapy, started early, makes all the difference in the trajectory of the life and education of a child with autism. "We're going to reduce the number of classroom issues, reduce the number of kids that require adult support in the form of an aide, take pressure off the number of kids that need self-contained or small group settings," said Gallini. Mackey cautioned that while research is still emerging on the therapy's benefits, "ABA therapy holds a lot of promise for limiting -- maybe even eliminating -- the level of services students will require once they get into school and, for that matter, for their entire adult lives." See the original article in AL.com >
I’m pretty sure I’m not alone in saying that tutoring is a big challenge in our community. The various aspects of dyslexia, combined with executive functioning and other co-morbid diagnoses, make learning a challenge in almost every sense of the word. I also believe that we typically spend so much time focusing on our dyslexic’s inability to read that supporting the other subjects becomes a challenge of its own. We’re already tired from the day, the dyslexia tutoring (another post on this coming soon), and the work on reading, so realizing there’s also math homework and other subjects to focus on can be overwhelming. Why overwhelming? Well, let’s focus on math for a moment. A typical profile in dyslexia is to be strong mathematically speaking, but to struggle with lower-level math (memorizing multiplication tables and simple division) while being gifted at higher-level math (physics). Now complicate that further if your child has dyscalculia. Here are a few helpful links regarding dyscalculia, but also math struggles when dyscalculia isn’t present but other learning challenges are: So whether or not dyscalculia is present with regard to math, struggling with other subjects can be expected, and the issue(s) with those other subjects can stretch beyond the reading concerns. As I always do, I’ll use my son’s struggles as my example. My son is almost gifted in spatial reasoning. Math has always been incredibly simple for him from the very beginning. Even before we learned that our son couldn’t read at all, we were fully aware that his classmates were looking to him for the answers to their math questions on their morning work. Also as always, this journey is an evolution. With each passing year I learn more about what he needs and where he’s struggling. Having just finished 3rd grade, I’m realizing where we had breakdowns in his education this past school year. 
What I mean by this is he has his own personal teacher’s aide for 30 minutes per class per day for a total of 150 minutes per week per class. With 5 classes that’s 750 minutes per week of what is supposed to be 1:1 instruction. Now the primary reason this was provided to me is because my son’s school insists he’s ADHD and is constantly trying to force me to get a formal diagnosis and drug him, but that’s an article for another day. He requires a lot of reassurance that he’s doing the work well; he asks a lot of questions because he has a low working memory and can only remember a few steps / instructions at a time; so the school offered the aide to benefit the teacher and relieve them from his constant need for their attention which does keep them from assisting the other children in their class. Late in the year they had a Geometry segment in their math class. Per the usual, my son said, “I’ve got this and I don’t need help.” We knew this wasn’t true when there was a test prep assignment due right before the test. It was 9 pages of insanity (again another article) and it took 2 days for my husband and son to work their way through the prep. My husband was furious that our son was essentially given a 5″ x 7″ note card only printed on one side as “the guide” to everything he needed to know to do the prep and ace the test. My husband, who is neither dyslexic nor does he struggle with any other learning disabilities, was the salutatorian of his high school class, and has a bachelor’s degree, had to Google most of the answers for the prep. He was very concerned about how our son would do on the test and lo and behold our son made a 52, which is the worst math grade he’s ever received, considering that up to the 2nd half of 3rd grade, he’d always received A’s. Because our son is dyslexic with an IEP, he was retested a week later. This time my husband sat down with our son the night before and TAUGHT him Geometry. 
What he figured out was that the way the material was explained to our son didn’t make any sense to him, so naturally he couldn’t do it. Our son even said to his dad that if the teacher had explained it to him the same way his dad just had, he never would have been confused. He took the test the next day and made an 84. Sadly, the rule for retests in our district means the greatest grade he could receive was a 70. Hindsight being what it is, I have realized we have a major opportunity here that we’ve not yet defined to our best advantage. The school’s purpose for the teacher’s aide is to relieve his teacher of the “burden that is our son,” but what the aide should be is someone who reviews the material with our son to ensure he understands it, in the manner in which HE learns. That person needs to ensure his comprehension and do everything in their ability to bridge any gaps. Of course, the issue is that the aide will almost never be trained in dyslexia or any other learning difference, so effectively achieving this will be a challenge I’ll have to find a way to overcome or give up. That leads us back to the parent. Maybe you have an IEP offering you additional in-class support, maybe you don’t. Either way, the question remains whether the aide is truly bridging the learning gaps your child may have. There’s no way to estimate the probability of that actually happening since there are too many variables to consider. The fact is that, in dealing with all of the subjects, you either become a master tutor for your own child, or you need to reach out for assistance. This reality leads us back to my original premise about tutors, because a fundamental question will be asked: how do I find a tutor who can actually assist my child given their learning differences and individual style of learning? The answer to this question is not one I’ve yet discovered. I believe this is a fundamental issue within our community. 
Either we help and tutor our children ourselves, which requires a great deal of work on our part to be able to teach them effectively in the manner in which they learn, or we need a way to find empathetic tutors who have the appropriate training for the various learning differences impacting our children. Either way, the challenge is real. If you feel you’ve solved this conundrum, please comment and share what you’ve learned. I’d love to hear from you. 2 comments on “Non-Dyslexia Focused Tutoring” I think it’s of the utmost importance to visit frequently with the tutor on objectives for tutoring. Goals and objectives should be fluid depending on needs that arise. (Speaking here about private tutoring.) I am amazed at how much reading there is in math now compared to when my children were in school. It’s incredible. I also think schools do a massive disservice to students when a person with zero or very little training in learning differences is meant to be a “classroom tutor.” I think that could even do more harm than good. Yes. A lot falls into the laps of parents of dyslexics. Two of my three children have dyslexia and both of them have Bachelor’s degrees – proof that all the effort pays off. Erin, I completely agree with you, and I’ve always done that with my son’s dyslexia tutors. I believe it’s important to do so with the team at school, too. I’ll write on this soon, but they’ve not, as of yet, been receptive to my questions and desire for feedback; now that we have an IEP in place and we’re seeing the benefit of IDEA being followed so closely, there’s a lot I intend to pursue in this coming school year. You’ve spoken a lot of truth in your comment, and I’m so grateful to you for sharing your children’s success with me. The effort does indeed pay off, with dividends.
A lot of people are looking for love wallpaper in HD 1080p. You can download it for free by clicking the DOWNLOAD button under the photo, which will open a new page. Right-click on the image, choose “Save picture as…”, then pick the file name and the folder where you want to save the file, and click Save. love heart hdtv 1080p wallpaper Source: gudtechtricks.com Here are the other photographs related to love wallpaper HD 1080p; hopefully they are useful. HD Wallpapers 1080p Widescreen Love Quotes 1 The s Club Source: isbestwallpapershdcollect.blogspot.com Cool Love Wallpaper Full HD Wallpaper Desktop Res Source: wallpapertag.com 60 Love HD 1080P Wallpaper Free Download Source: youtube.com Love Heart HDTV 1080p Wallpapers Source: wallpapercave.com
Damascus, (SANA) Members of the People’s Assembly stressed Tuesday that holding the presidential election on time proves that the Syrian people have overcome the international terrorist plot against their country. Earlier in the day, Head of the Supreme Constitutional Court Adnan Zreiq said the Court is ready to receive candidacy applications for presidency starting today, a day after the opening of registration, which was declared Monday by Speaker of the People’s Assembly Mohammad Jihad al-Laham. Candidates can continue to submit applications through Thursday, 1 May, 2014. MP Mahmoud Diab said that everyone should participate in the event so as to block Syria’s enemies from spreading their takfiri and terrorist mentality. MP Ali al-Sheikh noted that the Syrians are the only side to determine who will be president and who is able to defend the country against all challenges and enemies. MP Hammad al-Saud pointed out that the Syrian people are living a historic moment, electing the person to represent their ambitions and aspirations. MP Iman Babili said that exercising the electoral right is a national task that needs rationality and wisdom. MP Maher Qawarma highlighted that holding the elections on time indicates the strength of the Syrian state and its ability to confront all external challenges. 
MP Iskander Jarada said that the upcoming presidential election is the reaction of the Syrian people towards the plot, adding that the destiny of Syria and the region will be determined through these elections. SANA Constitutional Court ready to receive presidential candidacy applications Damascus, (SANA) Chief of the Supreme Constitutional Court Adnan Zreiq said the Court is ready to receive candidacy applications for presidency starting today, a day after the opening of registration, based on General Elections Law no. 5. Registration for presidential candidates was declared open Monday by Speaker of the People’s Assembly Mohammad Jihad al-Laham, during a session attended by the Prime Minister, ministers, and representatives of local, Arab and foreign media. Candidates can continue to submit applications through Thursday, 1 May, 2014. According to Article 30 of the law, a presidential candidate must be at least 40 years of age and of the Syrian Arab nationality through both parents, and not married to a non-Syrian. A candidate who has not lived in Syria for at least 10 years uninterruptedly, or who holds another nationality besides the Syrian one, is not eligible. A candidacy application will be dismissed if the applicant fails to obtain written approval for running from at least 35 members of the People’s Assembly, who are not allowed to give approval to more than one candidate. The Supreme Constitutional Court is in charge of monitoring the electoral process and regulating its procedures, according to Article 34. SANA Reblogged this on freesuriyah. Reblogged this on Dogma and Geopolitics.
Watch this or skip it and go straight to step two (below)! Join me in getting your clients working with you in January. Let’s get you a website created and built in 6 weeks! This shorter programme is condensed in both time AND price. So instead of 6 months, it’s 6 weeks. Instead of £1500, it’s £498. I want this to be quick, and I want to help. But I’ve never condensed it down before, hence the low, low price. Guaranteed ways to make more money through your website!
\begin{document} \newcounter{minutes}\setcounter{minutes}{\time} \divide\time by 60 \newcounter{hours}\setcounter{hours}{\time} \multiply\time by 60 \addtocounter{minutes}{-\time} \def\thefootnote{} \footnotetext{ \texttt{\tiny File:~\jobname .tex, printed: \number\year-\number\month-\number\day, \thehours:\ifnum\theminutes<10{0}\fi\theminutes} } \makeatletter \def\thefootnote{\@arabic\c@footnote} \newcommand{\subjclass}[2][1991]{ \let\@oldtitle\@title \gdef\@title{\@oldtitle\footnotetext{#1 \emph{Mathematics subject classification.} #2}} } \newcommand{\keywords}[1]{ \let\@@oldtitle\@title \gdef\@title{\@@oldtitle\footnotetext{\emph{Key words and phrases.} #1.}} } \makeatother \subjclass[2010]{57M25; 57M27} \keywords{knot; Seifert matrix; algebraic unknotting operation; $S$-equivalence; Blanchfield pairing} \maketitle \begin{abstract} Using Blanchfield pairings, we show that two Alexander polynomials cannot be realized by a pair of matrices with Gordian distance one if a corresponding quadratic equation does not have an integer solution. We also give an example of how our results help in calculating the Gordian distances, algebraic Gordian distances and polynomial distances. \end{abstract} \section{Introduction} A \emph{knot} is an oriented circle embedded in the three-sphere $S^3$, taken up to isotopy, and a knot diagram is an oriented circle immersed in $S^2$ with at worst double points, which are crossings and where one records over and under crossing information. Any knot diagram can be converted to a diagram for the trivial knot by crossing changes, and thus crossing change is an unknotting operation for knots. The Gordian distance $d_G(K,K')$ between two knots $K$ and $K'$ is the minimal number of crossing changes needed to turn $K$ into $K'$, and the unknotting number of $K$ is defined by $u(K)=d_G(K,O)$, where $O$ is the trivial knot. Murakami defined the algebraic unknotting operation in \cite{mura90} in terms of the corresponding Seifert matrices. 
Let $V$ be a Seifert matrix, and let $[V]$ be its $S$-equivalence class. The algebraic Gordian distance $d^a_G([V],[V'])$ between two $S$-equivalence classes of Seifert matrices is the minimal number of algebraic unknotting operations needed to turn a Seifert matrix in $[V]$ into a Seifert matrix in $[V']$. The algebraic unknotting number is defined by $u_a([V])=d^a_G([V],[\varnothing])$, where $[\varnothing]$ denotes the $S$-equivalence class of the empty matrix. Seifert showed that a Laurent polynomial $\Delta \in \Z[t,t^{-1}]$ is the Alexander polynomial of a knot if and only if $\Delta(t^{-1}) = \Delta(t)$ and $\Delta(1)=1$; see \cite{seifert-a}. Given two such polynomials $\Delta, \Delta'$, Kawauchi defined the distance between them by setting $$\ds \rho(\Delta,\Delta')=\min_{K,K'} d_G(K,K'),$$ where $K$ and $K'$ are knots with Alexander polynomials $\Delta_K =\Delta$ and $\Delta_{K'} = \Delta'$; see \cite{kawa12}. It is known that $\rho(\Delta,\Delta')\in \left\{ 0,1,2 \right\}$, and that $\rho(\Delta,\Delta')=0\Leftrightarrow \Delta=\Delta'$. There are many examples of Alexander polynomials $\Delta$, $\Delta'$ with $\rho(\Delta,\Delta')=1$. Jong posed the problem of finding examples of Alexander polynomials $\Delta$ and $\Delta'$ with $\rho(\Delta,\Delta')= 2$; see \cites{kawa12,jong1,jong2,jong3}. Let $\Lambda$ be the Laurent polynomial ring $\Z[t,t^{-1}]$. For a polynomial $c\in\Lambda$, put $\bar{c}=c|_{t=t^{-1}}$. In \cite{kawa12}*{Theorem~1.2, p.949} Kawauchi proved the following theorem: \begin{theorem}[Kawauchi] If $u(K)=d_G(K,K')=1$, then there exists $c\in\Lambda $ such that $\pm \Delta_{K'}\equiv c\bar{c}\pmod{\Delta_K}$. \label{thm:kawa} \end{theorem} In this paper, by considering the corresponding algebraic unknotting operations, we obtain the following result. \begin{customcor}{\ref{col:alg}} If $u_a([V])=d_G^a([V],[V'])=1$, then there exists $c\in \Lambda$ such that $\pm \Delta_{V'}\equiv c\bar{c}\pmod{\Delta_V}$. 
\end{customcor} The proof of Corollary~\ref{col:alg} will be given in Section~4. Note that Corollary~\ref{col:alg} implies Theorem~\ref{thm:kawa}, for if $K$ and $K'$ are knots with Seifert matrices $V$ and $V'$, respectively, then $d_G(K,K')=1$ implies $d_G^a([V],[V'])\le 1$. Note further that the converse may not hold, and that Corollary~\ref{col:alg} does not require any geometric assumption on the unknotting number. Further, Corollary~\ref{col:alg} gives some new results that cannot be derived from Theorem~\ref{thm:kawa}. For instance, the following corollary gives an obstruction to $d^a_G([V],[V'])=1$. \begin{customcor}{\ref{thm_main3}} Suppose that $V$ and $V'$ are Seifert matrices with Alexander polynomials $\Delta_V$ and $\Delta_{V'}$, respectively. Suppose further that $\Delta_V=h(t+t^{-1})+1-2h$, where $|h|$ is a prime or $1$, and that $\Delta_{V'}\equiv d \pmod{\Delta_{V}}$, where $0 \neq d\in \Z$. If $u_a([V])=1$ and if $h^2x^2+y^2+(2h-1)xy=\pm d $ admits no integer solutions, then the algebraic Gordian distance $d^a_G([V],[V'])> 1$. \end{customcor} The following corollary, also derived from Corollary~\ref{col:alg}, gives a new solution to Jong's problem. \begin{customcor}{\ref{thm_main4}} The Alexander polynomial distance $\rho(t-1+t^{-1},\Delta)=2$ if $\Delta\equiv 2+4m \pmod{t-1+t^{-1}}$ for some $m\in \Z$. \end{customcor} In Section~5, we apply Corollary~\ref{thm_main4} to show that the Gordian distance $ d_G(K_1,K_2)\ge 2$ for any pair of knots $K_1$ and $K_2$ with $\Delta_{K_{1}}=\Delta_{3_{1}}$ and $\Delta_{K_{2}}=\Delta_{9_{25}}$. The remainder of this paper is organized as follows. In Section~2, we review preliminary material. In Section~3, we present some results on Seifert matrices. In Section~4, we prove Theorem~\ref{thm_main2}, our main result, which is an improvement of Theorem~\ref{thm:kawa} and gives new answers to Jong's problem. In the final section, we present an example illustrating how to calculate various distances in knot theory.
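For orientation, the degree-two polynomials appearing in Corollary~\ref{thm_main3} are genuine Alexander polynomials in the sense of Seifert's characterization recalled above. The following verification is a routine check, and the notation $\Delta_h$ is used only for this illustration.

```latex
% Routine check (the notation \Delta_h is local to this illustration):
% the polynomials of Corollary thm_main3 satisfy Seifert's conditions.
\[
\Delta_h(t)=h(t+t^{-1})+1-2h,\qquad h\in\Z,
\]
satisfies $\Delta_h(t^{-1})=\Delta_h(t)$ and $\Delta_h(1)=h(1+1)+1-2h=1$,
so each $\Delta_h$ is the Alexander polynomial of some knot.
For instance, $\Delta_1=t-1+t^{-1}$ is the Alexander polynomial of the trefoil $3_1$.
```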
\section{Preliminaries} A \emph{Seifert matrix} $V$ is a $2n\times 2n$ integer matrix satisfying $\det (V-V^T)=1$. Two Seifert matrices $V$ and $W$ are said to be \emph{congruent} if $W=PVP^T$ for a unimodular matrix $P$. A Seifert matrix $W$ is called an \emph{enlargement} of $V$ if \[ W= \begin{pmatrix} 0 &0 &0\\ 1 &x &M\\ 0 &N^T &V \end{pmatrix} \text{~~~~~~~~or~~~~~~~~} \begin{pmatrix} 0 &1 &0\\ 0 &x &M\\ 0 &N^T &V \end{pmatrix}, \] where $M$ and $N$ are row vectors. In this case, we also say that $V$ is a \emph{reduction} of $W$. Two Seifert matrices are said to be \emph{$S$-equivalent} if one can be obtained from the other by a sequence of congruences, enlargements, and reductions. Note that any two Seifert matrices of the same knot are $S$-equivalent \cite{levine}. For a given Seifert matrix $V$, we use $[V]$ to denote its \emph{$S$-equivalence class,} which consists of all Seifert matrices $S$-equivalent to $V$; see \cites{seifert,trotter73}. Motivated by the unknotting operation, the \emph{algebraic unknotting operation} sends a Seifert matrix $W$ to $ \begin{pmatrix} \varepsilon &0 &0\\ 1 &x &M\\ 0 &N^T &W \end{pmatrix} $ for $\varepsilon=\pm 1$ and $x\in \Z$, where $M$ and $N$ are row vectors \cite{mura90}. The unknotting operation can be seen as adding a twist to a knot, turning Figure~\ref{fig:twist}-a into Figure~\ref{fig:twist}-b, which is equivalent to Figure~\ref{fig:twist}-c. The twist falls into one of two types, corresponding to the two types of algebraic unknotting operations. To distinguish them, set $\varepsilon=1$ for Figure~\ref{fig:twist}-b and $\varepsilon=-1$ otherwise. We call the corresponding operation an $\varepsilon$-unknotting operation. Let $W$ be a Seifert matrix of the knot in Figure~\ref{fig:twist}-d. We add a twist to the knot in Figure~\ref{fig:twist}-d, so that the result is the knot in Figure~\ref{fig:twist}-e or Figure~\ref{fig:twist}-f. Let $\alpha$ and $\beta$ be the two new generators of the first homology groups of the Seifert surfaces in Figure~\ref{fig:twist}-e,f that do not appear in Figure~\ref{fig:twist}-d.
By choosing the direction of $\alpha$ such that $\operatorname{lk}(\alpha,\beta^+)=1$, we have $\operatorname{lk}(\alpha,\alpha^+)=\varepsilon$ and $\operatorname{lk}(\beta,\beta^+)=x$. The Seifert matrices of Figure~\ref{fig:twist}-e,f coincide with the result of the algebraic unknotting operation. \begin{figure}[h] \centering \includegraphics[width=0.32\textwidth]{./pic/fig1.pdf} \includegraphics[width=0.32\textwidth]{./pic/fig3.pdf} \includegraphics[width=0.32\textwidth]{./pic/fig4.pdf} \\ \hspace{0.\textwidth}(a) \hspace{0.32\textwidth}(b) \hspace{0.3\textwidth}(c) \hspace{0.15\textwidth}\\ ~\\ \includegraphics[width=0.32\textwidth]{./pic/fig7.pdf} \includegraphics[width=0.33\textwidth]{./pic/fig9.pdf} \includegraphics[width=0.33\textwidth]{./pic/fig8.pdf} \\ \hspace{0.\textwidth}(d) \hspace{0.32\textwidth}(e) \hspace{0.3\textwidth}(f) \hspace{0.15\textwidth}\\ \caption{Unknotting operation} \label{fig:twist} \end{figure} \emph{The Gordian distance} \cite{mura85} between $K$ and $K'$, denoted by $d_G(K,K')$, is the minimal number of crossing changes needed to turn $K$ into $K'$. The \emph{unknotting number} of $K$, denoted by $u(K)$, is defined by $u(K)=d_G(K,O)$, where $O $ is the trivial knot. Let $\mathcal{V}$ and $\mathcal{V'}$ be two $S$-equivalence classes. For any two Seifert matrices $V $ and $V'$ such that $V\in\mathcal{V}$ and $V'\in\mathcal{V'}$, there exists a sequence of algebraic unknotting operations and $S$-equivalences transforming $V$ into $V'$. The \emph{algebraic Gordian distance} between $\mathcal{V}$ and $\mathcal{V'}$, denoted by $d_G^a(\mathcal{V},\mathcal{V'})$, is the minimal number of algebraic unknotting operations in such a sequence \cites{mura90,fogel}. Clearly, $d_G^a(\V,\V')=d_G^a(\V',\V)$, $d_G^a(\V,\V')=0$ if and only if $\V=\V'$, and the triangle inequality follows by concatenating sequences of operations; hence $d_G^a$ defines a metric on the set of $S$-equivalence classes of Seifert matrices.
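As a concrete illustration (a routine computation, not taken from the references cited above), one can apply a single $\varepsilon$-unknotting operation to the empty matrix, so that $M$ and $N$ are empty and the result is a $2\times 2$ Seifert matrix:

```latex
% One algebraic unknotting operation applied to the empty matrix:
\[
V=\begin{pmatrix} \varepsilon & 0\\ 1 & x \end{pmatrix},
\qquad
\det\left(t^{\frac12}V-t^{-\frac12}V^T\right)
=\varepsilon x\,(t-2+t^{-1})+1
=h(t+t^{-1})+1-2h,
\quad h=\varepsilon x.
\]
```

In particular, a single algebraic unknotting operation applied to the empty matrix already realizes every symmetric polynomial of the form $h(t+t^{-1})+1-2h$; this is exactly the family of Alexander polynomials appearing in Lemma~\ref{lem_d} below.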
The following lemma, first proved in \cite{mura90}*{Proposition~2, p.286}, is an immediate consequence of the following observation: given any sequence of unknotting operations for a knot, one can construct a corresponding sequence of algebraic unknotting operations for its Seifert matrix. \begin{lemma}[Murakami] For two knots $K_1$ and $K_2$ with Seifert matrices $V_1$ and $V_2$, respectively, we have $d_G(K_1,K_2)\ge d_G^a([V_1],[V_2])$. \label{lem_mura2} \end{lemma} The \emph{algebraic unknotting number} $u_a(\mathcal{V})$ is defined to be $d_G^a(\mathcal{V},\mathcal{O})$, where $\mathcal{O}$ is the $S$-equivalence class of the $0\times 0$ matrix \cites{mura90}. The \emph{Alexander polynomial} of a Seifert matrix $V$, denoted by $\Delta_V$, can be calculated by $\Delta_{V}= \det\left(t^{\frac{1}{2}}V-t^{-\frac{1}{2}}V^T\right)$. The Alexander polynomial is a knot invariant, which means that any two Seifert matrices of a given knot have the same Alexander polynomial. If $V$ is a Seifert matrix of $K$, we write $\Delta_{K}=\Delta_{V}$. Saeki proved that if $V$ is a Seifert matrix of a knot $K$, then $\ds u_a([V])=\min_{K_0}d_G(K,K_0)$, where the minimum is taken over all knots $K_0$ with $\Delta_{K_0}=1$; see \cite{saeki}. We will write $u_a(V) =u_a([V])$ for convenience and set $u_a(K) =u_a(V)$ when $V$ is a Seifert matrix of $K$. Analogously, the \emph{Alexander polynomial distance} between two Alexander polynomials $\Delta$ and $\Delta'$, denoted by $\rho(\Delta,\Delta')$, is defined by $$ \rho(\Delta,\Delta')=\min d_G(K,K'),$$ where the minimum is taken over all knots $K,K'$ with Alexander polynomials $\Delta, \Delta'$, respectively \cite{kawa12}. Kawauchi pointed out that $1\le\rho(\Delta,\Delta')\le 2$ for distinct $\Delta$ and $\Delta'$; see \cite{kawa12}*{p.954}. Given any Alexander polynomial $\Delta$, there exists a knot $K$ with unknotting number one such that $\Delta_K=\Delta$ \cite{kondo}*{Theorem 3, p.558}.
It follows that $\rho(\Delta,1)\le 1$ for any $\Delta$, and hence $\rho(\Delta,\Delta')\le \rho(\Delta,1)+\rho(\Delta',1)\le 2$ for any pair of Alexander polynomials $\Delta$ and $\Delta'$. Kawauchi \cite{kawa12}*{p.954} called the following question Jong's problem, referring to Jong's papers \cites{jong1,jong2,jong3}. \begin{question} Find Alexander polynomials $\Delta$ and $\Delta'$ such that $\rho(\Delta,\Delta')= 2$. \end{question} Equivalently, this question asks when two Alexander polynomials cannot be realized by knots with Gordian distance one. In \cite{kawa12}*{Corollary~4.2, p.955}, Kawauchi gives a criterion for this, assuming that both Alexander polynomials $\Delta$ and $\Delta'$ have degree two. In Section~4, we will give new criteria in Corollaries~\ref{thm_main3} and \ref{thm_main4} which require that only one of the Alexander polynomials has degree two. Note that a question of Nakanishi asks if $\rho(\Delta_{3_1},\Delta_{4_1})= 2$; see \cite{naka}*{p.334}. It was answered positively by Kawauchi \cite{kawa12}; our results give another method of answering it. The study of the unknotting number and the Gordian distance is closely related to pairings defined on covering spaces. Let $V$ be a $2n \times 2n$ Seifert matrix, i.e. a $2n \times 2n$ integral matrix with $\det(V -V^T)=1.$ The \emph{Alexander module}, denoted by $A_V$, is defined by $A_V=\Lambda^{2n} /(tV-V^T)\Lambda^{2n}$, where $\Lambda=\Z[t,t^{-1}]$. If $V$ is a Seifert matrix for $K$, then $A_V\cong H_1(\tilde{X}(K);\Z)$, where $\tilde{X}(K)$ is the infinite cyclic cover of the complement of $K$. If two Seifert matrices $V$ and $V'$ are $S$-equivalent, then their Alexander modules are isomorphic. The \emph{Blanchfield pairing} of $V$ is a map $\beta: A_V\times A_V\longrightarrow Q(\Lambda)/\Lambda$, which is a sesquilinear form, meaning $\beta(ax,by)=a\bar{b}\beta(x,y)$, where $\bar{b}= b|_{t=t^{-1}}$ and $Q(\Lambda)$ is the field of fractions of $\Lambda$; see \cite{blanch}.
By \cite{friedl}*{Theorem 1.3}, if $K$ is an oriented knot in $S^3$ with Seifert matrix $V$ of size $2n$, then the Blanchfield pairing is isometric to the pairing $$\Lambda^{2n} /(tV-V^T)\Lambda^{2n}\times\Lambda^{2n} /(tV-V^T)\Lambda^{2n}\longrightarrow Q(\Lambda)/\Lambda $$ given by $(v,w)\mapsto v^T(t - 1)(V -tV^T )^{-1}\bar{w}$ modulo $\Lambda$. Note that two matrices have the same Blanchfield pairing structure if and only if they are $S$-equivalent \cite{trotter73}. Many papers study how to calculate the Gordian distance between two given knots. A large table of the Gordian distances is given by Moon \cites{moon}. However, the algebraic Gordian distance of knots is rarely studied. We are interested in the restrictions on their $S$-equivalence classes when their algebraic Gordian distance is one. We will use these restrictions to provide lower bounds for various distances in knot theory. We now list some existing results for later use. Several lower-bound criteria for detecting the Gordian distance have been proved. The signature criterion \cites{murasugi,mura85} is $d_G(K,K')\ge \frac{1}{2}|\sigma(K)-\sigma(K')|$, where $\sigma(K)$ is the signature of $K$. Murakami generalized Lickorish's result \cite{lick} on the double branched cover and showed that if $u(K)=d_G(K,K')=1$, then there exists an integer $d$ such that $$\frac{2d^2}{D(K)}\equiv\pm\frac{D(K)-D(K')}{2D(K)}\pmod{1},$$ where $D(K)$ and $D(K')$ denote the determinants of $K$ and $K'$, respectively \cite{mura85}. We call this Murakami's obstruction. As to the algebraic unknotting number, we refer to the following lemma of Murakami \cite{mura90}*{Theorem~5, p.288}, which we will use later in Section~4. \begin{lemma}[Murakami] \label{lem_mura} If $u_a(K)=1$, then there exists a generator $\alpha$ for the Alexander module of $K$ such that the Blanchfield pairing $\beta(\alpha,\alpha)=\pm \frac{1}{\Delta_K}$. Moreover, the Blanchfield pairing is given by the $1\times 1$ matrix $(\pm \frac{1}{\Delta_K})$.
\end{lemma} \section{The Seifert matrix} In this section, we take a closer look at the algebraic unknotting operation. We observe that the matrix $ \begin{pmatrix} \varepsilon &0 &0\\ 1 &x &M\\ 0 &N^T &W \end{pmatrix} $ is not the only possible result of adding a twist. \begin{lemma} Let $W$ be a Seifert matrix of $K$. If an $\varepsilon$-unknotting operation relates $K$ to $K'$, then both $ \begin{pmatrix} \varepsilon &0 &0\\ \pm 1 &x &M\\ 0 &N^T &W \end{pmatrix} $ and $ \begin{pmatrix} \varepsilon &\pm 1 &0\\ 0 &x &M\\ 0 &N^T &W \end{pmatrix} $ are Seifert matrices of $K'$. \label{prop:1} \end{lemma} \begin{proof} The Seifert surface of $K'$ can be constructed as in Figure~\ref{fig:matrix}-a or Figure~\ref{fig:matrix}-b, corresponding to $ \begin{pmatrix} \varepsilon &0 &0\\ \varsigma &x &M\\ 0 &N^T &W \end{pmatrix} $ and $ \begin{pmatrix} \varepsilon &\varsigma &0\\ 0 &x &M\\ 0 &N^T &W \end{pmatrix} $, respectively. The choice of direction of $\alpha$ determines whether $\varsigma=1$ or $\varsigma=-1$. \end{proof} \begin{figure}[h] \centering \includegraphics[width=0.32\textwidth]{./pic/fig9.pdf} \hspace{0.1\textwidth} \includegraphics[width=0.32\textwidth]{./pic/fig11.pdf} \\ \includegraphics[width=0.32\textwidth]{./pic/fig8.pdf} \hspace{0.1\textwidth} \includegraphics[width=0.32\textwidth]{./pic/fig10.pdf} \\ \hspace{0.\textwidth}(a) \hspace{0.1\textwidth} \hspace{0.285\textwidth}(b) \\ \caption{Seifert surfaces} \label{fig:matrix} \end{figure} It is often hard to tell whether two matrices are $S$-equivalent or not, especially for matrices of size larger than $2\times 2$. As a consequence of Lemma~\ref{prop:1}, we have the following equivalence. \begin{lemma} $ \begin{pmatrix} \varepsilon &0 &0\\ \pm 1 &x &M\\ 0 &N^T &W \end{pmatrix} $ is $S$-equivalent to $ \begin{pmatrix} \varepsilon &\pm 1 &0\\ 0 &x &M\\ 0 &N^T &W \end{pmatrix} $ for $\varepsilon=\pm 1$.
\label{prop:2} \end{lemma} Now we show that certain Alexander polynomials are realizable only by Seifert matrices with algebraic unknotting number one. \begin{lemma} If a $2\times 2$ Seifert matrix $V$ has $\det V=D\in \{1,2,3,5\}$, then $u_a(V)=1$. \label{lem_c} \end{lemma} \begin{proof} Since $\det V>0$ and the matrix size is $2\times 2$, either $V$ or $-V$ is positive definite. Every $2\times2$ positive definite Seifert matrix is congruent to a matrix $ \begin{pmatrix} a&b+1\\b&c \end{pmatrix} $, where $0<2b+1\le \min(a,c)$; see \cite{trotter73}*{p.204}. If $b\ge 1$, then $a,c\ge 2b+1$ gives $ac-b(b+1)\ge (2b+1)^2-b(b+1)=3b^2+3b+1\ge 7>D$; hence $b=0$ is the only solution to $ac-b(b+1)=D$, and we obtain $ac=D$. Therefore, we have either $a=D$ and $c=1$, or $a=1$ and $c=D$. By Lemma~\ref{prop:2}, $ \begin{pmatrix} 1&1\\0&D \end{pmatrix} $ is $S$-equivalent to $ \begin{pmatrix} 1&0\\1&D \end{pmatrix} $, which is congruent to $ \begin{pmatrix} D&1\\0&1 \end{pmatrix} $. By Lemma~\ref{prop:1}, both $ \begin{pmatrix} 1&1\\0&D \end{pmatrix} $ and $ \begin{pmatrix} -1&-1\\0&-D \end{pmatrix} $ have algebraic unknotting number one, so the proof is complete. \end{proof} \begin{lemma} For a Seifert matrix $V$, if $\Delta_V=ht+ht^{-1}+1-2h$ with $h\in \{1,2,3,5\}$, then $u_a(V)=1$. \label{lem_d} \end{lemma} \begin{proof} Because $\Delta_V=ht+ht^{-1}+1-2h$, $V$ is $S$-equivalent to a $2\times 2$ Seifert matrix $V'$ with $\det V'=h$; see \cite{trotter62}*{pp.484-486}. By Lemma~\ref{lem_c}, we have $u_a(V)=1$. \end{proof} The next lemma relates the distance between matrices to the distance between polynomials. \begin{lemma} If $K_1$ and $K_2$ are knots with Seifert matrices $V_1$ and $V_2$ and Alexander polynomials $\Delta_{K_1}$ and $\Delta_{K_2},$ respectively, then $d_G^a([V_1],[V_2])\ge \rho(\Delta_{K_1},\Delta_{K_2})$. \label{lem_e} \end{lemma} \begin{proof} If $d_G^a([V_1],[V_2])=0$, then $V_1$ is $S$-equivalent to $V_2$ and hence $\Delta_{K_1}=\Delta_{K_2}$, which gives $ \rho(\Delta_{K_1},\Delta_{K_2})=0$.
If $d_G^a([V_1],[V_2])=1$, then we can find Seifert matrices $V_1' \in[V_1]$ and $V_2'\in[V_2]$ such that $V'_1$ can be turned into $V'_2$ by one algebraic unknotting operation. We can construct two Seifert surfaces as shown in Figure~\ref{fig:twist}-d,e (or Figure~\ref{fig:twist}-d,f) whose Seifert matrices are $V'_1$ and $V'_2$, respectively. Let $K'_1$ and $K'_2$ be the boundaries of these two Seifert surfaces. Then we have $d_G(K'_1,K'_2 )=1$. Since $\Delta_{K'_1}=\Delta_{K_1}$ and $\Delta_{K'_2}=\Delta_{K_2}$, we obtain $\rho(\Delta_{K_1},\Delta_{K_2})\le 1$. If $d_G^a([V_1],[V_2])\ge 2$, the inequality holds because $\rho(\Delta_1,\Delta_2)\le 2$ for any pair of Alexander polynomials $\Delta_1,\Delta_2$. \end{proof} Consequently, we have $d_G(K_1,K_2)\ge d_G^a([V_1],[V_2])\ge \rho(\Delta_{K_1},\Delta_{K_2})$. \section{Main theorem and its consequence} In this section, we examine the structure of the Blanchfield pairing realized by a pair of $S$-equivalence classes with algebraic Gordian distance one. We obtain conditions, expressed in terms of the Alexander polynomials, for two $S$-equivalence classes to have algebraic Gordian distance one. In corollaries to the main theorem, we provide an answer to Jong's question by showing that two Alexander polynomials cannot be realized by knots with Gordian distance one unless a corresponding quadratic equation admits an integer solution. \begin{theorem}[Main Theorem] Let $V$ and $V'$ be two Seifert matrices. If the algebraic Gordian distance $d_G^a([V],[V'])=1$, then there exist $a\in A_V$ and $a'\in A_{V'}$ such that $\ds\beta(a,a)\equiv \pm\frac{\Delta_{ V' }}{\Delta_V} \pmod{\Lambda}$ and $\ds\beta(a',a')\equiv \pm\frac{\Delta_V}{\Delta_{V'}} \pmod{\Lambda}$. \label{thm_main2} \end{theorem} \begin{proof} If $[V]$ and $[V']$ have algebraic Gordian distance one, then there exist $W\in[V]$ and $W'\in[V']$ such that $W$ can be obtained from $W'$ by an algebraic unknotting operation.
By definition, the algebraic unknotting operation sends $W'$ to $ \begin{pmatrix} \varepsilon &0 &0\\ 1 &x &M\\ 0 &N^T &W' \end{pmatrix} $ for $\varepsilon=\pm 1$. Therefore, we have $$ W-tW^T= \begin{pmatrix} \varepsilon(1-t) &-t &0\\ 1 &x (1-t)&M-tN\\ 0 &N^T-tM^T &W'-tW'^{T} \end{pmatrix}. $$ Let $a_1$ be the first element of the basis for $A_V$, so that the Blanchfield pairing $\beta(a_1,a_1)$ is the $(1,1)$-entry of the matrix $(t-1)(W-tW^T)^{-1}$ modulo $\Lambda$. Recall that the inverse of a non-singular matrix $A$ is equal to $\ds\frac{\operatorname{adj}A}{\det A}$, where $\operatorname{adj}A$ is the adjugate matrix, i.e. the transpose of the cofactor matrix of $A$. \begin{align} \displaystyle \beta(a_1,a_1) &\equiv (t-1) \frac{\left[\operatorname {adj} (W-tW^T) \right]_{ 1,1} }{ \det(W-tW^T) }\\ &\equiv(t-1)\frac{ \det \begin{bmatrix} x (1-t)&M-tN\\ N^T-tM^T &W'-tW'^{T} \end{bmatrix} }{\det(W-tW^T) }. \end{align} The Alexander polynomials are given by \begin{align} \Delta_{V}&=\Delta_{W}=t^{-g} \det(W-tW^T) \\ \Delta_{V'}&=\Delta_{W'}=t^{1-g} \det(W'-tW'^T), \end{align} where $2g$ is the size of $W$. The determinant satisfies \begin{equation} \label{e:4} \det(W-tW^T)=\varepsilon(1-t) \det \begin{bmatrix} x (1-t)&M-tN\\ N^T-tM^T &W'-tW'^{T} \end{bmatrix} +t\det(W'-tW'^T) . \end{equation} Substituting (3), (4) and (5) into (2), we obtain \begin{equation*} \beta(a_1,a_1)\equiv \frac{\varepsilon(\Delta_{V'} -\Delta_{V} )}{\Delta_V} \equiv\varepsilon \frac{\Delta_{V'} }{\Delta_V} \pmod{\Lambda}. \end{equation*} Let $a'_1$ be the first element of the basis for $A_{V'}$. The formula for $\beta(a'_1,a'_1)$ can be derived in the same way. \end{proof} Theorem~\ref{thm_main2} gives a condition on the Blanchfield pairing when the algebraic Gordian distance is one. Using similar methods, we deduce the following corollary, which gives the same obstruction as above for a pair of knots with Gordian distance one.
\begin{corollary} \label{col:gordian} If $K$ and $K'$ are two knots with $d_G(K,K')=1$, then there exist $a\in H_1(\tilde{X}(K))$ and $a'\in H_1(\tilde{X}(K'))$ such that $\ds\beta(a,a)\equiv \pm\frac{\Delta_{ K' }}{\Delta_K} \pmod{\Lambda}$ and $\ds\beta(a',a')\equiv \pm\frac{\Delta_K}{\Delta_{K'}} \pmod{\Lambda}$. \end{corollary} Since the Blanchfield pairing is in general difficult to analyze, we now focus on the case where the Alexander module is cyclic. We prove the following corollary and show that it improves existing results. \begin{corollary} If $u_a([V])=d_G^a([V],[V'])=1$, then there exists $c\in \Lambda$ such that $\pm \Delta_{V'}\equiv c\bar{c}\pmod{\Delta_V}$. \label{col:alg} \end{corollary} \begin{proof} Since $d_G^a([V],[V'])=1$, Theorem~\ref{thm_main2} provides $a\in A_V$ such that $$ \beta(a,a)\equiv \pm\frac{\Delta_{ V' }}{\Delta_V}\pmod{ \Lambda}, $$ and since $u_a([V])=1$, Lemma~\ref{lem_mura} provides a generator $g$ of $A_V$ such that $$ \beta(g,g)\equiv \pm\frac{1}{\Delta_{V}} \pmod{ \Lambda}. $$ As $g$ generates $A_V$, there exists $c\in \Lambda$ such that $a=cg$. Hence we have $$\pm \frac{\Delta_{V'}}{\Delta_V} \equiv\beta(cg,cg) \equiv c\bar{c}\beta(g,g)\equiv\frac{ c\bar{c}}{ \Delta_V} \pmod{ \Lambda},$$ which gives $\pm \Delta_{V'}\equiv c\bar{c}\pmod{\Delta_V}$. This completes the proof. \end{proof} The following result is a natural consequence of Corollary \ref{col:alg}. \begin{corollary} If $u_a(K)=d_G(K,K')=1$, then there exists $c\in \Lambda$ such that $\pm \Delta_{K'}\equiv c\bar{c}\pmod{\Delta_K}$. \label{col:1} \end{corollary} \begin{proof} Let $V$ and $V'$ be Seifert matrices for $K$ and $K'$, respectively. Clearly $\Delta_V=\Delta_{K}$ and $\Delta_{V'}=\Delta_{K'}$. By Lemma~\ref{lem_mura2}, we have $d_G^a([V],[V'])\le d_G(K,K')=1$. If $d_G^a([V],[V'])=0$, then $[V]=[V']$ and $\Delta_K=\Delta_{K'}$, and the result holds by taking $c=0$.
If $d_G^a([V],[V'])=1$, then by Corollary~\ref{col:alg} there exists $c\in\Lambda $ such that $\pm \Delta_{K'}\equiv c\bar{c}\pmod{\Delta_K}$. \end{proof} \begin{remark} It is worth mentioning that Corollary~\ref{col:alg} implies Kawauchi's result, Theorem~\ref{thm:kawa}: since $u(K)=1$ is a special case of $u_a(K)=1$, Theorem~\ref{thm:kawa} follows from Corollary~\ref{col:1}, which is in turn a consequence of Corollary~\ref{col:alg}. The converse does not hold; note that there are infinitely many knots with trivial Alexander polynomial, so $u_a(K)=1$ does not imply $u(K)=1$. In Section~5, we will present an example to show the strength of our approach. Let $K_1$ and $K_2$ be knots with the same Alexander polynomials as $3_1$ and $9_{25}$, respectively. By Lemma~\ref{lem_d}, we have $u_a(K_1)=1$, so Corollary~\ref{col:alg} applies to show that $d_G(K_1,K_2)\ge d_G^a([V_1],[V_2])\ge 2$, where $V_1$ and $V_2$ are Seifert matrices of $K_1$ and $K_2$, respectively. However, Theorem~\ref{thm:kawa} does not apply in this case, because we do not necessarily have $u(K_1)=1$. \end{remark} Our next aim is to give new solutions to Jong's problem by finding Alexander polynomials $\Delta$ and $\Delta'$ such that $\rho(\Delta,\Delta')= 2$. \begin{corollary} Suppose $V$ and $V'$ are Seifert matrices with Alexander polynomials $\Delta_V$ and $\Delta_{V'}$, respectively, such that $\Delta_V=h(t+t^{-1})+1-2h$, where $|h|$ is prime or $1$, and $\Delta_{V'}\equiv d \pmod{\Delta_{V}}$ for some $0\neq d\in \Z$. If $u_a([V])=1$ and if $h^2x^2+y^2+(2h-1)xy=\pm d $ does not have an integer solution, then the algebraic Gordian distance $d^a_G([V],[V'])> 1$. \label{thm_main3} \end{corollary} \begin{proof} Seeking a contradiction, suppose $d^a_G([V],[V'])=1$. By Corollary~\ref{col:alg}, there exists $c\in\Lambda$ such that $c\bar{c}\equiv \pm \Delta_{V'}\equiv \pm d\pmod{\Delta_V}$.
Let $$ c=\sum_{-n\le i\le m}a_it^i \text{~~~~~~~~and~~~~~~~~} \bar{c}=\sum_{-m\le i\le n}a_{-i}t^i, $$ which gives $$ c\bar{c}= a_{-n}a_{m}t^{m+n}+\dots+a_{m}a_{-n}t^{-(m+n)} . $$ If $c$ can be expressed as $c= pt^{j+1}+qt^j$, where $p$ and $q$ are integers, we have $ (p^2+q^2) + pq(t+t^{-1})\equiv \pm d\pmod{\Delta_V} $. Writing $(p^2+q^2)+pq(t+t^{-1})\mp d=k\Delta_V$ with $k\in\Z$ and comparing the coefficients of $t$, we get $pq=kh$, so $h\mid pq$; since $|h|$ is prime or $1$, either $h|p$ or $h|q$. Without loss of generality, we may assume $p=hx$. Substituting $p=hx$ (so that $k=xq$) and comparing the constant terms, we obtain $ h^2x^2+q^2+(2h-1)xq=\pm d $. Since this equation has no integer solution by hypothesis, we reach a contradiction, so the algebraic Gordian distance must be greater than one. If $c$ has more than two terms, then $h|a_{-n}a_{m}$ follows from $c\bar{c}\equiv\pm d\pmod{\Delta_V}$, so either $h|a_{-n}$ or $h|a_{m}$. Hence we obtain $$\ds c\equiv\sum_{1-n\le i\le m}a'_it^i \text{~~~~~or~~~~~} \sum_{-n\le i\le m-1}a'_it^i\pmod{\Delta_V} ,$$ where $\left\{ a'_i \right\}$ are integer coefficients. Repeating this step, we eventually deduce that $c\equiv pt^{j+1}+qt^j\pmod{\Delta_V}$, where $p$ and $q$ are integers. The rest of the proof follows in the same manner. \end{proof} The following corollary is an immediate consequence. \begin{corollary} The Alexander polynomial distance $\rho(t-1+t^{-1},\Delta)=2$ if $\Delta\equiv 2+4m \pmod{t-1+t^{-1}}$ for some $m\in \Z$. \label{thm_main4} \end{corollary} \begin{proof} By Lemma~\ref{lem_d}, any knot with Alexander polynomial $t-1+t^{-1}$ has algebraic unknotting number one. By Corollary~\ref{thm_main3}, it suffices to show that $x^2+y^2+xy= \pm(2+4m)$ does not have an integer solution. Now we check the parities of $x$ and $y$. If $x$ and $y$ are both odd, or one is even and the other is odd, then $x^2+y^2+xy$ is odd, while $\pm(2+4m)$ is even, a contradiction. Otherwise, if $x$ and $y$ are both even, then $x^2+y^2+xy\equiv 0 \pmod{4}$, while $\pm(2+4m)\equiv 2\pmod{4}$, which is also a contradiction. Hence the proof is complete. \end{proof} \begin{remark} There are other applications of Corollary~\ref{thm_main3}.
For example, by Lemma~\ref{lem_d}, Corollary~\ref{thm_main3} gives the analogous result $\rho(\Delta,ht+ht^{-1}+1-2h)=2$ for $h\in \{1,2,3,5\}$, provided that $\Delta\equiv d\pmod{ht+ht^{-1}+1-2h}$ for some nonzero integer $d$ and $h^2x^2+y^2+(2h-1)xy=\pm d $ does not have an integer solution. \end{remark} \begin{remark} Corollary~\ref{thm_main4} offers another route to answering Nakanishi's question \cite{naka}*{p.334}, which asks if $\rho(\Delta_{3_1},\Delta_{4_1})= 2$. Moreover, it implies that any two Seifert matrices with the same Alexander polynomials as $3_1$ and $4_1$, respectively, cannot be turned into each other by one algebraic unknotting operation. \end{remark} \section{Example} Moon computed the Gordian distances between many knots, which he listed in a table. By Moon's results, the previously known lower bound for $d_G(3_1,9_{25})$ was one. By our method, we now prove that any pair of knots with the same Alexander polynomials as $3_1$ and $9_{25}$, respectively, cannot have Gordian distance one. Therefore, the inequality $d_G(3_1,9_{25})\ge 2$ holds. Moreover, we will deduce that the algebraic Gordian distance $d^a_G([V_1],[V_2])=2$, where $V_1$ and $V_2$ are Seifert matrices for $3_1$ and $9_{25},$ respectively. \begin{figure}[h] \hspace{0.05\textwidth} \includegraphics[width=0.25\textwidth]{./pic/3_1.pdf} \hspace{0.2\textwidth} \includegraphics[width=0.28\textwidth]{./pic/9_25.pdf} \caption{Knot diagrams for $3_1$ and $9_{25}$.} \label{fig:knots} \end{figure} Consider the knot diagrams for $3_1$ and $9_{25}$ in Figure \ref{fig:knots}. From \cite{knotinfo} we have \begin{align*} \hspace{0.1\textwidth} \Delta_{3_1}&=t+t^{-1}-1 \hspace{0.05\textwidth} &\Delta_{9_{25}}&=-3 t^2-3t^{-2}+12 t+12t^{-1}-17 \\ \hspace{0.1\textwidth} \sigma(3_1)&=-2 \hspace{0.05\textwidth} &\sigma(9_{25})&=-2 \\ \hspace{0.1\textwidth} D(3_1)&=3 \hspace{0.05\textwidth} &D(9_{25})&=47 \end{align*} Since $\Delta_{9_{25}}= (-3t+9-3t^{-1}) \Delta_{3_1}-2 $, we have $\Delta_{9_{25}}\equiv -2=2+4\cdot(-1)\pmod{\Delta_{3_1}}$, so Corollary~\ref{thm_main4} applies to show that $\rho(\Delta_{3_1},\Delta_{9_{25}})=2$.
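The factorization of $\Delta_{9_{25}}$ used above can be verified by direct expansion; the following computation is a routine check included for the reader's convenience.

```latex
% Direct expansion of (-3t+9-3t^{-1})\Delta_{3_1}:
\begin{align*}
(-3t+9-3t^{-1})(t-1+t^{-1})
  &= (-3t^{2}+3t-3)+(9t-9+9t^{-1})+(-3+3t^{-1}-3t^{-2})\\
  &= -3t^{2}+12t-15+12t^{-1}-3t^{-2},
\end{align*}
and subtracting $2$ gives $-3t^{2}+12t-17+12t^{-1}-3t^{-2}=\Delta_{9_{25}}$, as claimed.
```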
From Lemma~\ref{lem_e}, we have $$ d_G(K_1,K_2)\ge d_G^a([ V_{1} ],[ V_{2} ])\ge \rho(\Delta_{3_1},\Delta_{9_{25}})=2$$ for any pair of knots $K_1$ and $K_2$ with $\Delta_{K_{1}}=\Delta_{3_{1}}$ and $\Delta_{K_{2}}=\Delta_{9_{25}}$, where $V_1$ and $V_2$ are Seifert matrices of $K_1$ and $K_2$, respectively. Note that $d_G(3_1,9_{25})\ge 2$ can also be proved by Kawauchi's Theorem~\ref{thm:kawa}, because $3_1$ satisfies the geometric assumption $u(3_1)=1$. However, for a pair of knots $K_1$ and $K_2$ with the same Alexander polynomials as $3_1$ and $9_{25}$, respectively, Theorem~\ref{thm:kawa} need not apply, because we may not have $u(K_i)=1$ for $i=1,2$; for example, one may take $K_1=3_1\#11n_{34}$ and $K_2=9_{25}$. Moreover, this example demonstrates how our result helps in calculating the algebraic Gordian distance of two given $S$-equivalence classes. We know $u_a(9_{25})=u_a(3_1)=1$; see \cites{knotorious}. This gives $d_G^a([V_1],[V_2])\le u_a(3_1)+u_a(9_{25})=2$, where $V_1$ and $V_2$ are Seifert matrices for $3_1$ and $9_{25},$ respectively. Therefore, we have $d_G^a([V_1],[V_2])=2$. It is worth mentioning that Murakami's method does not apply here. Following Murakami's method, we would have to prove that there does not exist an integer $d$ such that $$\frac{2d^2}{D(3_1)}\equiv\pm\frac{D(3_1)-D(9_{25})}{2D(3_1)}\pmod{1}.$$ In fact, any integer $d$ with $d\not\equiv 0\pmod{3}$ satisfies this congruence, so Murakami's obstruction does not rule out Gordian distance one in this case. Likewise, the knot signature criterion fails here: since $\sigma(3_1)=\sigma(9_{25})=-2$, the signature criterion cannot determine whether $d_G(K_1,K_2)$ is one or not. \section*{Acknowledgements} The author wishes to thank Prof. Hans Boden, Prof. Akio Kawauchi and Prof. Hitoshi Murakami for their valuable suggestions. \Addresses \end{document}
Bakers tailor slicing equipment to what they are slicing. Matching the product to the proper blade and motion ensures baked goods are sliced to the right specifications with minimal damage. Without these specifications, bread can tear and cakes can smear. Irregular products can be sliced in every direction. However, with the right equipment and the proper blade, cuts are clean, and fines and crumbs can become a faint annoyance instead of a major production problem. The latest technology in slicing helps bakers maintain product integrity, whether it’s uniform pan bread, hand-shaped artisan rolls or complex cakes. Cutting through challenges Automation runs best on uniformity. “Automated systems love consistency in the products they have to handle,” said John Keane, executive product manager, packaging and automation, AMF Bakery Systems, Richmond, VA. This works well with uniform pan breads, brownies, cookies or cakes. Once a baker ventures into the artisan products consumers love so much these days, however, the irregular shapes and sizes change the rules. “Handling products that are inconsistent in size, shape and weight is much easier said than done,” Mr. Keane continued. “It challenges equipment designers to create designs that can handle products of varying dimensions.” AMF sees the challenge in predicting where an irregularly shaped product will be on the slicer. LeMatic, Inc., Jackson, MI, took a different approach by developing slicers specifically to handle products such as artisan rolls and croissants that vary in size and shape and have a crusty texture. When it comes to frozen bread products, Andy Schneider, sales manager, Americas, for J.E. Grote Co., Columbus, OH, noted that he saw a gap in slicing equipment that needed to be filled, one the Grote Co. plans to address at the 2013 International Baking Industry Expo in Las Vegas this fall. 
“We’ve seen a real need in the baking industry for machines that can cut frozen bread products,” he said, describing problems encountered by the company’s customers who run automated sandwich assembly lines, particularly breakfast sandwiches. Grote’s customers receive frozen biscuits from bakeries to use in breakfast sandwiches but must slack the baked foods for six to eight hours before being able to slice them for sandwich assembly. “Those frozen breakfast biscuits are hockey pucks,” Mr. Schneider said. “They are rock hard, and traditional bread-slicing equipment isn’t designed to cut products that hard.” Designing a slicer capable of cutting through frozen biscuits has been the company’s focus this year. Beyond the bread aisle, cakes provide a different challenge. It’s difficult to slice through layers made up of different flavors and textures without damaging them. Any crushing, shredding or breaking will be immediately obvious to the consumer. Without a pristine cut, multi-flavored cheesecakes, strawberry shortcakes or Neapolitan cakes will leave behind crumbs and smears, the evidence of a sloppy slice. Typically, bakers solve this problem by refrigerating or freezing the product before slicing, but eliminating this step could save time and money. Krumbein slicing equipment, exclusively distributed by Erika Record Baking Equipment, Clifton, NJ, cleanly slices through layers differing in texture. “An operator can program it to go through each layer of strawberry shortcake, whether it’s going through whipped cream or strawberries, so it doesn’t damage it,” said Craig Kominiak, sales representative and consultant on bakeries for Erika Record. For cheesecakes, the system cleans cake residue off the knife between every slice to keep the next as clean as the first. To prevent messes and damaged product, Matiss Food Cutting, Saint-Georges, QC, uses ultrasonic technology on its guillotine and slitting blades. 
Ultrasonic blades can eliminate shattering on brittle products and buildup on blades when cutting through icings and frosting, according to Mike Philip, industrial bakery equipment sales, Naegele, Inc., Alsip, IL, which represents Matiss. Slicing challenges aren’t limited to complex cakes. Pound cake’s high sugar and fat content also causes issues with the blades. To address these challenges, Bettendorf Stanford, Salem, IL, made its slice thickness changeover faster and simplified the mechanical slicing motion, giving the machine smooth action at high speeds. “We have also been able to take a similar mechanical motion and modify it to slice pre-iced cakes in half at room temperature. This eliminates the need to freeze or refrigerate iced cakes before they are sliced,” said Matt Stanford, vice-president of Bettendorf Stanford.

Maintaining a clean cut

Overcoming challenges in slicing, however, isn’t limited to innovating new methods or blades. To minimize waste, Tim O’Brien, vice-president of sales, Urschel Laboratories, Valparaiso, IN, said bakers need to maintain machines in good mechanical operating condition and ensure blades are sharp. Keeping the equipment and blades clean and operating smoothly can often be a baker’s best defense against product rejects. According to Mr. Keane at AMF, streamlining maintenance includes replacing outdated equipment designs with current technology and reviewing up-to-date maintenance procedures to discover any complexities that can be made more efficient. Newer equipment designs can reduce the number of moving parts, which results in less maintenance. Simplicity of design is something Bettendorf Stanford also strives to incorporate into its machines, according to Mr. Stanford. The company adds numeric encoders, tool-less changeovers and open framing designs to simplify changeovers and maintenance. When the Grote Co.
tinkers with the functionality of its standard slicing equipment, it strives to improve sanitation as well as maintenance. This includes trading nickel-plated bearings for stainless steel and redesigning parts that need to be cleaned every day so they can be removed without the use of tools. LeMatic’s machines allow operators to change out blades without the use of tools. The company also added two slicing heads on some of its equipment, so production can still run on one slicing head while the other undergoes maintenance. For products cut from continuous bands or slabs, Mr. Philip said Matiss has practically eliminated changeover delays. Without rotary cutters or spreading tables, bakers can change cutting patterns, shape sizes and lane counts without stopping the slab or reconfiguring the belts. The equipment is also designed to be cleaned in place, making sanitation easier. Strong, sharp blades are key to maintaining slice quality, but they don’t remain strong and sharp on their own. Bakers should change blades often to reduce costly waste and produce optimum results. “The baker really has to take care of the machine and change the blades more often than they do,” Mr. Kominiak said. “Some people just wait until their bread starts to rip before they start changing blades.” Customers often ask how long the blades last, to which he responds that it depends on the product. “If it’s pan bread that’s very soft, they could have an extra two or three months compared with hard crusty artisan bread,” Mr. Kominiak added. To lengthen blade life, Urschel Laboratories makes blades out of durable materials. To reduce maintenance, Hansaloy, Davenport, IA, is introducing blade technology that increases blade strength while maintaining its edge. This new technology will increase the blades’ lifetime, reducing changeovers and maintenance. Creating a high-volume, high-speed sliced product with minimal maintenance is the end goal of every baker producing sliced breads or desserts.
Stronger, sharper blades matched to the proper application keep automated lines free from fines, and tool-less changeovers and removals enable quick and easy cleaning and maintenance. Innovations in slicing equipment make it possible to provide a consistent cut on an irregular product and slice complex cakes without smudging distinct layers, providing a pristine product for the consumer.
I am passionate about helping you tap into your potential and getting paid to be you. It’s my mission in life. It’s the reason I get up in the morning. That’s why we’re looking at 10 fantastic ways to monetise you. I want to know that women all over the world are smashing life (and their bank accounts) with goodness, just by being them. That’s why I put together this beauty: 10 fantastic ways to monetise you. I could walk up to you in a cafe and you could tell me a tiny snippet of your life story and I’d launch into a whole THANG about how you could get paid to simply be you. I’m not joking. It’s my superpower. I’ve been monetising myself for over 12 years now; at any one time in my business I’ve had up to 10 different revenue streams on the go. I have built online businesses simply by being myself and packaging my skills, strengths, experience, and my unique personality into products and services that make me an income and make an impact in other people’s lives. Think about it: What if you could take a skill you already have and monetise it? We’re talking about an income stream that you can create with all the stuff you already have bubbling inside of you: your knowledge, your personality, your uniqueness. Pretty amazing, right? If you’re ready to monetise you then best you get in on all the action at the Monetise You Virtual Summit: 10 ways to $10K months! What can more money bring you? Financial freedom, opportunity, and being able to invest in others, that’s for sure. Some freaking time off to get your business to a point where it is ticking along with great recurring revenue and you are in flow doing the work that you love? Yep! That’s why I’ve got 10 fantastic ways to monetise you and build a business around your uniqueness, right here in this blog. Let’s go get ’em!

#1. Digital Products

When I started my business in 2009, one of the first products that made me money was the humble Ultimate Entrepreneurs Toolkit.
It was a beautifully designed PDF of my go-to tools that I absolutely loved using to build my online business. People paid for it. And that was fantastic. I had done all the groundwork and the research. I understood how to use these tools to build a thriving online business; I was passionate about them as a tech geek; I still am a little. And I put them all together in one easy-to-find-and-use PDF resource. I sold it for $47, and got tools to sponsor it! At the Monetise You Summit, Neta Talmor dives deep into this. You can sell a lot of $47 digital products and make a really good profit. If it helps people save time and money, and helps them to feel a little less overwhelmed, people will buy it. People want a solution that gives them more time, and more space, and the ability to actually just make a call and go with it and take action. I also created a course that went with the PDF that you could upgrade to for $97 total, and have more in-depth short videos on how to use those tools. It was totally in my zone of genius at the time. It was easy, it was fun, people valued it, and there was no coaching involved. It sold while I slept and without any real effort from me. Once you’ve invested in digital products, marketed them and created them, they can just continue to sell, especially if you’ve really hit the nail on the head for your target market. And it’s something that is very valuable for people. Digital products don’t just have to be ebooks. They can be audios, mini courses, video trainings, workshops, webinars, and master classes. These are all things that you sell on autopilot, on your website or through social. If you set them up with a great sales funnel, they should just make you recurring revenue every single day for the rest of your life. You can also scale at any point because if you scale your marketing, your advertising, your organic reach, and your content, then you make more sales, especially if you’ve got a great product. 
Often, if they are smaller digital products, like a book or an audio or short training, you can create them in a weekend or less once you get really good at it. And you can just keep layering the revenue that comes from those digital products. The best thing is that these babies live forever.

#2. Published author

One of my amazing guests at the Monetise You Summit is Joanna Penn, who has built a multiple six-figure business almost purely from writing books. Unlike the 3 books I’ve now published, she has written over 30!!!! The reality is that this takes a lot of hard work, and potentially many years, but it’s entirely possible. One thing Joanna and I talked about is how creating a book series is a really great way to become profitable, because you become better known as a writer. Once somebody buys one of your books, they buy many more. Self-publishing is becoming more and more popular, affordable, and accessible, and a fantastic way to add income to your business. If you love writing, this is a great way to monetise yourself. Like digital products, once you’ve gone through the major project of writing and marketing a book, provided it’s timeless content, it will continue to sell on all the distribution channels like Amazon, Kobo, Book Depository, and of course your website. Off the back of books, of course, you can use affiliate links in your books or on your book resource section; you can build a course to bring your book to life like I have; you can sell merchandise; get paid to speak and so much more!

#3. Paid keynote speaker

Whether you want to be a full-time professional speaker, or you want to incorporate this as an income stream into your business and get paid to be you speaking, this is awesome for so many reasons. Just ask Carol Cox, keynote speaker and CEO of Speaking Your Brand.
Becoming a paid speaker builds confidence, resilience, ability, resourcefulness, and it is an amazing tool and platform for you to really hone in on what it is that you do and what you want to become known for. It helps you to build your credibility and your thought leadership. When I did my TEDx talk, The Surprising Truth About Freedom (over 71,000 views), in 2016, I had to really dig deep to pull my entire Suitcase Entrepreneur and Freedom Plan framework together in 15 short minutes into a concise way of getting across a very important, powerful message that resonates with other people. Once you’ve got your signature talk behind you, everything else falls into place in your business. You can inspire an audience to take action; help them ponder on something; get them thinking deeply; change or expand their minds; and help them really see the potential in themselves. I think that’s a real gift and honour, to be able to stand on stage and educate people, share stories, and touch their lives. Not only can you then get paid to speak, you can turn your signature talk into a book, an online course, a workshop series, break it into digital products, create a consulting arm or move into coaching. Again you can spin this into multiple complementary revenue streams to monetise YOU!

#4. Coaching

Every single person benefits from having a coach. When you get coached by somebody else, you grow; you thrive. Even coaches need coaches. So, with all the millions of people in the world, there’s plenty of room for coaches, from all different industries, all different experiences, and different niches and credibility. I interviewed the amazing Sigrun Gudjonsdottir on my Summit about this, who has built an incredible seven-figure business from starting out with one-to-one coaching services at $180 an hour. Coaching can be done on a one-to-one basis, if you just love working deeply with people, and really focusing on fewer people with amazing results.
But coaching can also allow you to do group coaching. This way you can scale and reach even more people, create larger coaching programs, and then develop courses. Coaching is one of the fastest ways to earn a living, make good money, make an impact, and really figure out how you want to turn up. Whether you’re working five or ten hours a week with a few premium clients, or whether you are running huge group coaching programs, the possibilities are endless.

#5. Podcasting

There are so many opportunities that come with podcasts; it’s an amazing business tool, and it can be monetised up the wazoo. Here’s a fantastic conversation with Valerie Shoopman, who is a head coach at Cash Flow Podcasting, all about the ways in which we have both used podcasts to make more impact, income, friends, connections, and complementary revenue streams. Podcasting allows you to be your own broadcaster, to have your own show, to reach your own audience, and to establish yourself as a thought leader; or, if you’re just starting out, you can interview high-profile, amazing people and build your authority from there! Having a podcast and producing content every single week is no mean feat; it costs money and takes time, energy, and dedication, but it’s just such an awesome medium. And you really get into people’s heads. They’ve just got you in their ear, hopefully, and they’re getting your message. You can deliver great value, share great insights, resonate with people and leave them with something that makes them feel empowered, educated, inspired, and entertained. People love listening. Storytelling has been with us since the dawn of time; it’s an art, something very special that a lot of us have lost. And I think podcasts are just one of those ways in which we’re bringing that back. A podcast can also drive new leads to your business; you can pick up new clients; you can promote your own products; and you can get paid through sponsorships, advertising, and crowdfunding, to name a few.

#6. Online courses

To me, teaching what you know, teaching what you love, and teaching what you’re good at and experienced in is a true privilege and honour. It’s also a fantastic way to create an amazing income. You can launch courses regularly throughout the year, or you can create a course once and sell it on evergreen, so people can come along and take that course at any time. Both have their place, with their own benefits, pros and cons. Personally, I have always loved live launches because they get you to take massive action and commit to a timeframe in which to run it. You can pre-sell it, create it and run it once, and then continue to run it live throughout the year. Often a lot of course creators will do that every quarter, depending on how long it is. If it’s a 30- or 60- or 90-day course they will run it live over that period. And it allows you to have big chunks of income coming in three or four times a year. One launch could give you an entire year’s income or salary, depending on what floats your boat and what your financial goals are. I know when I launched my first full-blown version of the Freedom Plan course in 2015, in just 2 months of hard work I made $180,000! Or you can go for the more evergreen opportunity where your course is accessible all year round. This works really well when you set up an awesome funnel that actually drives leads to it and converts people into students/customers. And it will just make you money hand over fist for the rest of your life to come, if you’ve really dialed in that funnel. Live courses are a lot more effort, a lot more personalised, and a lot more high touch. I’ve just wrapped up my eighth cohort of the Launch Your Damn Course Accelerator and I’m incredibly proud of the students. I’m amazed at some of the results that they’ve got. Many of them are launching right now; many of them launched and sold out. It’s amazing. Other courses are perfect for self-study, like a DIY course. Students can take them at their own pace.
Perhaps it’s something where they simply need to watch the videos, train, and implement. If you have a brilliant idea, but you haven’t got around to it or you’re waiting on some combination of perfect universal factors and magic happening to conspire to make it happen for you, then I’m going to say, just do it. Take small actions and make it happen. Think about your Why. Think about what is important to you and get that course out there. One of the biggest things I teach in my Accelerator is how not to overcomplicate it. Make it simple and doable and achievable for the student. You could just turn up on a Zoom call, run a live workshop and record it.

#7. Affiliate marketing

Affiliate marketing – say what? Get in on this, because it’s hot. So much is possible! Maybe you don’t ever want to teach anybody anything. Perhaps you’d rather use other people’s courses, their products, their tools, their services and make money from those. Then you might want to consider affiliate marketing. And if you’re like Kate Kordsmeier, you may very well end up monetising your blog. Affiliate marketing means that you are affiliated to something that you believe in and are passionate about, and that you may even use yourself. So you become a seller or a marketer for that product or service that you love, and you get affiliate commissions, or referral commissions, in return. For example, I love Podia. It’s one of my favorite course platforms, and I recommend it to everybody in my accelerator. I give them other options, but at the end of the day, I’ve done the hard work of looking at many course platforms and all their features and functionalities. I’ve tested them out, used them, and made all the decisions that you don’t need to. If someone signs up to Podia with my affiliate link, I’m going to get commission on that. Another example: I’ve got a mega-post on my website, the 10 best community platforms for course creators if you’re done with Facebook.
And in that post, there are a lot of affiliate links. When people click on those and sign up, I start getting commissions each month. That can really add up – especially when the commissions reach figures of 50%. The same applies to other tools that I love and use, but also to course creators who have affiliate programmes. So, let’s say you are a nutritionist, and you have no intention of creating a course or doing one-to-one coaching. And maybe you have a group coaching programme where you recommend nutrients, supplements, and physical products that help people to become fitter and healthier and make better food choices. If you’re affiliated with those products and your clients buy them with your link, you get a cut every time. You’ve given them a tool or a service that is incredibly helpful and beneficial to them, and you make extra income. All of these things start to add up over time and can be very lucrative. The bigger the audience, email list, and community you’ve built, the more people will sign up through you and the more commissions you’re going to get. But I’ve seen people do really well with affiliate marketing who don’t have much of an audience at all. They have a small passionate audience or a client base that is small and dedicated. And as a result, they build up a really nice affiliate commission. I have affiliates for my courses and I pay commissions of typically 25 to 50%. And it saves me from hiring a sales force. You can sign up to affiliate programmes and products by giving your name, email address, and PayPal account. Then they give you a unique tracking link. They also often give you resources like images, social media images, and copy that you can just swipe and paste into emails or social, and you’re off and running. Really experienced affiliates who do incredibly well put a lot of effort behind it.
They view it as an actual launch and go all in, and many of them earn a full-time six-figure living from it. They will write dedicated, detailed, informational, educational/edutainment blog posts. They may do videos where they share why they love a tool, or a service, a product, a person, or a brand.

#8. VIP experiences

VIP experiences are amazing, for you and your guests. How do you create these and what can they do for you? I talk to Jordan Gill about these powerful days at the Monetise You Virtual Summit. What is a VIP experience? Well, you might have experienced one yourself. It might have been a day with an amazing coach, where you smashed out your business plan for the year, or your triathlon training plan, or your new brand. It might have been a mastermind where you got together with eight or ten other brilliant brains and you were facilitated and hosted by your mastermind leader. And together, you netted out your biggest problems and challenges, worked on growing your business and transforming your mindset, and hung out with high-level people who just inspired you. Or it might have been that you were on a retreat, over three or five days, like I have held in Bali, Barcelona, Lisbon and other places around the world. And you had a super VIP experience for several thousand dollars, staying in stunning hotels, doing incredible activities, hanging out with like-minded, amazing people, learning a lot, but also just experiencing life fully. These are the types of VIP experiences that people are prepared to pay a lot for. What if, once a month, or even once a year, you created the best experience, with so much care and dedication that it absolutely leaves people breathless thinking about it forevermore? Maybe it’s a $5,000 or $10,000 or $20,000 retreat or VIP experience that you put your all into. You might only need eight or ten or twenty people to come along. Often you can create these experiences with fewer people, at a much higher price.
If you love bringing people together, and if you’re an absolute connector, and if you know that this is something that you could do really well or become better at, then this could be your avenue. You could spend a couple of months a year creating this experience, and then not do anything for the rest of the year. Think about what you want to create. What experience do you want people to have and be transformed by?

#9. Service-based businesses

With this model, you can end up trading time for money and you may have trouble scaling it. But it can also provide a great service that people really value. If you price it right and if you get the structure of your services right, it can be incredible. It’s been an incredibly lucrative business for Prerna Mallik, who speaks about how she’s monetised herself as a service provider at the Monetise You Summit. I’m talking about website designers, copywriters, coaches, health specialists, relationship coaches, meal planners, all kinds of things. If you do it well, and you get some really amazing results, then you can command higher fees. Service-based businesses are brilliant when combined with other revenue streams, like group programmes, or online courses or products. If you provide a service and you get paid for it, you’re off and running. You don’t have to create things or invest a lot of money. You just have to really understand your ideal client, and figure out what problem it is that you want to solve.

#10. Agencies

Agencies are POWERFUL. What if you realised that providing a service to your clients by yourself wasn’t sustainable, but if you put together a team, trained them up to do what you do, and hired them out for a retainer, like Tasha Booth does, you could scale beyond your wildest dreams? Talk about awesome. Imagine bringing together the best of the best people to run a business off a specialised service or expertise that your clients pay you a monthly retainer for, and get access to your team.
Let’s say you’re a course creator: you can go to an agency and pay one set fee in return for getting dedicated access to a virtual assistant, an online business manager, a copywriter, and an advertising person to help run your launch for you. It’s a whole new ballgame because you have agency-level pricing structures. The way in which you can become profitable is quite incredible if you do it well. And if you like systems, and if you like frameworks, and if you get the right people on board, and if you like leading and managing teams that bring out the best in people, agencies can be huge. What lights YOU up? Who ARE you? What are you good at? What do you know? What can you teach other people? When you REALLY start thinking about these questions, and you begin to answer them… MAGIC starts to happen. When you’re lit up, the whole world shines around you. Don’t you want to give that to a ton of other people? You want to do purposeful work, feel rewarded, leave a legacy, touch people’s lives. You want to live your life on your own terms. When you tap into your potential and get paid to be you… those amazing things happen. AND you bring in more wealth. It’s time to get out of your head and open yourself up to all the incredible possibilities out there! Are you ready to get paid to be YOU? Join the Monetise You Summit: 10 Ways to $10K Months. It is HOT. There’s only one of you. Get out there and be everything that you are.
A letter from our Principal, Mr Ó hÓbáin… Dear Parents/Guardians, This year’s date for our annual 30k sponsored cycle was Saturday 27th April 2013. Our new extension of six classrooms and four learning support rooms is now occupied. Everyone is very excited as we can now see the realisation of our dream. We have beautiful new areas for teaching and learning and for the first time in many years all of us will be “under one roof”. The Board of Management has borrowed €60,000 to help pay for the building and plans to pay back this loan over the next few years. Our car-park will also be slightly extended as a condition of planning permission and we hope to install a new drainage system to solve our current flooding problems. Meanwhile our Parents’ Association have been very actively fundraising to develop a playground or “outdoor P.E. facility” in the school. We hope to have enough money to begin this project this spring/summer and also to resurface one of our yards/playgrounds. We are currently testing a synthetic grass product to ascertain its suitability for this project. The Board of Management and Parents’ Association are co-ordinating their fundraising efforts and will be launching some major initiatives in the coming weeks. The annual Sponsored Cycle was an excellent vehicle for launching our fundraising campaign. Last year we had over 70 cyclists and we raised €11,000! This year we had over 80 participants. We’re collecting the sponsorship money this week and hope we can do even better than last year. Sincere thanks to everyone who took part on Saturday the 27th and to those who took a sponsorship card in the name of a friend or a teacher. The route was a circuit of Ashbourne-Garristown-Ardcath-Rath Cross-Ashbourne and we made plans regarding road safety, water stations etc. Most of all we were so glad that those who took part really ENJOYED THEMSELVES.
The sun shone on Saturday and there was a great sense of achievement when we all arrived back at the school and were sitting down to a cup of tea and a sandwich!! Thanks to the Parents’ Association and teachers who laid on a wonderful spread. Great team-work! Well done and THANK YOU!!! Check out the pictures below… It was a pleasure to lead out the cycle on Saturday 27th April. Yours Sincerely, Donal Ó hÓbáin Principal.
609 Bath Ave, Metairie, LA 70001 is a Single Family Residence property with -- bedrooms, -- bathrooms, and approximately -- sq ft of living space. The estimated market value for 609 Bath Ave, Metairie, LA 70001 is $314

Real estate trends: average listing price $255,500 compared to June 2018; median sales price: no data. Housing inventory: 7.5% vacant, 45.2% rented, 47.3% owned.

1 registered offender found within 1 mile of 609 Bath Ave, Metairie, LA 70001: 509 Ridgeway Dr, Metairie, LA 70001, 0.94 miles away. Crime data is calculated at city level.

Schools nearby to 609 Bath Ave, Metairie, LA 70001:
- Haynes Academy School For Advanced Studies (Grades 06-12), 0.72 miles
- St Catherine Of Siena (Grades PK-07), 0.75 miles
- Metairie Park Country Day School (Grades PK-12), 0.8 miles
- Ella Dolhonde Elementary School (Grades PK-05), 0.95 miles
\begin{document} \title{Lower bounds for Max-Cut in $H$-free graphs via semidefinite programming} \author{Charles Carlson\thanks{Department of Computer Science, University of Colorado Boulder, Boulder, CO 80302. Email: {\tt [email protected]}.} , Alexandra Kolla\thanks{Department of Computer Science, University of Colorado Boulder, Boulder, CO 80302. Email: {\tt [email protected]}. Research supported by NSF CAREER grant 1452923 as well as NSF AF grant 1814385} , Ray Li\thanks{Department of Computer Science, Stanford University, Stanford, CA 94305. Email: {\tt [email protected]}. Research supported by an NSF GRF grant DGE-1656518 and by NSF grant CCF-1814629.} , Nitya Mani\thanks{Department of Mathematics and Computer Science, Stanford University, Stanford, CA 94305. Email: {\tt [email protected]}. Research supported in part by a Stanford Undergraduate Advising and Research Major Grant.} , Benny Sudakov\thanks{Department of Mathematics, ETH, 8092 Zurich, Switzerland. [email protected]. Research supported in part by SNSF grant 200021-175573.} , Luca Trevisan\thanks{Computer Science Division, U.C. Berkeley, Berkeley, CA 94720. Email: {\tt [email protected]}. Research supported by the NSF under grant CCF 1815434. Work on this project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834861).}} \date{} \maketitle \begin{abstract} For a graph $G$, let $f(G)$ denote the size of the maximum cut in $G$. The problem of estimating $f(G)$ as a function of the number of vertices and edges of $G$ has a long history and was extensively studied in the last fifty years. In this paper we propose an approach, based on semidefinite programming (SDP), to prove lower bounds on $f(G)$. We use this approach to find large cuts in graphs with few triangles and in $K_r$-free graphs. 
\end{abstract} \section{Introduction} The celebrated Max-Cut problem asks for the largest bipartite subgraph of a graph $G$, i.e., for a partition of the vertex set of $G$ into disjoint sets $V_1$ and $V_2$ so that the number of edges of $G$ crossing $V_1$ and $V_2$ is maximal. This problem has been the subject of extensive research, both from a largely algorithmic perspective in computer science and from an extremal perspective in combinatorics. Throughout, let $G$ denote a graph with $n$ vertices and $m$ edges whose maximum cut has size $f(G)$. The extremal version of the Max-Cut problem asks for bounds on $f(G)$ solely as a function of $m$ and $n$. This question was first raised more than fifty years ago by Erd\H{o}s~\cite{E1} and has attracted a lot of attention since then (see, e.g.,~\cite{E73, ER75, EFPS, AL96, SHE92, BS1, AL05, Su07, CFKS} and their references). It is well known that every graph $G$ with $m$ edges has a cut of size at least $m/2$. To see this, consider a uniformly random partition of the vertices of $G$ into two parts $V_1, V_2$ and estimate the expected number of edges between $V_1$ and $V_2$. On the other hand, already in the 1960s Erd\H{o}s~\cite{E1} observed that the constant $1/2$ cannot be improved even if we consider very restricted families of graphs, e.g., graphs that contain no short cycles. Therefore the main question, which has been studied by many researchers, is to estimate the error term $f(G)-m/2$, which we call {\it surplus}, for various families of graphs $G$. The elementary bound $f(G) \geq m/2$ was improved by Edwards \cite{E73, E75}, who showed that every graph with $m$ edges has a cut of size at least $\frac{m}{2}+ \frac{\sqrt {8m+1}-1}{8}$. This result is easily seen to be tight in case $G$ is a complete graph on an odd number of vertices, that is, whenever $m=\binom{k}{2}$ for some odd integer $k$. Estimates on the second error term for other values of $m$ can be found in~\cite{AH} and~\cite{BS1}.
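To spell out the first-moment computation behind the elementary $m/2$ bound: place each vertex of $G$ in $V_1$ or $V_2$ independently with probability $1/2$. Every edge then has its endpoints on opposite sides with probability $1/2$, so by linearity of expectation the expected size of the cut is
\[
\sum_{(i,j) \in E} \Pr\big[\mbox{$i$ and $j$ lie on opposite sides}\big] \;=\; \sum_{(i,j) \in E} \frac{1}{2} \;=\; \frac{m}{2},
\]
and hence some partition attains a cut with at least $m/2$ edges.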
Although the $\sqrt{m}$ error term is tight in general, it was observed by Erd\H{o}s and Lov\'asz \cite{ER75} that for triangle-free graphs it can be improved to at least $m^{2/3+o(1)}$. This naturally yields a motivating question: what is the best surplus which can always be achieved if we assume that our family of graphs is {\it $H$-free}, i.e., that no graph in the family contains a fixed graph $H$ as a subgraph. It is not difficult to show (see, e.g.~\cite{AL03}) that for every fixed graph $H$ there is some $\epsilon=\epsilon(H)>0$ such that $f(G) \geq \frac{m}{2}+\Omega(m^{1/2+\epsilon})$ for all $H$-free graphs with $m$ edges. However, the problem of estimating the error term more precisely is not easy, even for relatively simple graphs $H$. It is plausible to conjecture (see~\cite{AL05}) that for every fixed graph $H$ there is a constant $c_H$ such that every $H$-free graph $G$ with $m$ edges has a cut with surplus at least $\Theta(m^{c_H})$, i.e., there is both a lower bound and an infinite sequence of examples showing that the exponent $c_H$ cannot be improved. This conjecture is very difficult. Even in the case $H=K_3$, determining the correct error term took almost twenty years. Following the works of~\cite{ER75, PT, SHE92}, Alon~\cite{AL96} proved that every $m$-edge triangle-free graph has a cut with surplus of order $m^{4/5}$ and that this is tight up to constant factors. There are several other forbidden graphs $H$ for which we know quite accurately the error term for the extremal Max-Cut problem in $H$-free graphs. For example, it was proved in~\cite{AL05} that if $H=C_r$ for $r=4, 6, 10$ then $c_H=\frac{r+1}{r+2}$. The answer is also known in the case when $H$ is a complete bipartite graph $K_{2,s}$ or $K_{3,s}$ (see~\cite{AL05} for details). \paragraph{New approach to Max-Cut using semidefinite programming.} Many extremal results for the Max-Cut problem rely on quite elaborate probabilistic arguments.
A well known example of such an argument is a proof by Shearer~\cite{SHE92} that if $G$ is a triangle-free graph with $n$ vertices and $m$ edges, and if $d_1, d_2, \ldots, d_n$ are the degrees of its vertices, then $f(G) \geq \frac{m}{2} + \Omega(\sum_{i=1}^n \sqrt{d_i})$. The proof is quite intricate and is based on first choosing a random cut and then randomly redistributing some of the vertices, depending on how many of their neighbors are on the same side as the chosen vertex in the initial cut. Shearer's arguments were further extended, with more technically involved proofs, in~\cite{AL05} to show that the same lower bound remains valid for graphs $G$ with relatively sparse neighborhoods (i.e., graphs which locally have few triangles). In this article we propose a different approach to give lower bounds on the $\MC$ of sparse $H$-free graphs using approximation by semidefinite programming (SDP). This approach is intuitive and computationally simple. The main idea was inspired by the celebrated approximation algorithm of Goemans and Williamson~\cite{GW95} for the $\MC$: given a graph $G$ with $m$ edges, we first construct an explicit solution for the standard $\MC$ SDP relaxation of $G$ which has value at least $(\frac{1}{2}+W)m$ for some positive surplus $W$. We then apply Goemans-Williamson randomized rounding, based on the sign of the scalar product with a random unit vector, to extract a cut in $G$ whose surplus is within a constant factor of $W$. Using this approach we prove the following result. \begin{theorem} \label{thm:sdp} Let $G = (V,E)$ be a graph with $n$ vertices and $m$ edges. For every $i \in [n]$, let $V_i$ be any subset of neighbours of vertex $i$ and $\varepsilon_i \leq \frac{1}{\sqrt{\vert V_i \vert}}$. Then, \begin{align} f(G) \geq \frac{m}{2} + \sum_{i=1}^n \frac{\varepsilon_i \vert V_i \vert }{4 \pi} - \sum_{(i,j) \in E} \frac{ \varepsilon_i \varepsilon_j |V_i\cap V_j| }{2}.
\end{align} \end{theorem} \noindent This result implies Shearer's bound \cite{SHE92}. To see this, set $V_i$ to be the neighbors of $i$ and $\varepsilon_i=\frac{1}{\sqrt{d_i}}$ for all $i$. If $G$ is triangle-free, then $|V_i\cap V_j|=0$ for every pair of adjacent vertices $i,j$. The fact that we apply Goemans-Williamson SDP rounding in this setting is perhaps surprising for a few reasons. Our result obtains a surplus of $\Omega(W)$ from an SDP solution with surplus $W$, which is not possible in general. The best cut that can be guaranteed from any kind of rounding of a Max-Cut SDP solution with value $(\frac{1}{2}+W)m$ is $(\frac{1}{2}+\Omega(\frac{W}{\log (1/W)}))m$ (see \cite{OW08}). Furthermore, this is achieved using the RPR$^2$ rounding algorithm, not the Goemans-Williamson rounding algorithm. Nevertheless, we show that our explicit Max-Cut solution has additional properties that circumvent these issues and permit a better analysis. \paragraph{New lower bound for Max-Cut of triangle-sparse graphs.} Using Theorem~\ref{thm:sdp}, we give a new result on the Max-Cut of triangle-sparse graphs that is more convenient to use than previous similar results. A graph $G$ is \emph{$d$-degenerate} if there exists an ordering of the vertices $1,\dots,n$ such that vertex $i$ has at most $d$ neighbors $j<i$. Equivalently, a graph is $d$-degenerate if every induced subgraph has a vertex of degree at most $d$. Degeneracy is a broader notion of graph sparseness than maximum degree: all maximum degree $d$ graphs are $d$-degenerate, but the star graph is $1$-degenerate while having maximum degree $n-1$. Theorem~\ref{thm:sdp} gives the following useful corollary on the Max-Cut of $d$-degenerate graphs. \begin{corollary} \label{cor:sdp-2} Let $\varepsilon\le \frac{1}{\sqrt{d}}$. Let $G$ be a $d$-degenerate graph with $m$ edges and $t$ triangles. Then \begin{align} f(G)\ge \frac{m}{2} + \frac{\varepsilon m}{4\pi} - \frac{\varepsilon^2 t}{2}.
\end{align} \end{corollary} \noindent Indeed, let $1,\dots,n$ be an ordering of the vertices such that any $i$ has at most $d$ neighbors $j<i$, and let $V_i$ be this set of neighbors. Let $\varepsilon_i=\varepsilon$ for all $i$. In this way, $\sum_{i} |V_i|$ counts every edge exactly once and $\sum_{(i,j)\in E} |V_i\cap V_j|$ counts every triangle exactly once, and the result follows. This shows that graphs with few triangles have cuts with surplus similar to triangle-free graphs. This result is new and more convenient to use than existing results in this vein, because it relies only on the global count of the number of triangles, rather than a local triangle-sparseness property assumed by prior results. For example, it was shown (using Lemma 3.3 of \cite{AL05}) that a $d$-degenerate graph with a local triangle-sparseness property, namely that every large induced subgraph with a common neighbor is sparse, has Max-Cut at least $\frac{m}{2} + \Omega(\frac{m}{\sqrt{d}})$. However, we can achieve the same result with only the guarantee that the global number of triangles is small. In particular, when there are at most $O(m\sqrt{d})$ triangles, which is always the case under the local triangle-sparseness assumption above, setting $\varepsilon=\Theta(\frac{1}{\sqrt{d}})$ in Corollary~\ref{cor:sdp-2} gives that the Max-Cut is again at least $\frac{m}{2} + \Omega(\frac{m}{\sqrt{d}})$. \paragraph{Corollary: Lower bounds for Max-Cut of $H$-free degenerate graphs.} We illustrate the usefulness of the above results by giving the following lower bound on the Max-Cut of $K_r$-free graphs. \begin{theorem} \label{thm:kr-lb} Let $r \geq 3$. There exists a constant $c = c(r) >0$ such that, for all $K_r$-free $d$-degenerate graphs $G$ with $m$ edges, \begin{align} f(G) \ge \left(\frac{1}{2} +\frac{c}{d^{1-1/(2r-4)}}\right)m.
\label{eq:kr-lb} \end{align} \end{theorem} \noindent Lower bounds such as Theorem~\ref{thm:kr-lb}, giving a surplus of the form $c\cdot \frac{m}{d^\alpha}$, are more fine-grained than those that depend only on the number of edges. Accordingly, they are useful for obtaining lower bounds on the Max-Cut independent of the degeneracy: many tight Max-Cut lower bounds of the form $\frac{m}{2}+cm^{\alpha}$ in $H$-free graphs first establish that $f(G) \ge \frac{m}{2} + c\cdot \frac{m}{\sqrt{d}}$ for all $H$-free graphs, and then proceed by a case analysis on the degeneracy~\cite{AL05}. In the case of $r=4$ one can use our arguments together with Alon's result on Max-Cut in triangle-free graphs to improve Theorem~\ref{thm:kr-lb} further to $m/2+cm/d^{2/3}$. While Theorem~\ref{thm:kr-lb} gives nontrivial bounds for $K_r$-free graphs, we believe that a stronger statement is true and propose the following conjecture. \begin{conjecture}\label{conj:opt} For any graph $H$, there exists a constant $c=c(H)>0$ such that, for all $H$-free $d$-degenerate graphs with $m\ge 1$ edges, \begin{align} f(G)\ge \left(\frac{1}{2} + \frac{c}{\sqrt{d}}\right)m. \label{eq:conj_opt} \end{align} \end{conjecture} Our Theorem~\ref{thm:sdp} implies this conjecture for various graphs $H$, e.g., $K_{2,s}, K_{3,s}, C_{r}$, and for any graph $H$ which contains a vertex whose deletion makes it acyclic. This was already observed in~\cite{AL05} using the weaker, locally triangle-sparse form of Corollary~\ref{cor:sdp-2} described earlier. Conjecture~\ref{conj:opt} provides a natural route to proving a closely related conjecture proposed by Alon, Bollob{\'a}s, Krivelevich, and Sudakov~\cite{AL03}. \begin{conjecture}[\cite{AL03}] \label{conj:abks} For any graph $H$, there exist constants $\varepsilon=\varepsilon(H)>0$ and $c=c(H)>0$ such that, for all $H$-free graphs with $m\ge 1$ edges, \begin{align} f(G)\ge \frac{m}{2} + cm^{3/4+\varepsilon}.
\end{align} \end{conjecture} Since every graph with $m$ edges is obviously $\sqrt{2m}$-degenerate, Conjecture~\ref{conj:opt} immediately implies a weaker form of Conjecture~\ref{conj:abks} with surplus of order $m^{3/4}$. With some extra technical work (see Appendix~\ref{sec:conj}) we can show that it actually implies the full conjecture, achieving a surplus of $m^{3/4+\varepsilon}$ for any graph $H$. For many graphs $H$ for which Conjecture~\ref{conj:abks} is known, \eqref{eq:conj_opt} was implicitly established for $H$-free graphs \cite{AL05}, making Conjecture~\ref{conj:opt} a plausible stepping stone to Conjecture~\ref{conj:abks}. As further evidence of the plausibility of Conjecture~\ref{conj:opt}, we show that Conjecture~\ref{conj:abks} implies a weaker form of Conjecture~\ref{conj:opt}, namely that any $H$-free graph has Max-Cut at least $\frac{m}{2} + cm \cdot d^{-5/7}$. Using similar techniques, we can obtain nontrivial, unconditional results on the Max-Cut of $d$-degenerate $H$-free graphs for particular graphs $H$. See Appendix~\ref{app:gen} for a table of results and proofs. Conjecture~\ref{conj:opt}, if true, gives a surplus of $\Omega(\frac{m}{\sqrt{d}})$ that is optimal up to a multiplicative constant factor for every fixed graph $H$ which contains a cycle. To see this, consider an Erd\H{o}s-R\'enyi random graph $G(n,p)$ with $p=n^{-1+\delta}$. Using standard Chernoff-type estimates, one can easily show that with high probability this graph is $O(np)$-degenerate and its Max-Cut has size at most $\frac{1}{4}\binom{n}{2}p+O(n\sqrt{np})$. Moreover, if $\delta=\delta(H)>0$ is small enough, then with high probability $G(n,p)$ contains only very few copies of $H$, which can be destroyed by deleting a few vertices, without changing the degeneracy and the surplus of the Max-Cut (see Appendix~\ref{app:ub}).
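The construction underlying Theorem~\ref{thm:sdp} is concrete enough to be checked numerically. The following Python sketch (illustrative only, and not part of the formal argument; all identifiers are our own) builds the explicit SDP vectors for the Petersen graph, taking $V_i$ to be the full neighborhood of $i$ and $\varepsilon_i = 1/\sqrt{|V_i|}$, and evaluates the exact expected size of a Goemans-Williamson hyperplane cut, confirming that it meets the guaranteed bound.

```python
import math

# Petersen graph: 10 vertices, 15 edges, 3-regular, triangle-free.
n = 10
edges = ([(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
         + [(i, i + 5) for i in range(5)])                 # spokes
m = len(edges)

adj = [set() for _ in range(n)]
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Explicit SDP solution from the proof of Theorem 1.1: coordinate i of
# the unnormalized vector is 1, coordinate j is -eps_i for each j in V_i
# (here V_i is the full neighborhood); the vector is then normalized.
eps = [1 / math.sqrt(len(adj[i])) for i in range(n)]  # eps_i <= 1/sqrt(|V_i|)
vecs = []
for i in range(n):
    v = [0.0] * n
    v[i] = 1.0
    for j in adj[i]:
        v[j] = -eps[i]
    norm = math.sqrt(sum(x * x for x in v))
    vecs.append([x / norm for x in v])

# Under Goemans-Williamson hyperplane rounding, edge (i, j) is cut with
# probability arccos(<v_i, v_j>) / pi, so the expectation is exact.
expected = sum(math.acos(sum(a * b for a, b in zip(vecs[i], vecs[j]))) / math.pi
               for i, j in edges)

# Bound of Theorem 1.1: the triangle term vanishes here, since the
# Petersen graph is triangle-free and hence |V_i ∩ V_j| = 0 on edges.
bound = m / 2 + sum(eps[i] * len(adj[i]) for i in range(n)) / (4 * math.pi)
assert expected >= bound
```

Since each edge $(i,j)$ is cut with probability $\cos^{-1}(\ab{v\ind{i},v\ind{j}})/\pi$, the expectation can be evaluated in closed form rather than by sampling random hyperplanes.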
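Corollary~\ref{cor:sdp-2} can likewise be stress-tested exhaustively on small graphs. In the sketch below (again illustrative; the helper names are our own), the exact Max-Cut of random $8$-vertex graphs, computed by brute force, is compared against the bound $\frac{m}{2}+\frac{\varepsilon m}{4\pi}-\frac{\varepsilon^2 t}{2}$ with $\varepsilon = 1/\sqrt{d}$, where $d$ is the exact degeneracy.

```python
import itertools
import math
import random

def brute_force_max_cut(n, edges):
    """Exact f(G): enumerate all bipartitions, fixing vertex 0 on one side."""
    best = 0
    for mask in range(1 << max(n - 1, 0)):
        side = [0] + [(mask >> v) & 1 for v in range(n - 1)]
        best = max(best, sum(side[u] != side[v] for u, v in edges))
    return best

def degeneracy(n, edges):
    """Repeatedly delete a minimum-degree vertex; return the largest degree seen."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, d = set(range(n)), 0
    while alive:
        v = min(alive, key=lambda x: len(adj[x] & alive))
        d = max(d, len(adj[v] & alive))
        alive.remove(v)
    return d

def count_triangles(n, edges):
    eset = {frozenset(e) for e in edges}
    return sum(1 for tri in itertools.combinations(range(n), 3)
               if all(frozenset(p) in eset for p in itertools.combinations(tri, 2)))

random.seed(1)
for _ in range(30):
    n = 8
    edges = [e for e in itertools.combinations(range(n), 2) if random.random() < 0.4]
    m, t = len(edges), count_triangles(n, edges)
    eps = 1 / math.sqrt(max(degeneracy(n, edges), 1))
    # Corollary bound; trivially true whenever the triangle term dominates.
    bound = m / 2 + eps * m / (4 * math.pi) - eps * eps * t / 2
    assert brute_force_max_cut(n, edges) >= bound
```

The check relies only on the global triangle count, mirroring the statement of the corollary rather than any local sparseness condition.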
\section{Lower bounds for Max-Cut using SDP} \label{sec:general} In this section we give a lower bound for $f(G)$ in graphs with few triangles, showing Theorem~\ref{thm:sdp}. To prove this result, we make heavy use of the SDP relaxation of the $\MC$ problem, formulated below for a graph $G = (V, E)$: \begin{align} \text{maximize}\qquad & \sum_{(i,j)\in E}^{} \frac{1}{2}(1-\ab{v\ind{i},v\ind{j}})\nonumber\\ \text{subject to}\qquad & \|v\ind{i}\|^2=1\,\forall i\in V. \label{eq:sdp} \end{align} We leverage the classical Goemans-Williamson \cite{GW95} rounding algorithm, which gives an integral solution from a vector solution to the $\MC$ SDP. \begin{proof}[Proof of Theorem~\ref{thm:sdp}] For $i\in[n]$, define $\tilde v\ind{i}\in\mathbb{R}^n$ by \begin{align} \tilde v\ind{i}_j \ &= \ \left\{ \begin{tabular}{ll} 1 & $i=j$\\ $-\varepsilon_i$ & $j \in V_i$\\ 0 & otherwise.\\ \end{tabular} \right. \end{align} For $i\in[n]$, let $v\ind{i}\defeq \frac{\tilde v\ind{i}}{\|\tilde v\ind{i}\|}\in\mathbb{R}^n$. Then $1\le \|\tilde v\ind{i}\|\le 1 + \varepsilon_i^2 \vert V_i \vert \le 2$ for all $i$. For each edge $(i,j)$ with $i \in V_j$, we have \begin{align} v_i^{(i)} v_i^{(j)} = \frac{1}{\|\tilde v\ind{i}\|}\cdot \frac{-\varepsilon_j}{\|\tilde v\ind{j}\|} \le \frac{-\varepsilon_j}{4}. \end{align} For $k \in V_i \cap V_j$, we have $v_k^{(i)} v_k^{(j)}\le \varepsilon_i \varepsilon_j$. For $k\not \in \{i,j\}\cup (V_i \cap V_j)$, we have $v_k^{(i)} v_k^{(j)}=0$ as $v_k\ind{i} = 0$ or $v_k\ind{j} = 0$. Thus, for all edges $(i,j)$, \begin{align} \ab{v\ind{i},v\ind{j}}\le -\frac{\varepsilon_i}{4}\mathbbm{1}_{V_j}(i) -\frac{\varepsilon_j}{4}\mathbbm{1}_{V_i}(j) + |V_i\cap V_j| \varepsilon_i \varepsilon_j. \end{align} Here, $\mathbbm{1}_S(i)$ is 1 if $i\in S$ and 0 otherwise. Vectors $v\ind{1},\dots,v\ind{n}$ form a vector solution to the SDP~\eqref{eq:sdp}. We now round this solution using the Goemans-Williamson \cite{GW95} rounding algorithm.
Let $w$ denote a uniformly random unit vector, $A=\{i\in[n]:\ab{v\ind{i},w}\ge 0\}$, and $B=[n]\setminus A$. Note that the angle between vectors $v\ind{i},v\ind{j}$ is equal to $\cos^{-1}(\ab{v\ind{i},v\ind{j}})$, so the probability an edge $(i,j)$ is cut is \begin{align} \Pr[(i,j)\text{ cut}] \ &= \ \frac{\cos^{-1}(\ab{v\ind{i},v\ind{j}})}{\pi} \nonumber\\ \ &= \ \frac{1}{2} - \frac{\sin^{-1}(\ab{v\ind{i},v\ind{j}})}{\pi} \nonumber\\ \ &\ge \ \frac{1}{2} - \frac{1}{\pi}\sin^{-1}\left(|V_i\cap V_j| \varepsilon_i \varepsilon_j -\frac{\varepsilon_i}{4}\mathbbm{1}_{V_j}(i) -\frac{\varepsilon_j}{4}\mathbbm{1}_{V_i}(j) \right) \nonumber\\ \ &\ge \ \frac{1}{2} - \frac{1}{\pi} \cdot \left( \frac{\pi}{2}\cdot |V_i\cap V_j| \varepsilon_i \varepsilon_j -\frac{\varepsilon_i}{4}\mathbbm{1}_{V_j}(i) -\frac{\varepsilon_j}{4}\mathbbm{1}_{V_i}(j) \right) \nonumber\\ \ &= \ \frac{1}{2} + \frac{\varepsilon_i}{4\pi}\mathbbm{1}_{V_j}(i) + \frac{\varepsilon_j}{4\pi}\mathbbm{1}_{V_i}(j) - \frac{|V_i\cap V_j| \varepsilon_i \varepsilon_j}{2}.\nonumber \end{align} In the last inequality, we used that, for $a,b\in[0,1]$, we have $\sin^{-1}(a-b)\le \frac{\pi}{2}a - b$. This is true as $\sin^{-1}(x)\le \frac{\pi}{2}x$ when $x$ is positive and $\sin^{-1}(x)\le x$ when $x$ is negative. Thus, the expected size of the cut given by $A\sqcup B$ is, by linearity of expectation, \begin{align} \sum_{(i,j)\in E}^{} \Pr[(i,j)\text{ cut}] \ &\ge \ \sum_{\substack{ (i,j) \in E \\ i < j}} \left(\frac{1}{2} + \frac{\varepsilon_i}{4\pi} \mathbbm{1}_{V_j}(i) + \frac{\varepsilon_j}{4\pi} \mathbbm{1}_{V_i}(j) - \frac{|V_i\cap V_j| \varepsilon_i \varepsilon_j}{2} \right) \nonumber\\ \ &= \ \frac{m}{2} + \sum_{i=1}^n \frac{ \vert V_i \vert \varepsilon_i }{4 \pi}- \sum_{(i,j) \in E} \frac{ |V_i\cap V_j| \varepsilon_i \varepsilon_j}{2}. \qedhere \end{align} \end{proof} In the proof of Theorem~\ref{thm:kr-lb} we use the following consequence of Corollary~\ref{cor:sdp-2}.
\begin{corollary} \label{cor:lb-2} There exists an absolute constant $c>0$ such that the following holds. For all $d\ge 1$ and $\varepsilon\le\frac{1}{\sqrt{d}}$, if a $d$-degenerate graph $G = (V, E)$ has $m$ edges and at most $\frac{m}{8\varepsilon}$ triangles then \begin{align} f(G) \ge \left(\frac{1}{2} + c \varepsilon\right) \cdot m. \end{align} \end{corollary} \noindent Indeed, when $G$ has at most $\frac{m}{8\varepsilon}$ triangles, Corollary~\ref{cor:sdp-2} gives a surplus of at least $\frac{\varepsilon m}{4\pi} - \frac{\varepsilon^2}{2}\cdot \frac{m}{8\varepsilon} = \left(\frac{1}{4\pi} - \frac{1}{16}\right)\varepsilon m$, and $\frac{1}{4\pi} - \frac{1}{16} > \frac{1}{60}$. \section{Decomposition of degenerate graphs} In a graph $G=(V,E)$, let $n(G)$ and $m(G)$ denote the number of vertices and edges, respectively. For a vertex subset $V'\subset V$, let $G[V']$ denote the subgraph induced by $V'$. We show that $d$-degenerate graphs with few triangles have small subsets of neighborhoods with many edges. \begin{lemma} \label{lem:lb-3} Let $d\ge 1$ and $\varepsilon>0$, and let $G=(V,E)$ be a $d$-degenerate graph with at least $\frac{m(G)}{\varepsilon}$ triangles. Then there exists a subset $V'$ of at most $d$ vertices with a common neighbor in $G$ such that the induced subgraph $G[V']$ has at least $\frac{|V'|}{\varepsilon}$ edges. \end{lemma} \begin{proof} Since $G$ is $d$-degenerate, we fix an ordering $1,\dots,n$ of the vertices such that $d_<(i) \le d$ for all $i\in[n]$, where $d_<(i)$ denotes the number of neighbors $j<i$ of $i$. Then, if $t_<(i)$ denotes the number of triangles $\{i,j,k\}$ of $G$ where $j,k < i$, we have \begin{align} \sum_{i}^{} t_<(i) \ = \ t(G) \ \ge \ \frac{m(G)}{\varepsilon} \ = \ \sum_{i=1}^{n} \frac{d_<(i)}{\varepsilon}. \end{align} Hence, there must exist some $i$ such that $t_<(i) \ge \frac{d_<(i)}{\varepsilon}$. Let $V'$ denote the neighbors of $i$ with index less than $i$. By definition, the vertices of $V'$ have common neighbor $i$. Additionally, $G[V']$ has at least $\frac{d_<(i)}{\varepsilon}$ edges and $d_<(i)\le d$ vertices, proving the lemma. \end{proof} We use this lemma to partition the vertices of any $d$-degenerate graph in a useful way. \begin{lemma} \label{lem:lb-4} Let $\varepsilon > 0$.
Let $G = (V, E)$ be a $d$-degenerate graph on $n$ vertices with $m$ edges. Then there exists a partition $V_1,\dots,V_{k+1}$ of the vertex set $V$ with the following properties. \begin{enumerate} \item For $i=1,\dots,k$, the vertex subset $V_i$ has at most $d$ vertices and has a common neighbor, and the induced subgraph $G[V_i]$ has at least $\frac{|V_i|}{\varepsilon}$ edges. \item The induced subgraph $G[V_{k+1}]$ has at most $\frac{m(G[V_{k+1}])}{\varepsilon}$ triangles. \end{enumerate} \label{lem:technical} \end{lemma} \begin{proof} We construct the partition iteratively. Let $V_0^*=V$. For $i\ge 1$, we partition the vertex subset $V_{i-1}^*$ into $V_i\sqcup V_i^*$ as follows. If $G[V_{i-1}^*]$ has at least $\frac{m(G[V_{i-1}^*])}{\varepsilon}$ triangles, then by applying Lemma~\ref{lem:lb-3} to the induced subgraph $G[V_{i-1}^*]$, there exists a vertex subset $V_i$ with a common neighbor in $V_{i-1}^*$ such that $|V_i|\le d$ and the induced subgraph $G[V_i]$ has at least $\frac{|V_i|}{\varepsilon}$ edges. In this case, let $V_i^*\defeq V_{i-1}^*\setminus V_i$. Let $k$ denote the maximum index such that $V_k^*$ is defined, and let $V_{k+1}\defeq V_k^*$. By construction, $V_1,\dots,V_k$ satisfy the desired conditions. By definition of $k$, the induced subgraph $G[V_k^*]$ has at most $\frac{m(G[V_k^*])}{\varepsilon}$ triangles, so for $V_{k+1}=V_k^*$, we obtain the desired result. \end{proof} \subsection{Large Max-Cut from decompositions} \label{s:framework} For a $d$-degenerate graph $G=(V,E)$, in a partition $V_1,\dots,V_{k+1}$ of $V$ given by Lemma~\ref{lem:lb-4}, the induced subgraph $G[V_{k+1}]$ has few triangles, and thus, by Corollary~\ref{cor:sdp-2}, has a cut with good surplus. This allows us to obtain the following technical result regarding the Max-Cut of $H$-free $d$-degenerate graphs. \begin{lemma} \label{lem:general} There exists an absolute constant $c>0$ such that the following holds.
Let $H$ be a graph and $H'$ be obtained by deleting any vertex of $H$. Let $0<\varepsilon<\frac{1}{\sqrt{d}}$. For any $H$-free $d$-degenerate graph $G=(V,E)$, one of the following holds: \begin{itemize} \item We have \begin{align} f(G) \ge \left( \frac{1}{2} + c\varepsilon \right)m. \label{e:cond1} \end{align} \item There exist graphs $G_1,\dots,G_k$ such that five conditions hold: (i) the graphs $G_i$ are $H'$-free for all $i$, (ii) $n(G_i)\le d$ for all $i$, (iii) $m(G_i) \ge \frac{n(G_i)}{8\varepsilon}$ for all $i$, (iv) $n(G_1)+\cdots+n(G_k)\ge \frac{m}{6d}$, and (v) \begin{align} f(G) \ge \frac{m(G)}{2} + \sum_{i=1}^{k} \left(f(G_i) - \frac{m(G_i)}{2}\right). \label{e:cond2} \end{align} \end{itemize} \end{lemma} \begin{proof} Let $c_1 < 1$ be the parameter given by Corollary~\ref{cor:lb-2}. Let $c = \frac{c_1}{6}$. Let $G = (V, E)$ be a $d$-degenerate $H$-free graph. Applying Lemma~\ref{lem:technical} with parameter $8\varepsilon$, we can find a partition $V_1,\dots,V_{k+1}$ of the vertex set $V$ with the following properties. \begin{enumerate} \item For $i=1,\dots,k$, the vertex subset $V_i$ has at most $d$ vertices and has a common neighbor, and the induced subgraph $G[V_i]$ has at least $\frac{|V_i|}{8\varepsilon}$ edges. \item The subgraph $G[V_{k+1}]$ has at most $\frac{m(G[V_{k+1}])}{8\varepsilon}$ triangles. \end{enumerate} For $i=1,\dots,k+1$, let $G_i\defeq G[V_i]$ and let $m_i\defeq m(G_i)$. For $i=1,\dots,k$, since $G$ is $H$-free and each $V_i$ is a subset of some vertex neighborhood in $G$, the graphs $G_i$ are $H'$-free. For $i = 1, \ldots , k$, fix a maximal cut of $G_i$ with associated vertex partition $V_i = A_i \sqcup B_i$. By the second property above, the graph $G_{k+1}$ has at most $\frac{m_{k+1}}{8\varepsilon}$ triangles. Applying Corollary~\ref{cor:lb-2} with parameter $\varepsilon$, we can find a cut of $G_{k+1}$ of size at least $(\frac{1}{2} + c_1\varepsilon)m_{k+1}$ with associated vertex partition $V_{k+1} = A_{k+1} \sqcup B_{k+1}$.
We now construct a cut of $G$ by randomly combining the cuts obtained above for each $G_i$. Independently, for each $i=1,\dots,k+1$, we add either $A_i$ or $B_i$ to the vertex set $A$, each with probability $\frac12$. Setting $B=V\setminus A$ gives a cut of $G$. As $V_1,\dots,V_{k+1}$ partition $V$, each of the $m-(m_1+\cdots+m_{k+1})$ edges that is not in one of the induced graphs $G_1,\dots,G_{k+1}$ has exactly one endpoint in each of $A$ and $B$ with probability $\frac12$. This allows us to compute the expected size of the cut (a lower bound on $f(G)$, as some instantiation of this random process achieves at least the expected size). \begin{align} \label{eq:general-1} f(G) \ &\ge \ \frac{1}{2}(m - (m_1+\cdots+m_{k+1})) + \left( \frac{1}{2} + c_1\varepsilon \right) \cdot m_{k+1} + \sum_{i=1}^{k} f(G_i) \nonumber\\ \ &= \ \frac{m}{2} + c_1\varepsilon m_{k+1} + \sum_{i=1}^{k} \left( f(G_i) - \frac{m_i}{2} \right). \end{align} We bound~\eqref{eq:general-1} based on the distribution of edges in $G$ in three cases: \begin{itemize} \item $m_{k+1}\ge \frac{m}{6}$. Since $f(G_i)\ge \frac{m_i}{2}$ for all $i=1,\dots,k$, \eqref{e:cond1} holds: \begin{align*} f(G) \ \ge \ \frac{m}{2} + c_1\varepsilon m_{k+1} \ \ge \ \left(\frac{1}{2} + c\varepsilon\right)\cdot m. \end{align*} \item The number of edges between $V_1\cup\cdots\cup V_{k}$ and $V_{k+1}$ is at least $\frac{2m}{3}$. Then, the cut given by the vertex partition $V = A' \sqcup B'$ with $A'=V_1\cup\cdots\cup V_k$ and $B'=V_{k+1}$ has at least $\frac{2m}{3}$ edges, in which case $f(G)\ge \frac{2m}{3}> (\frac{1}{2} + \frac{c_1\varepsilon}{6})\cdot m$, so~\eqref{e:cond1} holds. \item $G' = G[V_1\cup\cdots\cup V_k]$ has at least $\frac{m}{6}$ edges. We show \eqref{e:cond2} holds. By construction, for $i=1,\dots,k$, the graph $G_i$ is $H'$-free, has at most $d$ vertices, and has at least $\frac{n(G_i)}{8\varepsilon}$ edges.
Since $G$ is $d$-degenerate, $G'$ is as well, so \begin{align} \frac{m}{6} \le m(G') \le d\cdot n(G') = d\cdot \sum_{i = 1}^k n(G_i). \end{align} Hence $n(G_1)+\cdots+n(G_k)\ge \frac{m}{6d}$. Lastly, by \eqref{eq:general-1}, we have \begin{align*} f(G) \ge \frac{m}{2} + \sum_{i=1}^{k} \left(f(G_i) - \frac{m_i}{2}\right). \end{align*} \end{itemize} This covers all possible cases, and in each case we showed either~\eqref{e:cond1} or~\eqref{e:cond2} holds. \end{proof} \begin{remark} In Corollary~\ref{cor:lb-2} we can take $c = \frac{1}{60}$, and in Lemma~\ref{lem:general} we can take $c = \frac{1}{360}$. \end{remark} Lemma~\ref{lem:general} allows us to convert Max-Cut lower bounds on $H'$-free graphs into Max-Cut lower bounds on $H$-free $d$-degenerate graphs. \begin{lemma} \label{lem:general-2} Let $H$ be a graph and $H'$ be obtained by deleting any vertex of $H$. Suppose that there exist constants $a=a(H')\in[\frac{1}{2}, 1]$ and $c'=c'(H')>0$ such that for all $H'$-free graphs $G$ with $m'\ge 1$ edges, $f(G) \ge \frac{m'}{2} + c'\cdot (m')^{a}$. Then there exists a constant $c=c(H)>0$ such that for all $H$-free $d$-degenerate graphs $G$ with $m\ge 1$ edges, \begin{align*} f(G) \ge \left(\frac{1}{2} + cd^{-\frac{2-a}{1+a}}\right)\cdot m. \end{align*} \end{lemma} \begin{proof} Let $c_2$ be the parameter in Lemma~\ref{lem:general}. We may assume without loss of generality that $c'\le 1$. Let $G$ be a $d$-degenerate $H$-free graph. Let $\varepsilon\defeq c'd^{-\frac{2-a}{1+a}} < d^{-1/2}$ and $c\defeq \min(c'c_2, \frac{c'}{48})$. Applying Lemma~\ref{lem:general} with parameter $\varepsilon$, either~\eqref{e:cond1} or~\eqref{e:cond2} holds. If~\eqref{e:cond1} holds, then, as desired, $$f(G)\ge \left(\frac{1}{2} + c_2\varepsilon \right)m \ge \left(\frac{1}{2} + cd^{-\frac{2-a}{1+a}} \right)m.$$ Else~\eqref{e:cond2} holds.
Let $G_1,\dots,G_k$ be the $H'$-free induced subgraphs satisfying the properties in Lemma~\ref{lem:general}, so that \begin{align*} f(G) \ &\ge \ \frac{m}{2} + \sum_{i=1}^{k} \left(f(G_i) - \frac{m(G_i)}{2}\right) \nonumber\\ \ &\ge \ \frac{m}{2} + \sum_{i=1}^{k} c'\cdot m(G_i)^{a}. \end{align*} For all $i$, we have \begin{align*} c' \cdot m(G_i)^{a} \overset{(*)}\ge \frac{c'\varepsilon}{8\varepsilon^{1+a}}\cdot n(G_i)^{a} \overset{(**)}\ge \frac{\varepsilon d}{8(c')^{a}}\cdot n(G_i) \overset{(+)}\ge \frac{\varepsilon d}{8}\cdot n(G_i), \end{align*} where $(*)$ follows since $m(G_i)\ge \frac{n(G_i)}{8\varepsilon}$, $(**)$ follows since $n(G_i)^{a-1}\ge d^{a-1}$ and $\varepsilon^{1+a} = (c')^{1+a}d^{a-2}$, and $(+)$ follows since $c' \le 1$. Hence, as $n(G_1)+\cdots+n(G_k)\ge \frac{m}{6d}$, we have \begin{align*} f(G) \ &\ge \ \frac{m}{2} + \varepsilon d \sum_{i=1}^{k}\frac{n(G_i)}{8} \ \ge \ \frac{m}{2} + \frac{\varepsilon m}{48} \ \ge \ \left(\frac{1}{2} + cd^{-\frac{2-a}{1+a}}\right)\cdot m, \end{align*} as desired. \end{proof} \section{Max-Cut in $K_r$-free graphs} \label{sec:kr} In this section we specialize Lemmas~\ref{lem:general} and~\ref{lem:general-2} to the case $H = K_r$ to prove Theorem~\ref{thm:kr-lb}. Let $\chi(G)$ denote the chromatic number of a graph $G$, the minimum number of colors needed to properly color the vertices of the graph so that no two adjacent vertices receive the same color. We first obtain a nontrivial upper bound on the chromatic number of a $K_r$-free graph $G$, giving a lower bound (Lemma~\ref{lem:kr-d}) on the Max-Cut of $K_r$-free graphs. This lower bound was implicit in \cite{AL03}, but we provide a proof for completeness. The lower bound on the Max-Cut of general $K_r$-free graphs enables us to apply Lemma~\ref{lem:general} to give a lower bound on the Max-Cut of $d$-degenerate $K_r$-free graphs per Theorem~\ref{thm:kr-lb}. The following well known lemma gives a lower bound on the Max-Cut using the chromatic number.
\begin{lemma}(see e.g. Lemma 2.1 of \cite{AL03})\label{lem:mcchi} Given a graph $G = (V, E)$ with $m$ edges and chromatic number $\chi(G) \le t$, we have $f(G) \ge (\frac12 + \frac{1}{2t}) m$. \end{lemma} \begin{proof} Since $\chi(G) \le t$, we can partition $V$ into independent sets $V_1,\dots, V_t$. Partition the subsets randomly into two parts containing $\floor{\frac{t}{2}}$ and $\ceil{\frac{t}{2}}$ subsets $V_i$, respectively, to obtain a cut. The probability that any given edge is cut is $\frac{\floor{t/2}\cdot \ceil{t/2}}{\binom{t}{2}}\ge \frac{t+1}{2t}$, so the result follows from linearity of expectation. \end{proof} \begin{lemma}\label{lem:chikr} Let $r\ge 3$ and $G = (V, E)$ be a $K_r$-free graph on $n$ vertices. Then, $$\chi(G) \le 4n^{(r-2)/(r-1)}.$$ \end{lemma} \begin{proof} We proceed by induction on $n$. For $n\le 4^{r-1}$, the statement is trivial as the chromatic number is always at most the number of vertices. Now assume $G=(V,E)$ has $n>4^{r-1}$ vertices and that $\chi(G)\le 4n_0^{(r-2)/(r-1)}$ for all $K_r$-free graphs on $n_0\le n-1$ vertices. The off-diagonal Ramsey number $R(r,s)$ satisfies $R(r,s) \le \binom{r+s-2}{s-1} \le s^{r-1}$ \cite{ES35}. Hence, taking $s=\floor{n^{1/(r-1)}}$, so that $s^{r-1}\le n$, the graph $G$ has an independent set $I$ of size $s$. The induced subgraph $G[V\setminus I]$ is $K_r$-free and has fewer than $n$ vertices, so its chromatic number is at most $4(n-s)^{(r-2)/(r-1)}$. Hence, $G$ has chromatic number at most \begin{align} 1 + 4(n-s)^{(r-2)/(r-1)} \ &= \ 1 + 4n^{(r-2)/(r-1)}\left( 1-\frac{s}{n} \right)^{(r-2)/(r-1)} \nonumber\\ \ &\overset{(*)}\le \ 1 + 4n^{(r-2)/(r-1)} - 4n^{(r-2)/(r-1)} \cdot \frac{s}{3n} \ \overset{(**)}< \ 4n^{(r-2)/(r-1)}. \end{align} In $(*)$, we used that $\frac{r-2}{r-1}\ge \frac{1}{2}$, that $\frac{s}{n} \le \frac{1}{4}$, and that $(1-x)^a\le 1 - \frac{x}{3}$ for $a\ge \frac{1}{2}$ and $x\le \frac{1}{4}$. In $(**)$, we used that $s \ge n^{1/(r-1)}-1 > \frac{3}{4}n^{1/(r-1)}$, which holds since $n^{1/(r-1)} > 4$, so that $4n^{(r-2)/(r-1)}\cdot \frac{s}{3n} > 1$.
This completes the induction and hence the proof. \end{proof} \begin{rem} The known upper bounds on the off-diagonal Ramsey number $R(r,s)$ improve on the bound $s^{r-1}$ used above by a logarithmic factor, which suggests that the upper bound on $\chi(G)$ of Lemma~\ref{lem:chikr} can be improved by a logarithmic factor with a more careful analysis. \end{rem} \begin{lemma} \label{lem:kr-d} If $G$ is a $K_r$-free graph with at most $n$ vertices and $m$ edges, then \begin{align*} f(G) \ge \left( \frac{1}{2} + \frac{1}{8n^{(r-2)/(r-1)}} \right) m. \end{align*} \end{lemma} \begin{proof} This follows immediately via Lemma~\ref{lem:mcchi} and Lemma~\ref{lem:chikr}. \end{proof} The above bounds allow us to prove Theorem~\ref{thm:kr-lb}. \begin{proof}[Proof of Theorem~\ref{thm:kr-lb}] Let $G$ be a $d$-degenerate $K_r$-free graph and $\varepsilon=d^{-1 + \frac{1}{2r-4}}$. Let $c_2$ be the parameter given by Lemma~\ref{lem:general}. Let $c = \min(c_2,\frac{1}{384})$. Applying Lemma~\ref{lem:general} with parameter $\varepsilon$, one of two properties holds. If~\eqref{e:cond1} holds, then \begin{align} f(G) \ \ge \ \left(\frac{1}{2} + c_2\varepsilon\right)m \ &\ge \ \left(\frac{1}{2} + c d^{- 1 + \frac{1}{2r-4}}\right)m \end{align} as desired. If~\eqref{e:cond2} holds, there exist graphs $G_1,\dots,G_k$ that are $K_{r-1}$-free with at most $d$ vertices such that $G_i$ has at least $\frac{n(G_i)}{8\varepsilon}$ edges, $n(G_1)+\cdots+n(G_k)\ge \frac{m}{6d}$, and \begin{align*} f(G) \ &\ge \ \frac{m}{2} + \sum_{i=1}^{k} \left(f(G_i) - \frac{m(G_i)}{2}\right). \end{align*} For all $i$, we have \begin{align*} f(G_i) - \frac{m(G_i)}{2} \ &\ge \ \frac{m(G_i)}{8n(G_i)^{(r-3)/(r-2)}} \nonumber\\ \ &\ge \ \frac{n(G_i)}{64\varepsilon n(G_i)^{(r-3)/(r-2)}} \ \ge \ \frac{n(G_i)}{64\varepsilon d^{(r-3)/(r-2)}} \ = \ \frac{\varepsilon d n(G_i)}{64}. \end{align*} In the first inequality, we used Lemma~\ref{lem:kr-d}. In the second inequality, we used that $m(G_i)\ge \frac{n(G_i)}{8\varepsilon}$.
In the third inequality, we used that $n(G_i)\le d$. Hence, as $d(n(G_1)+\cdots+n(G_k))\ge \frac{m}{6}$, we have, as desired, \begin{align} f(G) \ &\ge \ \frac{m}{2} + \sum_{i=1}^{k}\frac{\varepsilon d n(G_i)}{64} \ \ge \ \frac{m}{2} + \frac{\varepsilon m}{384} \ \ge \ \left(\frac{1}{2} + cd^{-1 + \frac{1}{2r-4}}\right)\cdot m. \qedhere \end{align} \end{proof} \begin{rem} As we already mentioned in the introduction, we can improve the result of Theorem~\ref{thm:kr-lb} in the case that $r = 4$ using Lemma~\ref{lem:general-2} as follows. Let $H=K_4$ and $H'=K_3$. By a result of \cite{AL96}, there exists a constant $c'>0$ such that, for all triangle-free graphs $G$ with $m'\ge 1$ edges, we have $f(G)\ge \frac{m'}{2} + c'(m')^{4/5}$. By Lemma~\ref{lem:general-2} with $H$ and $H'$ and $a=4/5$, there exists a constant $c>0$ such that any $K_4$-free $d$-degenerate graph $G$ with $m\ge 1$ edges satisfies \begin{align} f(G) \ge \left( \frac{1}{2} + cd^{-\frac{2-(4/5)}{1+(4/5)}} \right)\cdot m = \left( \frac{1}{2} + cd^{-2/3} \right)\cdot m. \end{align} \end{rem} \section{Concluding Remarks} \label{sec:conclusion} In this paper we presented an approach, based on semidefinite programming (SDP), to prove lower bounds on Max-Cut and used it to find large cuts in graphs with few triangles and in $K_r$-free graphs. A closely related problem of interest is bounding the \textsf{Max-$t$-Cut} of a graph, i.e., the largest $t$-colorable ($t$-partite) subgraph of a given graph. Our results imply good lower bounds for this problem as well. Indeed, by taking a cut for a graph $G$ with $m$ edges and surplus $W$, one can produce a $t$-cut for $G$ of size $\frac{t-1}{t}m +\Omega(W)$ as follows. Let $A, B$ be the two parts of the original cut. If $t=2s$ is even, simply split both $A$ and $B$ randomly into $s$ parts each. If $t=2s+1$ is odd, then put every vertex of $A$ randomly in the parts $1, \ldots, s$ with probability $2/(2s+1)$ each and in the part $2s+1$ with probability $1/(2s+1)$.
Similarly, place every vertex of $B$ in one of the parts $s+1, \ldots, 2s$, each with probability $2/(2s+1)$, or in the part $2s+1$ with probability $1/(2s+1)$. An easy computation shows that the expected size of the resulting $t$-cut is $\frac{t-1}{t}m +\Omega(W)$. For instance, if $t=2s$ is even, each of the $c = \frac{m}{2} + W$ edges of the original cut is always cut, while every other edge is cut with probability $1-\frac{1}{s}$, so the expected size is $\left(1-\frac{1}{s}\right)m + \frac{c}{s} = \frac{t-1}{t}m + \frac{W}{s}$; the odd case is similar. The main open question left by our work is Conjecture~\ref{conj:opt}. Proving this conjecture will require some major new ideas. Even showing that any $d$-degenerate $H$-free graph with $m$ edges has a cut with surplus at least $m/d^{1-\delta}$ for some fixed $\delta$ (independent of $H$) is out of reach of current techniques. \vspace{0.3cm} \noindent {\bf Acknowledgements.} \, The authors thank Jacob Fox and Matthew Kwan for helpful discussions and feedback. They also thank Joshua Brakensiek for pointing out an error in an earlier draft of this paper, and Joshua Brakensiek and Yuval Wigderson for helpful comments on an earlier draft.
For scientists and researchers, the world's seas and oceans contain a vast array of objects and features that they need to search for, relocate and mark out as part of their work: from archaeological sites and wrecks to coral or black smokers in the deep ocean. In some circumstances, a search team may also be looking for lost black boxes, containing flight data and cockpit voice recordings, in the event of an aircraft lost over water. Sonardyne has a range of tools for finding and then relocating objects in the oceans. Our Solstice Multi Aperture Sonar (MAS) provides class-leading imagery for this type of sonar, while our compact acoustic transponders and Deep Water Beacons offer easy-to-use relocation tools that can mark many targets at one site with individual codes for unambiguous identification. These can then be easily found with our hand-held and ROV-deployed Homer-Pro or ROV-Homer devices. We also have systems to underpin the latest underwater laser, LiDAR and multi-beam technologies.
Once it got here I finally convinced ccokeman to let me build a system on my own with parts he had sitting around so the costs were kept under control (yeah right). What I have below is where I am at now, but I am sure there will be further revisions to the computer. I still plan to water cool the video cards as funds allow, make room for another radiator and possibly do some modding. As a noob overclocker I did need some help with getting a good stable overclock of 4.0GHz on the CPU. After a coolant change (it was pink RV antifreeze) to distilled water with purple dye, the temperatures dropped enough to start looking for more MHz. That should be something we work on soon enough. Specs for this build include:

NZXT Phantom in Pink
Intel Core i7 980X @ 4.0GHz
12GB of Corsair Dominator GTX8
ASUS Rampage III Formula
2 x NVIDIA GTX 580 in SLI
Patriot Pyro SE 240GB SSD for the OS
Seagate 3TB HDD for storage
Corsair AX1200 PSU
CPU water cooling by Swiftech and XSPC
NVIDIA 3D Vision Version 1
Blu-ray drive

Sure, it's a work in progress but I have the time to make it better. Here are some of the pics as I put this pink beast together. I am woman, hear me RAWWWWWR!

Putting in the mobo and radiator
Installing the reservoir/pump and CPU block
Started with pink coolant then went to purple to match the front purple LED fan
Beauty shot of the memory and purple coolant
Finally a full build shot before we changed the coolant. Yes I know the Coke sign is there but hey, it will make it on the wall one day!

More to come as the case and build are refined!
\begin{document} \maketitle \begin{abstract} We study an analogue of Serre's modularity conjecture for projective representations $\rhobar: \Gal(\overline{K} / K) \ra \PGL_2(k)$, where $K$ is a totally real number field. We prove new cases of this conjecture when $k = \mathbb{F}_5$ by using the automorphy lifting theorems over CM fields established in \cite{AKT}.\footnote{\textit{2010 Mathematics Subject Classification:} 11F41, 11F80.} \end{abstract} \tableofcontents \section{Introduction} Let $K$ be a number field, and consider a continuous representation \[ \rho : G_K \to \GL_2(k), \] where $k$ is a finite field. (Here $G_K$ denotes the absolute Galois group of $K$; for this and other notation, see \S \ref{subsec_notation} below.) We say that $\rho$ is of {\it Serre-type}, or $S$-type, if it is absolutely irreducible and totally odd, in the sense that for each real place $v$ of $K$ and each associated complex conjugation $c_v \in G_K$, $\det \rho(c_v) = -1$. Serre's conjecture and its generalisations assert that any $\rho$ of $S$-type should be automorphic (see for example \cite{Asterisque, Duke} in the case $K = \mathbb{Q}$, \cite{BDJ} when $K$ is totally real, and \cite{Sen18} for a general number field $ K$). The meaning of the word `automorphic' depends on the context but when $K$ is totally real, for example, we can ask for $\rho$ to be associated to a cuspidal automorphic representation $\pi$ of $\GL_2(\A_K)$ which is regular algebraic of weight 0 (see \S \ref{subsec_automorphy} below). Serre's conjecture is now a theorem when $K = \mathbb{Q}$ \cite{KW, Kha09}. For a totally real field $K$, some results are available when $k$ is `small'. These are summarised in the following theorem, which relies upon the papers \cite{Duke, Tunnell, SBT, Man, Ellenberg}: \begin{theorem}\label{introthm_known_cases} Let $K$ be a totally real number field, and let $\rho : G_K \to \GL_2(k)$ be a representation of $S$-type. 
Then $\rho$ is automorphic provided $| k | \in \{ 2, 3, 4, 5, 7, 9\}$. \end{theorem} One can equally consider continuous representations \[ \sigma : G_K \to \PGL_2(k), \] where again $k$ is a finite field. We say that $\sigma$ is of $S$-type if it is absolutely irreducible and totally odd, in the sense that if $k$ has odd characteristic then for each real place $v$ of $K$, $\sigma(c_v)$ is non-trivial. One could formulate a projective analogue of Serre's conjecture, asking that any representation $\sigma$ of $S$-type be automorphic. A theorem of Tate implies that $\sigma$ lifts to a linear representation valued in $\GL_2(k')$ for some finite extension $k' / k$, and by $\sigma$ being automorphic we mean that a lift of it to a linear representation is automorphic (see \S \ref{subsec_automorphy} below). Thus if $k$ is allowed to vary, this conjecture is equivalent to Serre's conjecture, since any representation $\rho$ has an associated projective representation $\Proj(\rho)$, and any projective representation $\sigma$ lifts to a representation valued in $\GL_2(k')$ for some finite extension $k' / k$; moreover, $\rho$ is of $S$-type if and only if $\Proj(\rho)$ is, and $\rho$ is automorphic if and only if $\Proj(\rho)$ is. However, for fixed $k$ the two conjectures are not equivalent: certainly if $\rho$ is valued in $\GL_2(k)$ then $\Proj(\rho)$ takes values in $\PGL_2(k)$, but it is not true that any representation $\sigma : G_K \to \PGL_2(k)$ admits a lift valued in $\GL_2(k)$, and in fact in general the determination of the minimal extension $k' / k$ such that there is a lift to $\GL_2(k')$ is somewhat subtle. It is therefore of interest to ask whether the consideration of projective representations allows one to expand the list of `known' cases of Serre's conjecture. Our main theorem affirms that this is indeed the case. Before giving the statement we need to introduce one more piece of notation. 
We write $\pdet : \PGL_2(k) \to k^\times / (k^\times)^2$ for the homomorphism induced by the determinant. We say that a homomorphism $G_K \to k^\times / (k^\times)^2$ is totally even (resp. totally odd) if each complex conjugation in $G_K$ has trivial (resp. non-trivial) image. \begin{theorem}\label{introthm_projective_cases} Let $K$ be a totally real number field, and let $\sigma : G_K \to \PGL_2(k)$ be a representation of $S$-type. Then $\sigma$ is automorphic provided that one of the following conditions is satisfied: \begin{enumerate} \item $|k| \in \{ 2, 3, 4 \}$. \item $|k| = 5$, $[K(\zeta_5) : K] = 4$, and $\pdet \circ \sigma$ is totally even. \item $|k| = 5$, $[K(\zeta_5) : K] = 4$, and $\pdet \circ \sigma$ is totally odd. \item $|k| = 7$ and $\pdet \circ \sigma$ is totally odd. \item $|k| = 9$ and $\pdet \circ \sigma$ is totally even. \end{enumerate} \end{theorem} We note the exceptional isomorphisms $\PSL_2(\F_9)=A_6$, $\PGL_2(\F_5)=S_5$, $\PGL_2(\F_3)=S_4$, $\PGL_2(\F_2)=S_3$, which link our results to showing that splitting fields of polynomials of small degree over $K$ arise automorphically. The proof of Theorem \ref{introthm_projective_cases} falls into three cases. The first is when $|k|$ is even or $k = \bbF_3$. When $|k|$ is even, the homomorphism $\GL_2(k) \to \PGL_2(k)$ splits, so we reduce easily to Theorem \ref{introthm_known_cases}. When $k = \bbF_3$, the homomorphism $\PGL_2(\bbZ[\sqrt{-2}]) \to \PGL_2(\bbF_3)$ splits and we can use the Langlands--Tunnell theorem \cite{Tunnell} to establish the automorphy of $\sigma$. The second case is when $|k|$ is odd and $-1$ is a square in $k$ (resp. a non-square in $k$) and $\pdet \circ \sigma$ is totally even (resp. totally odd).
In this case we are able to construct the following data: \begin{itemize} \item A solvable totally real extension $L / K$ and a representation $\rhobar_1 : G_L \to \GL_2(k)$ such that $\Proj(\rhobar_1) = \sigma|_{G_L}$ (by showing that $L / K$ can be chosen to kill the Galois cohomological obstruction to lifting). \item A representation $\rho_2 : G_K \to \GL_2(\overline{\bbQ}_p)$ such that $\Proj(\overline{\rho}_2)$ and $\sigma$ are conjugate in $\PGL_2(\overline{\bbF}_p)$ (by choosing an arbitrary lift of $\sigma$ to $\GL_2(\overline{\bbF}_p)$ and applying the Khare--Wintenberger method). \end{itemize} We can then use Theorem \ref{introthm_known_cases} to verify the automorphy of $\rhobar_1$, hence the automorphy of $\rhobar_2|_{G_L}$. An automorphy lifting theorem then implies the automorphy of $\rho_2|_{G_L}$, hence of $\rho_2$ itself by solvable descent, and finally of $\sigma$. The final case is when $k = \bbF_5$ and $\pdet \circ \sigma$ is totally odd. In this case there does not exist any totally real extension $L / K$ such that $\sigma|_{G_L}$ lifts to a representation valued in $\GL_2(k)$ (there is a local obstruction at the real places). However, it is possible to find a CM extension $L / K$ such that $\sigma|_{G_L}$ lifts to a representation valued in $\GL_2(k)$ with determinant the cyclotomic character. When $k = \bbF_5$ such a representation necessarily appears in the group of 5-torsion points of an elliptic curve over $L$ (cf. \cite{SBT}) and so we can use the automorphy results over CM fields established in \cite{AKT} together with a solvable descent argument to obtain the automorphy of $\sigma$. The main novelty in this paper is contained in our treatment of this case.
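The local obstruction at the real places can be made concrete: any lift of $\sigma(c_v)$ to $\GL_2(\bbF_5)$ must square to the identity, hence is diagonalisable with eigenvalues $\pm 1$ and has square determinant; since $-1 = 2^2$ is a square in $\bbF_5$, this is incompatible with $\pdet \circ \sigma(c_v)$ being non-trivial. The following brute-force enumeration (an illustrative sanity check of ours, not part of the paper's argument) confirms that every involution in $\GL_2(\bbF_5)$ has determinant in the subgroup of squares $\{1, 4\}$:

```python
from itertools import product

P = 5
SQUARES = {(x * x) % P for x in range(1, P)}  # squares in F_5^x: {1, 4}

def mat_mul(a, b, p=P):
    """Multiply two 2x2 matrices over F_p, stored row-major as 4-tuples."""
    return (
        (a[0] * b[0] + a[1] * b[2]) % p,
        (a[0] * b[1] + a[1] * b[3]) % p,
        (a[2] * b[0] + a[3] * b[2]) % p,
        (a[2] * b[1] + a[3] * b[3]) % p,
    )

def det(a, p=P):
    return (a[0] * a[3] - a[1] * a[2]) % p

# Collect the determinants of all involutions in GL_2(F_5).
involution_dets = set()
for m in product(range(P), repeat=4):
    if det(m) != 0 and mat_mul(m, m) == (1, 0, 0, 1):
        involution_dets.add(det(m))

# Every involution has square determinant, so no lift of sigma(c_v)
# can exist when pdet(sigma(c_v)) is a non-square in F_5^x/(F_5^x)^2.
assert involution_dets <= SQUARES
```

This is the same phenomenon recorded in general in Remark \ref{utility} below.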
\begin{remark}\label{rmk_tunnell} In the final case above of a representation $\sigma : G_K \to \PGL_2(\bbF_5)$ with non-solvable image, the residual automorphy of the lift $\rho : G_L \to \GL_2(\bbF_5)$ ultimately depends on \cite[Theorem 7.1]{AKT}, which proves the automorphy of certain residually dihedral 2-adic Galois representations. The residual automorphy of these 2-adic representations is verified using automorphic induction. In particular, our proof in this case does not depend on the use of the Langlands--Tunnell theorem. This is in contrast to the argument used in e.g.\ \cite[Theorem 4.1]{SBT} to establish the automorphy of representations $\rho' : G_K \to \GL_2(\bbF_5)$ with cyclotomic determinant. This `2-3 switch' strategy can also be used to prove the automorphy of representations $\sigma : G_K \to \PGL_2(\bbF_3)$ with $\pdet \circ \sigma$ totally odd using the 2-adic automorphy theorems proved in \cite{Allen}, see Theorem \ref{LT} of the text. This class of representations includes the projective representations associated to the Galois action on the 3-torsion points of an elliptic curve over $K$. This gives a way to verify the modulo 3 residual automorphy of elliptic curves over $K$ which does not rely on the Langlands--Tunnell theorem (and in particular the works \cite{L, Jac81}) but only on the Saito--Shintani lifting for holomorphic Hilbert modular forms \cite{Sai75}. (We note that we do need to use the Langlands-Tunnell theorem to prove the automorphy of representations $\sigma : G_K \to \PGL_2(\bbF_3)$ with $\pdet \circ \sigma$ totally even, cf. Theorem \ref{known}.) \end{remark} We now describe the structure of this note. We begin in \S \ref{sec_lifting} by studying the lifts of projective representations and collecting various results about the existence of characteristic 0 lifts of residual representations and their automorphy. 
We are then able to give the proofs of Theorem \ref{introthm_known_cases} and the first two cases in the proof of Theorem \ref{introthm_projective_cases} described above. In \S \ref{sec_mod_3}, we expand on Remark \ref{rmk_tunnell} by showing how the main theorems of \cite{Allen} can be used to give another proof of the automorphy of $S$-type representations $\sigma : G_K \to \PGL_2(\bbF_3)$ (still under the hypothesis that $K$ is totally real and $\Delta \circ \sigma$ is totally odd). Finally, in \S \ref{sec_mod_5} we use similar arguments, now based on the main theorems of \cite{AKT}, to complete the proof of Theorem \ref{introthm_projective_cases}. \subsection*{Acknowledgments} We would like to thank the anonymous referee for their comments and corrections. P.A. was supported by Simons Foundation Collaboration Grant 527275 and NSF grant DMS-1902155. He would like to thank the third author and Cambridge University for hospitality during a visit where some of this work was completed. Parts of this work were completed while P.A. was a visitor at the Institute for Advanced Study, where he was partially supported by the NSF. He would like to thank the IAS for providing excellent working conditions during his stay. J.T.'s work received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 714405). This research was begun during the period that J.T. served as a Clay Research Fellow. \subsection{Notation}\label{subsec_notation} If $K$ is a perfect field then we write $G_K = \Gal(\overline{K} / K)$ for the Galois group of $K$ with respect to a fixed choice of algebraic closure. If $K$ is a number field and $v$ is a place of $K$ then we write $K_v$ for the completion of $K$ at $v$, and fix an embedding $\overline{K} \to \overline{K}_v$ extending the natural embedding $K \to K_v$; this determines an injective homomorphism $G_{K_v} \to G_K$. 
If $v$ is a finite place of $K$ then we write $\Frob_v \in G_{K_v}$ for a lift of the geometric Frobenius, $k(v)$ for the residue field of $K_v$, and $q_v$ for the cardinality of $k(v)$; if $v$ is a real place, then we write $c_v \in G_{K_v}$ for complex conjugation. Any homomorphism from a Galois group $G_K$ to another topological group will be assumed to be continuous. If $p$ is a prime and $K$ is a field of characteristic 0, then we write $\epsilon : G_K \to \bbZ_p^\times$ for the $p$-adic cyclotomic character, $\overline{\epsilon} : G_K \to \bbF_p^\times$ for its reduction modulo $p$, and $\omega : G_K \to \bbF_p^\times / ( \bbF_p^\times)^2$ for the character $\overline{\epsilon} \text{ mod } ( \bbF_p^\times)^2$. More generally, if $\rho : G_K \to \GL_n(\overline{\bbQ}_p)$ is a representation, then we write $\rhobar : G_K \to \GL_n(\overline{\bbF}_p)$ for the associated semisimple residual representation (uniquely determined up to conjugation). If $k$ is a field then we write $\Proj : \GL_n(k) \to \PGL_n(k)$ for the natural projection and $\pdet : \PGL_n(k) \to k^\times / (k^\times)^n$ for the character induced by the determinant. We will only use these maps in the case $n = 2$. If $K$ is a field of characteristic 0, $E$ is an elliptic curve over $K$, and $p$ is a prime, then we write $\rhobar_{E, p} : G_K \to \GL_2(\bbF_p)$ for the representation associated to $H^1(E_{\overline{K}}, \bbF_p)$ after a choice of basis. Thus $\det \rhobar_{E, p} = \overline{\epsilon}^{-1}$. \section{Lifting representations}\label{sec_lifting} In this section we study different kinds of liftings of representations: liftings to characteristic 0 (and the automorphy of such liftings) and liftings of projective representations to true (linear) representations. We begin by discussing what it means for a (projective or linear) representation to be automorphic.
\subsection{Automorphy of linear and projective representations}\label{subsec_automorphy} Let $K$ be a CM or totally real number field. If $\pi$ is a cuspidal, regular algebraic automorphic representation of $\GL_2(\A_K)$ then (see e.g. \cite{Tay89, hltt}) for any isomorphism $\iota : \overline{\bbQ}_p \to \bbC$, there exists a semisimple representation $r_\iota(\pi) : G_K \to \GL_2(\overline{\bbQ}_p)$ satisfying the following condition, which determines $r_\iota(\pi)$ uniquely up to conjugation: for all but finitely many finite places $v$ of $K$ such that $\pi_v$ is unramified, $r_\iota(\pi)|_{G_{K_v}}$ is unramified and $r_\iota(\pi)|_{G_{K_v}}^{ss}$ is related to the representation $\iota^{-1}\pi_v$ under the Tate-normalised unramified local Langlands correspondence. (See \cite[\S 2]{Tho16} for an explanation of how the characteristic polynomial of $r_\iota(\pi)|_{G_{K_v}}$ may be expressed in terms of the eigenvalues of explicit unramified Hecke operators.) In this paper we only need to consider regular algebraic automorphic representations $\pi$ which are of weight 0, in the sense that for each place $v | \infty$ of $K$, $\pi_v$ has the same infinitesimal character as the trivial representation. Let $k$ be a finite field of characteristic $p$, viewed inside its algebraic closure $\overline{\bbF}_p$. In this paper, we say that a representation $\rho : G_K \to \GL_2(k)$ is automorphic if it is $\GL_2(\overline{\bbF}_p)$-conjugate to a representation of the form $\overline{r_\iota(\pi)}$, where $\pi$ is a cuspidal, regular algebraic automorphic representation of $\GL_2(\bbA_K)$ of weight 0. We say that a representation $\sigma : G_K \to \PGL_2(k)$ is automorphic if it is $\PGL_2(\overline{\bbF}_p)$-conjugate to a representation of the form $\Proj(\overline{r_\iota(\pi)})$, where $\pi$ is a cuspidal, regular algebraic automorphic representation of $\GL_2(\bbA_K)$ of weight 0.
We say that a representation $\rho : G_K \to \GL_2(\overline{\bbQ}_p)$ is automorphic if it is conjugate to a representation of the form $r_\iota(\pi)$, where $\pi$ is a cuspidal, regular algebraic automorphic representation of $\GL_2(\bbA_K)$ of weight 0. We say that an elliptic curve $E$ over $K$ is modular if the representation of $G_K$ afforded by $H^1(E_{\overline{K}}, \bbQ_p)$ is automorphic in this sense. \begin{lemma} Let $K$ be a CM or totally real number field, let $\rho : G_K \to \GL_2(k)$ be a representation, and let $\sigma = \Proj(\rho)$. Then: \begin{enumerate} \item Let $\chi : G_K \to k^\times$ be a character. Then $\rho$ is automorphic if and only if $\rho \otimes \chi$ is automorphic. \item $\sigma$ is automorphic if and only if $\rho$ is automorphic. \end{enumerate} \end{lemma} \begin{proof} If $\chi : G_K \to k^\times$ is a character then its Teichm\"uller lift $X : G_K \to \overline{\bbQ}_p^\times$ is associated, by class field theory, to a finite order Hecke character $\Xi : \bbA_K^\times \to \bbC^\times$. If $\pi$ is a cuspidal automorphic representation which is regular algebraic of weight 0 and $\overline{r_\iota(\pi)}$ is conjugate to $\rho$, then $\pi \otimes (\Xi \circ \det)$ is also cuspidal and regular algebraic of weight 0 and $\overline{r_\iota(\pi \otimes (\Xi \circ \det))}$ is conjugate to $\rho \otimes \chi$. It is clear from the definition that if $\rho$ is automorphic then so is $\sigma$. Conversely, if $\sigma$ is automorphic then there is a cuspidal, regular algebraic automorphic representation $\pi$ of $\GL_2(\A_K)$ and an isomorphism $\iota : \overline{\bbQ}_p \to \bbC$ such that $\Proj(\overline{r_\iota(\pi)}) = \Proj(\rho)$. It follows that there exists a character $\chi : G_K \to \overline{\bbF}_p^\times$ such that $\rho$ is conjugate to $\overline{r_\iota(\pi)} \otimes \chi$. The automorphy of $\rho$ follows from the first part of the lemma.
\end{proof} \subsection{Lifting to characteristic 0} We recall a result on the existence of liftings with prescribed properties. We first need to say what it means for a representation to be exceptional. If $K$ is a number field and $\sigma : G_K \to \PGL_2(k)$ is a projective representation, we say that $\sigma$ is exceptional if it is $\PGL_2(\overline{\bbF}_p)$-conjugate to a representation $\sigma' : G_K \to \PGL_2(\bbF_5)$ such that $\sigma'(G_K)$ contains $\PSL_2(\bbF_5)$ and the character $(-1)^{\pdet \circ \sigma'} \overline{\epsilon}$ is trivial. (Here we write $(-1)^{\pdet \circ \sigma'}$ for the composition of $\pdet \circ \sigma'$ with the unique isomorphism $\bbF_5^\times / (\bbF_5^\times)^2 \cong \{ \pm 1 \}$.) We say that a representation $\rho : G_K \to \GL_2(k)$ is exceptional if $\Proj(\rho)$ is exceptional. If $K$ is totally real then this is equivalent to the definition given in \cite[\S 3]{khare-thorne-mathz17}. The exceptional case is often excluded in the statements of automorphy lifting theorems (the root cause being the non-triviality of the group $H^1(\sigma(G_K), \ad^0 \rho(1))$). \begin{theorem}\label{thm:geo-lift} Let $K$ be a totally real field, let $\rhobar : G_K \ra \GL_2(k)$ be a representation of $S$-type, and let $\psi : G_K \to \overline{\bbZ}_p^\times$ be a continuous character lifting $\det \rhobar$ such that $\psi\epsilon$ is of finite order. Suppose that the following conditions are satisfied: \begin{enumerate} \item $p > 2$ and $\rhobar|_{G_{K(\zeta_p)}}$ is absolutely irreducible. \item If $p = 5$ then $\rhobar$ is non-exceptional. \end{enumerate} Then $\rhobar$ lifts to a continuous representation $\rho : G_K \to \GL_2(\overline{\bbZ}_p)$ satisfying the following conditions: \begin{enumerate} \item For all but finitely many places $v$ of $K$, $\rho|_{G_{K_v}}$ is unramified. \item $\det \rho = \psi$.
\item For each place $v | p$ of $K$, $\rho|_{G_{K_v}}$ is potentially crystalline and for each embedding $\tau : K_v \to \overline{\bbQ}_p$, $\mathrm{HT}_\tau(\rho) = \{ 0, 1 \}$. Moreover, for any $v | p$ such that $\rhobar|_{G_{K_v}}$ is reducible, we can assume that $\rho|_{G_{K_v}}$ is ordinary, in the sense of \cite[\S 5.1]{Tho16}. \end{enumerate} \end{theorem} \begin{proof} This follows from \cite[Theorem 7.6.1]{snowden2009dimensional}, on noting that the condition (A2) there can be replaced by the more general condition that $\rhobar$ is non-exceptional (indeed, the condition (A2) is used to invoke \cite[Proposition 3.2.5]{Kis09a}, which is proved under this more general condition). To verify the existence of a potentially crystalline lift of $\rhobar|_{G_{K_v}}$ for each $v | p$ (or in the terminology of \emph{loc. cit.}, the compatibility of $\rhobar|_{G_{K_v}}$ with type $A$ or $B$) we apply \cite[Proposition 7.8.1]{snowden2009dimensional} (when $\rhobar|_{G_{K_v}}$ is irreducible) or \cite[Lemma 6.1.6]{Bar12b} (when $\rhobar|_{G_{K_v}}$ is reducible). \end{proof} We next recall an automorphy lifting theorem. \begin{theorem}\label{thm_ALT} Let $K$ be a totally real number field, and let $\rho : G_K \to \GL_2(\overline{\bbZ}_p)$ be a continuous representation satisfying the following conditions: \begin{enumerate} \item $p > 2$ and $\rhobar|_{G_{K(\zeta_p)}}$ is absolutely irreducible. \item For all but finitely many finite places $v$ of $K$, $\rho|_{G_{K_v}}$ is unramified. \item For each place $v | p$ of $K$, $\rho|_{G_{K_v}}$ is de Rham and for each embedding $\tau : K_v \to \overline{\bbQ}_p$, $\mathrm{HT}_\tau(\rho) = \{ 0, 1 \}$. \item The representation $\rhobar$ is automorphic. \end{enumerate} Then $\rho$ is automorphic. \end{theorem} \begin{proof} This follows from \cite[Theorem 9.3]{khare-thorne-mathz17}. 
\end{proof} We now combine the previous two theorems to obtain a ``solvable descent of automorphy'' theorem for residual representations, along similar lines to \cite{K, Tay}. \begin{proposition}\label{realSolvableDescent} Let $K$ be a totally real number field and let $\rhobar : G_K \ra \GL_2(k)$ be a representation of $S$-type. Suppose that there exists a solvable totally real extension $L / K$ such that the following conditions are satisfied: \begin{enumerate} \item $p > 2$ and $\rhobar|_{G_{L(\zeta_p)}}$ is absolutely irreducible. If $p = 5$, then $\rhobar$ is non-exceptional. \item $\rhobar|_{G_L}$ is automorphic. \end{enumerate} Then $\rhobar$ is automorphic. \end{proposition} \begin{proof} Let $\psi : G_K \to \overline{\bbZ}_p^\times$ be the character such that $\psi \epsilon$ is the Teichm\"uller lift of $(\det \rhobar) \overline{\epsilon}$, and let $\rho : G_K \to \GL_2(\overline{\bbZ}_p)$ be the lift of $\rhobar$ whose existence is asserted by Theorem~\ref{thm:geo-lift}. Then Theorem \ref{thm_ALT} implies the automorphy of $\rho|_{G_L}$, and the automorphy of $\rho$ itself and hence of $\rhobar$ follows by cyclic descent, using the results of Langlands \cite{L}. \end{proof} We can now give the proof of Theorem \ref{introthm_known_cases}, which we restate here for the convenience of the reader: \begin{theorem}\label{thm-GL2} Let $K$ be a totally real field and let $\rhobar:G_K \ra \GL_2(k)$ be a representation of $S$-type. Suppose that $\lvert k \rvert \in \{2, 3, 4, 5, 7, 9\}$. Then $\rhobar$ is automorphic. \end{theorem} \begin{proof} Many of the results we quote here are stated in the case of $K = \bbQ$ but hold more generally for totally real fields with minor modification. We will apply them in the more general setting without further comment. If $\rhobar$ is dihedral then this is a consequence of results of Hecke (see \cite[\S5.1]{Duke}).
If $k = \bbF_3$, it is a consequence of the Langlands--Tunnell theorem \cite{Tunnell} (see the discussion following Theorem~5.1 in \cite[Chapter~5]{Wiles}). We may thus assume for the remainder of the proof that $\lvert k \rvert > 3$. We may also assume that for any abelian extension $L/K$, the restriction $\rhobar|_{G_{L(\zeta_p)}}$ is absolutely irreducible (as otherwise $\rhobar$ would be dihedral). Next suppose that $k = \bbF_5$. We note that $\rhobar$ is not exceptional, by \cite[Lemma 3.1]{khare-thorne-mathz17}. Let $L/K$ be the totally real cyclic extension cut out by $(\det\rhobar)\overline{\epsilon}$. By \cite[Theorem~1.2]{SBT}, there is an elliptic curve $E$ over $L$ such that $\rhobar_{E, 5} \cong \rhobar|_{G_L}$ and $\rhobar_{E, 3}(G_L)$ contains $\SL_2(\bbF_3)$. By the $k = \bbF_3$ case of the theorem and by Theorem \ref{thm_ALT}, we see that $E$ is automorphic, hence so is $\rhobar|_{G_{L}}$. The automorphy of $\rhobar$ then follows from Proposition~\ref{realSolvableDescent}. The $k = \bbF_7$ case is similar, using \cite[Proposition~3.1]{Man} instead of \cite[Theorem~1.2]{SBT}. Next suppose that $k = \bbF_4$. We can twist $\rhobar$ to assume that it is valued in $\SL_2(\bbF_4)$. Then \cite[Theorem~3.4]{SBT} shows that there is an abelian surface $A$ over $K$ with real multiplication by $\cO_{\bbQ(\sqrt{5})}$ such that the $G_K$-representation on $A[2] \cong \bbF_4^2$ is isomorphic to $\rhobar$ and such that the $G_K$-representation on $A[\sqrt{5}] \cong \bbF_5^2$ has image containing $\SL_2(\bbF_5)$. By the $k = \bbF_5$ case of the theorem, Theorem \ref{thm:geo-lift}, and Theorem \ref{thm_ALT}, we see that $A$ is automorphic, hence so is $\rhobar$. Finally suppose that $k = \bbF_9$. Let $L/K$ be the totally real cyclic extension cut out by $(\det\rhobar)\overline{\epsilon}$.
Then the argument of \cite[\S 2.5]{Ellenberg} shows that there is a solvable totally real extension $M / K$ containing $L / K$ and an abelian surface $A$ over $M$ with real multiplication by $\cO_{\bbQ(\sqrt{5})}$ such that the $G_{M}$-representation on $A[3] \cong \bbF_9^2$ is isomorphic to $\overline{\rho}|_{G_{M}} \otimes\overline{\epsilon}$ and such that the $G_{M}$-representation on $A[\sqrt{5}] \cong \bbF_5^2$ has image containing $\SL_2(\bbF_5)$. By the $k = \bbF_5$ case of the theorem, Theorem \ref{thm:geo-lift}, and Theorem \ref{thm_ALT}, we see that $A$ is automorphic, hence so is $\rhobar|_{G_{M}}$. The automorphy of $\rhobar$ follows from Proposition~\ref{realSolvableDescent}. \end{proof} \begin{remark} The `2-3 switch' strategy employed in Theorem \ref{LT} below can be used to prove automorphy of totally odd representations $\rho : G_K \to \GL_2(\bbF_3)$ without using the Langlands--Tunnell theorem. \end{remark} \subsection{Lifting projective representations}\label{subsec_lifting_projective} We now consider the problem of lifting projective representations. \begin{lemma}\label{lem_tate} Let $K$ be a number field, and let $\sigma : G_K \to \PGL_2(k)$ be a continuous homomorphism. Then there exists a finite extension $k' / k$ such that $\sigma$ lifts to a homomorphism $\rho : G_K \to \GL_2(k')$. \end{lemma} \begin{proof} The obstruction to lifting a continuous homomorphism $\sigma : G_K \to \PGL_2(k)$ to a continuous homomorphism $\rho : G_K \to \GL_2(\overline{k})$ lies in $H^2(G_K, \overline{k}^\times)$. Tate proved that $H^2(G_K, \overline{k}^\times) = 0$ (see \cite[\S 6.5]{Ser77}), so a lift always exists. \end{proof} \begin{lemma}\label{lem_lifting_over_solvable_extension} Suppose that $p > 2$, let $K$ be a number field, and let $\sigma : G_K \to \PGL_2(k)$ be a homomorphism. Let $S$ be a finite set of places of $K$ such that for each $v \in S$, there exists a lift of $\sigma|_{G_{K_v}}$ to a homomorphism $\rho_v : G_{K_v} \to \GL_2(k)$.
Then we can find the following data: \begin{enumerate} \item A solvable $S$-split extension $L / K$. \item A homomorphism $\rho : G_L \to \GL_2(k)$ such that $\Proj(\rho) = \sigma|_{G_L}$ and for each $v \in S$ and each place $w | v$ of $L$, $\rho|_{G_{L_w}} = \rho_v$. \end{enumerate} Moreover, if $K$ is a CM field we can choose $L$ also to be a CM field. \end{lemma} \begin{proof} Let $H$ denote the $2$-Sylow subgroup of $k^\times$, of order $2^m$, and let $H' \leq k^\times$ denote its prime-to-2 complement. If $0 \leq k \leq m$, we write $G_k = \GL_2(k) / (2^{m-k} H \times H')$, which is an extension \[ 1 \to H / 2^{m-k} H \to G_k \to \PGL_2(k) \to 1. \] We show by induction on $k \geq 0$ that we can find a solvable, $S$-split extension $L_k / K$ and a homomorphism $\rho_k : G_{L_k} \to G_k$ lifting $\sigma|_{G_{L_k}}$ and such that for each $v \in S$ and each place $w | v$ of $L_k$, $\rho_k|_{G_{L_{k, w}}} = \rho_v \text{ mod }2^{m-k} H \times H'$. The case $k = 0$ is the existence of $\sigma$. The case $k = m$ implies the statement of the lemma, since $\GL_2(k) = G_m \times H'$. (Note \cite[Ch. X, Theorem 5]{Art09} implies that any collection of characters $\chi_v : G_{K_v} \to H'$ can be globalised to a character $\chi : G_K \to H'$.) For the induction step, suppose the induction hypothesis holds for a fixed value of $k$. We consider the obstruction to lifting $\rho_k$ to a homomorphism $\rho_{k+1} : G_{L_k} \to G_{k+1}$. This defines an element of $H^2(G_{L_k}, \bbZ / 2 \bbZ)$ which is locally trivial at the places of $L_k$ lying above $S$. We can therefore find an extension of the form $L_{k+1} = L_k \cdot E_{k+1}$, where $E_{k+1} / K$ is a solvable $S$-split extension, such that the image of this obstruction class in $H^2(G_{L_{k+1}}, \bbZ / 2 \bbZ)$ vanishes and so there is a homomorphism $\rho'_{k+1} : G_{L_{k+1}} \to G_{k+1}$ lifting $\rho_k|_{G_{L_{k+1}}}$. 
If $v \in S$ and $w | v$ is a place of $L_{k+1}$ then there is a character $\chi_w : G_{L_{k+1, w}} \to \bbZ / 2 \bbZ$ such that $\rho'_{k+1}|_{G_{L_{k+1, w}}} = (\rho_v \text{ mod }2^{m-(k+1)} H \times H')\cdot \chi_w$. We can certainly find a character $\chi : G_{L_{k+1}} \to \bbZ / 2 \bbZ$ such that $\chi|_{G_{L_{k+1, w}}} = \chi_w$ for each such place $w$. The induction step is complete on taking $\rho_{k+1} = \rho'_{k+1} \cdot \chi$. It remains to explain why we can choose $L$ to be CM if $K$ is. Since the extensions $E_k$ in the proof are required only to satisfy some local conditions, which are vacuous if $K$ is CM, we can choose the fields $E_k$ to be of the form $KE_k'$ where $E_k'$ is a totally real extension, in which case the field $L$ constructed in the proof is seen to be CM. \end{proof} \begin{remark}\label{utility} We remark that if $v$ is a real place of $K$ and $\sigma(c_v) \neq 1$, then there exists a lift of $\sigma|_{G_{K_v}}$ to $\GL_2(k)$ if and only if either $-1$ is a square in $k^\times$ and $\pdet \circ \sigma(c_v) = 1$, or $-1$ is not a square in $k^\times$ and $\pdet \circ \sigma(c_v) \neq 1$. We also note the utility of the `$S$-split' condition: we can add any set of places at which $\sigma$ is unramified to $S$, and in this way ensure that the $S$-split extension $L / K$ is linearly disjoint from any other fixed finite extension of $K$. \end{remark} Here is a variant. \begin{lemma}\label{lem_liftings_with_prescribed_determinant} Suppose that $p > 2$. Let $K$ be a number field, let $\sigma : G_K \to \PGL_2(k)$ be a homomorphism, and let $\chi : G_K \to k^\times$ be a character. Suppose that the following conditions are satisfied: \begin{enumerate} \item $\pdet \circ \sigma = \chi \text{ mod }(k^\times)^2$. \item For each finite place $v$ of $K$, $\sigma|_{G_{K_v}}$ and $\chi|_{G_{K_v}}$ are unramified. \item For each real place $v$ of $K$, $\sigma(c_v) \neq 1$ and $\chi(c_v) = -1$.
\end{enumerate} Then there exists a homomorphism $\rho : G_K \to \GL_2(k)$ such that $\Proj(\rho) = \sigma$ and $\det(\rho) = \chi$. \end{lemma} \begin{proof} We consider the short exact sequence of groups \[ 1 \to \{ \pm 1 \} \to \GL_2(k) \to \PGL_2(k) \times_{\Delta} k^\times \to 1, \] where the last group is the subgroup of $(g, \alpha) \in \PGL_2(k) \times k^\times$ such that $\Delta(g) = \alpha \text{ mod }(k^\times)^2$. By hypothesis the pair $(\sigma, \chi)$ defines a homomorphism $\Sigma : G_K \to \PGL_2(k) \times_\Delta k^\times$ such that for every place $v$ of $K$, $\Sigma|_{G_{K_v}}$ lifts to $\GL_2(k)$ (see Remark~\ref{utility}). The subgroup of locally trivial elements of $H^2(G_K, \{ \pm 1 \})$ is trivial, by class field theory, so $\Sigma$ lifts to a homomorphism $\rho : G_K \to \GL_2(k)$, as required. \end{proof} We now prove an analogue of Proposition \ref{realSolvableDescent} for projective representations. \begin{proposition}\label{prop_projective_solvable_descent} Let $K$ be a totally real number field and let $\sigma : G_K \to \PGL_2(k)$ be a representation of $S$-type. Suppose that there exists a solvable totally real extension $L / K$ satisfying the following conditions: \begin{enumerate} \item $p > 2$ and $\sigma|_{G_{L(\zeta_p)}}$ is absolutely irreducible. If $p = 5$, then $\sigma$ is non-exceptional. \item $\sigma|_{G_L}$ is automorphic. \end{enumerate} Then $\sigma$ is automorphic. \end{proposition} \begin{proof} By Lemma \ref{lem_tate}, we can lift $\sigma$ to a representation $\rhobar : G_K \to \GL_2(\overline{k})$. Then $\rhobar|_{G_L}$ is automorphic and we can apply Proposition \ref{realSolvableDescent} to conclude that $\rhobar$ is automorphic, hence that $\sigma$ is automorphic. \end{proof} We are now in a position to establish a large part of Theorem \ref{introthm_projective_cases}. \begin{theorem}\label{known} Let $K$ be a totally real number field and let $\sigma : G_K \to \PGL_2(k)$ be a representation of $S$-type. 
If one of the following conditions holds, then $\sigma$ is automorphic: \begin{enumerate} \item $|k| \in \{ 2, 3, 4 \}$. \item $|k| = 5$ or $9$ and $\pdet \circ \sigma$ is totally even. If $|k| = 5$, then $\sigma$ is non-exceptional. \item $|k| = 7$ and $\pdet \circ \sigma$ is totally odd. \end{enumerate} \end{theorem} \begin{proof} When $k = \F_2$ or $\F_4$, the map $\SL_2(k) \to \PGL_2(k)$ is an isomorphism, so $\sigma$ trivially lifts to a $\GL_2(k)$ representation and we can apply Theorem~\ref{thm-GL2}. The case when $|k|=3$ follows from \cite{Tunnell}. In the other cases, we can assume that $\sigma|_{G_{K(\zeta_p)}}$ is absolutely irreducible (as otherwise $\sigma$ lifts to a dihedral representation). Let $S_\infty$ be the set of infinite places of $K$ and choose a finite set $S'$ of finite places of $K$ at which $\sigma$ is unramified such that $\Gal(\overline{K}^{\ker(\sigma|_{G_{K(\zeta_p)}})}/K)$ is generated by $\{\Frob_v\}_{v \in S'}$. We can apply Lemma \ref{lem_lifting_over_solvable_extension}, see also Remark~\ref{utility}, with $S = S_\infty \cup S'$ to find a solvable, totally real extension $L / K$ such that $\sigma$ lifts to a representation $\rhobar : G_L \to \GL_2(k)$ such that $\rhobar|_{G_{L(\zeta_p)}}$ is absolutely irreducible and $\rhobar$ is not exceptional if $p = 5$. Then Theorem~\ref{thm-GL2} implies the automorphy of $\rhobar$ and Proposition \ref{prop_projective_solvable_descent} implies the automorphy of $\sigma$, as desired. \end{proof} \section{Modularity of mod $3$ representations}\label{sec_mod_3} In this section, which is a warm-up for the next one, we give a proof of the following theorem that does not depend on the Langlands--Tunnell theorem: \begin{theorem}\label{LT} Let $K$ be a totally real number field, and let $\sigma : G_K \to \PGL_2(\bbF_3)$ be a representation of $S$-type such that $\Delta \circ \sigma$ is totally odd. Then $\sigma$ is automorphic. 
\end{theorem} \begin{proof} We can assume that $\sigma$ is not dihedral; by the classification of finite subgroups of $\PGL_2(\bbF_3)$, we can therefore assume that $\sigma(G_K)$ contains $\PSL_2(\bbF_3)$. By Proposition \ref{prop_projective_solvable_descent}, we can moreover assume, after replacing $K$ by a solvable totally real extension, that $\sigma$ is everywhere unramified and that for each place $v | 2$ of $K$, $q_v \equiv 1 \text{ mod }3$ and $\sigma|_{G_{K_v}}$ is trivial. \begin{lemma} There exists a solvable totally real extension $L / K$ and a modular elliptic curve $E$ over $L$ satisfying the following conditions: \begin{enumerate} \item $\sigma(G_L)$ contains $\PSL_2(\bbF_3)$. In particular, $\sigma|_{G_L}$ is of $S$-type. \item The homomorphism $\Proj(\rhobar_{E, 3})$ is $\PGL_2(\overline{\bbF}_3)$-conjugate to $\sigma|_{G_L}$. \end{enumerate} \end{lemma} \begin{proof} The character $(\Delta \circ \sigma) \omega : G_K \to \bbF_3^\times$ is totally even, so cuts out a totally real (trivial or quadratic) extension $L / K$, and $\sigma(G_{L})$ contains $\PSL_2(\bbF_3)$ and satisfies $\Delta \circ \sigma|_{G_{L}} = \omega$. Using Lemma \ref{lem_liftings_with_prescribed_determinant}, we can find a lift $\rhobar : G_L \to \GL_2(\bbF_3)$ of $\sigma|_{G_L}$ satisfying the following conditions: \begin{itemize} \item $\det \rhobar = \overline{\epsilon}^{-1}$. \item For each place $v | 2$ of $L$, $\rhobar|_{G_{L_v}}$ is trivial. \item $\rhobar(G_L)$ contains $\SL_2(\bbF_3)$. In particular, $\rhobar|_{G_{L(\zeta_3)}}$ is absolutely irreducible. \end{itemize} We can then apply \cite[Lemma 9.7]{AKT} to conclude that there exists an elliptic curve $E / L$ satisfying the following conditions: \begin{itemize} \item There is an isomorphism $\rhobar_{E, 3} \cong \rhobar$. \item For each place $v | 2$ of $L$, $E$ has multiplicative reduction at $v$ and the valuation at $v$ of the minimal discriminant of $E$ is 3. \item $\rhobar_{E, 2}(G_L) = \SL_2(\bbF_2)$.
\end{itemize} Then \cite[p. 1237, Corollary]{Allen} implies that $E$ is modular, proving the lemma. \end{proof} We see that $\sigma|_{G_L}$ is automorphic. We can then apply Proposition \ref{prop_projective_solvable_descent} to conclude that $\sigma$ itself is automorphic, as required. \end{proof} \section{Modularity of mod $5$ representations}\label{sec_mod_5} In this section we complete the proof of Theorem \ref{introthm_projective_cases} by proving Theorem \ref{thm_mod_5} below. \begin{theorem}\label{thm_mod_5} Let $K$ be a totally real field, and let $\sigma : G_K \to \PGL_2(\bbF_5)$ be a representation of $S$-type which is non-exceptional, and such that $\Delta \circ \sigma$ is totally odd. Then $\sigma$ is automorphic. \end{theorem} \begin{proof} By the classification of subgroups of $\PGL_2(\bbF_5)$, automorphic induction, and the Langlands--Tunnell theorem, we can assume that $\sigma(G_K)$ contains $\PSL_2(\bbF_5)$. By Theorem \ref{thm:geo-lift}, Lemma \ref{lem_tate}, and Proposition \ref{prop_projective_solvable_descent} we can assume, after possibly replacing $K$ by a solvable totally real extension, that the following conditions are satisfied: \begin{itemize} \item There exists a representation $\rhobar : G_K \to \GL_2(\overline{\bbF}_5)$ such that $\Proj(\rhobar)$ is $\PGL_2(\overline{\bbF}_5)$-conjugate to $\sigma$. Moreover, $\rhobar$ is everywhere unramified. \item There exists a representation $\rho : G_K \to \GL_2(\overline{\bbQ}_5)$ lifting $\rhobar$, which is unramified almost everywhere. \item For each place $v | 5$ of $K$, $\zeta_5 \in K_v$, $\rhobar|_{G_{K_v}}$ is trivial, and $\rho|_{G_{K_v}}$ is ordinary, in the sense of \cite[\S 5.1]{Tho16}. \item Let $\chi = \det \rho$. Then $\chi \epsilon$ has finite order prime to $5$ and for each finite place $v$ of $K$, $\chi \epsilon|_{G_{K_v}}$ is unramified. In particular, $\overline{\chi}$ is everywhere unramified.
\end{itemize} Let $K' / K$ denote the quadratic CM extension cut out by the character $(\Delta \circ \sigma) \omega$. \begin{lemma} The representation $\rhobar|_{G_{K'}}$ is decomposed generic in the sense of \cite[Definition 4.3.1]{10authors}. \end{lemma} \begin{proof} It is enough to find a prime number $l$ such that $l$ splits in $K'$ and for each place $v | l$ of $K'$, $q_v \equiv 1 \text{ mod }5$ and the eigenvalues of $\rhobar(\Frob_v)$ are distinct. The argument of \cite[Lemma 7.1.5, (3)]{10authors} will imply the existence of such a prime $l$ if we can show that if $M = K'(\zeta_5)$ and $\widetilde{M} / \bbQ$ is the Galois closure of $M/\bbQ$, then $\sigma(G_{\widetilde{M}})$ contains $\PSL_2(\bbF_5)$. To see this, first let $\widetilde{K} / \bbQ$ be the Galois closure of $K / \bbQ$. Then $\widetilde{K}$ is totally real, and so $\sigma(G_{\widetilde{K}}) = \sigma(G_K) = \PGL_2(\bbF_5)$ because $\Delta \circ \sigma$ is totally odd. The extension $M \widetilde{K} / \widetilde{K}$ is abelian, so $\widetilde{M} / \widetilde{K}$ is abelian and $\sigma(G_{\widetilde{M}})$ must contain $\PSL_2(\bbF_5)$. \end{proof} By construction, $\Delta \circ \sigma|_{G_{K'}} = \overline{\epsilon}^{-1} \text{ mod }(\bbF_5^\times)^2$, so by Lemma \ref{lem_liftings_with_prescribed_determinant}, $\sigma|_{G_{K'}}$ lifts to a continuous homomorphism $\tau : G_{K'} \to \GL_2(\F_5)$ such that $\det \tau = \overline{\epsilon}^{-1}$. In particular, there is a character $\overline{\psi} : G_{K'} \to \overline{\F}_5^\times$ such that $\tau = \rhobar|_{G_{K'}} \otimes \overline{\psi}$. Let $\psi$ denote the Teichm\"uller lift of $\overline{\psi}$; then the determinant of $\rho|_{G_{K'}} \otimes \psi$ equals $\epsilon^{-1}$. \begin{lemma} The representation $\tau$ satisfies the following conditions: \begin{enumerate} \item $\tau|_{G_{K'(\zeta_5)}}$ is absolutely irreducible and $\tau$ is non-exceptional. \item $\tau$ is decomposed generic. 
\end{enumerate} \end{lemma} \begin{proof} The representation $\tau|_{G_{K'(\zeta_5)}}$ is absolutely irreducible because its projective image contains $\PSL_2(\bbF_5)$. If $\zeta_5 \in K'$ then $\sqrt{5} \in K$ and so $K' = K(\Delta \circ \sigma) = K(\zeta_5)$; this possibility is ruled out because $\sigma$ is non-exceptional. It follows that $\tau$ is non-exceptional. The representation $\tau$ is decomposed generic because $\rhobar|_{G_{K'}}$ is (and this condition only depends on the associated projective representation). \end{proof} Thanks to the lemma, we can apply \cite[Lemma 9.7]{AKT} and \cite[Corollary 9.13]{AKT} to conclude the existence of a modular elliptic curve $E$ over $K'$ such that $\rhobar_{E, 5} \cong \tau$ and for each place $v | 5$ of $K'$, $E$ has multiplicative reduction at the place $v$. We can then apply the automorphy lifting theorem \cite[Theorem 8.1]{AKT} to conclude that $\rho|_{G_{K'}} \otimes \psi$ is automorphic, hence that $\rho|_{G_{K'}}$ is automorphic. It follows by cyclic descent \cite{L} that $\rho$ and hence $\sigma$ are also automorphic, and this completes the proof. \end{proof} \bibliographystyle{alpha} \bibliography{ModLiftBib} \end{document}
Stay on target Sega’s bright blue rodent mascot is finally making his way to the silver screen after a number of false starts, and we couldn’t be happier for the little guy. Sure, the movie looks like it might be… not so great, but Sonic is still an icon in the video game universe and any additional exposure can only lead to better Sonic games, which is the end goal here. As the star of comics, cartoons and more, Sonic has amassed quite a collection of tie-in toys, and here are eleven of our favorites. Sonic Nendoroid Frequent readers of this column know how often we lean on Good Smile’s line of Nendoroid figures, because they somehow managed to capture just about every pop culture license known to man for their poseable little guys and gals. Their take on Sonic is solid, with his trademark red sneakers and active posture delivered with aplomb. These things always come with a bunch of accessories, and the hedgehog is no different – you get a ring, one of the Chaos Emeralds, an item box, and even a stand that lets you pose him in dynamic running stances. Get it ($133) at Amazon.com Eggman Plush Eggman – or Dr. Robotnik, if you’re nasty – has always been a hard nemesis to wrap our heads around. Sure, he wants to transform all of Mobius’s animal buddies into robots, or collect Chaos Emeralds, but why? What’s the dude’s motivation beyond just general evilness? We’re sure we could sit down and read like 80 Sonic comics and get an answer, but there are some things even we can’t survive. Take the bad guy home with this extremely funny plush that renders the mustachioed bad guy in SD style for extra huggability. Get it ($18) at Amazon.com Sonic Crash Course Board Game Transferring the frantic speed of the Sonic franchise to the domain of board games seems like it would be a fool’s errand, but Crash Course from IDW actually pulls it off. 
Up to four players compete in this madcap race that sees the track grow as players lay down cards, creating hazards for the others to overcome as they try to get the Chaos Emeralds before the rest of the pack. Every game is different as the raceway grows and changes, giving this one tons of replayability. It’s a solid addition to your groaning game shelf. Get it ($26) at Amazon.com Kidrobot Sonic Action Diorama Capturing the sheer speed of Sonic in a toy is a fool’s errand, but we have to give props to the artisans at Kidrobot for giving it a shot with this cool desktop diorama. Measuring eight inches high, it has the hedgehog in full spin dash down a gentle slope of Green Hill Zone, about to rocket into a line of rings that will give him protection from whatever bad news lurks down the line. The vibrant colors look like they were ripped straight from the Genesis. Get it ($52) at Amazon.com Shadow The Hedgehog Figure First appearing in Sonic Adventure 2, Shadow was Sega’s attempt to introduce some grim and gritty 21st century flavor to their franchise. A brooding antihero, he was originally only supposed to make one appearance but proved so popular with fans that he was even given his own spin-off game in 2005, which gave us the bizarre image of an anthropomorphic hedgehog wielding a submachine gun. He’s fallen out of favor a little in recent years, but Shadow still has a devoted fanbase. If that describes you, this dope five-inch Japanese statuette will look great on your shelf. Get it ($23) at Amazon.com Tails Plush One of the things that the Sonic franchise has been very dedicated to is creating platforming action that feels different when the player picks a different character. In Sonic 2 Tails played the same as his hedgehog friend, but with 1993’s Sonic Chaos we had the opportunity to use his flight ability for the first time, and that has featured heavily in levels designed for him in myriad games. 
This 18-inch plush take on the hovering two-tailed fox will be a boon companion to you on your adventures as well. Get it ($29) at Amazon.com Green Hill Zone Pinball Track One thing that distinguished Sonic from the competition early on was the segments where you’d rocket off a bumper and into a sequence of loops and launches that essentially took all control away from the player, letting you briefly just luxuriate in the wild speeds that Blast Processing could provide. It reached back through the arcade DNA to pinball, a comparison that has featured in numerous Sonic sequels. This dope little setup lets you grab that feeling anew, with a plunger that jets a Sonic Sphere through a loop, to a bumper and beyond. Get it ($25) at Amazon.com Super Sonic Statue Much like his cousin Goku (please do not fact check this), Sonic is capable of attaining a more powerful form in which his hair turns blonde and sticks straight up in the air. Super Sonic first appeared in Sonic 2 after our hero collected all seven Chaos Emeralds and unlocked his true ability, and he’s shown up in numerous games since. Grab the power of the Emeralds for yourself with this First 4 statue that stands over a foot tall and is painted with a special technique that gives Super Sonic a unique reflective yellow sheen. Get it ($330) at Amazon.com Sonic Funko Pop With Ring Has Sega ever offered an explanation for exactly what the iconic gold rings that fuel Sonic and his pals are? They appear everywhere, seemingly out of nowhere, and give the bearer the ability to survive nearly fatal injuries as long as they carry even one of them. But where do they come from? Is there an alternate universe version of Sauron on Mobius who got really ambitious or something? The world may never know, but at least you can own this sweet Funko rendition of our blue speedster holding on to a ring to keep himself safe.
Get it ($15) at Amazon.com Knuckles Action Figure One of the most popular of Sonic’s early supporting cast, Knuckles was introduced in Sonic 3 as an antagonist who got bamboozled by Eggman into opposing our heroes, and then became playable in Sonic & Knuckles shortly afterwards. He’s a great illustration of how Sega was already tweaking the formula, because he was a slower character more focused on smashing chumps with his mighty fists than feats of speed. This dope action figure has over a dozen points of articulation for ultra-dramatic posing. Get it ($13) at Amazon.com Amy Rose Plush Wielding a mighty hammer, Amy Rose was introduced as a love interest to Sonic as a result of a request from Sega’s licensing department, who wanted a character that would bring girl appeal to the franchise. She was first introduced in the manga series, where her crush on our hero was front and center. Her first video game appearance, in Sonic CD, was the traditional “damsel in distress” role, but as time went by Amy became more confident, capable and playable. This plush rendition of the pink hedgehog stands over a foot tall and is ready to rocket off for adventure. Get it ($29) at Amazon.com
Dire Threat Posted in Weirdness on May 17, 2006 Humanity faces a dire threat. There is a sinister presence among us. It wants to convert the human race to a source of lean protein. It’s very important that those at risk stop running and start smoking (More on this at Signs of the Times). I’m Nash Wood, a recovered runner, and I want to [...]
Here Is Fucked Up's "Secret" Comp 'Raise Your Voice Joyce' You can stream the 'Dose Your Dreams' spinoff release now Published Oct 24, 2018

While Fucked Up only just released their album Dose Your Dreams, the Toronto punk crew also "secretly" recorded a series of spinoff releases to go along with it. One of those is apparently the comp Raise Your Voice Joyce: Contemporary Shouts from Contemporary Voices, which you can now stream online. While Fucked Up have remained relatively mum about the comp, its product description — which makes zero mention of the band — lays it all out like this: Raise Your Voice Joyce: Contemporary Shouts from Contemporary Voices charts a small piece of the limitless history of women in revolt. Eight tracks are presented here; each one is a history, a story, a biography, told through disparate styles, from early anarcho-punk and UK '82 to DIY snapshots of goth crossover, all bound together to cement an activated and radical disruption vita. "Joyce" is the revolting woman; a constant of history. Contributions from members of today's UK and European punk scenes including Nekra, Good Throb, Arms Race, Sauna Youth, Terrible Feelings, and more. This compilation is a command, a demand and a well-spoken message for tomorrow. Features: Bryony Beynon, Manuela Iwansson, Spooky, Crom-Lus / Poppy Edwards, Jen Calleja, Ola Herbich, Ruby Mariani, and Kaila Stone. While the comp is getting a physical release on Static Shock Records, you can now stream all of Raise Your Voice Joyce: Contemporary Shouts from Contemporary Voices down below. Revisit Exclaim!'s recent cover story on Fucked Up over here.
TITLE: Prove if $G:V \to V^*$ is an isomorphism then $G \circ f = f^*\circ G$ QUESTION [1 upvotes]: Let $V$ be a finite-dimensional linear space over $\mathbb{R}$ with dot product $\langle - , - \rangle:V \times V \to \mathbb{R}$ and let $f:V\to V$ be a linear transformation such that $\forall \alpha, \beta \in V, \langle f(\alpha), \beta\rangle=\langle\alpha, f(\beta)\rangle$. Prove that if $G:V \to V^*$ is an isomorphism with condition: $G(\alpha)=\psi \iff \forall \beta\in V \ \ \psi(\beta)=\langle\alpha, \beta\rangle$ then $G \circ f = f^*\circ G$ REPLY [0 votes]: Take the direct approach! If $v\in V$, $G(f(v)) = (G\circ f)(v)$. By assumption, for all $w\in V$, $((G\circ f)(v))(w) = \langle f(v),w \rangle$. How about $f^\ast \circ G$? For $v,w\in V$, we have $((f^\ast \circ G)(v))(w) = (f^\ast (G(v)))(w) = (G(v))(f(w)) = \langle v,f(w) \rangle$. Since $\langle f(v),w \rangle = \langle v,f(w) \rangle$ by hypothesis, the two functionals agree on every $w$, so $G \circ f = f^\ast \circ G$.
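The computation above can be checked numerically. The following sketch (my own illustration, not part of the question) works in $\mathbb{R}^4$ with the standard dot product and uses a symmetric matrix $A$ to play the role of a self-adjoint $f$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric matrix represents a self-adjoint f: <f(a), b> = <a, f(b)>.
A = rng.normal(size=(4, 4))
A = A + A.T

def G(v):
    # G(v) is the functional w -> <v, w>
    return lambda w: float(np.dot(v, w))

def f(v):
    return A @ v

def f_star(psi):
    # Dual map: (f* psi)(w) = psi(f(w))
    return lambda w: psi(f(w))

# Check that (G∘f)(v) and (f*∘G)(v) agree as functionals on test vectors.
for _ in range(10):
    v, w = rng.normal(size=4), rng.normal(size=4)
    lhs = G(f(v))(w)       # <f(v), w>
    rhs = f_star(G(v))(w)  # <v, f(w)>
    assert abs(lhs - rhs) < 1e-9
```

If `A` is replaced by a non-symmetric matrix, the assertion fails, which matches the role of the self-adjointness hypothesis in the proof.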
CURT Receiver Hitches - 120502 Class II #:120502 - Comes with the receiver hitch, 2" Euro ball mount, pin and clip - Install time: 20 minutes - GTW: up to 3,500 lbs - No drilling required - Download installation instructions
TITLE: An example of a space which is locally relatively contractible but not contractible? QUESTION [1 upvotes]: A space $X$ is called locally contractible if it has a basis of neighbourhoods which are themselves contractible spaces. CW complexes and manifolds are locally contractible. On the other hand, the path fibration $PX \to X$ (the space of based paths, with evaluation at the endpoint as projection) admits local sections iff $X$ is $\infty$-well-connected (or locally relatively contractible, or semi-locally contractible), that is, has a basis of neighbourhoods $N$ such that the inclusion maps $N\hookrightarrow X$ are null homotopic. Another use of this concept is by Dold, when he proves a Dold fibration (a map with the Weak Covering Homotopy Property) over an $\infty$-well-connected space is locally homotopy trivial. What, then, is an example of a space which is $\infty$-well-connected but not locally contractible? Edit: Note that the 1-dimensional version of this is a space that is semilocally 1-connected (or 1-well-connected, in my revisionist terminology), but not locally 1-connected. REPLY [7 votes]: The same counterexample as for semilocally 1-connected works: namely, you can take the cone on the Hawaiian earring space. The space itself is contractible, but no sufficiently small neighborhoods of the "bad" point at the base of the cone are 1-connected (hence not contractible).
TITLE: A set contains $\{1,2,3,4,5,\ldots,n\}$ where $n$ is an even number. How many subsets that contain only even numbers are there? QUESTION [10 upvotes]: A set contains $\{1,2,3,4,5,\ldots,n\}$ where $n$ is an even number. How many subsets of the set contain only even numbers? This is my solution, is it valid? The number of single-element subsets that contain only an even number is $n/2$. Each element is either in or not in the subset, hence $2$ choices. Hence $2^{(n/2)}$ would give us all possible combinations of subsets that contain only even numbers, including the empty set. Hence my answer is given by $2^{(n/2)} - 1$, subtracting the $1$ because of the empty set $C(n,0)=1$. REPLY [0 votes]: Most of your argument is correct, but your answer is wrong. The empty set is a subset that contains only even numbers, so it should not be excluded from the count. In fact, my approach would be to recognise that one is asking for any subset of the set of even numbers from $\{1,2,\ldots,n\}$ (the condition of only being allowed to choose even numbers means the odd numbers from that set can be ignored), and that set of even numbers has $n/2$ elements, and therefore has $2^{n/2}$ subsets.
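The corrected count is easy to sanity-check by brute force. This sketch (my own, not from the question) enumerates every subset of $\{1,\ldots,n\}$ for small even $n$ and confirms the answer $2^{n/2}$, empty set included:

```python
from itertools import combinations

def even_only_subsets(n):
    """Count subsets of {1, ..., n} whose elements are all even."""
    elems = list(range(1, n + 1))
    count = 0
    for r in range(len(elems) + 1):
        for subset in combinations(elems, r):
            if all(x % 2 == 0 for x in subset):
                count += 1  # the empty set qualifies vacuously (r = 0)
    return count

# The brute-force count matches 2^(n/2), not 2^(n/2) - 1.
for n in (2, 4, 6, 8):
    print(n, even_only_subsets(n), 2 ** (n // 2))
```

For example, for $n=4$ the qualifying subsets are $\emptyset$, $\{2\}$, $\{4\}$, $\{2,4\}$, which is $2^2 = 4$.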
\section{Success exponent} \label{sec:success-exponent} From Lemma~\ref{lem:E}, to obtain a lower bound of the achievable\footnote{Achievable here does \emph{not} refer to achievable by Eve, but achievable by Alice as defined in Definition~\ref{def:achievable}.} success exponent $S_e$ \eqref{eq:S_e}, it suffices to compute a lower bound $`g(V)$ on the exponent of the expected average fraction $`b(V,`Q,`J)$ for any $`J$ satisfying the guessing rate \eqref{eq:R_`l}. Consider first some realization $`q$ of the random code $`Q$ in Definition~\ref{def:random}. \begin{align*} `b(V,`q,`J) &= \frac1{J\abs*{T_{V}(\Mc_{111})}} \Avg_{l,m} \sum_{j\in J} \abs*{`J(l) \cap T_{V}(\Mc_{jlm})} && \text{by \eqref{eq:`b}} \end{align*} since $\abs*{T_{V}(\Mc_{jlm})}$ depends on $\Mc_{jlm}$ only through its type $Q$ (and $n$). The fraction can be made small if $\sum_j \abs*{`J(l) \cap T_{V}(\Mc_{jlm})}$ on the R.H.S.\ is made small for each $l$ and $m$. Imagine $`J(l)$ as a net that Eve uses to cover the shells $\Set{T_V(\Mc_{jlm}):j\in J}$ owned by Alice as much as possible. Roughly speaking, since the net cannot be too large due to the list size constraint, Alice should spread out the shells as much as possible to minimize her loss. We will refer to this heuristically desired property of $`q$ that the $V$-shells $\Set{T_V(\Mc_{jlm}):j\in J}$ spread out for every $V$, $m$ and $l$ as the \emph{overlap property}.\footnote{Though not explicitly stated, this notion of overlap property is also evident in \cite{csiszar1978} for the typical case when $V$ is close to $W_e$. (See Lemma~2 of \cite{csiszar1978}) For the purpose of computing the exponent, we extend it to the atypical case of $V$ and relax the extent that the shells have to spread out by allowing subexponential amount of overlap.} This is illustrated in \figref{fig:hotspot}, in which the configuration on the left has $\sum_{j=1}^3 \abs{`J(1)\cap T_V(\Mc_{j11})}$ three times larger than the one on the right. 
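The overlap heuristic above can be illustrated on a toy numerical example. The following sketch (an illustration only, with hypothetical parameters not taken from this paper: binary alphabets, $n = 12$, $Q$ the uniform type, and $V(1|0)=1/6$, $V(1|1)=5/6$) draws $J$ codewords uniformly from the type class $T_Q$ and counts how many of the shells $T_V(\RMX_j)$ contain a fixed $\Mz \in T_{QV}$; the overlap count at $\Mz$ stays a tiny fraction of $J$, as the Overlap Lemma predicts.

```python
import random

random.seed(0)

n = 12
# Fixed z of the output type QV (six 1s), so z lies in T_{QV}.
z = [1] * 6 + [0] * 6

def in_shell(x, z):
    # For V(1|x=1) = 5/6 and V(1|x=0) = 1/6, z is in T_V(x) iff exactly
    # 5 of the six positions with x = 1 carry z = 1, and exactly 1 of
    # the six positions with x = 0 does.
    ones = sum(zi for xi, zi in zip(x, z) if xi == 1)
    zeros = sum(zi for xi, zi in zip(x, z) if xi == 0)
    return ones == 5 and zeros == 1

def draw_codeword():
    # Uniform draw from the type class T_Q (six 1s, six 0s).
    x = [1] * 6 + [0] * 6
    random.shuffle(x)
    return x

J = 18       # roughly exp{n I(Q, V)} for these parameters
trials = 2000
counts = [sum(in_shell(draw_codeword(), z) for _ in range(J))
          for _ in range(trials)]

# The exact per-codeword hit probability is 36 / C(12, 6), so the
# expected overlap count is J * 36 / 924, approximately 0.7 out of 18.
print(max(counts), sum(counts) / trials)
```

Large overlap counts are exponentially rare in this experiment, which is the finite-$n$ shadow of the doubly exponential bound in the lemma below.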
\begin{figure} \centering \input{fig_hotspot} \caption{Effectiveness of stochastic encoding} \label{fig:hotspot} \end{figure} Intuitively, a random code has the \emph{overlap property} on average since it uniformly spaces out the codewords. This is made precise with the following \emph{Overlap Lemma}. \begin{lem}[Overlap] Let $\RMX_j\;(j=1,\dots,J)$ be $n$-sequences drawn uniformly and independently from $T_Q^{(n)}\subset \setX^n$. For all $J\in `Z^+$, $`d>0$, $n\geq n_0(`d,\abs\setX\abs\setZ)$, $\Mz\in \setZ^n$, $Q\in\rsfsP_n(\setX)$, $V\in\rsfsV_n(Q,\setZ)$ such that $\lfloor\exp\Set{nI(Q,V)}\rfloor\geq J$, we have, \begin{align*} \Pr\Set*{\sum_{j\in J} \ds1\Set{\Mz\in T_V(\RMX_j)}\geq \exp(n`d)} \leq \exp(-\exp(n`d)) \end{align*} where $\ds1$ is the indicator function and $n_0$ is some integer-valued function that depends only on $`d$ and $\abs\setX\abs\setZ$. \end{lem} In words, the lemma states that the chance of having exponentially ($\exp(n`d)$) many shells (from $\Set{T_V(\RMX_j):j\in J}$) overlapping at a spot ($\Mz$) is doubly exponentially decaying ($\exp(-\exp(n`d))$), provided that there are not enough shells to fill the entire space ($T_{QV}\subset \setZ^n$) in which they can possibly reside (i.e.\ $J\leq \lfloor\exp\Set{nI(Q,V)}\rfloor$). For the case of interest, we will prove the following more general form of the lemma with conditioning. \begin{lem}[Overlap (with conditioning)] \label{lem:overlap} Let $Q:=Q_0\circ Q_1\;(Q_0\in \rsfsP_n(\setU), Q_1\in\rsfsV_n(Q_0,\setX))$ be a joint type, $\RMU$ be a random variable distributed over $T_{Q_0}$, and $\RMX_j\;(j=1,\dots,J)$ be $n$-sequences drawn uniformly and independently from $T_{Q_1}(\RMU)\subset \setX^n$.
For all $J\in `Z^+$, $`d>0$, $n\geq n_0(`d,\abs\setU\abs\setX)$, $\Mz\in \setZ^n$, $Q:=Q_0\circ Q_1$, $V\in\setV_n(Q,\setZ)$ such that $\lfloor\exp\Set{nI(Q_1,V|Q_0)}\rfloor\geq J$, we have, \iftwocolumn \begin{multline} \Pr\Set*{\sum_{j\in J} \ds1\Set*{\Mz\in T_V(\RMU\circ \RMX_j)}\geq \exp(n `d)} \\ \leq \exp\Set{-\exp(n`d)} \label{eq:overlap} \end{multline} \else \begin{align} \Pr\Set*{\sum_{j\in J} \ds1\Set*{\Mz\in T_V(\RMU\circ \RMX_j)}\geq \exp(n `d)} \leq \exp\Set{-\exp(n`d)} \label{eq:overlap} \end{align} \fi where $\circ$ denotes element-wise concatenation~\eqref{eq:circ}, and \begin{equation} \label{eq:cI} \begin{aligned} I(Q_1,V|Q_0)&:=H(Q_1|Q_0)-H(V|Q_0\circ Q_1)\\ &=H(Q_1|Q_0)-H(V|Q) \end{aligned} \end{equation} denotes the \emph{conditional mutual information}. (cf.\ \eqref{eq:I}) \end{lem} \begin{proof} For notational simplicity, consider the case when $\exp(n`d)$ and $\exp\Set{nI(Q_1,V|Q_0)}$ are integers.\footnote{The case when $\exp(n`d)$ and $I(Q_1,V|Q_0)$ are not integers can be derived by taking their ceilings or floors and grouping the fractional increments into some dominating terms.} Consider some subset $\setJ$ of $\Set{1,\dots,J}$ with $\abs\setJ=\exp(n`d)$. 
Since the events $\Mz\in T_V(\RMU\circ\RMX_j)\;(j=1,\dots,J)$ are conditionally mutually independent given $\RMU=\Mu\in T_{Q_0}$, \iftwocolumn \begin{multline*} \Pr\Set*{\Mz\in \bigcap\nolimits_{j\in \setJ} T_V(\RMU\circ\RMX_j)}\\ \begin{aligned} &= \sum_{\Mu\in T_{Q_0}} P_{\RMU}(\Mu) \Pr\Set{\Mz\in T_V(\Mu\circ\RMX_j)}^{\exp(n`d)}\\ &\leq \exp\Set*{-n[I(Q_1,V|Q_0)-\frac{`d}2]\exp(n`d)} \end{aligned} \end{multline*} \else \begin{align*} \Pr\Set*{\Mz\in \bigcap\nolimits_{j\in \setJ} T_V(\RMU\circ\RMX_j)} &= \sum_{\Mu\in T_{Q_0}} P_{\RMU}(\Mu) \Pr\Set{\Mz\in T_V(\Mu\circ\RMX_j)}^{\exp(n`d)}\\ &\leq \exp\Set*{-n[I(Q_1,V|Q_0)-\frac{`d}2]\exp(n`d)} \end{align*} \fi for $n\geq n'_0(`d,\abs\setU\abs\setX)$, where the last inequality is by Lemma~\ref{lem:rcwd} using the uniform distribution of $\RMX_j$ and Lemma~1.2.5 of \cite{csiszar1981} on the cardinality bounds of conditional type class. Since $\exp\Set{nI(Q_1,V|Q_0)}\geq J$, the number of distinct choices of $\setJ$ is, \begin{align*} {J \choose \exp(n`d)} &\leq {\exp(nI(Q_1,V|Q_0)) \choose \exp(n`d)}\\ &\leq \exp\Set*{[\log e + n(I(Q_1,V|Q_0)-`d)]\exp(n`d)} \end{align*} where the last inequality is by Lemma~\ref{lem:choose}. By the union bound, L.H.S.\ of \eqref{eq:overlap} is upper bounded by the product of the last two expressions, i.e. \begin{align*} {J \choose \exp(n`d)} \Pr\Set*{\Mz\in \bigcap_{j\in \setJ} T_V(\RMU\circ\RMX_j)} \end{align*} Substituting the previously derived bounds for each term gives the desired upper bound $\exp(-\exp(n`d))$ when $n\geq n_0(`d,\abs\setU\abs\setX)$. \end{proof} Consider now a sequence of random codes $`Q^{(n)}$ defined in Definition~\ref{def:randomseq}. The desired bound on the exponent of $`b(V,`Q,`J)$ can be computed as follows using the Overlap Lemma. \begin{lem}[Success exponent] \label{lem:woverlap} Consider the random code sequence $`Q$ defined in Definition~\ref{def:randomseq}. 
For any sequence of list decoding attacks $`j$ satisfying the guessing rate $R_{`l}$ \eqref{eq:R_`l}, \iftwocolumn \begin{multline*} \liminf_{n\to`8} -\frac1n \log `b(V,`Q,`J)\\ \geq \abs{R_L - R_{`l} + \abs{R_J - I(Q_1,V|Q_0)}^-}^+ \end{multline*} \else \begin{align*} \liminf_{n\to`8} -\frac1n \log `b(V,`Q,`J) \geq \abs{R_L - R_{`l} + \abs{R_J - I(Q_1,V|Q_0)}^-}^+ \end{align*} \fi where $\abs{a}^+:=\max\Set{0,a}$ and $\abs{a}^-:=\min\Set{0,a}$. \end{lem} \begin{proof} By the Overlap Lemma~\ref{lem:overlap}, for any $`d>0$ and $n\geq n_0(`d)$, \iftwocolumn \begin{multline*} \Pr\Set*{\extendvert{\sum_{j\in \setJ_k(V)} \ds1\Set*{\Mz\in T_V(\RMC_{jlm})}\geq \exp(n `d) | `Q_0=`q_0}}\\ \leq \exp\Set{-\exp(n`d)} \end{multline*} \else \begin{align*} \Pr\Set*{\extendvert{\sum_{j\in \setJ_k(V)} \ds1\Set*{\Mz\in T_V(\RMC_{jlm})}\geq \exp(n `d) | `Q_0=`q_0}} \leq \exp\Set{-\exp(n`d)} \end{align*} \fi where $`Q_0$ is the codebook $\Set{\RMU_m}_{m\in M}$, $`q_0$ is an arbitrary realization, and $\Set{\setJ_k(V)}_{k\in K_V}$ is a partitioning of $\Set*{1,\dots,J}$ defined as, \begin{align*} \setJ_k(V) &:= \Set*{(k-1)J_V+1,\dots,\min\Set{k J_V,J}}\\ J_V &:= `1\lfloor \exp\Set{nI(Q_1,V|Q_0)}`2\rfloor\\ K_V &:= `1\lceil J/J_V `2\rceil \end{align*} The expectation of the sum of indicators on the left can then be bounded as follows, \iftwocolumn \begin{multline*} \opE`1(\extendvert{\sum\nolimits_{j\in \setJ_k(V)} \ds1\Set*{\Mz\in T_V(\RMC_{jlm})} | `Q_0=`q_0} `2)\\ \begin{aligned} &\leq \exp(n`d)\cdot 1 + J \cdot \exp\Set{-\exp(n`d)}\\ &\leq \exp(n2`d) \end{aligned} \end{multline*} \else \begin{align*} \opE`1(\extendvert{\sum\nolimits_{j\in \setJ_k(V)} \ds1\Set*{\Mz\in T_V(\RMC_{jlm})} | `Q_0=`q_0} `2) &\leq \exp(n`d)\cdot 1 + J \cdot \exp\Set{-\exp(n`d)}\\ &\leq \exp(n2`d) \end{align*} \fi where the last inequality is true for $n\geq n_0(`d,R_J,\abs\setU\abs\setX)$ by \eqref{eq:R_J}.
Since $T_V(\RMC_{jlm})$ is contained by $T_{Q_1V}(\RMU_m)$, \begin{align*} &\sum_{\Mz\in `J(l)} \opE`1(\extendvert{\sum\nolimits_{j\in \setJ_k(V)} \ds1\Set*{\Mz\in T_V(\RMC_{jlm})} | `Q_0=`q_0 }`2)\\ &= \sum_{\Mz\in `J(l)\cap T_{Q_1 V}(\Mu_m)} \opE`1(\extendvert{\sum_{j\in \setJ_k(V)} \ds1\Set*{\Mz\in T_V(\RMC_{jlm})} | `Q_0=`q_0} `2)\\ &\leq \exp(n2`d) \abs*{`J(l)\cap T_{Q_1 V}(\Mu_m)} \end{align*} By linearity of expectation, \iftwocolumn \begin{multline*} \opE`1( \extendvert{\sum\nolimits_{j\in \setJ_k(V)} \abs*{ `J(l)\cap T_V(\RMC_{jlm})} | `Q_0=`q_0} `2)\\ \leq \exp(n2`d) \abs*{`J(l)\cap T_{Q_1 V}(\Mu_m)} \end{multline*} \else \begin{align*} \opE`1( \extendvert{\sum\nolimits_{j\in \setJ_k(V)} \abs*{ `J(l)\cap T_V(\RMC_{jlm})} | `Q_0=`q_0} `2) \leq \exp(n2`d) \abs*{`J(l)\cap T_{Q_1 V}(\Mu_m)} \end{align*} \fi Summing both sides over $k\in K_V$, \iftwocolumn \begin{multline*} \opE`1( \extendvert{\sum\nolimits_{j\in J} \abs*{ `J(l)\cap T_V(\RMC_{jlm})} | `Q_0=`q_0} `2)\\ \leq \exp(n2`d) K_V \abs*{`J(l)\cap T_{Q_1 V}(\Mu_m)} \end{multline*} \else \begin{align*} \opE`1( \extendvert{\sum\nolimits_{j\in J} \abs*{ `J(l)\cap T_V(\RMC_{jlm})} | `Q_0=`q_0} `2) \leq \exp(n2`d) K_V \abs*{`J(l)\cap T_{Q_1 V}(\Mu_m)} \end{align*} \fi Summing both sides over $l\in L$ and applying the list size constraint on $`J$ in Lemma~\ref{lem:list} to the R.H.S., \iftwocolumn \begin{multline*} \opE`1( \extendvert{\sum\nolimits_{j\in J,l\in L} \abs*{ `J(l)\cap T_V(\RMC_{jlm})} | `Q_0=`q_0} `2) \\ \leq \exp(n2`d) K_V `l \abs*{T_{Q_1 V}(\Mu_m)} \end{multline*} \else \begin{align*} \opE`1( \extendvert{\sum\nolimits_{j\in J,l\in L} \abs*{ `J(l)\cap T_V(\RMC_{jlm})} | `Q_0=`q_0} `2) \leq \exp(n2`d) K_V `l \abs*{T_{Q_1 V}(\Mu_m)} \end{align*} \fi Averaging both sides over $m\in M$, dividing by the constant $JL\abs*{T_V(\RMC_{jlm})}$ and taking the expectation over all possible realizations of $`q_0$ gives, \begin{align*} `b(V,`Q,`J) & \leq \exp(n2`d) \frac{K_V `l}{JL} \frac{\abs*{T_{Q_1 
V}(\Mu_1)}}{\abs*{T_{V}(\Mc_{111})}} \end{align*} To compute the desired exponent from the last inequality, define the inequality in the exponent $\dotleq$ (and similarly $\dotgeq$) as follows, \begin{align} a_n \dotleq b_n \iff \limsup_{n\to`8} \frac1n \log a_n \leq \liminf_{n\to`8} \frac1n \log b_n \end{align} Then, $K_V\dotleq \exp\Set{n\abs{R_J-I(Q_1,V|Q_0)}^+}$, $J\dotleq \exp\Set{nR_J}$ by \eqref{eq:R_J}, $L\dotgeq \exp\Set{nR_L}$ by \eqref{eq:R_L}, $`l\dotleq \exp\Set{nR_{`l}}$ by \eqref{eq:R_`l}, and $\abs*{T_{Q_1 V}(\Mu_1)}/\abs*{T_{V}(\Mc_{111})}$ is $\dotleq \exp\Set{nI(Q_1,V|Q_0)}$. Combining these, $`b(V,`Q,`J) $ is $\dotleq$ the following expression, \iftwocolumn \begin{multline*} \exp\Set{-n[R_L-R_{`l}+[R_J-I(Q_1,V|Q_0)]\\ -\abs{R_J-I(Q_1,V|Q_0)}^+]} \end{multline*} \else \begin{align*} \exp\Set{-n[R_L-R_{`l}+[R_J-I(Q_1,V|Q_0)]-\abs{R_J-I(Q_1,V|Q_0)}^+]} \end{align*} \fi To obtain the desired bound, simplify this with the identity $\abs{a}^-\equiv a-\abs{a}^+$, and the fact that $`b(V,`Q,`J)\leq 1$. \end{proof} \endinput Consider first the list size constraint $\norm {`j} \leq `l$. It translates to the constraint on the decision region $`J$ as follows, which is illustrated in \figref{fig:list}. \begin{lem}[List size constraint] \label{lem:list} Given positive integers $`l$ and $L$ with $`l\leq L$, list decoder $`j:\setZ^n\mapsto 2^{\Set{1,\dots,L}}$ and its decision region map $`J$ with $l\in `j(\Mz) \iff \Mz \in `J(l)$ for all $l\in L$ and $\Mz\in \setZ^n$, then $\norm{`j}=`l$ (i.e. $\abs*{`j(\Mz)}=`l$ for all $\Mz\in\setZ^n$) iff for any $\setS \subset \setZ^n$ and $\setL \subset \Set{1,\dots,L}$, \begin{align} \sum_{l\in\setL} \abs*{`J(l) \cap \setS} \leq \abs{\setS} \min\Set{`l,\abs{\setL}} \label{eq:list} \end{align} with equality (and the minimum equal to $`l$) if $\setL=\Set{1,\dots,L}$. \end{lem} \begin{proof} Consider proving the `if' part.
\begin{align*} \sum_{l\in\setL} \abs*{`J(l) \cap \setS} &= \sum_{\Mz \in \setS } \sum_{l\in\setL} \ds1\Set{ l \in `j(\Mz) }\\ &\leq \sum_{\Mz \in \setS } \min\Set{`l,\abs\setL} \\ &= \abs\setS \min\Set{`l,\abs\setL} \end{align*} where we have swapped the order of summation in the first equality, and applied $\norm{`j}=`l$ in the inequality. If $\setL=\{1,\dots,L\}$, then $\sum_{l\in\setL} \ds1\{ l \in `j(\Mz)\}=\norm{`j}=`l$ instead, which gives the desired equality. In \figref{fig:listb}, $\sum_{l\in\setL} \abs*{`J(l) \cap \setS}$ is the sum over all elements in the indicator matrix $[\ds1\Set{l_k\in `j(\Mz_i)}]_{i\in\abs\setS,k\in \abs\setL}$. The $i$-th row indicates a subset of guesses in the guessing list for $\Mz_i$. Thus, by the list size constraint, the row sum cannot exceed $`l$. Since there are at most $\abs\setL$ columns, the row sum cannot exceed $\abs\setL$ either. The sum of the matrix is therefore at most $\abs\setS \min\Set{`l,\abs\setL}$ as desired. For the equality case, consider \figref{fig:lista}. Each row now indicates all the guesses in the guessing list for the corresponding observation on the left. Hence, each row sums to $`l$, and the entire matrix sums to $\abs\setS`l$ as desired. Consider proving the `only if' part. Choose $\setS$ to be a singleton set $\Set{\Mz}$ and $\setL$ to be $\Set{1,\dots,L}$. Then, by the premise \eqref{eq:list}, \begin{align*} \sum_{l\in L} \abs*{`J(l) \cap \Set{\Mz}} &= `l \end{align*} But the L.H.S.\ is $\sum_{l\in L} \ds1\Set{\Mz\in `J(l)}$ or equivalently $\sum_{l\in L} \ds1\Set{l\in `j(\Mz)}$, which is $\norm{`j}$ as desired. In \figref{fig:lista}, consider $\setS=\Set*{\Mz_1}$. Then, the indicator matrix reduces to a row vector, which indicates all the guesses in the guessing list for $\Mz_1$. Since the matrix sum $`l$ is also the row sum, there must be $`l$ guesses in the guessing list. This is true for every $\Mz\in\setZ^n$ since the choice of $\Mz_1$ is arbitrary.
This gives the desired list constraint $\norm{`j}=`l$. \end{proof} \begin{figure} \centering \input{fig_list} \caption{Illustration of the list size constraint} \label{fig:list} \end{figure} The list size constraint implies, in particular, that the decision regions $`J(l)$ cannot all be very large, since one of the implied constraints is that $\sum_{l}\abs{`J(l)}\leq \abs{\setZ}^n`l$. Since Eve would like to have $\sum_j \abs*{`J(l)\cap T_V(\Mc_{jlm})}$ as large as possible, she would recruit $\Mz$ in $`J(l)$ if it is contained in many $V$-shells from $\Set{T_V(\Mc_{jlm})}_j$. If $\Mz$ is not contained in any $V$-shell, recruiting it in $`J(l)$ may violate the list constraint without any benefit in increasing Eve's success probability. Roughly speaking, Eve would target $`J(l)$ at the hotspots where many $V$-shells from $\Set{T_{V}(\Mc_{jlm})}_j$ overlap. This is illustrated on the left of \figref{fig:hotspot}. Eve targets $`J(1)$ to the hotspots in the grey area where many $V$-shells overlap. This makes the sum $\sum_{j=1}^3 \abs{`J(1)\cap T_V(\Mc_{j11})}$ large as desired while keeping the area $\abs{`J(1)}$ small to satisfy the list size constraint. \begin{figure} \centering \input{fig_hotspot} \caption{Effectiveness of stochastic encoding} \label{fig:hotspot} \end{figure} Alice, on the other hand, should spread $T_{V}(\Mc_{jlm})$ out uniformly for different $j$ so that there will be very few hotspots Eve can target $`J(l)$ at. To put it another way, recall that Alice will randomize over $j$ by the random variable $\RJ$ uniformly distributed over $\Set{1,\dots,J}$ in the transmission phase. If the $V$-shells overlap significantly for different $j$, randomizing over $j$ gives a random $V$-shell that covers almost the same deterministic set. Thus, randomizing is not much different from not randomizing, which renders the transmission of junk data approach ineffective in providing secrecy.
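As a sanity check on the counting behind the list size constraint in Lemma~\ref{lem:list}, the double-counting identity underlying \eqref{eq:list} can be verified numerically on a toy decoder (the decoder below is hypothetical, chosen only for illustration):

```python
# Toy check of the double-counting identity behind the list size
# constraint: sum_l |Psi(l) ∩ S| = sum_z |phi(z) ∩ L'| for any S, L'.
# The decoder phi below is a hypothetical toy rule, not from the paper.
from itertools import combinations

Z = range(6)            # toy observation space
L = [0, 1, 2, 3]        # label set; lam = 2 guesses per observation
lam = 2
# phi(z): each observation gets exactly lam guesses (a fixed toy rule)
phi = {z: {z % 4, (z + 1) % 4} for z in Z}
# decision regions Psi(l) = {z : l in phi(z)}
Psi = {l: {z for z in Z if l in phi[z]} for l in L}

for S in [set(Z), {0, 1, 2}, {5}]:
    for r in range(1, len(L) + 1):
        for Lsub in combinations(L, r):
            lhs = sum(len(Psi[l] & S) for l in Lsub)
            rhs = sum(len(phi[z] & set(Lsub)) for z in S)
            assert lhs == rhs                           # double counting
            assert lhs <= len(S) * min(lam, len(Lsub))  # the constraint
    # equality case: the full label set gives |S| * lam exactly
    assert sum(len(Psi[l] & S) for l in L) == len(S) * lam
print("list size constraint verified on toy example")
```

Each row of the indicator matrix in the figure corresponds to one `phi[z]` above, which is why the row sums are capped by `lam`.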
We call the desired property of spreading $V$-shells uniformly the \emph{overlap property}, which will be made precise in the sequel. The general idea is illustrated in \figref{fig:hotspot}. With the same area covered by $`J(1)$, the configuration on the right leads to a factor of $3$ reduction in the sum $\sum_{j=1}^3 \abs{`J(1)\cap T_V(\Mc_{j11})}$. If the $V$-shells on the left completely overlap, then randomizing over $j$ by $\RJ$ uniformly distributed over $\Set*{1,2,3}$ is the same as not randomizing because the resulting $V$-shell $T_V(\Mc_{\RJ11})$ is deterministic. Imagine the ideal case\footnote{Ideal here does not refer to optimal. Note that the overlap property is only argued as a heuristic rather than an optimal solution, using only one implied constraint $\sum_l \abs*{`J(l)}\leq `l\abs{\setZ}^n$ instead of the complete set in Lemma~\ref{lem:list}.} when the $V$-shells from $\Set*{T_V(\Mc_{jlm})}_j$ are uniformly spread. Since $T_V(\Mc_{jlm})\subset T_{Q_1V}(\Mu_m)$, the $V$-shells cannot spread outside $T_{Q_1V}(\Mu_m)$. Then, there will be at least $`1\lfloor J\frac{\abs{T_{V}(\Mc_{1lm})}}{\abs{T_{Q_1V}(\Mu_m)}}`2\rfloor$ and at most $`1\lceil J\frac{\abs{T_{V}(\Mc_{1lm})}}{\abs{T_{Q_1V}(\Mu_m)}} `2\rceil$ $V$-shells containing $\Mz$ for every $\Mz\in T_{Q_1V}(\Mu_m)$. Thus, for the ideal case, \begin{subequations} \label{eq:i1} \begin{align} \sum_{j\in J} \abs*{`J(l)\cap T_V(\Mc_{jlm})} &\geq `1\lfloor J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}} `2\rfloor \abs*{`J(l)\cap T_{Q_1V}(\Mu_m)}\label{eq:i1l}\\ \sum_{j\in J} \abs*{`J(l)\cap T_V(\Mc_{jlm})} &\leq `1\lceil J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}} `2\rceil \abs*{`J(l)\cap T_{Q_1V}(\Mu_m)}\label{eq:i1u} \end{align} \end{subequations} Using the bounds in \eqref{eq:i1}, we can compute the average fraction $\bar{`b}$ for the ideal case as follows. \begin{description} \item[Case i.]
Consider the case when the fraction $ J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}}$ is at least one, i.e.\ $I(Q_1,V|Q_0) \leq \frac1n \log J$ in the first order of the exponent by Lemma~1.2.5 of \cite{csiszar1981}. Then, since $`1\lceil a `2\rceil\leq 2a$ for $a\geq 1$, the upper bound \eqref{eq:i1u} simplifies to, \begin{align} 0\leq \sum_{j\in J} \abs*{`J(l)\cap T_V(\Mc_{jlm})} &\leq 2J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}} \abs*{`J(l)\cap T_{Q_1V}(\Mu_m)} \label{eq:i1i} \end{align} Thus, the desired average fraction is, \begin{align} \bar{`b}(V,`q,`F) &= \Avg_{l,m} \frac1J \sum_{j\in J}\frac{\abs*{`J(l)\cap T_V(\Mc_{jlm})}}{\abs*{T_{V}(\Mc_{jlm})}} && ,\text{by def. \eqref{eq:a`b}}\\ &\leq \Avg_{m} \frac1L \sum_{l\in L} \frac{\abs*{`J(l)\cap T_{Q_1V}(\Mu_m)}}{\abs*{T_{Q_1V}(\Mu_m)}}\exp\Set{n o(1)} && ,\text{by \eqref{eq:i1i}}\notag\\ &= \Avg_{m} \frac{`l}L \frac{\abs*{T_{Q_1V}(\Mu_m)}}{\abs*{T_{Q_1V}(\Mu_m)}}\exp\Set{n o(1)} && ,\text{by \eqref{eq:list}}\notag\\ &= \frac{`l}L \exp\Set{n o(1)} \label{eq:i2} \end{align} The inequality follows from \eqref{eq:i1i} with $\abs{T_V(\Mc_{1lm})}=\abs{T_V(\Mc_{jlm})}$ and the cardinality bounds of Lemma~1.2.5 of \cite{csiszar1981}. The subsequent equality is by the equality case of the list size constraint in Lemma~\ref{lem:list}, setting $\setS$ in \eqref{eq:list} to $T_{Q_1V}(\Mu_m)$ and $\setL$ to $\Set{1,\dots,L}$. \item[Case ii.] Consider the remaining case when $I(Q_1,V|Q_0)>\frac1n \log J$, so that the fraction above is smaller than one and the ceiling in \eqref{eq:i1u} equals one. Then, \eqref{eq:i1u} becomes, \begin{align} \sum_{j\in J} \abs*{`J(l)\cap T_V(\Mc_{jlm})} &\leq \abs*{`J(l)\cap T_{Q_1V}(\Mu_m)}\label{eq:i1ii} \end{align} which holds with equality when $`J(l)$ is targeted at the $V$-shells. Then, the desired average fraction $\bar{`b}(V,`q,`F)$ is equal to, \begin{align} \Avg_{l,m} \frac1J \sum_{j\in J}\frac{\abs*{`J(l)\cap T_V(\Mc_{jlm})}}{\abs*{T_V(\Mc_{jlm})}} & = \Avg_{m} \frac1{JL}\sum_{l} \frac{\abs*{`J(l)\cap T_{Q_1V}(\Mu_m)}}{\abs*{T_V(\Mc_{11m})}}\notag\\ & = \Avg_{m} \frac{`l}{JL} \frac{\abs*{T_{Q_1V}(\Mu_m)}}{\abs*{T_V(\Mc_{11m})}}\notag\\ & = \frac{`l}{JL}\exp\Set{n[I(Q_1,V|Q_0)+o(1)]}\label{eq:i3} \end{align} The first equality follows from the equality case of \eqref{eq:i1ii}.
The second equality follows from the equality case of the list size constraint in Lemma~\ref{lem:list}. The last equality follows from Lemma~1.2.5 of \cite{csiszar1981}. \end{description} Putting the two cases \eqref{eq:i2} and \eqref{eq:i3} together, the exponent of the average fraction $\bar{`b}(V,`q,`F)$ for the sequence of ideal codes with rate tuple $\MR=(R_M,R_L,R_J,R_{`l})$ defined in \eqref{eq:MR} is, \begin{align} -\liminf_{n\to`8} \frac1n \log \bar{`b}(V,`q,`F) &\geq \begin{cases} R_L - R_{`l} & ,I(Q_1,V|Q_0)\leq R_J\\ R_L+ R_J - I(Q_1,V|Q_0) - R_{`l} & ,\text{otherwise} \end{cases}\label{eq:i4} \end{align} Note that together with the fact that $`b\leq 1$ ($`g\geq 0$), the expression on the R.H.S.\ can be succinctly written as $\abs{R_L - R_{`l} + \abs{R_J - I(Q_1,V|Q_0)}^-}^+$, where $\abs*{a}^+$ denotes $\max\Set{0,a}$ and $\abs*{a}^-$ denotes $\min\Set{0,a}$. Hence, we have the desired lower bound summarized as follows, \begin{pro}[Exponent bound for the ideal case] Suppose there exists an ideal code $(`q,`F)$ with the desired overlap property that leads to \eqref{eq:i1}, then the exponent of the average fraction $\bar{`b}$ in \eqref{eq:a`b} can be bounded as follows, \begin{align*} -\liminf_{n\to`8} \frac1n \log \bar{`b}(V,`q,`F) &\geq `g^*(V,\MR) \end{align*} where \begin{align} `g^*(V,\MR)&:=\abs{R_L - R_{`l} + \abs{R_J - I(Q_1,V|Q_0)}^-}^+ \label{eq:`g*} \end{align} \end{pro} \begin{proof} The proof follows from \eqref{eq:i1} to \eqref{eq:i4}. \end{proof} \subsection{Overlap Lemma for random code} \label{sec:overlap-lemma-random} Now, Alice's objective is to achieve the ideal case with a random code $`Q$ of the same rate tuple $\MR$ such that, \begin{align*}[left={\textbf{Objective:}\qquad}] -\liminf_{n\to`8}\frac1n \log E`1(\bar{`b}(V,`Q,`J) `2) &\geq `g^*(V,\MR) \end{align*} with $`g^*(V,\MR)$ defined in \eqref{eq:`g*}.
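The equivalence between the piecewise exponent in \eqref{eq:i4} and the compact expression in the Success Exponent Lemma can be checked numerically; the following sketch (added for illustration, with an arbitrary rate grid) compares the two forms:

```python
# Numeric sketch (not from the paper): the two cases of the ideal-case
# exponent collapse into max(0, R_L - R_lam + min(0, R_J - I)).
import itertools

def exponent_cases(R_L, R_lam, R_J, I):
    # piecewise form, as in the two cases of the derivation
    if I <= R_J:
        e = R_L - R_lam
    else:
        e = R_L + R_J - I - R_lam
    return max(0.0, e)   # beta <= 1 means the exponent is nonnegative

def exponent_compact(R_L, R_lam, R_J, I):
    # compact form |R_L - R_lam + |R_J - I|^-|^+
    return max(0.0, R_L - R_lam + min(0.0, R_J - I))

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for R_L, R_lam, R_J, I in itertools.product(grid, repeat=4):
    assert abs(exponent_cases(R_L, R_lam, R_J, I)
               - exponent_compact(R_L, R_lam, R_J, I)) < 1e-12
print("piecewise and compact forms agree on the grid")
```

The check makes the saturation behavior visible: increasing $R_J$ improves the exponent only until $R_J$ reaches $I(Q_1,V|Q_0)$, after which the junk rate no longer helps.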
As one would expect, the ideal case may not be achievable simultaneously for polynomially many $V\in\rsfsV(Q)$ and exponentially many $m\in M$ and $l\in L$. We should somehow relax the overly strong assumption of the ideal code without hurting the exponent. To do so, the key is to notice that, as far as the exponent is concerned, one could tolerate hotspots that are subexponentially denser than average. i.e.\ in \figref{fig:hotspot}, if the hotspots on the left are within a constant factor ($\leq 3$ in this case) of the average density as $n$ increases, then the sum of interest (indicated in the figure) would be within a constant factor from the ideal case on the right, and therefore leads to the same desired exponent. More specifically, as far as the exponent of the success probability is concerned, it is good enough for Alice to achieve the ideal case \eqref{eq:i1} up to the first order of the exponent. i.e. \begin{align} \sum_{j\in J} \abs*{`J(l)\cap T_V(\Mc_{jlm})} &\leq J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}} \abs*{`J(l)\cap T_{Q_1V}(\Mu_m)}\exp\Set*{n o(1)} \end{align} To achieve this objective, consider fixing $(m,l,V)$. Partition $\Set*{1,\dots,J}$ into some subsets of size $J_V:=\lfloor \exp\Set*{nI(Q_1,V|Q_0)}\rfloor$ except for the last one, which can have a smaller size. More precisely, define the partitioning $\Set{\setJ_k(V)}_{k\in K_V}$ as follows, \begin{align*} \setJ_k(V):=\Set*{(k-1)J_V+1,\dots,\min\Set*{kJ_V,J}}\;\;\subset \Set*{1,\dots,J} \end{align*} for $k\in\Set*{1,\dots,K_V}$ where $K_V:=\lceil J/J_V\rceil$. Since $\abs*{T_{Q_1V}(\Mu_1)} = \exp\Set*{n[H(Q_1V|Q_0)-o(1)]}$ and $\abs*{T_V(\Mc_{jlm})} = \exp\Set*{n[H(V|Q)-o(1)]}$ by Lemma~1.2.5 of \cite{csiszar1981}, we have $\lceil J/J_V\rceil$ equal to $J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}}$ in the first order of the exponent.
Thus, we have, \begin{align*} \sum_{j\in J} \abs*{`J(l)\cap T_V(\Mc_{jlm})} &= \sum_{k\in K_V}\sum_{j\in \setJ_k(V)} \abs*{`J(l)\cap T_V(\Mc_{jlm})}\\ &\leq \lceil J/J_V \rceil \max_{k\in K_V} \sum_{j\in \setJ_k(V)} \abs*{`J(l)\cap T_V(\Mc_{jlm})}\\ &\leq J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}} \max_{k\in K_V} \sum_{j\in \setJ_k(V)} \abs*{`J(l)\cap T_V(\Mc_{jlm})} \exp\Set*{n o(1)} \end{align*} where the first equality is by the definition that $\Set*{\setJ_k(V)}_{k\in K_V}$ is a partitioning of $\Set*{1,\dots,J}$; the inequality in the second line is due to the fact that the maximum is no smaller than the average; and the last inequality is due to the equality in the first order of the exponents of $\lceil J/J_V \rceil$ and $J\frac{\abs*{T_{V}(\Mc_{1lm})}}{\abs*{T_{Q_1V}(\Mu_m)}}$ argued earlier. Comparing the last inequality with the R.H.S.\ of \eqref{eq:i1ii}, it suffices to have, for every $k\in K_V$, \begin{align}[left={\text{Objective}:\quad}] \label{eq:overlapobj2} \sum_{j\in \setJ_k(V)} \abs*{`J(l)\cap T_V(\Mc_{jlm})}\leq \abs*{`J(l)\cap T_{Q_1V}(\Mu_m)} \exp\Set*{n o(1)} \end{align} Consider any $\Mz\in\setZ^n$. If it is contained in sub-exponentially many $V$-shells from $\Set*{T_V(\Mc_{jlm})}_{j\in\setJ_k(V)}$, i.e.\ \begin{align} \sum_{j\in\setJ_k(V)} \ds1\Set*{\Mz\in T_V(\Mc_{jlm})} &\leq \exp\Set*{n o(1)}\label{eq:overlapobj3a} \end{align} for every $\Mz\in\setZ^n$, then we have, \begin{align*} \sum_{j\in \setJ_k(V)} \abs*{\Set{\Mz}\cap T_V(\Mc_{jlm})} &\leq \exp\Set{n o(1)} \end{align*} Since $T_V(\Mc_{jlm})\subset T_{Q_1V}(\Mu_m)$, the above inequality implies that \begin{align*} \sum_{j\in \setJ_k(V)} \abs*{\Set{\Mz}\cap T_V(\Mc_{jlm})} &\leq \abs*{\Set{\Mz}\cap T_{Q_1V}(\Mu_m)} \exp\Set{n o(1)} \end{align*} Summing both sides over $\Mz\in `J(l)$ gives inequality \eqref{eq:overlapobj2}, which then gives \eqref{eq:overlapobj} as desired.
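The partitioning $\Set{\setJ_k(V)}_{k\in K_V}$ used above can be sketched programmatically; the toy sizes below are arbitrary illustrative choices:

```python
# Sketch of the partitioning {J_k(V)} of {1,...,J} into K_V blocks of
# size J_V each, with a possibly smaller last block. Toy sizes only.
from math import ceil

def partition(J, J_V):
    K_V = ceil(J / J_V)
    return [list(range((k - 1) * J_V + 1, min(k * J_V, J) + 1))
            for k in range(1, K_V + 1)]

J, J_V = 10, 3                     # illustrative sizes
blocks = partition(J, J_V)
# the blocks cover {1,...,J} exactly once, in order
assert [j for b in blocks for j in b] == list(range(1, J + 1))
assert all(len(b) == J_V for b in blocks[:-1])   # full blocks
assert 1 <= len(blocks[-1]) <= J_V               # smaller last block
print(blocks)
```

In the proof, $J_V=\lfloor\exp\Set{nI(Q_1,V|Q_0)}\rfloor$, so each block is just small enough for the Overlap Lemma to apply to it.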
Thus, it suffices to show \eqref{eq:overlapobj3a} for all $(m,l,V,k,\Mz)$, or equivalently that, for all $`d>0$, $n\geq n_0(`d)$ and for all $(m,l,V,k,\Mz)$, \begin{align}[left={\text{Objective:}\quad}] \ds1\Set*{\sum_{j\in\setJ_k(V)} \ds1\Set*{\Mz\in T_V(\Mc_{jlm})}\geq \exp\Set*{n `d}} =0 \label{eq:overlapobj3} \end{align} We shall achieve this objective with the following random coding argument. Now, consider using the random code $`Q$ instead of the ideal code $`q$. Suppose, as wishful thinking, that the expectation of the L.H.S.\ of \eqref{eq:overlapobj3} over the random code ensemble $`Q$ is doubly exponentially decaying with $\exp\Set{-\exp(n`d)}$. i.e. \begin{align} \Pr\Set*{\sum_{j\in\setJ_k(V)} \ds1\Set*{\Mz\in T_V(\Mc_{jlm})}\geq \exp\Set*{n `d}} &\leq \exp\Set{-\exp(n`d)} \label{eq:overlappre} \end{align} Then, the expectation of the sum of the L.H.S.\ of \eqref{eq:overlapobj3} over $(m,l,V,k,\Mz)$, the set of which contains at most exponentially many terms, remains doubly exponentially decaying. Since the minimum is no larger than the expectation, there exists at least one deterministic code in the ensemble such that this `giant' sum is diminishingly small, decaying doubly exponentially. Since the sum of indicator variables must be a non-negative integer while the decaying bound eventually drops below $1$, we have the desired property that the sum is exactly $0$ for $n$ large enough. The key now is to prove the doubly exponential behavior \eqref{eq:overlappre}, which is described more generally in the following Overlap Lemma. \begin{lem}[Overlap] \label{lem:overlap} Consider some joint type $Q:=Q_0\circ Q_1\;(Q_0\in \rsfsP_n(\setU), Q_1\in\rsfsV_n(Q_0,\setX))$. A random code $`Q_1$ is generated by selecting $J$ codewords $\RMX_j$ of length $n$ uniformly randomly and independently from the shell $T_{Q_1}(\RMU)$, for some random variable $\RMU$ distributed over $T_{Q_0}$.
Then, for all $`d>0$, $n\geq n_0(`d)$, we have \begin{align} \Pr\Set*{\sum_{j\in J} \ds1\Set*{\Mz\in T_V(\RMU\odot \RMX_j)}\geq \exp(n `d)} &\leq \exp\Set{-\exp(n`d)} \label{eq:overlap} \end{align} for all $\Mz\in \setZ^n$ and $V\in\rsfsV(Q)$ with $\lfloor \exp\Set{nI(Q_1,V|Q_0)}\rfloor \geq J$. \end{lem} \begin{proof} Since the codewords in the random code are conditionally independently generated given $\RMU=\Mu\in T_{Q_0}$, the events $\Mz\in T_V(\RMU\odot \RMX_j)$ for different $j\in J$ are conditionally mutually independent. In particular, for any exponentially large subset $\setJ$ of $\Set{1,\dots,J}$ with $\abs\setJ=\exp(n`d)$, the probability that $\Mz$ is contained in every $V$-shell from $\Set*{T_V(\RMU\odot \RMX_j)}_{j\in\setJ}$ is, \begin{align*} \Pr\Set*{\Mz\in \bigcap_{j\in\setJ} T_V(\RMC_j)} &= \sum_{\Mu\in T_{Q_0}} \Pr(\RMU=\Mu) \Pr\Set*{\Mz \in T_V(\Mu\odot\RMX_j)}^{\exp(n`d)}\\ &= \exp\Set*{-n`1[I(Q_1,V|Q_0)\pm \frac{`d}2`2]\exp(n`d)}\qquad ,\text{by Lemma~\ref{lem:rcwd}} \end{align*} which is doubly exponentially decaying. The last equality obtained from Lemma~\ref{lem:rcwd} in the Appendix uses the fact that the codewords are uniformly random over $T_{Q_1}(\RMU)$. Since there are ${J \choose \exp(n`d)}$ possible choices for $\setJ\subset \Set{1,\dots,J}$ with $\abs\setJ=\exp(n`d)$, by the union bound, the probability that there are at least $\exp(n`d)$ $V$-shells from $\Set{T_V(\RMC_j)}_{j\in J}$ containing $\Mz\in T_{QV}$ satisfies, \begin{multline*} \Pr\Set*{\sum_{j\in J} \ds1\Set*{ \Mz \in T_V(\RMU\odot\RMX_j)} \geq \exp(n`d)}\\ \leq {J \choose \exp(n`d) } \exp\Set*{-n`1[I(Q_1,V|Q_0)\pm o(1)`2]\exp(n`d)} \end{multline*} By Corollary~\ref{cor:choose}, we can upper bound ${J \choose \exp(n`d) }$ by $\exp\Set{[\log e+n(I(Q_1,V|Q_0)-`d)]\exp(n`d)}$.
Thus, the probability can be upper bounded as follows, \begin{align*} &\Pr\Set*{\sum_{j\in J} \ds1\Set*{ \Mz \in T_V(\RMU\odot\RMX_j)} \geq \exp(n`d)}\\ & \leq \exp\Set{[\log e+n(I(Q_1,V|Q_0)-`d)]\exp(n`d)} \exp\Set*{-n`1[I(Q_1,V|Q_0)- \frac{`d}{2}`2]\exp(n`d)}\\ &\leq \exp\Set*{-\exp(n`d)} \end{align*} where the last inequality is obtained for $n$ larger than some $n_0(`d)$. \end{proof} \endinput In conclusion, we have illustrated an overlap property in Lemma~\ref{lem:overlap}: the expected number (over the random code ensemble) of observations contained in exponentially many of the $V$-shells of $\lfloor \exp\Set*{nI(Q,V)}\rfloor$ uniformly randomly and independently generated codewords from the type class $T_Q$ is doubly exponentially decaying. To apply it to the wiretap channel, we sum up exponentially many similar expectations that decay doubly exponentially. By linearity of expectation, this giant sum of exponentially many doubly exponentially decaying expectations is just another doubly exponentially decaying expectation of the giant sum. By the argument that the minimum is no larger than the expectation, there exists a code such that the giant sum decays doubly exponentially. With the fact that each summand must be a non-negative integer, every term in the sum must eventually drop to zero. Another key point that will be useful later is this approach to proving the existence of good codes. In general, we want the good code to have a certain number of quantities decay to zero. If we can ensure that the expectation of each quantity decays to zero fast enough, then the expectation of the sum will also decay to zero by linearity of expectation, provided that there are not too many of them. In this case, we have shown quantities that decay doubly exponentially fast. Thus, the expectation of a sum of exponentially many of them still decays to zero.
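The bookkeeping in the last paragraph can be illustrated numerically: a sum of exponentially many terms, each doubly exponentially small, still vanishes. The rates below are arbitrary illustrative choices:

```python
# Illustration: exp(n*R) terms, each of size exp(-exp(n*delta)), sum
# to at most exp(n*R - exp(n*delta)), which drops below 1 and then
# crashes toward zero. R and delta are arbitrary illustrative rates.
from math import exp

R, delta = 2.0, 0.1
log_sums = [n * R - exp(n * delta) for n in [50, 100, 200, 400]]
assert all(s < 0 for s in log_sums)                # every sampled sum is below 1
assert log_sums == sorted(log_sums, reverse=True)  # and it keeps shrinking
print("exponentially many doubly exponentially small terms vanish")
```

Working in log scale, as above, avoids floating-point overflow: the quantity $nR-\exp(n`d)$ is exactly the exponent of the giant sum's upper bound.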
As we will consider next, there will be two other fractions, namely $`b(V,`q,`F_b)$ and $`b(V,`q,`F_b^c)$, that should decay to zero for polynomially many $V\in\rsfsV(Q)$ to give the desired error exponents. The goal is then to show that the expectation of each fraction is decaying exponentially fast, so that the overall expectation of the sum of all polynomially many terms, together with the exponentially many doubly exponentially decaying terms here, decays to zero. This would show the existence of a good code that achieves all the desired exponents.
Mark C. Knoth Attorney Profile Top Rated Business Litigation Attorney in Detroit, MI Detroit, MI 48226 - Business Litigation (10%), - Employment & Labor: Employer (70%), - Employment Litigation: Defense (20%) Mark C. Knoth is an attorney at Kerr, Russell and Weber, PLC and helps clients address Business Litigation legal issues. He also assists clients regarding Employment & Labor: Employer and Employment Litigation: Defense issues. Super Lawyers is a designation of top-rated practicing attorneys selected through extensive evaluation. He was awarded this distinction for 2017 - 2018. Mark Knoth graduated in 1992 from University of Detroit Mercy School of Law. Mark Knoth was admitted to practice law in 1991. He represents clients in the Detroit area. Last Updated: 6/14/2017
He must become greater; I must become less.~John 3:30 NIV For us to really be servants of Christ we have to yield our own wills to His. We can't just do this Sunday morning during church service. We have to do it daily: before our feet even hit the floor, we must determine that today will be a day we surrender our will to His. This means in the home, at work, on the bus, in the store and anywhere else we may find ourselves. It means we may find ourselves put in situations that are difficult or uncomfortable for our flesh. In fact it is very likely that we will from time to time. We have been called to be FOLLOWERS of Christ, not LEADERS of Christ. He knows us and loves us more than we are capable of truly loving ourselves. He is a trustworthy leader and earned our dedication on the cross of Calvary. Let's use this day for Him! Let's determine to allow Him to lead us this day to do good works for His Kingdom. Let's be willing to share our love for Him and others in practical ways today. Intercede for those who need prayer! Be a support to those who need it! Wherever He puts you and whatever He puts in your path today, do it in submission to Him and do it with excellence! Less of us and our fleshly wisdom and MORE of Him and His infinite love!
People who viewed this item also viewed: 128891, 109948, 130979, 107607 Brief History: Item Links: We found 1 collection associated with Cho Yang - Container Logistics - Collection N Scale Model Trains: 2 different items. Item created by: gdm on 2017-12-06 20:34:03. Last edited by gdm on 2017-12-06 20:37:28
TITLE: Find all $p, q$ (coprime) such that $\frac{(p-1)(q-1)}{2}$ is a prime number QUESTION [1 upvotes]: Find all $p, q$ (coprime) such that $\frac{(p-1)(q-1)}{2}$ is a prime number. I found that $(p,q) = (2,7),(7,2),(2,15),(3,8)$ gave prime numbers for $\frac{(p-1)(q-1)}{2}$, but how do we find all? REPLY [0 votes]: If $p, q$ are coprime then at most one is even. WLOG $q$ is odd and $q-1$ is even. So $(p-1)\frac{q-1}2$ is prime, and hence either $p-1 = \pm 1$ or $\frac {q-1}2 = \pm 1$. Case 1: $p-1 = -1$. Then $p = 0$, and $0$ is not coprime to anything except $\pm 1$ (everything divides $0$), so $q = \pm 1$. $\frac {1-1}2 = 0$ is not prime, so $q = -1$ and $\frac {q-1}2 = -1$, and $ (p-1)\frac{q-1}2=1$ is not prime. Case 2: $p-1 = 1$. So $p = 2$ and $\frac {q-1}2 $ is prime, i.e.\ $q = 2k + 1$ for any prime $k$: if $k$ is prime, then $\gcd(2, 2k+1) =1$ and $\frac{(2-1)(2k+1 -1)}2 = k$ is prime. Case 3: $\frac {q-1}2 = -1$. Then $q = -1$ and $p= k + 1$ for any prime $k$: $\gcd(k+1,-1)=1$ and $\frac{(k+1-1)(-1 -1)}2 = -k$ is prime (up to sign). Case 4: $\frac{q-1}2=1$. Then $q = 3$ and $p = k +1$ for any prime $k$ with $k=3$ or $k \equiv 1 \mod 3$. So the options are $(2,2k+1)$ for any prime $k$; $(-1, k+1)$ for any prime $k$; and $(3,4)$ and $(3,k+1)$ for any prime of the form $k \equiv 1 \mod 3$.
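A brute-force search (a quick sketch, restricted to positive $p,q$) confirms the pattern for small values:

```python
# Brute-force sketch for the question: enumerate coprime pairs (p, q)
# of small positive integers with (p-1)(q-1)/2 prime, to see the
# pattern (2, 2k+1) for prime k emerge. Search bound is arbitrary.
from math import gcd

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

hits = [(p, q) for p in range(1, 40) for q in range(p + 1, 40)
        if gcd(p, q) == 1
        and (p - 1) * (q - 1) % 2 == 0          # the half must be an integer
        and is_prime((p - 1) * (q - 1) // 2)]
# the examples from the question reappear (listed with p < q):
assert (2, 7) in hits and (2, 15) in hits and (3, 8) in hits
# every hit with p = 2 has q = 2k+1 for a prime k:
assert all(is_prime((q - 1) // 2) for p, q in hits if p == 2)
print(hits[:8])
```

The search also turns up pairs such as $(2,5)$ and $(3,4)$ that fit the same cases of the answer.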
NN8420 : Woodland track at Knock Mary taken 13 years ago, near to The Balloch, Perth And Kinross, Great Britain Woodland track at Knock Mary Knock Mary is a rounded tree-clad hill southwest of Crieff. The western slopes are mainly beech. - Grid Square - NN8420, 11 images - Photographer - Lis Burke - Date Taken - Saturday, 19 November, 2005 - Submitted - Saturday, 19 November, 2005 - Category - Woodland > Woodland - Subject Location - OSGB36: NN 845 200 [100m precision] WGS84: 56:21.4909N 3:52
One Hyde Park, Knightsbridge London 100 Knightsbridge, London SW1X 7LJ, United Kingdom Description: “A 5th floor lateral apartment with superb city views and expansive open plan entertaining space. An exceptional lateral apartment of approximately 2,916 sq ft that has been extensively improved and upgraded to the highest standards, creating an outstanding open plan living space. Located in Pavilion D, adjacent to the Mandarin Oriental Hotel, this 5th floor apartment is the highest of its kind in the building and one of the few at One Hyde Park with a southerly aspect and superb open vistas along Hyde Park Corner, Sloane Street and Brompton Road, towards Harrods. Designed by Rogers Stirk Harbour + Partners, One Hyde Park in Knightsbridge is commonly regarded as the most prestigious residential development in London, if not the world. Managed by the Mandarin Oriental Hotel Group, One Hyde Park provides residents with bespoke services and amenities more commonly associated with the best hotels in the world – all from the comfort of their own home. Without equal in central London, an apartment at One Hyde Park offers residents a truly unique lifestyle with unsurpassed amenities and legendary Mandarin Oriental hotel services.” - Bedrooms: 2 - Bathrooms: 3 - Square feet: 2,916 - Address: 100 Knightsbridge, London SW1X 7LJ, United Kingdom - Listing agent: James Gilbert-Green Listed price: $24,924,556
Weight (Per Portion) One portion of this dish weighs approximately 609.00 Grams Recipe Method 1. Preheat the oven to 200C/400F/Gas Mark 6. 2. Steam the whole aubergines over a pan of simmering water for 30 minutes, then scoop out the flesh and cut it up roughly. Slowly fry the aubergine, garlic (peeled and sliced), thyme (leaves picked) and chilli (crumbled) in the olive oil for around 10 minutes. 3. Add the tins of tomatoes, chopping them up roughly with a wooden spoon, then add the balsamic vinegar and most of the basil leaves (leaves picked and stalk chopped). 4. Bring to the boil and simmer for around 10 minutes until the sauce has reduced and thickened. 5. Spread a layer of aubergine sauce in a large, shallow dish. Sprinkle over some Cheddar and a handful of Parmesan, then spread over a layer of lasagne sheets. Repeat once or twice more, until your dish is full. 6. Finish with a final sprinkling of Parmesan, a scattering of basil leaves and a drizzle of olive oil. 7. Place in the oven for 25 to 30 minutes until bubbling and golden.
TITLE: How big is the continuum? QUESTION [3 upvotes]: How big is the continuum? If you take $\mathbb{R}$ and remove all the naturals from it, you are still at $2^{\aleph_0}$. If you remove all the integers, you are still at $2^{\aleph_0}$. If you remove all the rationals, you are still at $2^{\aleph_0}$. If you remove all the algebraic numbers, you haven't even tickled the continuum, and are still at $2^{\aleph_0}$. It seems that this thing is gigantic, and basically a real monster. I was wondering, are there any broader classes of numbers (broader in the sense of being less restrictive in their membership requirements) that can be "taken" out of the reals and still leave them uncountable? What is the "boundary", if such a notion exists? And, is there a way of determining how "big" the continuum is? REPLY [7 votes]: This is a great question, but (understandably) a little unclear - let me address one possible interpretation of it. What is the cardinality of the continuum? Assuming the axiom of choice, the sizes of sets - the cardinalities - have a nice structure: any two sets are comparable (either $\vert A\vert\le \vert B\vert$ or $\vert B\vert\le\vert A\vert$), and the set of cardinalities is well-ordered. Well-orderedness is a bit technical - what this really means is that the sizes of infinite sets are $$\aleph_0, \aleph_1, \aleph_2, \dots, \aleph_\alpha, \dots$$ with nothing in between (here $\alpha$ is an ordinal, so e.g. $\aleph_{\omega^2+\omega\cdot 3+17}$ is the size of some infinite set). What this means is that $2^{\aleph_0}=\aleph_\alpha$ for some ordinal $\alpha$. For instance, maybe $2^{\aleph_0}=\aleph_1$, that is, there is no set of reals which is uncountable but not as large as the continuum! (This is the continuum hypothesis.) Or, maybe $2^{\aleph_0}=\aleph_2$ (this is actually much less ad-hoc than it may seem!). Or perhaps $2^{\aleph_0}=\aleph_\omega$, the "infinityth infinite cardinal". And in this context, we can ask: can we narrow it down a bit?
It turns out the answer is, "not much." Specifically, essentially the only thing we can prove in ZFC is that $2^{\aleph_0}$ has uncountable cofinality; this is a strengthening of Cantor's diagonal argument, due to König, and rules out $2^{\aleph_0}=\aleph_\omega$. However, beyond that practically anything is possible: for instance, it is consistent with ZFC that $2^{\aleph_0}=\aleph_{\omega^2+7}$. This was proved by Solovay, and later drastically extended by Easton, building on the discovery of forcing by Paul Cohen. Note that this doesn't entirely match the shape of your question. For instance, suppose I ask you, "How many reals are left over after I remove continuum many of them?" The answer is, you have no way of knowing! Maybe I removed all of them, or just the interval $(0, 1)$, or all but 17 of them! You may also be interested in other notions of size for sets of reals, such as measure and category (the latter in the sense of Baire category, not in the sense of category theory :P). For example, the Baire category theorem implies that if you remove countably many sets, each of which is nowhere dense, then there will still be continuum-many reals left over; and similarly if you remove countably many measure-zero (or "null") sets. Questions like "How many null sets can I remove without affecting the cardinality?" lead to the cardinal characteristics of the continuum. But that's another (long) story.
We are a notable and highly established organization in today's market. We have worked hard to reach this position. We provide the best health supplements to our customers in order to make their lives happy and healthy in every way. Our main motto is to provide you quality products and services. That is why all the products we advertise on our website are obtained from prominent brands only. Thus, all the products are completely reliable and trustworthy. We assure you that you won't experience any kind of inconvenience with our services. So enjoy your online shopping for dietary supplements with us by giving us the chance to serve you.
I remember doing this growing up when it was my birthday. Having a summer birthday sometimes made me feel like the day came and went and was just another hot, sticky, long July day. The thought that tomorrow would be another, nearly identical day, with nothing much to distinguish it from my Big Day, made me kind of pensive. I kept a notebook where, among other things, I would start a fresh page for each birthday I remembered to do it, and would write something to the (teenagerish) effect of: I'm 14 today. Happy 14th Birthday to me! Some of the things I'm thinking about this birthday are . . . Some things I hope to have done by my 15th birthday are . . . just in an attempt to capture something tangible. Time has a very fleeting and sad quality to it. Well. This is turning into a Very Intense Blog Post for me. Sorry. A case in point for withered inspiration is the way that nearly a month ago I visited one of the local firehouses with Ann's class. I came home full of resolve to mail a thank you card with a picture to the firefighters, and possibly donate a small amount to whatever their equivalent of a benevolent fund is. Well, of course, I have yet to do it. Not that it's too late. But these things are better when they're done "fresh" in my opinion. Anyhow, let me get back to the main subject of this post. Today I had to take both of my kids to a relatively routine follow-up appointment related to a specialist they see. At the start of this calendar year, we switched to a new health insurance provider, and part of the fun is that the specialist's office does not do the procedure in-house with our current insurance. Instead, I have to take the kids to the hospital to have it done there, and bring/have sent the resulting films and reports to the specialist appointment. I'm sure it's trite, but today was my mega-dose of hakaras hatov. I've been to this hospital before, and my reaction is the same each time. 
There is nothing, NOTHING, like going to a children's hospital to make you realize that all of the worrying, the shlepping, the complaining, the stress that you have in your life is so blessedly minute compared to the thoughts, fears, and concerns of families of kids who are Really Sick. The walk through the corridors of this institution is so somber and humbling as one scans the various names of the departments, unimaginable to a parent regarding their own child, until, for some, they suddenly are not only imaginable, but horrifyingly real. It takes a lot of composure for me just to walk the halls, and I know that that's nothing compared to the composure and resolve of the very brave doctors and braver patients. Thanks, Hashem. That's it. 8 comments: I have a dear friend who works as a Peds doc in a children's hospital. The stories that he sometimes shares rip your heart to shreds. But as you pointed out, they also make you quite thankful. Beautiful. So true. Boruch Hashem! It's times like these that you realize what is truly important and how truly blessed we are...in my opinion, hakaras hatov is one of the most important midot--as well as a key to happiness and satisfaction in life. It's always good to have a wake-up call. Nice post. Oh, I didn't know what to say for this post, but it was beautiful. You're right. Just being in a hospital setting can be disquieting. Awesome as always. :-) Thanks everyone - I'm glad that so many of you echo my thoughts. We (and our kids) should all be well.
TITLE: Probability of generating the symmetric group QUESTION [3 upvotes]: The statement is simple: What is the probability that a set of $n-1$ transpositions generates the symmetric group, $S_n$? The motivation is that I remembered reading that this was an open problem somewhere on the internet, and then I solved it. I'm curious to see other people's solutions, because I think it's a nice problem, and don't quite believe that it is hard enough to be open. REPLY [11 votes]: A solution (assuming that all transpositions are distinct and are chosen uniformly among all ${n\choose 2}$ possible transpositions) can be given as follows: A set of $n-1$ transpositions $(a_1,b_1),\dots,(a_{n-1},b_{n-1})$ on the set $\lbrace 1,\dots,n\rbrace$ generates the whole symmetric group of $\{1,\dots,n\}$ if and only if the graph with vertices $\lbrace 1,\dots,n\rbrace$ and edges $\lbrace a_i,b_i\rbrace$ is a tree. The probability of generating $S_n$ is thus the same as the probability of getting a tree with $n$ vertices $V$ when choosing $n-1$ edges with endpoints in $V$ uniformly at random. By Cayley's formula, there are $n^{n-2}$ different trees with vertices $\lbrace 1,\dots,n\rbrace$. Since there are ${{n\choose 2}\choose n-1}$ different graphs with $n-1$ edges and vertices $\{1,\dots,n\}$, the probability is given by $n^{n-2}/{{{n\choose 2}\choose n-1}}$. If repetitions are allowed, one gets $n^{n-2}/{{n\choose 2}+n-2\choose n-1}$ (assuming uniform probability for all distinct multisets).
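The count is small enough to check exhaustively for small $n$; the following sketch (mine, not part of the original exchange) enumerates all $(n-1)$-edge graphs, counts the spanning trees, and compares with $n^{n-2}/\binom{\binom{n}{2}}{n-1}$:

```python
# Exhaustive check of the answer's formula: among all C(C(n,2), n-1) sets of
# n-1 distinct transpositions (= edges), exactly n^(n-2) form a tree.
from itertools import combinations
from math import comb

def is_tree(n, edges):
    """Given exactly n-1 edges on vertices 0..n-1, check acyclicity via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:        # repeated component -> cycle -> not a tree
            return False
        parent[ra] = rb
    return True             # n-1 edges and acyclic -> spanning tree

for n in range(2, 7):
    all_edges = list(combinations(range(n), 2))
    trees = sum(is_tree(n, es) for es in combinations(all_edges, n - 1))
    assert trees == n ** (n - 2)              # Cayley's formula
    prob = trees / comb(comb(n, 2), n - 1)
    print(n, prob)                            # e.g. n = 4 gives 16/20 = 0.8
```

For $n=4$ this reproduces the closed form: $4^{2}/\binom{6}{3} = 16/20 = 0.8$.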
\begin{document} \rightline{} \vskip 1.5 true cm \begin{center} {\bigss Quarter-pinched Einstein metrics interpolating\\[.5em] between real and complex hyperbolic metrics} \vskip 1.0 true cm {\cmsslll V.\ Cort\'es and A.\ Saha } \\[3pt] {\tenrm Department of Mathematics\\ and Center for Mathematical Physics\\ University of Hamburg\\ Bundesstra{\ss}e 55, D-20146 Hamburg, Germany\\ [email protected], [email protected]}\\[1em] \vspace{2ex} {November 14, 2017} \end{center} \vskip 1.0 true cm \baselineskip=18pt \begin{abstract}We show that the one-loop quantum deformation of the universal hypermultiplet provides a family of complete $1/4$-pinched negatively curved quaternionic K\"ahler (i.e.\ half conformally flat Einstein) metrics $g^c$, $c\ge 0$, on $\bR^4$. The metric $g^0$ is the complex hyperbolic metric whereas the family $(g^c)_{c>0}$ is equivalent to a family of metrics $(h^b)_{b>0}$ depending on $b=1/c$ and smoothly extending to $b=0$ for which $h^0$ is the real hyperbolic metric. In this sense the one-loop deformation interpolates between the real and the complex hyperbolic metrics. We also determine the (singular) conformal structure at infinity for the above families. \\[.5em] {\it Keywords: quaternionic K\"ahler manifolds, Einstein deformations, negative sectional curvature, quarter pinching}\\[.5em] {\it MSC classification: 53C26.} \end{abstract} \section*{Introduction}\label{sec:Intro} Einstein deformations of rank one symmetric spaces of non-compact type have been considered by various authors, see \cite{P,L,B1,B2} and references therein. In particular, LeBrun has shown that the quaternionic hyperbolic metric on the smooth manifold $\bR^{4n}$ admits deformations by complete quaternionic K\"ahler metrics. These metrics are constructed using deformations of the twistor data and depend on functional parameters. However, the sectional curvature of the deformed metrics does not seem to have been studied. 
In previous work \cite{ACM,ACDM} a geometric construction of a class of quaternionic K\"ahler manifolds of negative scalar curvature was described. The manifolds in this class are obtained from projective special K\"ahler manifolds and come in one-parameter families. In string theory, such families can be interpreted as perturbative quantum corrections to the hypermultiplet moduli space metric \cite{RSV}. The one-parameter families are known as one-loop deformations of the supergravity c-map metrics. The simplest example corresponds to the case when the initial projective special K\"ahler manifold is a point. In that case one obtains the family of metrics \begin{equation}\label{eq:1ldUHmetric} \begin{split} g^c = {1\over4\rho^2}&\left[\frac{\rho + 2c}{\rho + c}\,\mathrm d\rho^2 + \frac{\rho + c}{\rho + 2c}(\mathrm d \tilde\phi + \zeta^0\mathrm d\tilde\zeta_0 - \tilde\zeta_0\mathrm d\zeta^0)^2\right.\\ &\quad \left. + 2(\rho + 2c)\left((\mathrm d\tilde\zeta_0)^2 + (\mathrm d\zeta^0)^2\right)\right], \end{split} \end{equation} where $(\rho, \tilde{\phi}, \zeta^0, \tilde\zeta_0)$ are standard coordinates on the manifold $M:=\bR^{>0}\times \bR^3\cong \bR^4$ and $c\ge 0$. This is a deformation of the complex hyperbolic metric $g^0$ (known as the universal hypermultiplet metric in the physics literature \cite{RSV}) by complete quaternionic K\"ahler\footnote{Recall that in dimension four quaternionic K\"ahler manifolds are defined as half conformally flat Einstein manifolds.} metrics, see \cite[Remark 8]{ACDM}. Using the c-map and its one-loop deformation it is also possible to deform higher rank quaternionic K\"ahler symmetric spaces and, more generally, quaternionic K\"ahler homogeneous spaces by families of complete quaternionic K\"ahler metrics depending on one or several parameters \cite{CDS,CDJL}. In this paper we prove that the metrics \re{eq:1ldUHmetric} are all negatively curved and $\frac14$-pinched, see Theorem \ref{mainthm}. 
By similar calculations, we also show that Pedersen's deformation of the real hyperbolic $4$-space\footnote{This deformation is induced by a deformation of the standard conformal structure of $S^3$ at the boundary of the real hyperbolic space by a rescaling along the fibres of the Hopf fibration \cite{P}.}, which depends on a parameter $m^2\ge 0$, has negative curvature if $m^2 <1$, see Theorem \ref{PedersenmetricThm}. These are presumably the first examples of non-locally symmetric complete Einstein four-manifolds of negative curvature. For the family \re{eq:1ldUHmetric}, we show in Section \ref{cinftySec} that the limit $c\ra \infty$ is well-defined after a suitable change of coordinates and parameter, and that it is given by the real hyperbolic metric. Furthermore, we perform another change of coordinates in order to analyze the conformal structure at infinity. We find in Section \ref{confSec} that the conformal structure induced by $g^c$ (for $0< c<\infty$) on the boundary sphere $S^3$ is \emph{singular} precisely at a single point $p_\infty$, which we can consider as the south pole, where it has a double pole. The point $p_\infty$ is also a special point with respect to the asymptotic behaviour of the metric. In fact, the metric $g^c$ (considered as a metric on the $4$-ball $B^4$ with boundary $S^3$) is asymptotic to the real hyperbolic metric on the complement in $B^4$ of any neighborhood of $p_\infty$ but it is not near $p_\infty$. These observations show that the family of metrics $g^c$ cannot be obtained as an Einstein deformation induced by a deformation of the conformal structure at the boundary in the spirit of \cite{B1}. \subsubsection*{Acknowledgements} We are very grateful to Alexander Haupt for checking some of our calculations. Furthermore, we thank Olivier Biquard, Gerhard Knieper and Norbert Peyerimhoff for helpful comments. 
This work was supported by the German Science Foundation (DFG) under the Research Training Group 1670 ``Mathematics inspired by String Theory". Finally, the authors would like to express a special thanks to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and support. \section{The limit $c\ra \infty$}\label{cinftySec} We introduce a second one-parameter family of metrics given by \begin{equation} \begin{split} h^b = {1\over4\rho'^2}&\left[\frac{b\rho' + 2}{b\rho' + 1}\,\mathrm d\rho'^2 + \frac{b\rho' + 1}{b\rho' + 2}(\mathrm d \tilde\phi' + b\zeta'^0\mathrm d\tilde\zeta'_0 - b\tilde\zeta'_0\mathrm d\zeta'^0)^2\right.\\ &\quad \left. + 2(b\rho' + 2)\left((\mathrm d\tilde\zeta'_0)^2 + (\mathrm d\zeta'^0)^2\right)\right], \end{split} \end{equation} where $b>0$. This is in fact equivalent to the one-loop deformation $g^c$ for $c>0$ under the identifications $c=1/b$ and $(\rho,\tilde\phi,\zeta^0,\tilde\zeta_0) = (\rho',\tilde\phi',\sqrt b\,\zeta'^0,\sqrt b\,\tilde\zeta'_0)$. But now the family can be extended to the $b=0$ case. This implies that after the above parameter-dependent coordinate transformation the $c\rightarrow\infty$ limit of the one-loop deformation $g^c$ is indeed well-defined and is given by the metric \begin{equation} \begin{split} h^0 = {1\over4\rho'^2}&\left[2\,\mathrm d\rho'^2 + \frac{1}{2}\,\mathrm d \tilde\phi'^2 + 4(\mathrm d\tilde\zeta'_0)^2 + 4(\mathrm d\zeta'^0)^2\right], \end{split} \end{equation} which has constant curvature $-2$. \section{Asymptotics and conformal structure at infinity}\label{confSec} We would like to determine the conformal structure of the family of metrics $(g^c)_{c\ge 0}$ on the sphere at the boundary of $M$. In our coordinates, this consists of the hyperplane at $\rho = 0$, along with a point at infinity $p_\infty$. 
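The coordinate identification between $h^b$ and $g^{1/b}$ used above can be spot-checked numerically. The following is a sketch of mine (not part of the paper), representing each metric by its Gram matrix in the co-frame $(\mathrm d\rho', \mathrm d\tilde\phi', \mathrm d\zeta'^0, \mathrm d\tilde\zeta'_0)$ and comparing $h^b$ with the pullback of $g^c$ under $c = 1/b$, $(\rho,\tilde\phi,\zeta^0,\tilde\zeta_0) = (\rho',\tilde\phi',\sqrt b\,\zeta'^0,\sqrt b\,\tilde\zeta'_0)$:

```python
# Numerical spot-check that h^b equals the pullback of g^{1/b}.
# Co-frame order: (d rho', d phi~', d zeta'^0, d zeta~'_0).
import math

def sym(coeff, v):
    """Gram-matrix contribution coeff * (v v^T) of the squared covector v."""
    return [[coeff * v[i] * v[j] for j in range(4)] for i in range(4)]

def madd(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(4)] for i in range(4)]

def h_matrix(b, rho, z0, zt0):
    pre = 1 / (4 * rho**2)
    return madd(
        sym(pre * (b*rho + 2) / (b*rho + 1), (1, 0, 0, 0)),
        # d phi~' + b zeta'^0 d zeta~'_0 - b zeta~'_0 d zeta'^0
        sym(pre * (b*rho + 1) / (b*rho + 2), (0, 1, -b*zt0, b*z0)),
        sym(pre * 2 * (b*rho + 2), (0, 0, 1, 0)),
        sym(pre * 2 * (b*rho + 2), (0, 0, 0, 1)),
    )

def g_pullback_matrix(b, rho, z0, zt0):
    c = 1 / b
    s = math.sqrt(b)                 # zeta^0 = sqrt(b) zeta'^0, etc.
    pre = 1 / (4 * rho**2)
    return madd(
        sym(pre * (rho + 2*c) / (rho + c), (1, 0, 0, 0)),
        # d phi~ + zeta^0 d zeta~_0 - zeta~_0 d zeta^0 pulls back to
        # d phi~' + b zeta'^0 d zeta~'_0 - b zeta~'_0 d zeta'^0
        sym(pre * (rho + c) / (rho + 2*c), (0, 1, -(s*zt0)*s, (s*z0)*s)),
        sym(pre * 2 * (rho + 2*c), (0, 0, s, 0)),
        sym(pre * 2 * (rho + 2*c), (0, 0, 0, s)),
    )

for b in (0.5, 1.0, 3.0):
    for rho, z0, zt0 in ((1.0, 0.2, -0.7), (2.5, -1.1, 0.4)):
        H = h_matrix(b, rho, z0, zt0)
        G = g_pullback_matrix(b, rho, z0, zt0)
        assert all(math.isclose(H[i][j], G[i][j], rel_tol=1e-9, abs_tol=1e-12)
                   for i in range(4) for j in range(4))
```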
In order to be able to directly see the singularity at $p_\infty$, we consider the following change of coordinates: \begin{equation}\label{eq:changeofcoord1} \begin{split} \rho &= \Re\left(\frac{1-z_1}{1+z_1}\right) - \left\lvert\frac{z_2}{z_1 + 1}\right\rvert^2 = \frac{1-|z_1|^2 -|z_2|^2}{|z_1+1|^2} ,\\ \tilde \phi &= -\Im \left(\frac{1-z_1}{1+z_1}\right),\quad \zeta := \zeta^0 + \mathrm i \tilde\zeta_0 = \frac{\sqrt{2}\,z_2}{1+z_1}. \end{split} \end{equation} This is indeed a diffeomorphism from $M=\mathbb R_{>0} \times \bR^3=\mathbb R_{>0} \times \mathbb R\times \mathbb C$ to the unit ball $B_{\mathbb C}^2$ in $\bC^2$, as it admits the following (smooth) inverse: \begin{equation} \begin{split} z_1 &= \frac{1-\left(\rho + |\zeta|^2/2 -\mathrm i\tilde\phi\right)}{1+\left(\rho + |\zeta|^2/2 -\mathrm i\tilde\phi\right)},\quad z_2 = \frac{\sqrt{2}\,\zeta}{1+\left(\rho + |\zeta|^2/2 -\mathrm i\tilde\phi\right)}. \end{split} \end{equation} As a result of the above change of coordinates, the boundary is mapped to the unit sphere $S^3\subset \mathbb C^2$, and $p_\infty$ is mapped to the south pole $(z_1,z_2) = (-1,0)$. \bp In the coordinates introduced in \eqref{eq:changeofcoord1}, the conformal structure at the boundary $[g^c|_{\partial M}]$, for $c>0$, is singular at $p_\infty$ ($z_1 = -1$) and away from the singularity is given by the nondegenerate conformal structure: \begin{equation} \begin{split} \left[g^{c}|_{\partial M}\right] &= \left[\left(2\, \Re\left(\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right)^2\right.\right.\\ &\quad +\frac{1}{2}\, \Im\left(\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right)^2\\ &\quad +\left.\left.\left. 8c\left\lvert\mathrm d \left(\frac{z_2}{1+z_1}\right)\right\rvert^2\right)\right\rvert_{\partial M}\right].
\end{split} \end{equation} Meanwhile the conformal structure for $c=0$ is supported only on the CR distribution $\mathscr D$ on $S^3$ and is given by \begin{equation} \left[g^{0}|_{{\mathscr D}\times {\mathscr D}}\right] = \left[\left.\left(\left\lvert\mathrm d \left(\frac{z_2}{1+z_1}\right)\right\rvert^2\right)\right\rvert_{{\mathscr D}\times {\mathscr D}}\right]. \end{equation} \ep \begin{proof} For any $c\ge 0$, the metric $g^c$ in the new coordinates is given by \begin{equation}\label{eq:metricnewcoord} \begin{split} g^c &= \frac{1}{4\rho^2}\left[\frac{\rho + 2c}{\rho + c}\, \Re\left(\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right)^2\right.\\ &\quad\left. +\frac{\rho + c}{\rho + 2c}\, \Im\left(\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right)^2+ 4(\rho + 2c)\left\lvert\mathrm d \left(\frac{z_2}{1+z_1}\right)\right\rvert^2\right], \end{split} \end{equation} where now $\rho = \frac{1-|z_1|^2 -|z_2|^2}{|z_1+1|^2}$ is considered as a function of $(z_1,z_2)$. The above metric is well-defined and nondegenerate when $|z_1|^2 +|z_2|^2 < 1$. Moreover we see that for $c>0$, the conformal structure at the boundary $[g^c|_{\partial M}]=[(4\rho^2g^c)|_{\partial M}]$ is singular at $z_1 = -1$. 
Away from the singularity, it may be computed to be the following: \begin{equation*} \begin{split} \left[g^{c}|_{\partial M}\right] &= \left[\left(2\, \Re\left(\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right)^2\right.\right.\\ &\quad +\left.\left.\left.\frac{1}{2}\, \Im\left(\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right)^2 + 8c\left\lvert\mathrm d \left(\frac{z_2}{1+z_1}\right)\right\rvert^2\right)\right\rvert_{\partial M}\right]. \end{split} \end{equation*} Meanwhile, in the case $c=0$, the (rescaled) metric in \eqref{eq:metricnewcoord} becomes: \begin{equation} \begin{split}\label{decompEq} \rho g^0 &= \frac{1}{4\rho}\left\lvert\mathrm d\left(\frac{1-z_1}{1+z_1}\right) - \left(\frac{2\,\overline z_2}{1+ \overline z_1}\right)\mathrm d\left(\frac{z_2}{1+z_1}\right) \right\rvert^2+ \left\lvert\mathrm d \left(\frac{z_2}{1+z_1}\right)\right\rvert^2. \end{split} \end{equation} The second term stays finite at the boundary but the first term blows up, except on its kernel, which may be verified to be spanned by the following two vector fields: \begin{equation*} \overline z_2\, \frac{\partial}{\partial z_1} - \left(\frac{1 - |z_2|^2 +\overline z_1}{1+z_1}\right)\frac{\partial}{\partial z_2},\quad z_2\, \frac{\partial}{\partial \overline z_1} - \left(\frac{1 - |z_2|^2 + z_1}{1+ \overline z_1}\right)\frac{\partial}{\partial \overline z_2}. \end{equation*} At the boundary, the above become vector fields spanning the CR distribution $\mathscr D$ on $S^3$: \begin{equation*} \overline z_2\, \frac{\partial}{\partial z_1} - \overline z_1\,\frac{\partial}{\partial z_2},\quad z_2\, \frac{\partial}{\partial \overline z_1} - z_1\,\frac{\partial}{\partial \overline z_2}. 
\end{equation*} The conformal structure at the boundary $\left[g^{0}|_{{\mathscr D}\times {\mathscr D}}\right]$ is defined as the nondegenerate conformal structure on ${\mathscr D}$ obtained by keeping only the finite term in the above decomposition \re{decompEq}, see \cite{B1}. Thus the conformal structure $\left[g^{0}|_{\partial M}\right]$ is supported only on the CR distribution ${\mathscr D}$ and is given by \begin{equation*} \left[g^{0}|_{{\mathscr D}\times {\mathscr D}}\right] := \left[(\rho g^{0})|_{{\mathscr D}\times {\mathscr D}}\right] = \left[\left.\left(\left\lvert\mathrm d \left(\frac{z_2}{1+z_1}\right)\right\rvert^2\right)\right\rvert_{{\mathscr D}\times {\mathscr D}}\right]. \end{equation*} \end{proof} We would also like to determine the conformal structure of the family of metrics $(h^b)_{b\ge 0}$ on the sphere at the boundary of $M$. As in the case of $g^c$ above, in order to directly see the singularity at the point at infinity $p_\infty$, we again carry out a change of coordinates that maps $M=\mathbb R_{>0} \times \mathbb R^3$ to the unit ball $B_{\mathbb R}^4$ in $\bR^4$: \begin{equation}\label{eq:changeofcoord2} \begin{split} \rho' &= {1- w^2 - x^2 - y^2 -z^2\over (1+ w)^2 + x^2 + y^2 + z^2},\quad \tilde\phi' = {4x\over (1+ w)^2 + x^2 + y^2 + z^2},\\ \zeta'^0 &= {\sqrt{2}\,y\over (1+ w)^2 + x^2 + y^2 + z^2},\quad \tilde\zeta_0' = {\sqrt{2}\,z\over (1+ w)^2 + x^2 + y^2 + z^2}. 
\end{split} \end{equation} This is indeed a diffeomorphism, with (smooth) inverse given by \begin{equation} \begin{split} w &= {1- \rho'^2 - \tilde\phi'^2/4 -2\left(\zeta'^0\right)^2 - 2\,\tilde\zeta_0'^2\over (1+ \rho')^2 + \tilde\phi'^2/4 +2\left(\zeta'^0\right)^2+ 2\,\tilde\zeta_0'^2},\quad x = {\tilde\phi'\over (1+ \rho')^2 + \tilde\phi'^2/4 +2\left(\zeta'^0\right)^2+ 2\,\tilde\zeta_0'^2},\\ y &= {2\sqrt{2}\,\zeta'^0\over (1+ \rho')^2 + \tilde\phi'^2/4 +2\left(\zeta'^0\right)^2+ 2\,\tilde\zeta_0'^2},\quad z = {2\sqrt{2}\,\tilde\zeta_0'\over (1+ \rho')^2 + \tilde\phi'^2/4 +2\left(\zeta'^0\right)^2+ 2\,\tilde\zeta_0'^2}.\\ \end{split} \end{equation} As a result of the above change of coordinates, the boundary is mapped to the unit sphere $S^3\subset \mathbb R^4$, and the point at infinity $p_\infty$ is mapped to the south pole $(w,x,y,z) = (-1,0,0,0)$. \bp In the coordinates introduced in \eqref{eq:changeofcoord2}, the conformal structure $[h^b|_{\partial M}]$ on the boundary sphere for $b>0$ is singular at $w=-1$ and away from the singularity is given by \begin{equation} \begin{split} \left[h^b|_{\partial M}\right] &= \left[\left(\frac{1}{2}\left(\mathrm d \left(\frac{2x}{1+w}\right) + \frac{b}{2}\left(\frac{y}{1+w}\right)\mathrm d\left(\frac{z}{1+w}\right) - \frac{b}{2}\left(\frac{z}{1+w}\right)\mathrm d\left(\frac{y}{1+w}\right)\right)^2\right.\right.\\ &\quad \left.\left.\left. + 2\left(\left(\mathrm d\left(\frac{y}{1+w}\right)\right)^2 + \left(\mathrm d\left(\frac{z}{1+w}\right)\right)^2\right)\right)\right\rvert_{\partial M}\right]. \end{split} \end{equation} Moreover, for $b=0$, the conformal structure $\left[h^0|_{\partial M}\right]$ is the standard conformal structure on $S^3$. \ep \begin{proof} At the boundary and away from the south pole, we have $w^2 + x^2 + y^2 + z^2 = 1$ and $w\neq -1$. 
So, the restrictions of the coordinate functions $\rho',\tilde{\phi}',\zeta'^0,\tilde\zeta'_0$ to $\partial M$ are given as functions of $w,x,y,z$ as follows: \begin{equation} \rho'|_{\partial M} = 0,\quad \tilde{\phi}'|_{\partial M} = \frac{2x}{1+w}, \quad \zeta'^0|_{\partial M} = \frac{y}{\sqrt 2\,(1+w)}, \quad \tilde\zeta'_0|_{\partial M} = \frac{z}{\sqrt 2\,(1+w)}. \end{equation} A straightforward substitution therefore yields \begin{equation} \begin{split} (4\rho'^2h^b)|_{\partial M} &= \left(\frac{1}{2}\left(\mathrm d \left(\frac{2x}{1+w}\right) + \frac{b}{2}\left(\frac{y}{1+w}\right)\mathrm d\left(\frac{z}{1+w}\right) - \frac{b}{2}\left(\frac{z}{1+w}\right)\mathrm d\left(\frac{y}{1+w}\right)\right)^2\right.\\ &\quad +\left.\left. 2\left(\left(\mathrm d\left(\frac{y}{1+w}\right)\right)^2 + \left(\mathrm d\left(\frac{z}{1+w}\right)\right)^2\right)\right)\right\rvert_{\partial M}. \end{split} \end{equation} The conformal structure is nondegenerate with a double pole at the south pole (i.e.~$w=-1$) for $b>0$. When $b=0$, the above becomes: \begin{equation} \begin{split} \left[h^0|_{\partial M}\right] &=\left[\left.\left(2\left(\mathrm d\left(\frac{x}{1+w}\right)\right)^2 + 2\left(\mathrm d\left(\frac{y}{1+w}\right)\right)^2 + 2\left(\mathrm d\left(\frac{z}{1+w}\right)\right)^2\right)\right\rvert_{\partial M}\right]\\ &= \left[\left.\left(\frac{2(\dif x^2 + \dif y^2 + \dif z^2)}{(1+w)^2}+\frac{2(x^2 + y^2 + z^2)\,\dif w^2 }{(1+w)^4}-\frac{4(x\,\dif x+y\,\dif y + z\,\dif z)\,\dif w}{(1+w)^3}\right)\right\rvert_{\partial M}\right]\\ &= \left[\left.\left(\frac{2(\dif x^2 + \dif y^2 + \dif z^2)}{(1+w)^2}+\frac{2(1- w^2)\,\dif w^2 }{(1+w)^4}+\frac{4(w\,\dif w)\,\dif w}{(1+w)^3}\right)\right\rvert_{\partial M}\right]\\ &= \left[\left.\left(\frac{2(\dif w^2+\dif x^2 + \dif y^2 + \dif z^2)}{(1+w)^2}\right)\right\rvert_{\partial M}\right]= \left[\left.\left(\dif w^2+\dif x^2 + \dif y^2 + \dif z^2\right)\right\rvert_{\partial M}\right]. 
\end{split} \end{equation} This is the standard conformal structure on $S^3$ i.e.~the conformal class to which the restriction of the Euclidean metric on $\mathbb R^4$ to $S^3$ belongs. \end{proof} \section{Computation of the curvature tensor} Our goal is to prove the following result (restated in Theorem \ref{mainthmrep}): \bt \label{mainthm} For the one-loop deformation $g^c$, $c>0$, the pinching function $p\mapsto \d_p$ defined in \re{pinching} satisfies $\frac{1}{4} < \d < 1$ and attains the boundary values asymptotically when $\tilde{\rho} =\rho/c$ approaches $0$ or $\infty$, respectively, which is to say, $M$ is everywhere (at least) ``quarter-pinched". \et In order to prove this, we first compute the curvature associated with the metric in \eqref{eq:1ldUHmetric} by making use of the Cartan formalism. In this formalism, we choose an orthonormal frame $(e_I)_{I=1,\ldots ,4}$ and denote the dual co-frame by $(\theta^I)$ so that $g^c=\sum_{I}\theta^I\otimes \theta^I$. The way we have presented the metric in \eqref{eq:1ldUHmetric} suggests an obvious choice, namely \begin{equation}\label{eq:theta} \begin{split} \theta^1&:=F(\rho)\,\mathrm d\rho,\quad\,\,\, \theta^2:=G(\rho)(\mathrm d\tilde{\phi}+\zeta^0\mathrm d\tilde\zeta_0-\tilde\zeta_0\mathrm d\zeta^0),\\ \theta^3&:=H(\rho)\,\mathrm d\tilde\zeta_0,\quad \theta^4:=H(\rho)\,\mathrm d\zeta^0. \end{split} \end{equation} where $F(\rho), G(\rho), H(\rho)$ are functions of $\rho$ given by \begin{equation}\label{eq:F} \begin{split} F(\rho)&=\frac{1}{2\rho}\,\sqrt{\frac{\rho + 2c}{\rho + c}},\quad G(\rho)=\frac{1}{2\rho}\,\sqrt{\frac{\rho + c}{\rho + 2c}},\quad H(\rho)=\frac{\sqrt{2(\rho + 2c)}}{2\rho}. 
\end{split} \end{equation} The $\mathfrak{so}(4)$-valued connection $1$-form $\omega = (\omega^I_J)$ and curvature $2$-form $\Omega = (\Omega^I_J)$ corresponding to the Levi-Civita connection $\nabla$ and its curvature tensor $R$ are defined by \begin{equation}\label{eq:relation} \nabla_{v}e_I = \sum_J\omega^J_I(v)e_J, \quad \Omega^J_I(v,w)=g^c(R(v,w)e_I,e_J), \end{equation} for any vector fields $v$ and $w$. The forms $\omega^J_I$ and $\Omega^J_I$ can be calculated through the Cartan structural equations: \begin{equation}\label{eq:Cartan} \begin{split} \mathrm d\theta^I &= \sum_J\theta^J\wedge \omega^I_J,\quad \mathrm d\omega^I_J = \Omega^I_J + \sum_K \omega^K_J\wedge\omega^I_K.\\ \end{split} \end{equation} In fact, the first equation is equivalent to the vanishing of torsion and determines the forms $\omega^J_I=-\o^I_J$ uniquely. We now gather together the results of the calculation in the following two lemmata. We omit the proofs, which consist of just checking the structure equations. \bl The connection $1$-forms $\omega^I_J$ in \eqref{eq:Cartan} are given by \begin{equation}\label{eq:omega} \begin{split} \omega^1_2 &= -\omega^2_1 = \frac{1}{F(\rho)}{2\rho^2 + 5c\rho +4c^2\over 2\rho(\rho+c)(\rho+2c)}\,\theta^2,\quad \omega^1_3 = -\omega^3_1 = \frac{1}{F(\rho)}{\rho +4c\over 2\rho(\rho+2c)}\,\theta^3,\\ \omega^1_4 &= -\omega^4_1 = \frac{1}{F(\rho)}{\rho +4c\over 2\rho(\rho+2c)}\,\theta^4,\quad\quad\quad\quad \omega^2_3 = -\omega^3_2 = -\frac{1}{F(\rho)}{1\over 2(\rho +2c)}\,\theta^4,\\ \omega^2_4 &= -\omega^4_2 = \frac{1}{F(\rho)}{1\over 2(\rho +2c)}\,\theta^3,\,\,\,\quad\quad\quad\quad \omega^3_4 = -\omega^4_3 = \frac{1}{F(\rho)}{1\over 2(\rho +2c)}\,\theta^2.
\end{split} \end{equation} \el \bl\label{lem:Omega} The curvature $2$-forms $\Omega^I_J$ in \eqref{eq:Cartan} are given by \begin{equation}\label{eq:Omega} \begin{split} \Omega^1_2 &= -\Omega^2_1 = -A_{\mathrm{I}}(\rho)\,\theta^1\wedge\theta^2+2A_{\mathrm{III}}(\rho)\,\theta^3\wedge\theta^4,\\ \Omega^1_3 &= -\Omega^3_1 = -A_{\mathrm{II}}(\rho)\,\theta^1\wedge\theta^3+A_{\mathrm{III}}(\rho)\,\theta^2\wedge\theta^4,\\ \Omega^1_4 &= -\Omega^4_1 =-A_{\mathrm{II}}(\rho)\,\theta^1\wedge\theta^4-A_{\mathrm{III}}(\rho)\,\theta^2\wedge\theta^3,\\ \Omega^2_3 &= -\Omega^3_2 = -A_{\mathrm{III}}(\rho)\,\theta^1\wedge\theta^4-A_{\mathrm{II}}(\rho)\,\theta^2\wedge\theta^3,\\ \Omega^2_4 &= -\Omega^4_2 =~~\,A_{\mathrm{III}}(\rho)\,\theta^1\wedge\theta^3-A_{\mathrm{II}}(\rho)\,\theta^2\wedge\theta^4,\\ \Omega^3_4 &= -\Omega^4_3 =\,\,2A_{\mathrm{III}}(\rho)\,\theta^1\wedge\theta^2-A_{\mathrm{I}}(\rho)\,\theta^3\wedge \theta^4, \end{split} \end{equation} where $A_{\mathrm{I}}$, $A_{\mathrm{II}}$, and $A_{\mathrm{III}}$ are given by \begin{equation} \begin{split} A_{\mathrm{I}}(\rho) &:= \frac{4\rho^3 + 12c\rho^2 + 24c^2\rho + 16c^3}{(\rho + 2c)^3},\\ A_{\mathrm{II}}(\rho) &:= \frac{\rho^3 + 12c\rho^2 + 24c^2\rho + 16c^3}{(\rho + 2c)^3},\\ A_{\mathrm{III}}(\rho) &:= -\frac{\rho^3}{(\rho + 2c)^3}. \end{split} \end{equation} \el \section{Eigenspaces of the curvature operator} In this section we consider the curvature operator $\mathscr{R}: \Lambda^2TM \rightarrow \Lambda^2TM$ which is defined by \[ g^c(\mathscr{R}X\wedge Y, Z\wedge W) = g^c(R(X,Y)W,Z),\] where on the left-hand side $g^c$ denotes the scalar product on bi-vectors which is induced by the Riemannian metric $g^c$: \[ g^c(X\wedge Y, Z\wedge W) = g^c(X,Z)g^c(Y,W)-g^c(X,W)g^c(Y,Z).\] Identifying vectors with co-vectors by means of the metric, we will consider the curvature operator as a map \be \label{curvopEq}\mathscr{R}: \Lambda^2T^*M \rightarrow \Lambda^2T^*M.\ee As such it maps $\theta^I\wedge \theta^J$ to $\Omega^I_J$.
The endomorphism $\mathscr R$ is self-adjoint with respect to (the metric on $\Lambda^2T^*M$ induced by) $g^c$. It follows that all eigenvalues are real and that there exists an orthonormal eigenbasis. \bp \label{specProp}The following (anti-)self-dual $2$-forms \begin{equation} \alpha^\pm_{JKL} = \theta^1\wedge\theta^J \pm \theta^K\wedge\theta^L, \end{equation} where $(J,K,L)$ is a cyclic permutation of $(2,3,4)$, form an eigenbasis of the curvature operator \re{curvopEq} of the one-loop deformation \eqref{eq:1ldUHmetric}. The corresponding eigenvalues $\lambda^\pm_{JKL}$ are \begin{equation} \begin{split} \lambda_{234}^+ &= -2\left[1 + 2\left(\frac{\rho}{\rho +2c}\right)^3\right],\\ \lambda_{234}^- = \lambda_{342}^- = \lambda_{423}^- &= -2,\\ \lambda_{342}^+ = \lambda_{423}^+ &= -2\left[1 - \left(\frac{\rho}{\rho +2c}\right)^3\right]. \end{split} \end{equation} In particular, when $c\neq 0$, the above depends only on the ratio $\tilde \rho:= \rho/c$: \begin{equation} \begin{split} \lambda_{234}^+ &= -2\left[1 + 2\left(\frac{\tilde\rho}{\tilde\rho +2}\right)^3\right],\\ \lambda_{234}^- = \lambda_{342}^- = \lambda_{423}^- &= -2,\\ \lambda_{342}^+ = \lambda_{423}^+ &= -2\left[1 - \left(\frac{\tilde\rho}{\tilde\rho +2}\right)^3\right]. \end{split} \end{equation} \ep \pf From Lemma \ref{lem:Omega} we see that $\mathscr R$ is block diagonal, whereby the bundle $\Lambda^2T^\ast M$ of $2$-forms decomposes into three invariant subbundles $\Lambda^2_{234}T^* M$, $\Lambda^2_{342}T^* M$, and $\Lambda^2_{423}T^* M$, where $\Lambda^2_{JKL}T^* M$ denotes the span of $\theta^1\wedge\theta^J$ and $\theta^K\wedge\theta^L$. By inspection, we may read off the two eigen-$2$-forms $\alpha^\pm_{JKL}$ in $\Lambda^2_{JKL}T^* M$.
The corresponding eigenvalues are \begin{equation} \begin{split} \lambda_{234}^+ = -A_{\mathrm{I}} + 2A_{\mathrm{III}} = -\frac{6\rho^3 + 12c\rho^2 + 24c^2\rho + 16c^3}{(\rho + 2c)^3}&=-2\left[1 + 2\left(\frac{\rho}{\rho +2c}\right)^3\right],\\ \lambda_{234}^- =-A_{\mathrm{I}} - 2A_{\mathrm{III}} = -\frac{2\rho^3 + 12c\rho^2 + 24c^2\rho + 16c^3}{(\rho + 2c)^3}&=-2,\\ \lambda_{342}^- = \lambda_{423}^- =-A_{\mathrm{II}} + A_{\mathrm{III}} = -\frac{2\rho^3 + 12c\rho^2 + 24c^2\rho + 16c^3}{(\rho + 2c)^3}&=-2,\\ \lambda_{342}^+ = \lambda_{423}^+ =-A_{\mathrm{II}} - A_{\mathrm{III}} = -\frac{12c\rho^2 + 24c^2\rho + 16c^3}{(\rho + 2c)^3}&=-2\left[1 -\left(\frac{\rho}{\rho +2c}\right)^3\right]. \end{split} \end{equation} \epf From the above computation, we may read off the Ricci curvature $\mathrm{Rc}:T^*M\rightarrow T^*M$ and Weyl curvature $\mathscr W:\Lambda^2T^*M\rightarrow\Lambda^2T^*M$ as follows: \begin{equation} \begin{split} \mathrm{Rc}&:=\sum_{I=1}^4 \iota(e_I)\circ\mathscr R\circ\varepsilon(\theta^I)=-6\,\mathrm{id}_{T^*M},\\ \mathscr W &:= \mathscr R -\frac{1}{2}\, \mathrm{Rc}\wedge\mathrm{id}_{T^*M}+\frac{1}{3}\,\mathrm{tr}(\mathscr R)\,\mathrm{id}_{\Lambda^2T^*M}\\ &=\mathscr R + 2\,\mathrm{id}_{\Lambda^2T^*M}=\frac{1}{2}\,(1+\star)\left(\mathscr R + 2\,\mathrm{id}_{\Lambda^2T^*M}\right), \end{split} \end{equation} where $\iota(e_I)$ and $\varepsilon(\theta^I)$ are the interior and exterior products respectively, $\star$ is the Hodge star operator, and $\mathrm{Rc}\wedge\mathrm{id}_{T^*M}:\Lambda^2T^*M\rightarrow\Lambda^2T^*M$ is the endomorphism given by \begin{equation*} (\mathrm{Rc}\wedge\mathrm{id}_{T^*M})(\theta^I\wedge \theta^J)=\mathrm{Rc}(\theta^I)\wedge\theta^J - \mathrm{Rc}(\theta^J)\wedge\theta^I. \end{equation*} Thus, we see that the metric $g^c$ is Einstein and its Weyl curvature $\mathscr W$ is self-dual, that is, $(M,g^c)$ is indeed quaternionic K\"ahler.
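As an independent sanity check of the cancellations above, the stated combinations of $A_{\mathrm{I}}$, $A_{\mathrm{II}}$, $A_{\mathrm{III}}$ can be evaluated numerically (this verification script is our addition, not part of the paper; the function names follow the text):

```python
# Numerical sanity check of the eigenvalue combinations of the curvature
# operator; the closed forms below must hold for every rho > 0, c > 0.
def A_I(rho, c):
    return (4*rho**3 + 12*c*rho**2 + 24*c**2*rho + 16*c**3) / (rho + 2*c)**3

def A_II(rho, c):
    return (rho**3 + 12*c*rho**2 + 24*c**2*rho + 16*c**3) / (rho + 2*c)**3

def A_III(rho, c):
    return -rho**3 / (rho + 2*c)**3

for rho in (0.5, 1.0, 7.3):
    for c in (0.25, 1.0, 2.0):
        t = (rho / (rho + 2*c))**3
        # lambda^-_{234} = -A_I - 2 A_III and lambda^-_{342} = -A_II + A_III
        # are the constant -2, independently of rho and c:
        assert abs(-A_I(rho, c) - 2*A_III(rho, c) + 2) < 1e-12
        assert abs(-A_II(rho, c) + A_III(rho, c) + 2) < 1e-12
        # the self-dual eigenvalues match the closed forms in the text:
        assert abs(-A_I(rho, c) + 2*A_III(rho, c) + 2*(1 + 2*t)) < 1e-12
        assert abs(-A_II(rho, c) - A_III(rho, c) + 2*(1 - t)) < 1e-12
```

The check that two of the four combinations are identically $-2$ reflects the constancy of the anti-self-dual part of the curvature, which is what the Einstein property above encodes.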
\section{Sectional curvature and pinching of the one-loop deformation} Since any element of $\Lambda^2TM$ can be written as a linear combination of eigenvectors of $\mathscr R$, the sectional curvature \begin{equation*} K(\Pi) = g^c(\mathscr{R}u\wedge v, u\wedge v) \end{equation*} of a plane $\Pi \subset TM$ with orthonormal basis $(u,v)$ can be written as a convex linear combination of the eigenvalues of $\mathscr R$. So the spectrum of $\mathscr R$, determined in Proposition \ref{specProp}, provides bounds on $K$. In order to obtain the pointwise maximum and minimum of the sectional curvature one has to maximise and minimise $g^c(\mathscr{R} \a , \a)$ subject to the conditions $\alpha\wedge\alpha = 0$ (decomposability) and $g^c(\a , \a ) =1$. This leads us to the following lemma. \bl\label{lem:maxmin} For any point $p\in M$, we have the following bounds for the sectional curvature of the one-loop deformation \eqref{eq:1ldUHmetric}{\rm :} \begin{equation} \begin{split} \max_{\Pi \subset T_pM} K(\Pi) &= {1\over 2}(\max\{\lambda^+_{234}(p),\lambda^+_{342}(p),\lambda^+_{423}(p)\}\\ &\quad + \max\{\lambda^-_{234}(p),\lambda^-_{342}(p),\lambda^-_{423}(p)\}),\\ \min_{\Pi \subset T_pM} K(\Pi) &= {1\over 2}(\min\{\lambda^+_{234}(p),\lambda^+_{342}(p),\lambda^+_{423}(p)\}\\ &\quad + \min\{\lambda^-_{234}(p),\lambda^-_{342}(p),\lambda^-_{423}(p)\}). \end{split} \end{equation} \el \begin{proof} We consider a general $2$-form $\alpha$ written in terms of the eigen-$2$-forms as follows \begin{equation}\label{eq:alpha} \alpha = \sum_{\epsilon, (J,K,L)}a^\epsilon_{JKL}\alpha^\epsilon_{JKL}, \end{equation} where $(J, K, L)$ runs over the cyclic permutations of $(2,3,4)$, and $\epsilon$ runs over the values $\pm$.
By decomposing $\a$ into its self-dual and anti-self-dual parts, we see that the two equations $\alpha\wedge\alpha = 0$ and $g^c(\a , \a ) =1$ are together equivalent to \begin{equation}\label{constraintEq} \begin{split} (a^+_{234})^2 + (a^+_{342})^2 + (a^+_{423})^2 = \frac{1}{4},\\ (a^-_{234})^2 + (a^-_{342})^2 + (a^-_{423})^2 = \frac{1}{4}. \end{split} \end{equation} On plugging \eqref{eq:alpha} into $g^c(\mathscr{R} \a , \a)$, we find that \begin{equation} \begin{split} K(\Pi) &= \frac12 \left[4(a^+_{234})^2\lambda^+_{234} + 4(a^+_{342})^2\lambda^+_{342} + 4(a^+_{423})^2\lambda^+_{423}\right]\\ &\quad + \frac12\left[4(a^-_{234})^2\lambda^-_{234} + 4(a^-_{342})^2\lambda^-_{342} + 4(a^-_{423})^2 \lambda^-_{423}\right]. \end{split} \end{equation} Under the constraint \re{constraintEq}, the expressions within the square brackets are each convex combinations of three eigenvalues of $\mathscr R$. Therefore in order to maximise or minimise $K(\Pi)$ we need to respectively maximise or minimise these convex combinations separately. \end{proof} In the limit $\tilde \rho\rightarrow 0$, all the eigenvalues become $-2$, as for the real hyperbolic space $\bR\mathbf H^4$ with constant negative sectional curvature $-2$. Meanwhile, in the limit $\tilde \rho\rightarrow \infty$, the pointwise maximum of the sectional curvature is $-1$ and the pointwise minimum is $-4$, giving a pinching of $1/4$, as for the complex hyperbolic plane $\bC\mathbf H^2$. The interpolation of the pinching between these two limits is described in the following proposition. \bp The pointwise pinching of the metric $g^c$ for $c>0$ at a point $p=(c\tilde\rho,\tilde\phi,\tilde\zeta_0,\zeta^0)\in M$ is given by \begin{equation} \label{pinching} \d_p := {\max \{ K(\Pi) \mid \Pi \subset T_pM\} \over \min \{ K(\Pi) \mid \Pi \subset T_pM\} } =\frac{\tilde\rho^3 +12\tilde\rho^2 + 24 \tilde\rho + 16 }{4\tilde\rho^3 +12\tilde\rho^2 + 24 \tilde\rho + 16 }.
\end{equation} \ep \begin{proof} We note that we have $\lambda^+_{234} < \lambda^-_{234} = \lambda^-_{342} = \lambda^-_{423} < \lambda^+_{342} = \lambda^+_{423}$ for all $\tilde{\rho}>0$. So, we have for all $p\in M$ \begin{equation*} \begin{split} \max\{\lambda^+_{234}(p),\lambda^+_{342}(p),\lambda^+_{423}(p)\} &= \lambda^+_{342}(p) = \lambda^+_{423}(p),\\ \min\{\lambda^+_{234}(p),\lambda^+_{342}(p),\lambda^+_{423}(p)\} &= \lambda^+_{234}(p),\\ \max\{\lambda^-_{234}(p),\lambda^-_{342}(p),\lambda^-_{423}(p)\} &= \lambda^-_{234}(p) = \lambda^-_{342}(p)= \lambda^-_{423}(p),\\ \min\{\lambda^-_{234}(p),\lambda^-_{342}(p),\lambda^-_{423}(p)\} &= \lambda^-_{234}(p) = \lambda^-_{342}(p)= \lambda^-_{423}(p). \end{split} \end{equation*} It now follows from Lemma \ref{lem:maxmin} that the pointwise pinching at $p=(c\tilde\rho,\tilde\phi,\tilde\zeta_0,\zeta^0)$ is given by \begin{equation*} \d_p = \frac{\lambda^+_{342}(p) + \lambda^-_{234}(p)}{\lambda^+_{234}(p)+\lambda^-_{234}(p)} =\frac{\tilde\rho^3 +12\tilde\rho^2 + 24 \tilde\rho + 16 }{4\tilde\rho^3 +12\tilde\rho^2 + 24 \tilde\rho + 16 }, \end{equation*} as was to be shown. \end{proof} Now that we have a concrete expression for the pointwise pinching, we can derive our main result (stated in Theorem \ref{mainthm} and restated below): \bt \label{mainthmrep} For the one-loop deformation $g^c$, $c>0$, the pinching function $p\mapsto \d_p$ defined in \re{pinching} satisfies $\frac{1}{4} < \d < 1$ and attains the boundary values asymptotically when $\tilde{\rho} =\rho/c$ approaches $\infty$ or $0$, respectively, which is to say, $M$ is everywhere (at least) ``quarter-pinched''. \et \begin{proof} For any $\tilde\rho > 0$, we see that \begin{equation} 1> \d_p = \frac{1}{4} + \frac{9\tilde\rho^2 + 18\tilde\rho + 12}{4\tilde\rho^3 +12\tilde\rho^2 + 24 \tilde\rho + 16 } > \frac{1}{4}, \end{equation} and that both boundary values are attained asymptotically.
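The closed form for $\d_p$ and its two limiting values can also be checked numerically (an illustrative script of ours; the function name `delta` is not from the text):

```python
# Check of the pinching function delta(rho~) against the eigenvalue ratio
# it was derived from, together with its asymptotic boundary values.
def delta(r):
    num = r**3 + 12*r**2 + 24*r + 16
    den = 4*r**3 + 12*r**2 + 24*r + 16
    return num / den

for r in (0.1, 1.0, 5.0, 40.0):
    # delta agrees with (lambda^+_{342} + lambda^-)/(lambda^+_{234} + lambda^-):
    t = (r / (r + 2))**3
    assert abs(delta(r) - (-2*(1 - t) - 2) / (-2*(1 + 2*t) - 2)) < 1e-12
    assert 0.25 < delta(r) < 1.0          # strict quarter-pinching bound

# the boundary values are approached only asymptotically:
assert abs(delta(1e-9) - 1.0) < 1e-6      # rho~ -> 0: delta -> 1
assert abs(delta(1e9) - 0.25) < 1e-6      # rho~ -> infinity: delta -> 1/4
```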
\end{proof} \section{Pedersen metric}\label{sec:Pedersen} We now consider the Pedersen metric defined on the unit ball $B^4_\bR$ as discussed in \cite{P}: \begin{equation} \label{Pedersenmetric} \begin{split} \k^m = \frac{1}{(1-\varrho^2)^2}\left(\frac{1+m^2\varrho^2}{1+m^2\varrho^4}\,\dif\varrho^2+ \varrho^2 (1+m^2\varrho^2)\,(\sigma_1^2+\sigma_2^2)+\frac{\varrho^2(1+m^2\varrho^4)}{1+m^2\varrho^2}\,\sigma_3^2\right), \end{split} \end{equation} where the boundary is the sphere at $\varrho=1$ and $\sigma_1,\sigma_2,\sigma_3$ are the three left-invariant 1-forms on $S^3$ satisfying $\dif\sigma_i=\sum_{j,k}\varepsilon_{ijk}\sigma_j\wedge\sigma_k$. As in the case of the one-loop deformed universal hypermultiplet metric, there is an obvious choice of an orthonormal co-frame $(\theta^I)$, given by \begin{equation} \begin{split} \theta^1 &= \frac{\varrho}{(1-\varrho^2)}\sqrt{1+m^2\varrho^2}\,\sigma_1,\quad \theta^2 = \frac{\varrho}{(1-\varrho^2)}\sqrt{1+m^2\varrho^2}\,\sigma_2,\\ \theta^3 &= \frac{\varrho}{(1-\varrho^2)}\sqrt{\frac{1+m^2\varrho^4}{1+m^2\varrho^2}}\,\sigma_3,\quad \theta^4 = \frac{1}{(1-\varrho^2)}\sqrt{\frac{1+m^2\varrho^2}{1+m^2\varrho^4}}\,\dif\varrho. \end{split} \end{equation} The steps in the previous sections for the calculation of the eigenvalues and an eigenbasis of the curvature operator $\mathscr R:\Lambda^2T^*M\rightarrow \Lambda^2T^*M$ may be repeated for the Pedersen metric. We summarise the results in the next proposition. \bp The following (anti-)self-dual 2-forms \be \beta^\pm_{IJK}:=\theta^I\wedge\theta^J \pm \theta^K\wedge\theta^4, \ee where $(I,J,K)$ is a cyclic permutation of $(1,2,3)$, form an eigenbasis of the curvature operator $\mathscr R$ of the Pedersen metric \eqref{Pedersenmetric}.
The corresponding eigenvalues $\nu^\pm_{IJK}$ are \be \begin{split} \nu^+_{123}=\nu^+_{231}=\nu^+_{312}&=-4,\\ \nu^-_{123}&= -4\left(1-\frac{2m^2\left(1-\varrho^2\right)^3}{{\left(m^{2} \varrho^{2} + 1\right)^3}}\right),\\ \nu^-_{231}=\nu^-_{312}&= -4\left(1+\frac{m^2\left(1-\varrho^2\right)^3}{{\left(m^{2} \varrho^{2} + 1\right)^3}}\right). \end{split} \ee \ep In order to obtain the pointwise maximum and minimum of the sectional curvature one has to maximise and minimise $\k^m(\mathscr{R} \b , \b)$ subject to the conditions $\b\wedge\b = 0$ (decomposability) and $\k^m(\b , \b ) =1$. Again, this calculation proceeds exactly as earlier and so we just summarise the result in the following proposition. \bp The pointwise maximum and pointwise minimum of the sectional curvature of the Pedersen metric are given by \begin{equation} \begin{split} \max_{\Pi \subset T_pM} K(\Pi) &= -4\left(1-\frac{m^2\left(1-\varrho^2\right)^3}{{\left(m^{2} \varrho^{2} + 1\right)^3}}\right),\\ \min_{\Pi \subset T_pM} K(\Pi) &= -4\left(1+\frac{m^2\left(1-\varrho^2\right)^3}{{2\left(m^{2} \varrho^{2} + 1\right)^3}}\right). \end{split} \end{equation} \ep In particular, a straightforward rearrangement shows that the pointwise maximum $\max_{\Pi \subset T_pM} K(\Pi)$ becomes nonnegative when the following condition holds: \begin{equation} \varrho^2 \le \frac{\sqrt[3]{m^2}-1}{m^2 + \sqrt[3]{m^2}}. \end{equation} Note that this condition cannot hold if $m^2<1$. As a consequence we have the following result. \bt \label{PedersenmetricThm} The Pedersen metric \eqref{Pedersenmetric} has negative sectional curvature if and only if $m^2<1$. For $m^2> 1$ (respectively $m^2=1$) there are negative as well as positive (respectively zero) sectional curvatures near (respectively at) the origin $\varrho=0$. \et
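The stated threshold for the sign change of the pointwise maximum can be cross-checked numerically (our addition, not part of the original argument; `max_K` implements the closed form above):

```python
# Verify that the pointwise maximum of the sectional curvature of the
# Pedersen metric vanishes exactly at rho^2 = (m^{2/3}-1)/(m^2+m^{2/3}).
def max_K(rho, m):
    T = (1 - rho**2)**3 / (m**2 * rho**2 + 1)**3
    return -4 * (1 - m**2 * T)

for m in (1.5, 2.0, 5.0):                       # cases with m^2 > 1
    thresh = (m**(2.0/3.0) - 1) / (m**2 + m**(2.0/3.0))
    rho_star = thresh**0.5
    assert abs(max_K(rho_star, m)) < 1e-9       # vanishes at the threshold
    assert max_K(0.5 * rho_star, m) > 0         # positive inside it
    assert max_K(1.5 * rho_star, m) < 0         # negative outside it

# for m^2 < 1 the maximum stays negative on the whole ball:
for rho in (0.0, 0.3, 0.6, 0.9):
    assert max_K(rho, 0.9) < 0
```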
Albert Baker Photo, Al Bello Getty Images

LAS VEGAS, NV. In the most anticipated middleweight boxing match in recent history, Saul "Canelo" Alvarez (49-1-2, 34 KOs) and Gennady "GGG" Golovkin (37-0-1, 33 KOs) met after a series of circumstances that made the public believe they were in for a roller-coaster ride of negotiations mirroring Mayweather vs. Pacquiao. Canelo vs. Golovkin would not be as troubling for the fistic loyalists, and in front of 22,358 screaming fans Canelo and Golovkin squared off at the T-Mobile Arena in Las Vegas. During the fight-week buildup for the event, predictions were split right down the middle among media and fans. Golovkin has been masterfully marketed by K2 promoter Tom Loeffler, whose "Mexican Style" branding siphons off a major portion of the boxing-rabid Hispanic fan base of the southwestern United States, and the crowd showed up for both men.

Opening the fight, Canelo boxed masterfully, using his upper-body defense to make Golovkin miss and returning fire with three-, four-, and five-punch combinations to take a clear early edge. In the fourth round Golovkin began to take over, using double and triple jabs to keep Canelo at distance, set up right hands, and stay out of range of Canelo's counterpunches. In the fifth round both fighters exchanged punches near the corner, sending the crowd into a frenzy, with Canelo and Golovkin nodding and smiling at each other in recognition that they were giving the fans the big drama show. Golovkin appeared to take control over a tiring Alvarez until the twelfth and final round, when both fighters traded big shots, fighting until the final bell. Judge Dave Moretti scored it 115-113 for Golovkin, Don Trella scored it 114-114, and Adelaide Byrd brought the scorecard she filled out at home before the fight, which read 118-110 for Alvarez, ending the most anticipated fight of this post-Mayweather generation in a draw.
"I want to thank all of my fans from Mexico," Golovkin told HBO's Max Kellerman. "He has experience; I wanted a true fight. Look, I still have all my belts. I am still the champion. Yes, I want a rematch."

"If the fans demand a rematch, then we will do it," said Canelo, setting up a sequel for next May.

South El Monte fighter Joseph "Jojo" Diaz (25-0, 13 KOs) didn't face the fearsome Jorge Lara in the barn burner everyone was expecting, after Lara mysteriously injured his back three days before the fight. Rafael Rivera answered the call to step in and face Diaz, and probably earned himself a return on a future card. Diaz controlled the fight, punching off his jab and firing combinations at Rivera. Rivera had his moments and landed hard shots of his own, but the night belonged to Diaz. On scores of 119-109 twice and 120-108, Diaz set up a title shot against Gary Russell Jr. that, unfortunately, may never take place because of the sport's politics.

Diego De La Hoya (20-0, 9 KOs) faced former champion Randy "El Matador" Caballero (24-1, 14 KOs) in a local feud that gained interest after a scrum at yesterday's weigh-in. The fight wouldn't be as competitive as the weigh-in: De La Hoya was clearly the sharper and faster of the two and boxed his way to a near-complete shutout victory over the game Caballero. Caballero has had a rough streak since winning the IBF bantamweight title over Stuart Hall in 2015: first having surgery and cancelling his first defense, then failing to make weight for a defense against interim titlist Lee Haskins and losing his title on the scale without ever defending or throwing a punch. Health issues have plagued Caballero, who has fought only three times in the last two years. De La Hoya not only checked the box of winning a step up in competition, he won impressively.
Landing check left hooks, counter right hands, and uppercuts, the young fighter from Baja is starting to separate himself from his famous cousin Oscar. On scores of 100-90 and 98-92 twice, De La Hoya rolled to victory and placed himself in the picture for a title shot in 2018. "I trained for this fight, knowing it was going to be a really great battle. All my sacrifice, I fueled into this fight," said De La Hoya after the fight.

Opening the night's action, Ryan "Blue Chip" Martin (20-0, 11 KOs) faced tough Francisco Rojo (19-3, 12 KOs) of Mexico City. Martin is a banner prospect for promoter K2 and continues to win, but he still hadn't had a high-profile opportunity to shine until getting the opening-act assignment for Canelo vs. GGG. Martin used his height and length as expected, but his reluctance to throw more than one or two shots at a time left the door open for a very game Rojo. Rojo pressed forward, attacking the long body of Martin, and looked to be in control of the tempo down the stretch. Martin was warned for low blows by referee Russell Mora throughout the fight until Mora deducted a point from Martin in the ninth round. At the sound of the final bell, official scores read 96-93 and 95-94 for Martin and 98-91 for Rojo, giving Martin the split-decision victory.
Hi, friends! Happy release day at The Greetery! You can shop for all of this month's new releases starting at 10am EDT, and can click any link from here (supply lists included) to take you right over, when you want...
TITLE: What is meant by "Nothing" in Physics/Quantum Physics?

QUESTION [9 upvotes]: I am not a physicist, so please forgive my ignorance. This is related to my posts and this. I am trying to understand what is meant by the term "Nothing" in physics or Quantum Field Theory (QFT), since it seems to me that this term is not used in the way we understand it in everyday language. So QFT seems to suggest (in a nutshell) that "things pop out of nothing". But from wiki I see the following quote:

"According to quantum theory, the vacuum contains neither matter nor energy, but it does contain fluctuations, transitions between something and nothing in which potential existence can be transformed into real existence by the addition of energy. (Energy and matter are equivalent, since all matter ultimately consists of packets of energy.) Thus, the vacuum's totally empty space is actually a seething turmoil of creation and annihilation, which to the ordinary world appears calm because the scale of fluctuations in the vacuum is tiny and the fluctuations tend to cancel each other out."

So what is "Nothing" in QFT? If this quote is correct, I can interpret it only as follows: the "Nothing" is not in the sense used in everyday speech but is composed of "transitions", i.e. something that is "about to become". Is this correct? If yes, why is this defined as "Nothing"? Something that is "about to become" is not nothing; there is something prerequisite. In very lame terms: Einstein was born a non-physicist but became a physicist, so if this is a correct analogy, then there is something underlying that was non-something that became something. A non-something came into something because something else (not nothing) permitted it to become. E.g. Einstein's talent (or Mozart's) would have been lost had he been born in Africa or in a country with no educational facilities.
So he would not have become a physicist (the required talent would be present but would never come into reality). Could someone please help me understand this (perhaps trivial to you) concept?

REPLY [0 votes]: I'm not a physicist, but based on studies I did: all types of elementary particles and forces have fields that extend through the whole universe, and the fields are always there. So even if we don't see any particle in a place, it doesn't mean that there is "nothing" in that place, because the fields are still there. Based on the uncertainty principle, since we can't exactly determine the energy of a specific system over a specified time, the conclusion is that the energy of a system cannot be absolutely zero. So changes happen in the fields even in places we think are empty. In this case virtual particles and virtual antiparticles borrow energy from the system to come into existence, and then after a short time they collide together and give this energy back to the system. So actually "nothing" does not exist. Forgive me for my poor English.
A.UD LECTURE SERIES: BYRON MERRITT

BYRON MERRITT, Senior Creative Director at Nike and leader of Nike's North America Brand design, has led the development of many Nike firsts. He was creative director for Nike's first video game, Nike+ Kinect Training, designed for the Xbox Kinect platform, as well as for customization experiences new to Nike.
Tuesday, May 11, 2004

:: THE VOTE THAT WAS ::

Pardon if I have to make another political post, but I just can't resist - with everyone around me so into it. My mom and dad just got here from my Tita's house to "check on things" (read: to see if we are winning). Well, my uncle who's running for Councilor is a shoo-in for the #1 or #2 slot, but the brother-in-law of my tita who's running for Governor of Laguna is still unsure. Election Day is so bloody. So I've heard. The guns, goons and gold are back! HAY! I dunno what is happening! The Philippines is really something. *sigh*

I voted around 1pm yesterday with the whole family (except for Dad, since he voted earlier, but he still accompanied us to the precinct), even Little Gabbie went with us. She sat beside me with her own pen and paper as if she were voting. She also brought her pink pouch with my uncle's campaign fliers. Aba, balak pa mangampanya! (My, she's already planning to campaign!) Reminds me of me (I started campaigning for my Tito at age 9!). Here are some bits and pieces of the vote that was:

~ I have so many blanks = VP, 5 Senators, Mayor and a handful of Councilors.

~ VP - Loren was my choice, but when she allied with FPJ...no way! She has great ideals but she's TOO ambitious for words. Enough said.

~ Senators - With the exception of Mr. Palengke, Mar Roxas, I made the decision to vote for the other six (6) senators just this morning. A quick run-through of their profiles made me decide whom to vote for and not to vote for. My list included: Dick Gordon, Bobby Barbers and Rodolfo Biazon. Other senatoriables who I voted for, like Pia Cayetano and Heherson Alvarez, were requested by mom and dad. I guess the desired effect of the "Maalaala Mo Kaya" life story of Compañero Rene last Thursday was achieved, since Pia is running at the #6 slot (despite the fact that she never made it to the top of any of the SWS surveys) in the partial and unofficial count.
If you are intently reading this post and realized that I only gave six names (whereas I said I voted for seven senators), well, let's just say that the 7th name will remain a secret, since I'm still unsure if I did the right thing in voting for him.

~ Mayor - There are three (3) mayoralty candidates here in Cabuyao - the wife of the incumbent mayor and two siblings who took their "sibling rivalry" to the limits by running against each other. If there's anyone from Cabuyao (or from my OC political clan) who's reading this, despair not! This was a personal choice. The whole family except for yours truly voted for someone...however, I will not tell WHO!

~ NO single votes for moi (for councilor). A single vote is one of the most controversial types of vote there is. It's one of the strategies that can elect a candidate to a post - especially in our 'all-for-one, one-for-all' clannish town. A single vote automatically denies several votes to the other candidates.

THE INDELIBLE INK

One of Comelec's counter-measures to the expected massive cheating and flying voters is the use of a stronger indelible ink. Awwww! I can't remove it! True enough, it's indeed strong! Well, mine is, that is. After numerous washings with liquid soap...I still have it! :( I heard Bill Luz, Namfrel's head honcho, said that it's actually on a case-to-case basis. Some rub off immediately while others really do stay on your index finger for a long time! Aaarrgh!

THE SONY LIFE

Though temporarily distracted by the election, I still have this nagging feeling about Wednesday! It will definitely be the DAY of DAYS. I just hope that I will pass with flying colors. Lord, please help me!
TITLE: In how many ways can two different colored balls be chosen?

QUESTION [2 upvotes]: I have a statement that says: In how many ways can I choose $2$ different colored balls, if I have $3$ red, $4$ blue and $7$ yellow balls? So the order does not matter, because choosing a red ball and a blue ball is the same thing as choosing a blue ball and a red ball, and I need a subgroup of $2$ elements out of $14$ elements in total; also, the balls must be of different colors. According to this, I will use a combination $\frac{n!}{(n - k)!\,k!}$, then I will replace: $$= \frac{14!}{12! \cdot 2!} = \frac{14 \cdot 13}{2} = 91$$ But my problem is that the correct result must be $61$, and I would like to know where my logic failed and how it should be done.

REPLY [2 votes]: Another approach from my comment: There are $7$ ways to choose a yellow ball and $7$ ways to choose the second ball, $4$ ways to choose a blue ball and $10$ ways to choose the second ball, and $3$ ways to choose a red ball and $11$ ways to choose the second ball. This counts every choice twice. Thus, we get $$ \frac{7\cdot7+4\cdot10+3\cdot11}2=61 $$
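A brute-force enumeration (added here as a check; it is not part of the original answer) confirms the count of $61$ and shows exactly where the $91$ goes wrong:

```python
# Label the 14 balls by colour and count unordered pairs of two different
# colours; C(14,2) = 91 overcounts by including same-colour pairs.
from itertools import combinations

balls = ['red']*3 + ['blue']*4 + ['yellow']*7
pairs = [p for p in combinations(range(len(balls)), 2)
         if balls[p[0]] != balls[p[1]]]
same = [p for p in combinations(range(len(balls)), 2)
        if balls[p[0]] == balls[p[1]]]

print(len(pairs))                 # 61 mixed-colour pairs
# The 30 same-colour pairs, C(3,2)+C(4,2)+C(7,2) = 3+6+21, account
# for the gap between 91 and 61:
assert len(pairs) + len(same) == 91
```

So the asker's error was not the formula itself but applying it to all $14$ balls, which also counts the $30$ same-colour pairs.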
North Shore Heritage (NSH) was created in June 2005 from the amalgamation of two groups. The North Shore Heritage Network was an online service launched in early 2005 by two North Vancouver homeowners, providing information on renovations, events and general heritage issues. The Hodgson House Society was created around the same time by a group of West Vancouver residents in a bid to save and restore the former home of early architect Hugh Hodgson as a public facility dedicated to the Craftsman style and the evolution of local architecture. While that effort failed — the house had been shipped to Nanaimo, and was quickly sold to another buyer for renovation — the group decided to press on and form a society to protect and preserve other heritage buildings. The two groups joined forces and North Shore Heritage was born. Our vision: NSH has a broad aim of inspiring, facilitating and promoting the preservation, rehabilitation and restoration of historic and distinctive buildings. Our mandate is to educate and raise awareness in the community of the merits of such buildings, and how they can embody a sense of history, serve to preserve qualities of craftsmanship, enhance the spirit and character of the community, and provide aesthetic pleasure. What we aim to do: - Promote awareness through special events, such as lectures, workshops and open houses - Provide an information resource for residents of the North Shore and others - Monitor and provide community input to local government policy - Act when buildings are under threat
TITLE: Can $AB = \gamma BA$ for matrices $A$ and $B$

QUESTION [5 upvotes]: For what values of $\gamma \in \mathbb{C}$ do there exist non-singular matrices $A , B \in \mathbb{C}^{n \times n}$ such that $$AB = \gamma BA \,?$$ So far, what I have done is show that $\gamma$ must be an $n$th root of unity, by considering the determinant: $$\det(AB) = \det(\gamma BA)$$ $$\det(A)\det(B) = \gamma ^n \det(B)\det(A)$$ Now since both $A$ and $B$ are non-singular we have $\det(A) \neq 0$ and $\det(B) \neq 0$. So: $$\gamma ^n =1.$$ I also know that $$\operatorname{tr}(AB - \gamma BA)=0$$ $$\operatorname{tr}(AB) - \gamma \operatorname{tr}(BA)=0$$ $$\operatorname{tr}(AB)\big(1-\gamma \big) = 0$$ Clearly we can assume $\gamma \neq 1$, since we can certainly find $A$ and $B$ that commute, so we conclude that $\operatorname{tr}(AB) = \operatorname{tr}(BA) = 0$. Now I'm thinking that we can find matrices $A$ and $B$ for any $\gamma = e^{\frac{2 \pi i}{n}}$ such that $AB = \gamma BA$, but I cannot think of a way of constructing them. Does anyone have any ideas? Thanks in advance!

REPLY [4 votes]: For any $n$, define $$B = \text{diag}(e^{2\pi i k / n} : 0 \leq k \leq n-1)$$ and then for any $0 \leq k \leq n-1,$ we can define $A = [a_{i,j}]$ by $$a_{i,j} = \begin{cases}1, & i \equiv j+k\pmod{n} \\ 0, & \text{otherwise}\end{cases}$$ Now just consider the actions of $A$ and $B$ on the standard basis vectors.
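For concreteness, the clock-and-shift construction can be verified numerically (our check, not part of the answer). Note that with the answer's index convention ($a_{i,j}=1$ iff $i\equiv j+k$) the constant works out to $\gamma = e^{-2\pi i k/n}$, which still ranges over all $n$th roots of unity as $k$ varies:

```python
# Verify A B = gamma B A for the clock matrix B and shift matrix A,
# here for n = 4, k = 1, using plain nested lists as matrices.
import cmath

n, k = 4, 1
gamma = cmath.exp(-2j * cmath.pi * k / n)   # note the minus sign

B = [[cmath.exp(2j * cmath.pi * j / n) if i == j else 0 for j in range(n)]
     for i in range(n)]
A = [[1 if i % n == (j + k) % n else 0 for j in range(n)]   # A e_j = e_{j+k}
     for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

AB = matmul(A, B)
gBA = [[gamma * x for x in row] for row in matmul(B, A)]
assert all(abs(AB[i][j] - gBA[i][j]) < 1e-12
           for i in range(n) for j in range(n))
```

The one-line verification behind the code: $ABe_j = \omega^j e_{j+k}$ while $BAe_j = \omega^{j+k}e_{j+k}$ with $\omega = e^{2\pi i/n}$, so $AB = \omega^{-k}BA$.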
Ingredients: 1/2 cup milk 1 egg, lightly beaten 2 cups grated cheese, a sharp cheddar is best 2 medium onions, finely chopped 3 slices bacon, finely chopped 1 cup self-raising flour 1 teaspoon grainy mustard (or French mustard) Directions: 1 Preheat oven to 180°C/350°F. 2 Combine egg and milk, and stir into remaining ingredients. 3 Drop by rounded teaspoon onto a lightly greased cookie sheet. 4 Bake until golden (about 20 minutes). 5 Cool on wire racks.
TITLE: Notation Mistake in Game Theory Textbook?

QUESTION [3 upvotes]: I am currently reading Fudenberg and Tirole's "Game Theory" and I am wondering if the following is a mistake in notation. I am currently reading about multistage games with observable actions. The authors define a strategy to be a sequence of functions $\{s_i^k\}_{k=0}^K$, where the first stage has an "empty history", and so each strategy must begin with a function $s_i^0$ whose domain is $\emptyset$. As we know, there is a unique map from the empty set into any other set, so this doesn't make any sense mathematically: it doesn't allow for what the authors intended, namely that each player should be able to start with whatever action they want in the first stage of the game. Is there something I am missing here?

REPLY [4 votes]: Looking at Fudenberg and Tirole's book, there is no contradiction in their notation. Here are the relevant quotes from section 3.2.1 (if I missed anything, please let me know): We let $h_0=\varnothing$ be the "history" at the start of the play. [...] Continuing iteratively, we define $h^{k+1}$, the history at the end of stage $k$, to be the sequence of actions in the previous periods, $h^{k+1}=(a^0,a^1,\dots,a^k)$. [...] If we let $H^k$ denote the set of all stage-$k$ histories, and let $A_i(H^k)=\bigcup_{h^k\in H^k}A_i(h^k)$, a pure strategy for player $i$ is a sequence of maps $\{s^k_i\}_{k=0}^K$ where each $s^k_i$ maps $H^k$ to the set of player $i$'s feasible actions, $A_i(H^k)$ [...] . Now, let us unpack these definitions. We see that $s^0_i$ is a function from $H^0$ to $A_i(H^0)$. What is $H^0$? It is the set of stage-zero histories. There is only one possible zero-stage history; in the first sentence I quoted, the only possible initial history is defined to be $\varnothing$. Therefore, $H^0=\{h^0\}=\{\varnothing\}$, the set whose unique element is $\varnothing$. Finally, we conclude that $s^0_i$ is simply a function from a set of size one, $H^0$, to the set $A_i(H^0)$.
This is exactly what is intended; choosing a function from a set of size one to $A_i(H^0)$ is equivalent to choosing a particular element of $A_i(H^0)$, which is just choosing an initial action. Basically, it seems like you confused $H^0$, the set of zero-stage histories, with $h^0$, the unique element of the set of zero-stage histories. $\varnothing$ is not the domain of $s^0_i$; it is the input into $s^0_i$.
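A toy rendering of this point (our illustration; the two-action set is hypothetical) makes the size-one domain explicit:

```python
# Histories are tuples of past action profiles; H^0 = {()} has exactly one
# element, so a stage-0 "function on H^0" is just a choice of one action.
H0 = [()]                        # the unique empty history h^0 = ()
actions = ['L', 'R']             # hypothetical feasible actions A_i(H^0)

# A stage-0 strategy: a map from the one-element set H^0 into the actions.
s0 = {(): 'L'}

assert len(H0) == 1
assert s0[()] in actions         # choosing s0 == choosing an initial action

# There are exactly |A_i(H^0)| such maps, not just one, which is why the
# definition does let each player pick any opening action:
stage0_strategies = [{(): a} for a in actions]
assert len(stage0_strategies) == len(actions)
```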
18 August

Waking up in pouring rain. Not ideal for fieldwork. But the Swiss team should be prepared, as it was snowing during the Swiss finals in Bern. The weather forces our team to find creative solutions. During the night Yannik taped his umbrella to his backpack with Scotch tape. This keeps him dry …

iGeo Beijing, day 4

Our team has a lot to do. To save time we once again rely on photographic impressions. …

iGeo Beijing, day 3

Only pictures for now. …
February 18th, 2008 by Manoj Jasra. Original Post: Web Analytics World

September 4th, 2007 by Manoj Jasra

Tips to Grow a Successful Facebook Group: Tracking Your Group's Success. Here are some metrics and tactics you can use to measure the success of your Facebook Group. Original Post: Web Analytics World

August 2nd, 2007 by Gord Hotchkiss

First, let's cover Granovetter's work. In an oversimplified version, it states that social networks are not uniformly dense in their make-up. For information to spread widely, it has to be passed through the "weak ties". Otherwise, it will never spread outside a cluster; hence the importance of these "weak ties" in the structure of the social network. But there is another factor, and that is the cooperation of those "weak ties". Are they motivated to pass on the information? In Frenzen and Nakamoto's study, they introduced two variables: value of information and moral hazard. In this case, they used the framework of an exclusive sale. The value of information varied with the size of the discount on the prices. They also varied the structure of the network by assigning different "tie strengths" to the linkages within the group. The results were striking. In the low moral hazard scenario, where there was maximal cooperation to pass along information, everyone in a 100-member social network, composed of 5 loosely linked clusters, received the information. Originally published in Mediapost's Search Insider, August 2, 2007

July 24th, 2007 by Manoj Jasra

Have you looked at your blog's stats lately and noticed that your blog's traffic is starting to plateau? Blogging is not simple; it takes a lot of hard work to continuously come up with new ideas for content and blog promotion.
I remember last year at this time there were some excellent blog promotion strategies; however, even some of those suggestions have been exhausted by overuse. Some of these useful but sometimes overused suggestions included:

Thinking outside the box these days becomes challenging because the box is SO big. If you're looking for something new, give some of the blog promotion strategies below a try. They may not lead to instantaneous traffic, nor are they necessarily easy to implement, but they will help you build your brand and help grow your long-term traffic:

*BONUS: People absolutely love widgets and apps, so I am going to give you a suggestion for an app to build: FeedBurner offers an API which allows you to retrieve a given blog's subscriber count (if it's enabled). I would love to see an app which quickly allows you to compare subscribers between bloggers, or EVEN better, take a page out of GoogleFight's book and develop a FeedBurner Fight application. The most fun and most difficult part of blog promotion is being creative, although with millions of bloggers in the blogosphere it can become awfully challenging. To be successful, a blogger has to find their 'zone' and motivation; mine often comes when I start scribbling ideas on a piece of paper rather than typing them directly into my laptop. Sounds a little old school, but it works.
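For fun, here's roughly what the core of a "FeedBurner Fight" could look like. This is just a sketch, not a finished app: it assumes an Awareness-API-style XML response of the shape `<rsp stat="ok"><feed ...><entry circulation="..."/></feed></rsp>`, the function names are mine, and actually fetching the XML over HTTP is left out.

```python
import xml.etree.ElementTree as ET

def parse_circulation(xml_text):
    """Pull the subscriber ('circulation') count out of an
    Awareness-API-style XML response; returns 0 if the feed
    doesn't publish its stats."""
    root = ET.fromstring(xml_text)
    entry = root.find(".//entry")
    if entry is None:
        return 0
    return int(entry.get("circulation", "0"))

def feed_fight(name_a, xml_a, name_b, xml_b):
    """GoogleFight-style head-to-head: returns (winner, margin)."""
    a = parse_circulation(xml_a)
    b = parse_circulation(xml_b)
    if a == b:
        return ("tie", 0)
    return (name_a, a - b) if a > b else (name_b, b - a)
```

Point it at two saved responses and you have a winner; wiring it to live data is just one HTTP GET per feed plus a bit of caching.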
\chapter{Adjoinable Homology and Adjoinable-Hochschild Duality} \section*{Foreword} The abstract theory has been introduced and the relevant coalgebraic notions have been reviewed. The goal of this section is to introduce a homology theory on coalgebras which is derived-dual to the Hochschild cohomology thereon. The Hochschild cohomology of a coalgebra $C$ was seen to be identified with the derived functors $cotor_{C^e}(C,-)$. The central issue here is that the $h_C(C,-)$ functors may not exist. This concern is completely non-existent when $C$ is quasi-finite as a $C^e$-bicomodule. The first objective is to define a homology theory on \textit{all} coalgebras, regardless of any quasi-finiteness property. Moreover, given reasonably manageable coalgebras, it is desired that this homology theory behave dually, in some sense, to their Hochschild cohomology. This construction will be undertaken in two steps: first, the homology theory will be introduced for coalgebras which are quasi-finite as bicomodules over their enveloping coalgebra. Then, those particularly nice coalgebras will be shown to be abundant enough that the homology theory in question may be extended to the entire category of coalgebras via the concept of pseudo-derived functors and the derived functor extension theorem. This section will close with the consideration of a very manageable class of coalgebras, which are in a sense smooth, and it will be shown that the derived-duality theorem hinted at earlier applies thereon. The paper will then close following the next chapter, wherein a particularly convenient type of coalgebra is introduced: the autoenvelopes. These will greatly ease computations and will provide very concrete links to the continuous Hochschild cohomology of the profinite dual algebra of the coalgebra in question. \section{Introducing: a new homology theory} The idea will be glanced at briefly and then properly extended, so that it is usable.
\begin{defn} \textbf{Adjoined Homology of a quasi-finite coalgebra} For any coalgebra $C$ which is quasi-finite and admits an injective resolution $I_C^{\star}$ of $C^e$-bicomodules, each $I_C^i$ being quasi-finite, its \textbf{Adjoined Homology} with coefficients in the left $C^e$-comodule $M$, denoted $H\ad_{\star}(C,M)$, is defined as: $H\ad_{\star}(C,M):=coext_{C^e}^{\star}(C\square_{C^e}C^o,M)$, where $C^o$ is the opposite $R$-coalgebra of $C$. \end{defn} This may seem a slightly unusual construction and naming at first, and the natural first question that should come to mind is \textit{"do such coalgebras even exist?"}. The answer is \textit{"yes, in fact there are enough of them"}. \section{Pseudo-Derived Functors, Quasi-finite Modules and coext} This small technical interlude is fundamental to a more complete theory. That is, though the adjoined homology theory may be, and has been, described in a straightforward manner on quasi-finite $C$-bicomodules for an $R$-coalgebra $C$, it would be more useful and interesting if it were applicable to any $R$-coalgebra. The theory of pseudo-derived functors allows the theory to transfer over to the general setting in a rather natural way. The idea will be as follows: first, to establish that all free comodules are quasi-finite, then to show that there are enough of them. In turn, the \textit{derived functor extension theorem} then extends the theory to the entire category. Following this extension, a more general definition of the adjoinable homology of a coalgebra will be given, for an arbitrary coalgebra, in a way consistent with and more applicable than the aforementioned presentation. \subsection{Free bicomodules are quasi-finite} It is again noted that $R$ is always an arbitrary unital associative ring, and $C$ an arbitrary $R$-coalgebra, until further mention. \begin{lem} All free $C$-bicomodules are quasi-finite.
\end{lem} \begin{proof} Let $I$ be a set, $M$ and $N$ be $C$-bicomodules and $\mathfrak{U}<I>$ be the free $C$-bicomodule generated on $I$. Then without loss of generality $\mathfrak{U}<I>$ may be identified with the $C$-bicomodule $\underset{i \in I}{\bigoplus} C$. The left adjoint of $\mathfrak{U}<I> \square_C -$ is constructed as follows: $Hom(M,\mathfrak{U}<I> \square_C N) \cong Hom(M,\underset{i \in I}{\bigoplus} C \square_C N)$. Since $\square_C$ is an additive bifunctor, $Hom(M,(\underset{i \in I}{\bigoplus} C) \square_C N) \cong Hom(M,\underset{i \in I}{\bigoplus} (C \square_C N)) \cong Hom(M,\underset{i \in I}{\bigoplus} N) \cong \underset{i \in I}{\prod} Hom(M, N) \cong Hom(\underset{i \in I}{\prod} M,N)$. Therefore, the left adjoint of $\mathfrak{U}<I> \square_C -$ is identified with $\underset{i \in I}{\prod}$. Hence, all free $C$-bicomodules are quasi-finite. \end{proof} \subsection{The abundance of quasi-finite bicomodules} It has been established that free $C$-bicomodules are quasi-finite; now it will be shown that any $C$-bicomodule is in fact \textit{"included"} in a free $C$-bicomodule. \begin{lem} There are enough free $C$-bicomodules; moreover, these are injective $C$-bicomodules. \end{lem} \begin{proof} For any $R$-coalgebra $C$ the category $^C\mathscr{M}^C$ is dual to the category of rational ${A_C}$-bimodules $_{A_C}Mod_{A_C}^{rat}$ on the profinite dual $A_C$ of $C$ \textit{(this will formally be reviewed in a later chapter; it suffices to say that $_{A_C}Mod_{A_C}^{rat}$ is a full subcategory of $_{A_C}Mod_{A_C}$, and so an object is projective in the former only if it is projective in the latter)}. Moreover, since any free ${A_C}$-bimodule is projective, \textit{the duality principle} implies that its dual $C$-bicomodule is injective. In a category of modules over a ring there are always enough free objects; since any free module is rational, in particular there are enough rational ${A_C}$-bimodules.
Moreover, dualisation preserves freeness, so there are enough free $C$-bicomodules. \end{proof} \begin{cor} Any $C$-bicomodule admits an injective resolution of quasi-finite $C$-bicomodules. Moreover, this resolution may be chosen such that each injective is a free $C$-bicomodule. \end{cor} \begin{proof} Any $C$-bicomodule admits an injective resolution of free $C$-bicomodules; moreover any free $C$-bicomodule is quasi-finite. \end{proof} \subsection{Adjoined Homology} The two final steps are now discussed, culminating the discussion to date. \begin{prop} For any $C$-bicomodule $M$, the right pseudo-derived functors $PR^{\star}h_C(M,-): ^C\mathscr{M}^C \rightarrow ^C\mathscr{M}^C$ exist, and coincide with the right derived functors $R^{\star}h_C(M,-): ^C\mathscr{M}^C \rightarrow ^C\mathscr{M}^C$ when $M$ is quasi-finite as a $C$-bicomodule. \end{prop} \begin{proof} Since any module admits an injective resolution by quasi-finite $C$-bicomodules, the conditions for the \textit{derived functor extension theorem} are satisfied, with $\tilde{\mathscr{A}}$ being the full subcategory of free $C$-bicomodules. \end{proof} Immediately from this, to \textit{any} $R$-coalgebra a homology theory may be adjoined. The definition of the \textit{adjoined homology} of an $R$-coalgebra is now generalised. \begin{defn} \textbf{Adjoined homology of a coalgebra} Let $R$ be an arbitrary unital associative ring, $C$ an $R$-coalgebra and $N$ a $C^e$-bicomodule; then the \textbf{adjoined homology} of $C$ is defined via the right pseudo-derived functors of $h_{C^e}(-,-)$ as: $H\ad_{\star}(C,-):=PR^{\star}h_{C^e}(C\square_{C^e}C^o,-) : ^{C^e}\mathscr{M}^{C^e} \rightarrow ^{C^e}\mathscr{M}^{C^e}$, where $C^o$ is $C$'s opposite $R$-coalgebra. Following convention, $H\ad_{\star}(C,N)$ is called the \textbf{adjoinable homology of $C$ with coefficients in $N$}.
\end{defn} For completeness: from now on the pseudo-derived functors of $h_C(-,-)$ will be identified with the derived functors $coext_C^{\star}(-,-)$: \begin{defn} \textbf{The Pseudo-Coext functors $Pcoext_C^{\star}$} Let $R$ be an arbitrary unital associative ring and $C$ an $R$-coalgebra; then the pseudo-derived bifunctors $PR^{\star}(-,-): ^{C}\mathscr{M}^{C} \times ^{C}\mathscr{M}^{C} \rightarrow ^{C}\mathscr{M}^{C}$ are \textit{(not really abusing notation)} named $coext_{C}^{\star}(-,-) :=PR^{\star}(-,-)$. \end{defn} Particularising the above definition to the case $\star =0$ extends the definition of the cohom bifunctor: \begin{defn} \textbf{The Pseudo-Cohom functors $Ph_C^{\star}$} Let $R$ be an arbitrary unital associative ring and $C$ an $R$-coalgebra; then the pseudo-derived bifunctor $PR^{0}(-,-): ^{C}\mathscr{M}^{C} \times ^{C}\mathscr{M}^{C} \rightarrow ^{C}\mathscr{M}^{C}$ is \textit{(not really abusing notation)} named $h_C(-,-) :=PR^{0}(-,-)$. \end{defn} \subsection{Three usual results} Two of the abstract results mentioned in the section on pseudo-derived functors are now rephrased in this context, for clarity and completeness. \begin{prop} The constructions of the right pseudo-derived functors $Ph_C^{\star}$, $Pcoext_C^{\star}$ and $H\ad_{\star}(C,-)$ of an $R$-coalgebra $C$ are independent of the \textit{quasi-finite} resolution chosen. \end{prop} \begin{proof} Contextual rephrasing of an above result preceding the \textit{"pseudo-derived functor extension theorem"}, with $\tilde{\mathscr{A}}$ being the full subcategory of $R_Coalg$ consisting of free quasi-finite $R$-coalgebras. \end{proof} \begin{prop} There is a long exact sequence of functors: $... \rightarrow H\ad_{\star}(C,-) \rightarrow H\ad_{\star+1}(C,-) \rightarrow H\ad_{\star+2}(C,-) \rightarrow ...$.
\end{prop} \begin{proof} Follows by construction of the pseudo-derived functors, as derived functors on special resolutions, on which this result holds as in the classical context. \end{proof} \begin{cor} If $0 \rightarrow M \rightarrow N \rightarrow O \rightarrow 0$ is a short exact sequence of $C^e$-bicomodules then there is a long exact sequence in homology: $... \rightarrow H\ad_{\star}(C,M) \rightarrow H\ad_{\star}(C,N) \rightarrow H\ad_{\star}(C,O) \rightarrow H\ad_{\star+1}(C,M) \rightarrow ...$. \end{cor} \begin{proof} Follows by construction of the pseudo-derived functors, as derived functors on special resolutions, on which this result holds as in the classical context. \end{proof} This last one is a direct rephrasing of the construction: \begin{prop} The functors $H\ad_{\star}(-,-)$ and $Pcoext_{C}(-,-)$ are bifunctors. \end{prop} \subsection{Transitional Remarks} The \textit{adjoinable homology} theory of an $R$-coalgebra has been shown to exist regardless of the $R$-coalgebra of choice, unlike what may have been believed at first glance. Therefore, a very restricted theory has been extended. Now, to reap even more results, the gaze will again be restricted, though only slightly, to $R$-coalgebras which are somewhat smooth. \section{Adjoinable-Hochschild Duality \\ and Dualisable Coalgebras} \subsection{Dualisable Coalgebras} The second goal of this paper will now be tackled: to show that adjoinable homology is a natural concept, in many cases \textit{"dual"} to Hochschild cohomology on a full subcategory of $R_Coalg$ of certain $R$-coalgebras called \textit{dualisable}. For any coalgebra $C$, it was noted that the \textit{pseudo-derived functors} $Pcoext_{C^e}(C,-)$ and $H\ad_{\star}(C,-)$ were expressible via \textit{any} injective resolution by quasi-finite $C^e$-bicomodules, independently of choice.
Particularly, the existence of a certain specific such resolution admitting special properties will be key: \begin{defn} \textbf{Dualising Resolution} A \textbf{dualising resolution} $A_{\star}: ... \rightarrow A^i \rightarrow ... \rightarrow A^1 \rightarrow M \rightarrow 0$ of a $C$-bicomodule $M$ is a $Ph_{C^e}(-,-)$ - $-\square_{C^e}-$-flipping resolution of $M$, such that for each $coext_C(-,-)$-$cotor_C(-,-)$-pivot $<A_i,I^i>$, the $C^e$-bicomodules $A_i$ and $I^i$ are both quasi-finite $C^e$-bicomodules. \end{defn} The second of the central results of this paper has now been set up: \begin{prop} Suppose an $R$-coalgebra $C$ admits a dualising resolution of finite length $n$ and is $Ph_{C^e}(-,-)$-$-\square_{C^e}-$ derived dual of order $n$. Then for every $C^e$-bicomodule $M$, there are isomorphisms of right $C^e$-bicomodules: $HH^{\star}(C,M) \cong Pcoext_{C^e}^{n-\star}(C,M)$. \end{prop} \begin{proof} All that need be verified is that the functors $cotor_{C^e}(-,-)$ and $Pcoext_{C^e}(-,-)$ exist and verify the hypotheses of the \textit{derived duality theorem}. 1) It was remarked that $-\square_{C^e}-$ is covariant left exact in both inputs. 2) Moreover, $h_{C^e}(-,-)$ is right exact in both inputs, being contravariant in the first and covariant in the second input, respectively. Therefore, $Ph_{C^e}(-,-)$ is also right exact in both inputs, with like variances, and so must $Ph_{C^e}(-\square_{C^e}-^o,-)$ be in the second input. The rest of the assumptions necessary for the use of the \textit{abstract duality theorem} are assumed in the hypothesis. Therefore, the result now follows.
\end{proof} Since these $R$-coalgebras exhibit very particular properties, it seems appropriate to name them: \begin{defn} \textbf{Dualisable coalgebra} An $R$-coalgebra $C$ is said to be \textbf{dualisable} \textit{if and only if} it admits a dualising resolution of finite length $n$ and is $Ph_{C^e}(-,-)$-$-\square_{C^e}-$ derived dual of order $n$. Moreover, if $C$ is dualisable, the integer $n$ above is said to be the \textbf{order} of $C$. \end{defn} Now, as a side note, by construction: \begin{prop} The category of dualisable $R$-coalgebras is a full subcategory of the category of coalgebras. \end{prop} \begin{proof} By definition, since all morphisms are admissible. \end{proof} The results and definitions to date may now be repackaged as: \begin{thrm} \textbf{The Adjoined-Hochschild Duality} Let $C$ be a dualisable $R$-coalgebra of finite order $n$. Then for every $C^e$-bicomodule $M$, there are isomorphisms of $C^e$-bicomodules: $HH^{\star}(C,M) \cong H\ad_{n-\star}(C,M)$. \end{thrm}
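As a quick sanity check, and purely as an unpacking of the duality theorem above rather than an additional result, the isomorphisms may be read off at the extreme degrees: $$HH^{0}(C,M) \cong H\ad_{n}(C,M), \qquad HH^{n}(C,M) \cong H\ad_{0}(C,M).$$ Since, by construction, $H\ad_{0}(C,M)=PR^{0}h_{C^e}(C\square_{C^e}C^o,M)=Ph_{C^e}(C\square_{C^e}C^o,M)$, the top Hochschild cohomology of a dualisable coalgebra of order $n$ is computed by the pseudo-cohom functor alone, with no higher derived machinery required.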
\begin{document} \title[Cohen-Macaulay invariant subalgebras]{Cohen-Macaulay invariant subalgebras of Hopf dense Galois extensions} \author{Ji-Wei He and Yinhuo Zhang} \address{He: Department of Mathematics, Hangzhou Normal University, Hangzhou Zhejiang 310036, China} \email{[email protected]} \address{Zhang: Department of Mathematics and Statistics, University of Hasselt, Universitaire Campus, 3590 Diepenbeek, Belgium} \email{[email protected]} \begin{abstract} Let $H$ be a semisimple Hopf algebra, and let $R$ be a noetherian left $H$-module algebra. If $R/R^H$ is a right $H^*$-dense Galois extension, then the invariant subalgebra $R^H$ inherits the AS-Cohen-Macaulay property from $R$ under some mild conditions, and $R$, when viewed as a right $R^H$-module, is a Cohen-Macaulay module. In particular, we show that if $R$ is a noetherian complete semilocal algebra which is AS-regular of global dimension 2 and $H=\kk G$ for some finite subgroup $G\subseteq\Aut(R)$, then every indecomposable Cohen-Macaulay module over $R^H$ is a direct summand of $R_{R^H}$, and hence $R^H$ is Cohen-Macaulay-finite, which generalizes a classical result for commutative rings. The main tool used in the paper is the extension groups of objects in the corresponding quotient categories. \end{abstract} \subjclass[2000]{Primary 16D90, 16E65, Secondary 16B50} \keywords{} \maketitle \setcounter{section}{-1} \section{Introduction} Motivated by the study of quotient singularities of noncommutative projective schemes, Van Oystaeyen and the authors introduced the concept of Hopf dense Galois extension in \cite{HVZ2}. The present paper is a further study of Hopf dense Galois extensions. In this paper, we focus on the Cohen-Macaulay properties of the invariant subalgebra of a Hopf dense Galois extension. Let $H$ be a finite dimensional semisimple Hopf algebra, and let $R$ be a left $H$-module algebra. Let $H^*$ be the dual Hopf algebra.
Since $H$ is finite dimensional, $R$ is a right $H^*$-comodule algebra. Denote by $R^H(=R^{coH^*})$ the invariant subalgebra. The algebra extension $R/R^H$ is called a {\it right $H^*$-dense Galois extension} (\cite[Definition 1.1]{HVZ2}) if the cokernel of the map $$\beta:R\otimes_{R^H} R\to R\otimes H^*, r\otimes r'\mapsto (r\otimes1)\rho(r')$$ is finite dimensional, where $\rho:R\to R\otimes H^*$ is the right $H^*$-comodule structure map. The concept of a Hopf dense Galois extension is a weaker version of that of a Hopf Galois extension. The main feature of a Hopf dense Galois extension is that there is an equivalence between certain quotient categories. More precisely, let $R$ be a noetherian algebra which is a left $H$-module algebra over a finite dimensional semisimple Hopf algebra $H$. Let $\Mod R$ be the category of right $R$-modules and let $\Tor R$ be the full subcategory of $\Mod R$ consisting of torsion $R$-modules (see Section \ref{sec1}). Then we obtain a quotient category $\QMod R=\frac{\Mod R}{\Tor R}$. Set $B=R\#H$. Since $R$ is noetherian, $B$ and the invariant subalgebra $R^H$ are also noetherian. Then we also have quotient categories $\QMod B$ and $\QMod R^H$. If $R/R^H$ is a right $H^*$-dense Galois extension, then there is an equivalence of abelian categories $\QMod R^H\cong\QMod B$ (cf. \cite[Theorem 2.4]{HVZ2}). Under this equivalence, we show in this paper that the invariant subalgebra $R^H$ inherits many homological properties from $R$. Especially, we have the following results (Theorems \ref{thm1} and \ref{thm4.1}). The terminologies may be found in Section \ref{sec3}. \\ \noindent{\bf Theorem.} {\it Let $H$ be a finite dimensional semisimple Hopf algebra, and let $R$ be a noetherian left $H$-module algebra. Assume that the $H$-module algebra $R$ is admissible and $\Tor B$ is stable. If $R/R^H$ is a right $H^*$-dense Galois extension, then the following statements hold. {\rm(i)} If $R$ is AS-Cohen-Macaulay of dimension $d$, so is $R^H$.
{\rm(ii)} If $R$ is AS-Cohen-Macaulay of dimension $d$, then $R$, viewed as a right $R^H$-module, is Cohen-Macaulay.}\\ There are plenty of $H$-module algebras satisfying the assumptions in the theorem above. For example, if $R$ is a noetherian complete semilocal algebra and $H$ is a finite group algebra, then the $H$-module algebra $R$ is admissible and $\Tor B$ is stable (cf. Example \ref{ex1}). The main tool for proving the theorem above is the following observation (Theorem \ref{thm0}), which gives a way to compute the extension groups of objects in the quotient category $\QMod R^H$.\\ \noindent{\bf Theorem.} {\it Assume that the $H$-module algebra $R$ is admissible and $\Tor B$ is stable. Let $N$ and $M$ be finitely generated right $B$-modules. Let $\mathcal{N}$ and $\mathcal{M}$ be the corresponding objects of $N$ and $M$ in the quotient categories. Then we have {\rm (i)} for each $i\ge0$, $\Ext^i_{\QMod R}(\mathcal{N},\mathcal{M})$ is a right $H$-module, {\rm(ii)} there are isomorphisms $\Ext^i_{\QMod B}(\mathcal{N},\mathcal{M})\cong\Ext^i_{\QMod R}(\mathcal{N},\mathcal{M})^H,\ \forall\ i\ge0.$}\\ If $R$ is further a noetherian complete semilocal algebra which is also AS-regular of global dimension 2, and $G\subseteq \Aut(R)$ is a finite subgroup, then the invariant subalgebra $R^G$ is Cohen-Macaulay-finite. Indeed, we have the following result (Theorem \ref{thm5.1}), which is a noncommutative version of a classical result for commutative rings due to Auslander \cite{A}.\\ \noindent{\bf Theorem.} {\it Let $R$ be a noetherian complete semilocal algebra, and let $G\subseteq\Aut(R)$ be a finite subgroup. Assume that $\ch\kk\nmid|G|$, that $R/R^G$ is a right $\kk G^*$-dense Galois extension, and that $R$ is AS-regular of global dimension 2. Then the following statements hold. {\rm(i)} $R^G$ is AS-Cohen-Macaulay of dimension 2.
{\rm(ii)} A finitely generated right $R^G$-module $M$ is Cohen-Macaulay if and only if $M\in \text{\rm add} (R_{R^G})$, where $\text{\rm add} (R_{R^G})$ is the subcategory of $\Mod R^G$ consisting of all the direct summands of finite direct sums of $R_{R^G}$.}\\ When the algebra $R$ is graded, the above result was obtained by several authors (see Remark \ref{rm5.1}). However, the methods applied in the graded case cannot be applied to the nongraded case directly. Our method depends on the relations between the extension groups of objects in the quotient categories over $R^G$ and over $R\#\kk G$ respectively, which is different from the methods applied in the graded case. Throughout, $\kk$ is a field. All the algebras and modules are over $\kk$. Unadorned $\otimes$ means $\otimes_{\kk}$. We refer to \cite{HVZ2} for basic properties of Hopf dense Galois extensions. \section{Hom-sets of the quotient category}\label{sec1} Let $B$ be a (right) noetherian algebra. We denote by $\Mod B$ the category of right $B$-modules. Let $M$ be a right $B$-module. We say that a submodule $K$ of $M$ is {\it cofinite} if $M/K$ is finite dimensional. For an element $m\in M$, we say that $m$ is a torsion element if $mB$ is finite dimensional. Let $\Gamma_B (M)=\{m\in M|m\text{ is a torsion element}\}$. Then $\Gamma_B(M)$ is a submodule of $M$. Note that $\Gamma_B$ is in fact a functor from $\Mod B$ to itself. The functor $\Gamma_B$ can be represented as $\Gamma_B=\underset{\longrightarrow}\lim\Hom_B(B/K,-)$, where $K$ runs over all the cofinite right ideals of $B$. In the sequel, if there is no risk of confusion, we will omit the subscript in the functor $\Gamma_B$, and simply write $\Gamma$. For a right $B$-module, if $\Gamma(M)=0$, then we say that $M$ is {\it torsion free}. If $\Gamma(M)=M$, then $M$ is called a {\it torsion module}. Let $\Tor B$ be the full subcategory of $\Mod B$ consisting of torsion modules. One sees that $\Tor B$ is a localizing subcategory of $\Mod B$.
Hence we have an abelian quotient category $\QMod B:=\frac{\Mod B}{\Tor B}$. The natural projection functor is denoted by $\pi:\Mod B\to\QMod B$, which has a right adjoint functor $\omega:\QMod B\to \Mod B$. For $N,M\in \Mod B$, we write $\mathcal{N}$ and $\mathcal{M}$ for $\pi(N)$ and $\pi(M)$ respectively in the quotient category. The Hom-sets in $\QMod B$ are defined by $$\Hom_{\QMod B}(\mathcal{N},\mathcal{M})=\underset{\longrightarrow}\lim\Hom_B(N',M/\Gamma(M)),$$ where $N'$ runs over all the cofinite submodules of $N$, and the direct system is induced by the inclusion maps of the cofinite submodules. If $M$ is a torsion free module, then its injective envelope is torsion free as well. The injective envelope of a torsion module is not torsion in general. If $\Tor B$ is closed under taking injective envelopes, then we say that $\Tor B$ is {\it stable}. For example, if $B$ is a noetherian commutative local algebra, then $B$ has a stable torsion class. \begin{lemma} \label{lem0} Let $N$ be a finitely generated right $B$-module, and let $E$ be an injective torsion $B$-module. Then $\underset{\longrightarrow}\lim\Hom_B(N',E)=0$, where $N'$ runs over all the cofinite submodules of $N$. \end{lemma} \begin{proof} For every cofinite submodule $N'$ of $N$, applying the functor $\Hom_B(-,E)$ to the exact sequence $0\to N'\to N\to N/N'\to0$, we obtain the exact sequence $0\to \Hom_B(N/N',E)\to\Hom_B(N,E)\to\Hom_B(N',E)\to0$. Taking the direct limit over all the cofinite submodules $N'\subseteq N$, we obtain the exact sequence $$0\to \underset{\longrightarrow}\lim\Hom_B(N/N',E)\to\Hom_B(N,E)\to\underset{\longrightarrow}\lim\Hom_B(N',E)\to0.$$ For each $B$-module morphism $f:N\to E$, since $N$ is finitely generated, $\im f$ is finite dimensional. Hence $\ker f$ is cofinite. Since $f$ factors through $N/\ker f$, we see that the morphism $\underset{\longrightarrow}\lim\Hom_B(N/N',E)\to\Hom_B(N,E)$ is an epimorphism, and hence $\underset{\longrightarrow}\lim\Hom_B(N',E)=0$.
\end{proof} \begin{lemma}\label{lem1} Let $B$ be a noetherian algebra. Assume that $\Tor B$ is stable. Let $N,M\in \Mod B$ be finitely generated modules. Then $\Ext^i_{\QMod B}(\mathcal{N},\mathcal{M})=\underset{\longrightarrow}\lim\Ext^i_B(N',M)$ for all $i\ge0$, where $N'$ runs over all the cofinite submodules of $N$. \end{lemma} \begin{proof} Take a minimal injective resolution of $M$ as follows \begin{equation}\label{eq1} 0\to M\to I^0\to I^1\to\cdots\to I^n\to\cdots. \end{equation} Since $\Tor B$ is stable, for each $n\ge0$, $I^n=F^n\oplus E^n$ where $F^n$ is a torsion free module and $E^n$ is a torsion module. Note that if $F$ is an injective torsion free module, then $\mathcal{F}=\pi(F)$ is an injective object in $\QMod B$. Applying the projection functor to the exact sequence (\ref{eq1}), we obtain an injective resolution of $\mathcal{M}$ in $\QMod B$: \begin{equation}\label{eq2} 0\to \mathcal{M}\to \mathcal{F}^0\to \mathcal{F}^1\to\cdots\to \mathcal{F}^n\to\cdots. \end{equation} Applying the functor $\Hom_{\QMod B}(\mathcal{N},-)$ to (\ref{eq2}), we have $$0\to\Hom_{\QMod B}(\mathcal{N},\mathcal{F}^0)\to\cdots\to \Hom_{\QMod B}(\mathcal{N},\mathcal{F}^n)\to\cdots,$$ which is equivalent to the following complex \begin{equation}\label{eq3} 0\to\underset{\longrightarrow}{\lim}{\Hom}_{B}(N',F^0)\to\cdots\to \underset{\longrightarrow}{\lim}{\Hom}_{B}(N',F^n)\to\cdots. \end{equation} By Lemma \ref{lem0}, the complex (\ref{eq3}) is equivalent to \begin{equation}\label{eq4} 0\to\underset{\longrightarrow}{\lim}\Hom_{B}(N',F^0\oplus E^0)\to\cdots\to \underset{\longrightarrow}{\lim}{\Hom}_{B}(N',F^n\oplus E^n)\to\cdots. \end{equation} Taking the cohomology of (\ref{eq4}), we obtain the desired result since the direct limit is exact.
\end{proof} \section{Hom-sets of the quotient categories of smash products}\label{sec2} Throughout this section, let $R$ be a noetherian algebra and $H$ a finite dimensional semisimple Hopf algebra acting on $R$ so that $R$ is a left $H$-module algebra. Let $B=R\# H$ be the smash product. We may view $R$ as a left $B$-module by setting the left $B$-action \begin{equation}\label{eq2.1} (a\#h)\cdot r=a(h\cdot r),\ \forall\ a,r\in R, h\in H, \end{equation} where $h\cdot r$ is the left $H$-action on $R$. Similarly, we may view $R$ as a right $B$-module by setting the right $B$-action \begin{equation}\label{eq2.2} r\cdot (a\#h)=(S^{-1}h_{(1)}\cdot r)(h_{(2)}\cdot a),\ \forall\ a,r\in R, h\in H, \end{equation} where $S$ is the antipode of $H$ and $\Delta(h)=h_{(1)}\otimes h_{(2)}$. Let $M$ be a right $B$-module. For every element $x\in M$, we see that $xB$ is finite dimensional if and only if $xR$ is finite dimensional, since $H$ is finite dimensional. Hence, as a right $B$-module, $\Gamma_R(M)=\Gamma_B(M)$. From this observation, we obtain the following fact. \begin{lemma}\label{lemx1} Let $M$ be a right $B$-module. Then as right $R$-modules, $R^i\Gamma_R(M)\cong R^i\Gamma_B(M)$ for all $i\ge0$. \end{lemma} \begin{proof} Let $0\to M\to I^0\to I^1\to \cdots$ be an injective resolution of $M$ in the category of right $B$-modules. By \cite[Proposition 2.6]{HVZ}, each $I^i$ is injective as a right $R$-module. Hence $R^i\Gamma_R(M)$ is the $i$th cohomology of the complex $\Gamma_R(I^\cdot)$. Since $\Gamma_R(I^i)=\Gamma_B(I^i)$ for all $i\ge0$, as complexes of right $R$-modules, $\Gamma_R(I^\cdot)=\Gamma_B(I^\cdot)$. Thus we have $R^i\Gamma_R(M)=R^i\Gamma_B(M)$. \end{proof} Let $N$ be a right $R$-module. We define a right $B$-module $N\# H=N\otimes H$ whose right $B$-module action is given by $(n\otimes h)(a\otimes g)=n(h_{(1)}a)\otimes h_{(2)}g$ for $n\in N$, $a\in R$ and $g,h\in H$. \begin{lemma}\label{lem2} The following statements hold.
{\rm(i)} If $\Tor B$ is stable, then so is $\Tor R$. {\rm(ii)} Let $A$ and $A'$ be two noetherian algebras. If $F:\Mod A\longrightarrow\Mod A'$ is an equivalence of abelian categories, then the restriction of $F$ to $\Tor A$ induces an equivalence $\Tor A\longrightarrow\Tor A'$. \end{lemma} \begin{proof} (i) Let $N$ be a torsion $R$-module. Set $M=N\# H$. Then $M$ is a $B$-torsion module. Let $I$ be the injective envelope of $M$. Since $\Tor B$ is stable, $I$ is a torsion $B$-module. By \cite[Proposition 2.6]{HVZ}, $I$ is injective as a right $R$-module. We view $N$ as an $R$-submodule of $M$ through the inclusion map $N\to M: n\mapsto n\#1$. Then $N$ is a submodule of the torsion injective module $I$. Hence the injective envelope of $N$ must be torsion. (ii) This is classical. \end{proof} \begin{proposition}\label{prop1} Let $H$ be a finite dimensional semisimple and cosemisimple Hopf algebra. Let $R$ be a noetherian left $H$-module algebra. Then $\Tor R$ is stable if and only if $\Tor B$ is stable. \end{proposition} \begin{proof} We only need to prove the ``only if'' part. Note that $B=R\# H$ is a right $H$-comodule algebra. It is a left $H^*$-module algebra. The invariant subalgebra is $B^{H^*}=R$. Since $B$ is a right $H$-Galois extension of $R$, $R$ and $B\#H^*$ are Morita equivalent (cf. \cite[Theorem 1.2]{CFM}). Since $\Tor R$ is stable, $\Tor (B\#H^*)$ is stable by Lemma \ref{lem2}(ii). By Lemma \ref{lem2}(i), $\Tor B$ is stable. \end{proof} Let $N$ and $M$ be right $B$-modules. Note that $\Hom_R(N,M)$ has a right $H$-module action defined as follows: for $f\in \Hom_R(N,M), h\in H$, \begin{equation}\label{eq5} (f\leftharpoonup h)(n)=f(nS(h_{(1)}))h_{(2)},\quad \forall\ n\in N. \end{equation} Under this $H$-action on $\Hom_R(N,M)$, we have the following isomorphism \cite{CFM} \begin{equation}\label{eq6} \Hom_B(N,M)\cong \Hom_R(N,M)^H.
\end{equation} The right $H$-action on $\Hom_R(N,M)$ may be extended to the extension groups by choosing a projective resolution of $N$ (or an injective resolution of $M$) in $\Mod B$. Moreover, we have the following isomorphisms \cite{HVZ} \begin{equation}\label{eq7} \Ext^i_B(N,M)\cong \Ext^i_R(N,M)^H, \ \forall\ i\ge0. \end{equation} \begin{definition}\label{def2.1} We say that the left $H$-module algebra $R$ is (right) {\it admissible} if for any finitely generated right $B$-module $N$, and any cofinite $R$-submodule $K$ of $N$, there is a cofinite $B$-submodule $K'$ of $N$ such that $K'\subseteq K$. \end{definition} There are several natural classes of admissible noetherian $H$-module algebras. \begin{example}\label{ex1} Let $R$ be a noetherian semilocal algebra, that is, $R/J$ is a finite dimensional semisimple algebra where $J$ is the Jacobson radical of $R$. Let $G$ be a finite group which acts on $R$ so that $R$ is a left $G$-module algebra. Let $B=R\#\kk G$. We know that $J$ is stable under the $G$-action. Let $N$ be a finitely generated right $B$-module, and let $K$ be an $R$-submodule such that $\overline{N}:=N/K$ is finite dimensional. Assume $\overline{N}\neq0$. Note that $\overline{N}J\neq \overline{N}$. Since $\overline{N}$ is finite dimensional, there is an integer $k$ such that $\overline{N}J^k=0$. Hence $NJ^k\subseteq K$. For $x\in N$, $r\in J^k$ and $g\in G$, we have $(xr)g=(xg)(g^{-1}(r))\in NJ^k$. Hence $NJ^k$ is a $B$-submodule of $N$. Since $N/(NJ^k)$ is a finitely generated module over $R/J^k$, it is finite dimensional. Hence the $G$-module algebra $R$ is admissible. If $R$ is also complete with respect to the $J$-adic topology and $\ch \kk\nmid |G|$, then $\Tor B$ is stable. Indeed, a right $B$-module $M$ is torsion if and only if $M$ is torsion as a right $R$-module. Since $R$ is complete, by \cite[Theorem 1.1]{J}, $J$ has the Artin-Rees property, which ensures that the injective envelope of a torsion $R$-module is still torsion.
Since $\kk G$ is both semisimple and cosemisimple, $\Tor B$ is stable by Proposition \ref{prop1}. \end{example} \begin{remark} In Definition \ref{def2.1}, if $R$ is a graded algebra and the $H$-action is homogeneous, then we say that the left graded $H$-module algebra $R$ is admissible if the above conditions hold for finitely generated graded modules. \end{remark} \begin{example} \label{ex2} Let $R=R_0\oplus R_1\oplus \cdots$ be a graded noetherian algebra such that $\dim R_i<\infty$ for all $i\ge0$. Let $H$ be a finite dimensional semisimple Hopf algebra which acts on $R$ homogeneously so that $R$ is an $H$-module algebra. Set $B=R\#H$. Let $N$ be a graded finitely generated right $B$-module, and $K$ a graded $R$-submodule of $N$ such that $N/K$ is finite dimensional. Let $\mfm=\oplus_{i\ge1}R_i$. Since $N/K$ is finite dimensional, there is an integer $k$ such that $(N/K)\mfm^k=0$. Hence $N\mfm^k\subseteq K$. Since $\mfm$ is stable under the $H$-action, $N\mfm^k$ is a $B$-submodule of $N$. As $R$ is noetherian, $R/\mfm^k$ is finite dimensional for all $k\ge0$. Hence $N/(N\mfm^k)$ is finite dimensional. It follows that the left $H$-module algebra $R$ is admissible in the graded sense. \end{example} \begin{lemma} \label{lem6} Let $R$ be an admissible $H$-module algebra. Let $N$ be a finitely generated right $B$-module, and $M$ a right $R$-module. Then $$\Hom_{\QMod R}(\mathcal{N},\mathcal{M})=\underset{\longrightarrow}{\lim}\Hom_R(K,M),$$ where $K$ runs over all the cofinite $B$-submodules of $N$. \end{lemma} \begin{proof} By Lemma \ref{lem1}, $\Hom_{\QMod R}(\mathcal{N},\mathcal{M})=\underset{\longrightarrow}{\lim}\Hom_R(K,M)$ where the limit runs over all the cofinite $R$-submodules $K$ of $N$. Since the $H$-module algebra $R$ is assumed to be admissible, the direct system formed by all the cofinite $R$-submodules of $N$ and the direct system formed by all the cofinite $B$-submodules of $N$ are cofinal. 
Hence $\underset{\longrightarrow}{\lim}\Hom_R(K,M)=\underset{\longrightarrow}{\lim}\Hom_R(K',M)$, where the direct limit on the left-hand side runs over all the cofinite $R$-submodules $K$ of $N$, and the one on the right-hand side runs over all the cofinite $B$-submodules $K'$ of $N$. \end{proof} \begin{proposition} \label{prop2} Let $R$ be an admissible $H$-module algebra. Let $N$ and $M$ be finitely generated right $B$-modules. Then there is a natural right $H$-module action on $\Hom_{\QMod R}(\mathcal{N},\mathcal{M})$. Moreover, with this $H$-module structure we have $$\Hom_{\QMod B}(\mathcal{N},\mathcal{M})\cong\Hom_{\QMod R}(\mathcal{N},\mathcal{M})^H,$$ where the isomorphism is functorial in $M$. \end{proposition} \begin{proof} By Lemma \ref{lem6}, $\Hom_{\QMod R}(\mathcal{N},\mathcal{M})=\underset{\longrightarrow}{\lim}\Hom_R(K,M)$, where each $K$ in the direct system is a $B$-submodule of $N$. By (\ref{eq5}), $\Hom_R(K,M)$ is a right $H$-module for each cofinite $B$-submodule $K$. It is easy to see that the direct system is compatible with the right $H$-module actions. Hence $\Hom_{\QMod R}(\mathcal{N},\mathcal{M})$ is a right $H$-module. For each $B$-submodule $K$ of $N$, we have $\Hom_B(K,M)\cong \Hom_R(K,M)^H$ by (\ref{eq6}). Since $H$ is semisimple, the functor $(\ )^H$ commutes with taking direct limits. Hence \begin{eqnarray*} \Hom_{\QMod B}(\mathcal{N},\mathcal{M})&=&\underset{\longrightarrow}{\lim}\Hom_B(K,M)\\ &\cong&\underset{\longrightarrow}{\lim}\Hom_R(K,M)^H\\ &\cong&(\underset{\longrightarrow}{\lim}\Hom_R(K,M))^H\\ &\cong&\Hom_{\QMod R}(\mathcal{N},\mathcal{M})^H. \end{eqnarray*} If $M'$ is another $B$-module and $f:M\to M'$ is a right $B$-module morphism, then it is easy to see that the direct systems and the right $H$-module structures on $\Hom_R(K,M)$ and $\Hom_R(K,M')$ are compatible with $f$. Hence the isomorphism is functorial in $M$.
\end{proof} Next we show that the proposition above can be extended to extension groups, that is, there are natural $H$-module actions on the extension groups $\Ext_{\QMod R}^i(\mathcal{N},\mathcal{M})$ provided that the left $H$-module algebra $R$ is admissible and $\Tor B$ is stable. For the rest of this section, the $H$-module algebra $R$ is admissible and $\Tor B$ is stable. Let $M$ be a finitely generated right $B$-module. Take a minimal injective resolution of $M$ as follows: \begin{equation}\label{eq8} 0\to M\to I^0\to I^1\to\cdots\to I^k\to\cdots. \end{equation} Since $\Tor B$ is stable, the injective envelope of a torsion module is still torsion. Then each injective module in the above sequence has a decomposition $I^i=E^i\oplus T^i$ such that $E^i$ is a torsion free $B$-module and $T^i$ is a torsion $B$-module. Write $I^\cdot$ for the complex obtained from (\ref{eq8}) by dropping $M$ at the beginning. Similar to the discussions in \cite[Section 7, p.~271]{AZ}, we see that there are complexes $E^\cdot$, $T^\cdot$ and a morphism $f:E^\cdot[-1]\to T^\cdot$ such that $I^\cdot=\mathrm{cone}(f)$. Applying the projection functor $\pi:\Mod B\to \QMod B$ to the resolution (\ref{eq8}) and noticing that $E^i$ is torsion free injective for all $i\ge0$, we obtain an injective resolution of $\mathcal{M}$ in $\QMod B$: \begin{equation}\label{eq9} 0\to \mathcal{M}\to \mathcal{E}^0\to \mathcal{E}^1\to\cdots\to\mathcal{E}^i\to\cdots. \end{equation} Note that any right $B$-module is also a right $R$-module. Moreover, a right $B$-module $K$ is torsion free if and only if it is torsion free as an $R$-module. We see that $E^i$ is also torsion free when it is viewed as an $R$-module, and $T^i$ is torsion as an $R$-module for all $i\ge0$. By \cite[Proposition 2.6]{HVZ}, $E^i$ is also injective as a right $R$-module for each $i\ge0$. Hence we have the following observation. \begin{lemma}\label{lem7} Assume that $\Tor B$ is stable.
The sequence (\ref{eq9}) is an injective resolution of $\mathcal{M}$ in $\QMod B$. If $\mathcal{M}$ and $\mathcal{E}^i$ $(i\ge0)$ are viewed as objects in $\QMod R$ (by an abuse of notation), then (\ref{eq9}) is also an injective resolution of $\mathcal{M}$ in $\QMod R$. \end{lemma} \begin{theorem} \label{thm0} Assume that $R$ is an admissible $H$-module algebra and $\Tor B$ is stable. Let $N$ and $M$ be finitely generated right $B$-modules. Then for each $i\ge0$, $\Ext^i_{\QMod R}(\mathcal{N},\mathcal{M})$ is a right $H$-module; moreover, we have the following isomorphisms $$\Ext^i_{\QMod B}(\mathcal{N},\mathcal{M})\cong\Ext^i_{\QMod R}(\mathcal{N},\mathcal{M})^H,\ \forall\ i\ge0.$$ \end{theorem} \begin{proof} By Lemma \ref{lem7}, the sequence (\ref{eq9}) is an injective resolution of $\mathcal{M}$ when viewed as an object in $\QMod R$. Applying $\Hom_{\QMod R}(\mathcal{N},-)$ to (\ref{eq9}), we obtain {\tiny \begin{equation*} 0\to \Hom_{\QMod R}(\mathcal{N},\mathcal{E}^0)\to \Hom_{\QMod R}(\mathcal{N},\mathcal{E}^1)\to\cdots\to\Hom_{\QMod R}(\mathcal{N},\mathcal{E}^i)\to\cdots. \end{equation*}} By Proposition \ref{prop2}, each component of the complex above is a right $H$-module and the differential is compatible with the right $H$-module structures. Taking the cohomology of the complex above, we obtain that $\Ext^i_{\QMod R}(\mathcal{N},\mathcal{M})$ is a right $H$-module for each $i\ge0$. Now applying $\Hom_{\QMod B}(\mathcal{N},-)$ to the sequence (\ref{eq9}), we obtain the following complex {\tiny \begin{equation}\label{eq10} 0\to \Hom_{\QMod B}(\mathcal{N},\mathcal{E}^0)\to \Hom_{\QMod B}(\mathcal{N},\mathcal{E}^1)\to\cdots\to\Hom_{\QMod B}(\mathcal{N},\mathcal{E}^i)\to\cdots.
\end{equation}} By Proposition \ref{prop2}, the complex (\ref{eq10}) is isomorphic to the following complex {\tiny \begin{equation}\label{eq11} 0\to \Hom_{\QMod R}(\mathcal{N},\mathcal{E}^0)^H\to \Hom_{\QMod R}(\mathcal{N},\mathcal{E}^1)^H\to\cdots\to\Hom_{\QMod R}(\mathcal{N},\mathcal{E}^i)^H\to\cdots. \end{equation}} By Lemma \ref{lem7}, (\ref{eq9}) is also an injective resolution of $\mathcal{M}$ in $\QMod R$. Noticing that the functor $(\ )^H$ commutes with taking the cohomology, we obtain that the $i$th cohomology of the complex (\ref{eq11}) is equal to $\Ext_{\QMod R}^i(\mathcal{N},\mathcal{M})^H$, which is isomorphic to $\Ext^i_{\QMod B}(\mathcal{N},\mathcal{M})$, the $i$th cohomology of (\ref{eq10}), since (\ref{eq9}) is also an injective resolution of $\mathcal{M}$ in $\QMod B$. Then we obtain the desired isomorphisms. \end{proof} \section{Invariant subalgebras of Hopf dense Galois extensions}\label{sec3} Throughout this section, $H$ is a finite dimensional semisimple Hopf algebra and $R$ is a noetherian $H$-module algebra. As before, set $B:=R\#H$ and $A=R^H$. \begin{setup}\label{setup} We assume that the $H$-module algebra $R$ satisfies the following conditions: (i) the $H$-module algebra $R$ is admissible and $\Tor B$ is stable (e.g. $R$ is a complete semilocal algebra, cf. Example \ref{ex1}); (ii) $R/A$ is a right $H^*$-dense Galois extension. \end{setup} As noted at the beginning of Section \ref{sec2}, $R$ is a $B$-$A$-bimodule. Then we have a functor $$-\otimes_BR:\Mod B\longrightarrow \Mod A,$$ which induces a functor between the corresponding quotient categories: $$-\otimes_{\mathcal{B}}\mathcal{R}:\QMod B\longrightarrow \QMod A.$$ We recall a result obtained in \cite{HVZ}. \begin{lemma} \label{lem3.1} \cite[Theorem 2.4]{HVZ} Assume that Setup \ref{setup}(ii) holds. Then the functor $$-\otimes_{\mathcal{B}}\mathcal{R}:\QMod B\longrightarrow \QMod A$$ is an equivalence of abelian categories. \end{lemma} Let $S$ be a noetherian algebra.
Recall from Section \ref{sec1} that we have a torsion functor $\Gamma_S:\Mod S\to\Mod S$. The $i$-th right derived functor of $\Gamma_S$ is denoted by $R^i\Gamma_S$. If there is no risk of confusion, we will drop the subscript $S$. Note that we have $\Gamma=\underset{\longrightarrow}\lim\Hom_S(S/K,-)$, and $R^i\Gamma=\underset{\longrightarrow}\lim\Ext^i_S(S/K,-)$, where $K$ runs over all the cofinite $S$-submodules of $S$, and the direct system is induced by the inclusion maps of cofinite submodules. \begin{definition}\label{def3.1} We say that a noetherian algebra $S$ is (right) {\it AS-Cohen-Macaulay} of dimension $d$ if $R^i\Gamma(S)=0$ for all $i\neq d$. Assume $S$ is AS-Cohen-Macaulay of dimension $d$. A finitely generated right $S$-module $M$ is said to be a {\it Cohen-Macaulay} module if $R^i\Gamma(M)=0$ for all $i\neq d$. \end{definition} The definition of an AS-Cohen-Macaulay module is a generalization of the commutative version (cf. \cite{CH,Z}). The letters ``AS'' stand for Artin-Schelter, because the property in the definition is also a generalization of that of Artin-Schelter Gorenstein algebras (cf. \cite[Remark 8.5]{VdB}). \begin{lemma} \label{lem3.2} Let $S$ be a noetherian algebra such that $\Tor S$ is stable. For a finitely generated right $S$-module $M$, we have {\rm(i)} $\Ext_{\QMod S}^i(\mathcal{S},\mathcal{M})\cong R^{i+1}\Gamma(M)$ for $i>0$, and {\rm(ii)} an exact sequence $0\to \Gamma(M)\to M\overset{\phi}\to \Hom_{\QMod S}(\mathcal{S},\mathcal{M})\to R^1\Gamma(M)\to0$, where $\phi$ is the natural map $M=\Hom_S(S,M)\overset{\pi}\to\Hom_{\QMod S}(\mathcal{S},\mathcal{M})$ induced by the projection functor $\pi:\Mod S\to\QMod S$.
\end{lemma} \begin{proof} For any cofinite right $S$-submodule $K$ of $S$, we have an exact sequence $0\to K\to S\to S/K\to0$, which implies isomorphisms $$\Ext^i_{S}(K,M)\cong\Ext^{i+1}_S(S/K,M),\ \forall\ i\ge1,$$ and an exact sequence $$0\to \Hom_S(S/K,M)\to M\overset{\phi}\to \Hom_{S}(K,M)\to \Ext^1_S(S/K,M)\to0,$$ where $\phi$ is the natural map induced by the inclusion map $K\hookrightarrow S$. Taking direct limits over all the cofinite submodules of $S$, by Lemma \ref{lem1} and the equalities $R^i\Gamma(M)=\underset{\longrightarrow}\lim\Ext^i_S(S/K,M)$ for all $i\ge0$, we obtain the desired results. \end{proof} \begin{theorem} \label{thm1} Assume that $R$ and $H$ satisfy the conditions in Setup \ref{setup}. If $R$ is AS-Cohen-Macaulay of dimension $d$, then so is $R^H$. \end{theorem} \begin{proof} Set $A:=R^H$. By Lemma \ref{lem3.1}, the functor $-\otimes_{\mathcal{B}}\mathcal{R}:\QMod B\longrightarrow \QMod A$ is an equivalence of abelian categories. Under this equivalence, the object $\mathcal{R}\in\QMod B$ corresponds to $\mathcal{A}\in \QMod A$. Then we have \begin{equation}\label{eq3.1} \Ext^i_{\QMod A}(\mathcal{A},\mathcal{A})\cong \Ext^i_{\QMod B}(\mathcal{R},\mathcal{R}),\ \forall\ i\ge0. \end{equation} Combining (\ref{eq3.1}) with Theorem \ref{thm0}, we obtain \begin{equation}\label{eq3.2} \Ext^i_{\QMod A}(\mathcal{A},\mathcal{A})\cong \Ext^i_{\QMod R}(\mathcal{R},\mathcal{R})^H,\ \forall\ i\ge0. \end{equation} Assume $d\ge2$. Then $\Ext^i_{\QMod R}(\mathcal{R},\mathcal{R})=0$ for all $i\neq d-1$ and $i\ge1$ by Lemma \ref{lem3.2}(i), and hence $\Ext^i_{\QMod A}(\mathcal{A},\mathcal{A})=0$ for all $i\neq d-1$ and $i\ge1$. By Lemma \ref{lem3.2}(i) again, $R^{i+1}\Gamma_A(A)=0$ for $i\neq d-1$ and $i\ge1$. Since $R^i\Gamma_R(R)=0$ for $i\neq d$, Lemma \ref{lem3.2}(ii) implies that the natural map $R\to \Hom_{\QMod R}(\mathcal{R},\mathcal{R})$ is an isomorphism.
Applying the functor $(\ )^H$ to this isomorphism, we see that the natural map $A\to \Hom_{\QMod A}(\mathcal{A},\mathcal{A})$ is an isomorphism. Hence $R^i\Gamma_A(A)=0$ for $i=0,1$. Now assume $d=1$. Similar to the above discussions, we have $R^i\Gamma_A(A)=0$ for $i\ge2$. Since $\Gamma_R(R)=0$, Lemma \ref{lem3.2}(ii) implies that the natural map $R\overset{\phi}\to \Hom_{\QMod R}(\mathcal{R},\mathcal{R})$ is injective. Applying the functor $(\ )^H$ to this map, we obtain that the natural map $A\overset{\phi}\to \Hom_{\QMod A}(\mathcal{A},\mathcal{A})$ is injective. Hence $\Gamma_A(A)=0$. The case $d=0$ can be proved similarly. \end{proof} \section{Cohen-Macaulay property of $R$ as an $R^H$-module}\label{sec4} Keep the same notation as in Section \ref{sec3}. We recall some properties of modules over a Hopf algebra. Let $X$ be a right $H$-module. The tensor product $X\otimes H$ has two right $H$-module structures. The first one is the diagonal action of $H$, that is, $(x\otimes h)\cdot g=x\cdot g_{(1)}\otimes hg_{(2)}$ for $x\in X$, $g,h\in H$. To avoid possible confusion, the tensor product $X\otimes H$ with the diagonal action of $H$ will be denoted by $X\hat\otimes H$. The other $H$-action is the multiplication of elements of $H$ on the right of $X\otimes H$, that is, $(x\otimes h)\cdot g=x\otimes hg$ for $x\in X$ and $g,h\in H$. This right $H$-module structure will be denoted by $X\otimes H_\bullet$. The following lemma is well known. \begin{lemma} \label{lem3.3} Let $X$ be a right $H$-module. Define maps $$\varphi:X\hat\otimes H\longrightarrow X\otimes H_\bullet,\ x\otimes h\mapsto x S(h_{(1)})\otimes h_{(2)},$$ and $$\psi:X\otimes H_\bullet\longrightarrow X\hat\otimes H,\ x\otimes h\mapsto xh_{(1)}\otimes h_{(2)}.$$ Then $\varphi$ and $\psi$ are $H$-module isomorphisms which are inverse to each other. \end{lemma} Let $X$ be a right $B$-module.
Recall from the beginning of Section \ref{sec2} that $R$ is a right $B$-module by the right $B$-action (\ref{eq2.2}). Then $\Hom_R(X,R)$ is a right $H$-module. Note that $B$ itself is a right $B$-module. Hence $\Hom_R(X,B)$ is a right $H$-module. \begin{lemma}\label{lem3.4} Let $X$ be a right $B$-module. Then we have an isomorphism of right $H$-modules $$\Hom_R(X,R)\hat\otimes H\cong\Hom_R(X,R\#H).$$ \end{lemma} \begin{proof} Firstly, note that the smash product may be written as $H\#R$ with the multiplication defined by $$(h\#r)(g\#r')=hg_{(2)}\#(S^{-1}g_{(1)}r)r'.$$ There is an isomorphism of algebras: $$\xi:H\#R\to R\#H,\ h\#r\mapsto h_{(1)}r\#h_{(2)}.$$ Define a linear map \begin{equation*} \zeta:\Hom_R(X,R)\hat\otimes H\longrightarrow \Hom(X,R\#H) \end{equation*} by $\zeta(f\otimes h)(x)=h_{(1)}f(x)\# h_{(2)}$ for $f\in\Hom_R(X,R)$, $x\in X$ and $h\in H$. We first show that $\zeta(f\otimes h)$ is a right $R$-module morphism for every $f\in \Hom_R(X,R)$ and $h\in H$. For $r\in R$ and $x\in X$, we have \begin{eqnarray*} \zeta(f\otimes h)(xr)&=&h_{(1)}f(xr)\# h_{(2)}\\ &=&h_{(1)}(f(x)r)\# h_{(2)}\\ &=&(h_{(1)}f(x))(h_{(2)}r)\# h_{(3)}\\ &=&(h_{(1)}f(x)\# h_{(2)})r\\ &=&\zeta(f\otimes h)(x)r. \end{eqnarray*} Hence the image of $\zeta$ is indeed contained in $\Hom_R(X,R\#H)$. Therefore we obtain a map (still denoted by $\zeta$): \begin{equation*} \zeta:\Hom_R(X,R)\hat\otimes H\longrightarrow \Hom_R(X,R\#H). \end{equation*} We next check that $\zeta$ is a right $H$-module morphism. For $f\in \Hom_R(X,R)$, $h,g\in H$ and $x\in X$, we have \begin{eqnarray*} \left(\zeta(f\otimes h)\leftharpoonup g\right)(x)&=&\left(\zeta(f\otimes h)(xS(g_{(1)}))\right)g_{(2)}\\ &=&\left(h_{(1)}f(xS(g_{(1)}))\# h_{(2)}\right)g_{(2)}\\ &=&h_{(1)}f(xS(g_{(1)}))\# h_{(2)}g_{(2)}.
\end{eqnarray*} On the other hand, we have \begin{eqnarray*} \zeta\left((f\otimes h)\cdot g\right)(x)&=&\zeta(f\leftharpoonup g_{(1)}\otimes hg_{(2)})(x)\\ &=&h_{(1)}g_{(2)}\left((f\leftharpoonup g_{(1)})(x)\right)\# h_{(2)}g_{(3)}\\ &=&h_{(1)}g_{(3)}S^{-1}(g_{(2)})f(x S(g_{(1)}))\# h_{(2)}g_{(4)}\\ &=&h_{(1)}f(x S(g_{(1)}))\# h_{(2)}g_{(2)}, \end{eqnarray*} where the third equality follows from the definitions of the right $B$-module action (\ref{eq2.2}) on $R$ and the right $H$-action on $\Hom_R(X,R)$. Therefore, $\zeta$ is a right $H$-module morphism. Since $H\#R$ is a finitely generated free $R$-module, it follows that $$\theta:\Hom_R(X,R)\otimes H\longrightarrow\Hom_R(X,H\#R),\ f\otimes h\mapsto [x\mapsto h\otimes f(x)]$$ is a linear isomorphism. Thus $\zeta$, which equals $\Hom_R(X,\xi)\circ\theta$, is an isomorphism since $\xi$ is an isomorphism. \end{proof} \begin{lemma} \label{lem3.5} Let $X$ be a right $B$-module. For each $i\ge0$, we have an $H$-module isomorphism $$\Ext^i_R(X,R\#H)\cong \Ext_R^i(X,R)\hat\otimes H.$$ Moreover, the isomorphism is functorial in $X$. \end{lemma} \begin{proof} Take a projective resolution of the right $B$-module $X$ as follows: \begin{equation}\label{eq3.3} \cdots\longrightarrow P^i\longrightarrow\cdots\longrightarrow P^1\longrightarrow P^0\longrightarrow X\longrightarrow0. \end{equation} By \cite[Proposition 2.5]{HVZ}, each $P^i$ is also projective as a right $R$-module. Applying the functor $\Hom_R(-,B)$ to (\ref{eq3.3}), we obtain the following complex \begin{equation}\label{eq3.4} 0\longrightarrow \Hom_R(P^0,B)\longrightarrow\Hom_R(P^1,B)\longrightarrow\cdots\longrightarrow \Hom_R(P^i,B)\longrightarrow \cdots. \end{equation} By Lemma \ref{lem3.4}, the complex (\ref{eq3.4}) can be written in the following way {\small\begin{equation}\label{eq3.5} 0\longrightarrow \Hom_R(P^0,R)\hat\otimes H\longrightarrow\Hom_R(P^1,R)\hat\otimes H\longrightarrow\cdots\longrightarrow \Hom_R(P^i,R)\hat\otimes H\longrightarrow \cdots.
\end{equation}} Comparing the cohomology of (\ref{eq3.4}) and (\ref{eq3.5}), we obtain the desired isomorphisms. It follows from general homological algebra that the isomorphism is functorial in $X$. \end{proof} \begin{proposition}\label{prop4.1} Let $N$ be a finitely generated right $B$-module. Assume that the $H$-module algebra $R$ is admissible and $\Tor B$ is stable. For each $i\ge0$, we have $$\Ext_{\QMod B}^i(\mathcal{N},\mathcal{B})\cong \Ext^i_{\QMod R}(\mathcal{N},\mathcal{R}).$$ \end{proposition} \begin{proof} By Lemma \ref{lem1} and the assumption that the $H$-module algebra $R$ is admissible, $\Ext_{\QMod R}^i(\mathcal{N},\mathcal{B})\cong \underset{\longrightarrow}\lim\Ext^i_R(N',B)$, where $N'$ runs over all the cofinite $B$-submodules of $N$. By Lemma \ref{lem3.5}, $\Ext^i_R(N',B)\cong \Ext^i_R(N',R)\hat\otimes H$ for all cofinite $B$-submodules $N'$ of $N$. Taking the direct limits over all the cofinite $B$-submodules of $N$, we obtain \begin{equation}\label{eq4.1} \Ext_{\QMod R}^i(\mathcal{N},\mathcal{B})\cong \Ext_{\QMod R}^i(\mathcal{N},\mathcal{R})\hat\otimes H. \end{equation} Combining Theorem \ref{thm0} and Lemma \ref{lem3.3}, we have \begin{equation}\label{eq4.2} \Ext_{\QMod B}^i(\mathcal{N},\mathcal{B})\cong \left(\Ext_{\QMod R}^i(\mathcal{N},\mathcal{R})\otimes H_\bullet\right)^H. \end{equation} The right-hand side of (\ref{eq4.2}) is isomorphic to $\Ext_{\QMod R}^i(\mathcal{N},\mathcal{R})$ as a vector space. \end{proof} Now we arrive at the main result of this section. \begin{theorem} \label{thm4.1} Assume that $R$ and $H$ satisfy the conditions in Setup \ref{setup}. If $R$ is AS-Cohen-Macaulay of dimension $d$, then $R$, viewed as a right $R^H$-module, is Cohen-Macaulay (cf. Definition \ref{def3.1}). \end{theorem} \begin{proof} Set $A:=R^H$. Since $R/A$ is a right $H^*$-dense Galois extension, the functor $-\otimes_{\mathcal{B}}\mathcal{R}:\QMod B\to \QMod A$ is an equivalence of abelian categories (cf. Lemma \ref{lem3.1}).
Note that $R$ may be viewed both as a right $A$-module and as a right $B$-module. Under the equivalence $-\otimes_{\mathcal{B}}\mathcal{R}$, $\mathcal{R}$ viewed as an object in $\QMod A$ corresponds to $\mathcal{B}\in\QMod B$, and $\mathcal{A}\in \QMod A$ corresponds to $\mathcal{R}$ viewed as an object in $\QMod B$. Hence for $i\ge0$, we have \begin{equation*} \Ext_{\QMod B}^i(\mathcal{R},\mathcal{B})\cong \Ext_{\QMod A}^i(\mathcal{A},\mathcal{R}). \end{equation*} By Proposition \ref{prop4.1}, we obtain \begin{equation}\label{eq4.3} \Ext_{\QMod R}^i(\mathcal{R},\mathcal{R})\cong \Ext_{\QMod A}^i(\mathcal{A},\mathcal{R}). \end{equation} The rest of the proof proceeds almost exactly as that of Theorem \ref{thm1}, according to the value of $d$, so we omit the details. \end{proof} \begin{remark} Under the assumptions in Theorem \ref{thm4.1}, the isomorphism (\ref{eq4.3}) implies that \begin{equation}\label{eq4.4} R^i\Gamma_{R^H}(R)\cong R^i\Gamma_R(R),\ \text{for all } i\ge0. \end{equation} If the algebra $R$ is a noetherian complete semilocal algebra and some further conditions are satisfied, then (\ref{eq4.4}) follows from \cite[Theorem 2.9]{WZ}. \end{remark} \section{Finite group actions on noetherian complete semilocal algebras} \label{sec5} Recall that a noetherian algebra $\Lambda$ is {\it semilocal} if $\Lambda/J(\Lambda)$ is a finite dimensional semisimple algebra, where $J(\Lambda)$ is the Jacobson radical of $\Lambda$. We say that $\Lambda$ is {\it complete} if $\Lambda$ is complete with respect to the $J(\Lambda)$-adic topology, or equivalently, $\Lambda=\underset{\longleftarrow}\lim \Lambda/J(\Lambda)^n$. Recall from \cite{WZ2} that a noetherian semilocal algebra $\Lambda$ is called a right {\it AS-Gorenstein} algebra if $\Lambda_\Lambda$ has finite injective dimension $d$, $\Ext^i_\Lambda(\Lambda/J(\Lambda),\Lambda)=0$ for $i\neq d$, and $\Ext^d_\Lambda(\Lambda/J(\Lambda),\Lambda)\cong \Lambda/J(\Lambda)$ as a left $\Lambda$-module.
Left AS-Gorenstein algebras are defined similarly. If $\Lambda$ is both left and right AS-Gorenstein, then we simply say that $\Lambda$ is {\it AS-Gorenstein}. If, furthermore, $\Lambda$ has finite global dimension, then we say that $\Lambda$ is {\it AS-regular}. We remark that our definition of AS-Gorenstein algebras is a little stronger than that in \cite{WZ2}. \begin{lemma} \label{lem5.1} Let $\Lambda$ be a noetherian semilocal algebra. Then $$\Gamma_\Lambda\cong \underset{n\to\infty}\lim\Hom_\Lambda(\Lambda/J(\Lambda)^n,-).$$ \end{lemma} \begin{proof} Recall from Section \ref{sec1} that $\Gamma_\Lambda=\underset{\longrightarrow}\lim\Hom_\Lambda(\Lambda/K,-)$, where $K$ runs over all the cofinite right ideals of $\Lambda$. For any cofinite right ideal $K$ of $\Lambda$, there is an integer $n$ such that $(\Lambda/K)J(\Lambda)^n=0$. Then $J(\Lambda)^n\subseteq K$. Hence the direct system defined by all the cofinite right ideals of $\Lambda$ and the direct system defined by $\{J(\Lambda)^n\mid n\ge1\}$ are cofinal. Therefore, $\Gamma_\Lambda=\underset{\longrightarrow}\lim\Hom_\Lambda(\Lambda/K,-)\cong \underset{n\to\infty}\lim\Hom_\Lambda(\Lambda/J(\Lambda)^n,-)$. \end{proof} In the rest of this section, we assume that $R$ is a noetherian complete semilocal algebra with Jacobson radical $J$. Let $G\subseteq \Aut(R)$ be a finite subgroup. Then $R$ becomes a left $G$-module algebra. As before, set $B=R\#\kk G$ and $A=R^G$. Example \ref{ex1} shows that the $G$-module algebra $R$ is admissible and $\Tor B$ is stable. Note that $J$ is stable under the $G$-action. Let $\mfm=J\# \kk G$. Then $\mfm$ is a cofinite ideal of $B$, and $\mfm^n=J^n\#\kk G$ for all $n\ge1$. In general, $\mfm$ is not the Jacobson radical of $B$. Moreover, for a general semisimple Hopf algebra $H$, an $H$-action on $R$ does not imply that $H$ acts stably on the Jacobson radical $J$.
Some discussions on the stability of the $H$-action on the Jacobson radical, or on the Jacobson radical of the smash product, may be found in \cite{LMS}. Similar to the proof of Lemma \ref{lem5.1}, we also have \begin{equation}\label{eqx5.1} \Gamma_B\cong \underset{n\to\infty}\lim\Hom_B(B/\mfm^n,-). \end{equation} Recall that the {\it depth} of a right module $M$ over a noetherian algebra $\Lambda$ is defined as follows: $$\depth_\Lambda(M)=\min\{i\mid R^i\Gamma_\Lambda(M)\neq 0\}.$$ We have the following Auslander-Buchsbaum formula, which is a consequence of \cite[Theorem 0.1(1)]{WZ}. We remark that our definition of the depth of a module is equivalent to that in \cite{WZ} when the algebra $\Lambda$ is a noetherian complete semilocal algebra (see also Lemma \ref{lem5.3} below). \begin{lemma}\label{lem5.2} Let $R$ be a noetherian complete semilocal algebra which is also AS-Gorenstein of injective dimension $d$. For a finitely generated right $B$-module $M$ with finite projective dimension, we have $\pd_B(M)+\depth_B(M)=d$. \end{lemma} \begin{proof} Note that $M$ is also a finitely generated right $R$-module. By Lemma \ref{lemx1}, $\depth_B(M)=\depth_R(M)$. By \cite[Theorem 0.1(1)]{WZ}, we have \begin{equation}\label{eqx5.2} \pd_R(M)=d-\depth_R(M)=d-\depth_B(M). \end{equation} Since a projective right $B$-module is projective as a right $R$-module, $\pd_R(M)\leq \pd_B(M)$. On the other hand, set $p=\pd_R(M)$. Let $\cdots \to P^i\overset{\delta^i}\to\cdots\overset{\delta^1}\to P^0\to M\to 0$ be a projective resolution of the right $B$-module $M$. Then the $p$th syzygy $\ker\delta^{p-1}$ of the resolution is projective as an $R$-module since $\pd_R(M)=p$. By \cite[Proposition 2.5]{HVZ}, $\ker\delta^{p-1}$ is projective as a right $B$-module, and hence $\pd_B(M)\leq p$. Therefore $\pd_R(M)=\pd_B(M)$. By the equalities (\ref{eqx5.2}), $\pd_B(M)+\depth_B(M)=d$.
\end{proof} It is not hard to see that if a noetherian complete semilocal algebra is AS-Gorenstein with injective dimension $d$, then it is AS-Cohen-Macaulay of dimension $d$ in the sense of Definition \ref{def3.1} (cf. \cite{WZ}). \begin{theorem}\label{thm5.1} Let $R$ be a noetherian complete semilocal algebra, and let $G\subseteq\Aut(R)$ be a finite subgroup. Assume that $R/R^G$ is a right $\kk G^*$-dense Galois extension and that $R$ is AS-regular of global dimension 2. Set $A=R^G$. Then the following statements hold. {\rm(i)} $A$ is AS-Cohen-Macaulay of dimension 2. {\rm(ii)} A finitely generated right $A$-module $M$ is Cohen-Macaulay if and only if $M\in \text{\rm add} (R_A)$, where $\text{\rm add} (R_A)$ is the subcategory of $\Mod A$ consisting of all the direct summands of finite direct sums of $R_A$. \end{theorem} \begin{proof} Statement (i) is a special case of Theorem \ref{thm1}. (ii) By Theorem \ref{thm4.1}, $R_A$ is a Cohen-Macaulay module. Hence any module in $\text{add}(R_A)$ is Cohen-Macaulay. Conversely, note that the functor $-\otimes_{\mathcal{B}}\mathcal{R}:\QMod B\to \QMod A$ in Lemma \ref{lem3.1} is induced by the functor $-\otimes_B R:\Mod B\to \Mod A$ (cf. \cite[Sections 2 and 3]{HVZ2}), that is, we have the following commutative diagram \begin{equation}\label{eq5.1} \xymatrix{ \Mod B \ar[d]_{\pi} \ar[r]^{-\otimes_BR} & \Mod A \ar[d]_{\pi} \\ \QMod B \ar[r]^{-\otimes_{\mathcal{B}}\mathcal{R}} & \QMod A. } \end{equation} Let $F=-\otimes_BR$ and $\mathcal{F}=-\otimes_{\mathcal{B}}\mathcal{R}$. The functor $F$ has a right adjoint functor $F':=\Hom_{A^\circ}(R,-)$ (cf. \cite[Section 2]{HVZ2}), where $\Hom_{A^\circ}(-,-)$ is the Hom-functor in the category of left $A$-modules. Moreover, $FF'\cong \text{id}_{\Mod A}$. Hence $F'$ is fully faithful and $\Hom_{B}(F'X,F'Y)\overset{F}\longrightarrow\Hom_{A}(X,Y)$ is an isomorphism for any $X,Y\in\Mod A$. Let $N_A$ be a nonzero finitely generated Cohen-Macaulay module.
Then we obtain that the map $\Hom_B(F'(R),F'(N))\overset{F}\longrightarrow\Hom_A(R,N)$ is an isomorphism. Since $R/A$ is a right $\kk G^*$-dense Galois extension and $R$ is AS-regular of global dimension 2, the natural map $R\#\kk G\to \Hom_A(R,R),\ r\#h\mapsto [r'\mapsto r(h\cdot r')]$ is an isomorphism of algebras by \cite[Theorem 3.10]{HVZ2}. Hence, as a right $B$-module, $F'(R)=\Hom_A(R,R)$ is isomorphic to $B$. Set $M=F'(N)=N\otimes _AR$. From the above commutative diagram (\ref{eq5.1}), we obtain the following commutative diagram \begin{equation}\label{eq5.2} \xymatrix{ \Hom_B(B,M) \ar[d]_{\pi} \ar[r]^{F} & \Hom_A(R,N) \ar[d]_{\pi} \\ \Hom_{\QMod B}(\mathcal{B},\mathcal{M}) \ar[r]^{\mathcal{F}} & \Hom_{\QMod A}(\mathcal{R},\mathcal{N}). } \end{equation} The top map in the diagram (\ref{eq5.2}) is an isomorphism. By Lemma \ref{lem3.1}, the bottom map is also an isomorphism. Consider the exact sequence $$0\to K\to R\to R/K\to0,$$ where $K$ is a cofinite submodule of $R$. Applying $\Hom_A(-,N)$ to the above sequence, we obtain the following exact sequence \begin{equation}\label{eq5.3} 0\to \Hom_A(R/K,N)\to\Hom_A(R,N)\to\Hom_A(K,N)\to \Ext^1_A(R/K,N). \end{equation} Since $N$ is Cohen-Macaulay and $R/K$ is finite dimensional, $\Hom_A(R/K,N)=0$ and $\Ext^1_A(R/K,N)=0$ by Lemma \ref{lem5.3} below. Hence (\ref{eq5.3}) implies that the natural map $$\Hom_A(R,N)\to\Hom_A(K,N)$$ is an isomorphism for all cofinite submodules $K$ of $R$. Taking the direct limits on both sides of the isomorphism above over all the cofinite submodules $K$ of $R$, we obtain that the projection map \begin{equation*} \Hom_A(R,N)\overset{\pi}\to\Hom_{\QMod A}(\mathcal{R},\mathcal{N}) \end{equation*} is an isomorphism. In summary, the top map, the bottom map and the right vertical map in the diagram (\ref{eq5.2}) are all isomorphisms. Hence the left vertical map in the diagram (\ref{eq5.2}) is also an isomorphism.
By Lemma \ref{lem5.1}, the exact sequence $0\to \mathfrak{m}^n\to B\to B/\mathfrak{m}^n\to0$ yields the exact sequence $$0\to \Gamma_B(M)\to \Hom_B(B,M)\overset{\pi}\to \Hom_{\QMod B}(\mathcal{B},\mathcal{M})\to R^1\Gamma_B(M)\to 0.$$ Since the projection map $\pi$ is an isomorphism, $\Gamma_B(M)=R^1\Gamma_B(M)=0$. Since $R$ is of global dimension 2, the global dimension of $B$ is 2 as well (cf. \cite{L}). By Lemma \ref{lem5.2}, $M$ is a projective $B$-module. Hence $M$ is a direct summand of a free module $B^{(n)}$. Therefore $N=F(M)$ is a direct summand of $F(B)^{(n)}=(R_A)^{(n)}$, that is, $N\in \text{add} (R_A)$. \end{proof} \begin{lemma} \label{lem5.3} Let $A$ be a noetherian algebra, and let $M$ be a finitely generated right $A$-module. If $R^i\Gamma_A(M)=0$ for all $i<d$, then $\Ext^i_A(T,M)=0$ for all $i<d$ and all finite dimensional right $A$-modules $T$. \end{lemma} \begin{proof} Take a minimal injective resolution of $M$ as follows $$0\to M\to I^0\overset{\delta^0}\to I^1\overset{\delta^1}\to\cdots\overset{\delta^{i-1}}\to I^i\overset{\delta^i}\to\cdots.$$ We claim that $I^i$ is torsion free for all $i<d$. Since $\Gamma_A(M)=0$, $M$ is torsion free. Hence $I^0$ is torsion free. Now suppose, to the contrary, that there is some $i<d$ such that $I^i$ is not torsion free, and let $k<d$ be the smallest integer such that $\Gamma_A(I^k)\neq0$. Since the injective resolution is minimal, $\ker \delta^k$ is essential in $I^k$. Hence $\ker \delta^k\cap\Gamma_A(I^k)\neq0$. Then the kernel of the restriction $\delta^k:\Gamma_A(I^k)\to\Gamma_A(I^{k+1})$ is nontrivial; since $\Gamma_A(I^{k-1})=0$ by the minimality of $k$, this yields $R^k\Gamma_A(M)\neq0$, a contradiction. Hence $I^i$ is torsion free for each $i<d$. Therefore, for any finite dimensional module $T$, $\Hom_A(T,I^i)=0$ for all $i<d$, which implies that $\Ext^i_A(T,M)=0$ for all $i<d$. \end{proof} \begin{remark}\label{rm5.1} The above theorem may be viewed as a nongraded version of \cite[Section 3]{Jo}, \cite[Corollary 1.3]{Ue1} (see also \cite[Proposition 5.2]{Ue2}) and \cite[Theorem 4.4]{CKWZ}.
Our method is also valid for noetherian graded algebras (cf. Example \ref{ex2}); however, it is quite different from those in \cite{Jo,Ue1,CKWZ}. \end{remark} \vspace{5mm} \subsection*{Acknowledgments} J.-W. He is supported by NSFC (No. 11571239, 11401001) and Y. Zhang by an FWO-grant. \vspace{5mm}
WILMINGTON, NC (WWAY) -- Thousands of pounds of medications are off the street for good. The final step of Operation Medicine Drop took place today at the WASTEC facility in New Hanover County. The SBI dropped off the medication collected statewide as part of a program to dispose of drugs properly. It helps get the drugs out of the reach of children and teens. Operation Medicine Drop will also keep the medication out of our water system by keeping the pills from being flushed down the toilet. "It negatively impacts the aquatic environment, our streams and lakes," New Hanover Co. Environmental Manager Paul Marlow said. "The ultimate destruction of these medications is the preferred method." New Hanover County's WASTEC facility was one of two locations in the state used to dispose of the drugs. More than 2.5 tons were incinerated.
The company that operates the Nova Star ferry saw a 26 percent increase in ridership this June over the same month last year. Nova Star Cruises reported on its website that it carried 8,530 passengers in June compared with 6,768 in June 2014. The company said it’s on track to meet its goal of 80,000 passengers for the 2015 season. Last year in its inaugural season, the company projected it would carry 100,000 passengers, but carried only 59,000. The company, which operates a daily ferry between Portland and Yarmouth, Nova Scotia, has a one-year contract from the Nova Scotia government to operate the service. Nova Scotia officials plan to decide this summer whether to award the company a contract for the 2016 season or look for other operators. The province spent $28.5 million (Canadian) for the service in its inaugural season. Included was a $21 million loan that was intended to last seven years. This year, the government limited the subsidy to $13 million, equal to $10.4 million (U.S.) at current exchange rates. Last week, a delegation from the province came to Maine to discuss ways Maine can support the service. Later this year, a contingent of Maine officials is expected to travel to Nova Scotia and continue the discussion.
Meet the 10th of our ’12 Dogs of Christmas,’ DayZ! At 5 years old, this adventurous, spunky girl has spent over half her life in a shelter! Name of Rescue Organization: Angels Among Us Pet Rescue How long at shelter/rescue: Over 3 years Type/Size/Sex: Boxer/Terrier Mix, 55-pounds, Female DayZ’s Story: This is sweet DayZ. She’s such a friendly, happy, playful girl. She was pregnant when rescued, and all of her babies found loving forever homes. Now DayZ wants her forever home. DayZ is a 5-year-old, 55-pound sweetheart that loves to go for walks, especially at the lake where she can play in the water. DayZ would love to have a fenced yard where she can run and chase tennis balls to drain off her energy and be with you. She also enjoys car rides. DayZ is selective of her dog friends as she relates better to people. She is fine on-leash and ignores other dogs, but she doesn’t like dogs who “get in her face.” So, the best home for DayZ is one with no other pets. Because she can be energetic, she is best in a home with older children, although she loves the little ones. But even though DayZ can be energetic, she can also be very calm and chill beside you. DayZ is also crate and house trained. Please consider making this sweet girl part of your family. She’s waiting for you!! Likes: Car rides, swimming in the lake, squeaky toys, going for a walk or just going anywhere. She loves to go on any adventure. Dislikes: Sharing her family with another dog or cat. She prefers to be the only child! Special Considerations: DayZ is not reactive to other dogs on walks but does not play well with other dogs or cats and should be in an only-pet home. She has not been around small children but due to her size and energy level, she would probably do best with children 8 and older. DayZ’s Perfect Family: DayZ is a sweet, funny girl who is crate trained, housebroken, loves to be with her people and would make a great jogging or hiking buddy. 
She’s a little timid at first with new people, but once she gets to know you, she will love you. She is a great companion dog. DayZ loves being with her people, so having a family member home the majority of the day would make her very happy, but it is not required. DayZ will soon face another Christmas without a family all her own. Does DayZ seem like the perfect fit for your family? Here’s how to begin the adoption process: Anyone interested in adopting DayZ can fill out an adoption application with Angels Among Us Pet Rescue. Or, for additional questions about DayZ or the adoption process, you can email the rescue. Barbara Berger Dec 21, 2017 at 2:10 pm Please find room in your heart and home for Dayz! Cheryl Dec 20, 2017 at 3:05 pm Love Dayz! She’s precious! Please consider her. She needs a family to take her in!
If you’re like the average investor, it’s easy to feel overwhelmed at times by confusion in the markets — and sometimes even by fear. But why should it have to be this way? Investing is a noble pursuit. Generating wealth for a comfortable retirement means creating the financial freedom to live well in the golden years … while providing support to loved ones … enjoying the finer things in life … and possibly making a real difference in the world. Those are all good things. The pursuit of good things should not be a stressed-out experience! But for too many investors, that’s exactly what it is. Their market journey is a series of obstacles and worries. There is bad news and good news here. Walt Kelly, the artist who drew “Pogo,” had a caption on his most famous cartoon that read: “We have met the enemy, and he is us.” That statement could have been tailor-made for investors. Your biggest challenge as an investor will be overcoming your natural behavioral shortfalls and biases — learning to do the right thing and overcoming your built-in bad behaviors in the process. (And again, this isn’t just “you” specifically. It’s also true for the TradeSmith Research Team, and Warren Buffett, and everyone else.) This really goes back to the essential mission of TradeStops. We want to help as many investors as possible find a path to comfortable retirement. (We’ve helped 25,000 so far, but that number should expand 1,000-fold.) A key goal of TradeStops is to remove anxiety from the investment process, and in doing so, help investors rediscover the joy of investing as they build long-term wealth. How do we do this? By combining science, technology, and proven principles of behavior modification. To get rid of bad investing habits, you can’t conduct brain surgery on yourself (and you wouldn’t want someone else to try). 
But you can use software as a tool in the investment decision-making process … which in turn serves as a form of painless behavior modification … which puts you on the path to anxiety-free investment success. Again, this is what TradeStops is all about: Helping investors overcome their greatest obstacle to investing success … so they can meet their long-term wealth-building goals … and have a positive impact on everyone around them. Here’s something else funny about the brain: Knowledge makes behavior modification easier. The better and deeper the brain understands the “why” behind something, the easier it becomes to make a positive behavior change around that thing. And sometimes the “why” is even more important than the rules. The importance of the “why” was once vividly demonstrated by Ed Seykota, a famous trend follower who made countless millions in the commodity futures markets. Seykota was one of the earliest adopters of mechanical trend-following techniques. In the 1970s he was a pioneer in the use of exponential moving average crossover systems. (They were so new and exotic at the time, people called them “expedential” moving averages.) At one point, Seykota decided to teach a classroom course on trend following. For the curious who signed up — remember, trend following was totally new at this point — Seykota spent something like 10 percent of the classroom time explaining the very simple rules of his trend-following system — and the other 90 percent explaining the “why” behind the importance of sticking with the rules! We’ve realized a similar idea applies to TradeStops software. No matter how good our software is — and you continue to give us rave reviews, for which we are deeply grateful — it feels like there is always more opportunity to help you, our customers and fellow investors, to get more out of TradeStops by better understanding the “why” behind certain basic principles. 
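For readers curious about the mechanics, an exponential moving average crossover rule of the kind Seykota pioneered can be sketched in a few lines of Python. This is a toy illustration only: the price series, the spans, and the long/short signal convention below are our own assumptions for the sake of example, not Seykota’s system or anything inside TradeStops.

```python
# Toy sketch of an exponential moving average (EMA) crossover signal.
# The spans and price data are illustrative assumptions only.

def ema(prices, span):
    """Exponentially weighted moving average with smoothing 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]  # seed the average with the first price
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def crossover_signals(prices, fast=3, slow=7):
    """Return +1 (long) wherever the fast EMA is above the slow EMA, else -1."""
    f, s = ema(prices, fast), ema(prices, slow)
    return [1 if fv > sv else -1 for fv, sv in zip(f, s)]

if __name__ == "__main__":
    uptrend = [10, 11, 12, 13, 14, 15, 16, 17]
    # In a steady uptrend the fast EMA leads the slow one, so the rule ends long.
    print(crossover_signals(uptrend)[-1])  # prints 1
```

The point is not the particular spans but the mechanical nature of the rule: the signal follows from the data rather than from a gut feeling, which is exactly the kind of painless behavior modification described above.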
To that end, we are excited to start something new: An “education series” of editorials, designed to help you become a better investor by sharing the “why” behind some very important concepts. Our game plan with the education series is to start with the following concepts, exploring each one over a period of weeks or months: - Cognitive Biases - Probability - Investor Psychology We’re confident the insight you gain from this series can help make you a better investor, even if you aren’t currently using TradeStops. (Though of course, if you haven’t yet experienced the power of TradeStops, we suggest rectifying that immediately!) If you have any questions, comments, or just something you’ve always wanted to know about cognitive biases, probability, or investor psychology, let us know! TradeSmith Research Team
We are pleased to share that our Quality Management System adheres to the ISO 9001:2015 standard. You can view our Quality Policy here.
Posted by sophievandijk | April 8, 2016 How is the state of our climate, and how can we respond with optimism and not despair? The World Meteorological Organisation (WMO) recently released a statement on the Status of the Climate in 2015, highlighting the concerning trends in the changing climate that have continued into 2016. Unfortunately, some of the changes reported include the following: Ocean Heat and Sea Level: Large areas of the world’s oceans, particularly in the central and eastern Pacific Ocean, were much warmer on average during 2015. Increased ocean heat contributes to rising sea levels, which were the highest ever recorded last year. Temperature: Many countries experienced intense heatwaves, particularly India and Pakistan. Asia and South America had their hottest years on record, while the northwest USA and western Canada suffered from a record wildfire season due to the heat. Heavy Rainfall: Although overall global precipitation was close to the long-term average, there were many cases of extreme rainfall. In some cases, 24-hour total rainfall exceeded the normal monthly average. Drought: Severe drought affected many regions of the world, particularly southern Africa and South America. This has had a significant impact on agricultural production and food security. Tropical Cyclones: Several rare and unusually strong hurricanes and cyclones were recorded during 2015. Each of these changes is occurring rapidly, and it is clear that without action there will be significant problems in the future. Okay, how do you feel when you read this? It is easy to wonder what action can be taken in response to these findings, and to feel somewhat helpless in bringing about change. The WMO recognises this and explains that: Building climate and weather resilient communities is a vital part of the global strategy for achieving sustainable development and combating climate change. 
We agree with this, but we are not sure that organisations like the WMO know how to do it. That is a big part of why we are doing what we are, here at Rivers of Carbon. We are investing in projects that restore vegetation, which has multiple benefits, particularly in the context of climate change. This includes improving stream, river and pasture health to safeguard it for the future, as well as providing refuges for wildlife from climate change extremes. If you would like to find out more about what we are doing and get involved, you can go to our Our Projects page. We also invest in people and optimism, and someone who is an expert at this is Les Robinson with his Enabling Change approach. Les wrote a terrific paper about how we can craft a climate change message that shifts the focus from negativity to action. As Les points out: When people are presented with uncomfortable facts, denial is always the most convenient option unless there is an immediately available, credible action, within their perceived abilities, that reduces the discomfort. So let’s shift the focus from knowledge to action… It reduces denial and focuses on the ultimate point of communication – action! It lets us tell hopeful stories about optimistic people living their dreams. And when many people act, it’s “social proof” that the action is right, safe and desirable. From this starting point, Les’s paper takes you through a process of personalising climate change so that people can take action to care for themselves, their family and their community. This takes you from a fact-based, global, gloomy, abstract, and helplessness-inducing message to one that is action-based, personal, inspiring, concrete and empowering. Want to see how he does it? Follow this link to find out and join the Rivers of Carbon team in believing in the ability of people to adapt, act and respond with optimism to climate change. 
You might also like to take the time to watch this TEDx Talk from Bert Jacobs about optimism and their journey to create the ‘life is good’ brand.
by Joe Hammerschmidt With the arrival of Deadpool 2, one could easily argue two years is the ripest waiting period for a sequel of its nature. Sure, waiting longer will either build nostalgia or sour the taste, as it has for so many franchises; of course, it’s dependent on the property. Audiences for comic book films tend to stay quite restless, which means the goal of bringing back Ryan Reynolds and his spotless embodiment of a mercy killer in a blood-mingling spandex suit had to be achieved in record time. Despite a needful director replacement and plenty of on-set creative differences, Reynolds’ reflection of the character paired with the hyperviolent flair of up-and-coming director David Leitch makes for a delicate wine-and-cheese pairing that could never be torn apart. Of course, we have to start with a slightly atypical recap montage of how former special op Wade Wilson (Reynolds) has been keeping himself busy, kicking ass, taking names, and often running when the action overwhelms. At least there’s still Vanessa (Morena Baccarin) to come home to, at least at first; I won’t go any further than that. Regardless, after two years of the usual nonsense, Wade does crave a challenge, new job goals, whatever can keep things fresh for him, and for an audience looking for more than just an origin-story continuation, which is carefully avoided by Reynolds and co-writers Rhett Reese and Paul Wernick. What can’t be helped is the added emphasis that, try as we might to ignore the truth, “Deadpool 2 is technically part of the X-Men franchise,” which I personally refuse to accept for the time being. Yet its future may rest square on Wade and the formation of a secondary squadron, triumphantly named X-Force, utilizing whoever is available, specifically Colossus (Stefan Kapecic) and Negasonic Teenage Warhead (Brianna Hildebrand). The trio reluctantly teams up to rein in a possible New Mutants reject who simply didn’t get along with the other kids. 
His name is Russell (played so perfectly by Hunt for the Wilderpeople’s Julian Dennison), a victim of strict mental conditioning, or religious brainwashing, and he clearly has no interest in either being locked away for his protection or honing his powers to be used for good. He doesn’t necessarily evolve into a Robin to Wade’s Batman, yet his candor as an evil wild child with plenty of guilt to keep him in line still makes him a welcome joy. Now to Cable, the second character of what could be Josh Brolin’s most enjoyable single year (the Sicario sequel is up next, so fingers crossed it could be 3-for-3). His presence, simply to outdo Russell in the past for a chain of events that would outdo his future, is like a long-form dance: it starts in the background so as not to outrank the principal actors. But it does take a little time for his purpose to fully enact on the leads; my nitpickiness would say to get him involved faster, but his formal entrance is still enjoyable, well worth all the teases in the marketing. Most importantly, he is indeed the purest antithesis to his in-your-face Thanos just three weeks ago in Infinity War. Shedding the mo-cap costume, we’re left with the soul of a delicate dramatic actor, a fair exchange. And then there are the other newcomer characters who more than earn their bread. Zazie Beetz, of Atlanta fame, will deserve some more screen time in the eventual X-Force team-up feature with her Domino, an unapologetic anti-hero with supposed luck on her side and a slight temper. Beetz is such a natural choice, it’s almost impossible to distinguish the actress from the character, almost as if she were modeled a little differently from the comics to resemble a more comical foil to Wade. Perhaps, maybe. Also, among the potential X-Force trainees, Terry Crews (still starring in Brooklyn Nine-Nine for another year, thanks to NBC) stands as the sparkling MVP with his portrayal of Bedlam. 
Without giving too much away, pay close attention when he appears and you’ll understand why. As for Reynolds, he, of course, is still in the driver’s seat, just a little more than in the first one. Leitch, who proved his hard-R style on John Wick and last year’s understated favorite Atomic Blonde, is co-pilot, sharing two unique vertices of the same vision, still all Wade. The huge difference you’ll notice right from the beginning: it’s way more action-driven, but that shouldn’t come as a large surprise. That freeway chase and the final boss battle of the first film can’t hold a candle to a string of consistent fisticuff-laden scenes that, to this reviewer’s surprise, don’t grow exhausting. Matter of fact, there is still so much room to spare for light-foot comedy (Reynolds’ department), and a metric ton of heartfelt compassion, not instantly Wade’s strong suit, but he warms up to the idea. I may be a little too bold to proclaim that the hallmarks and blunders of Ryan’s varied acting career prepared him so well for each corner this sequel touches, blowing unnecessary expectations out of the water. It’s certainly his show, with a diverse supporting cast further complementing his every move. Needless to say, watch out for the many rapid-fire meta gags that only add to the fun, which only Reynolds could achieve on a serious level; for one, the Celine Dion Oscar-bait tune that dominated the internet for a day or two? It is used very effectively in its rightful place, yet I’d rather not spill it here. There is a very warm place in the center of my heart where Deadpool 2 ought to go; it was left very empty after the original had to disappear, but thankfully the second is a little more gentle, free-spirited, not as weighed down by origin-story requirements. While running a little longer than the original, each minute is spent with extra care, building strong character development and letting the raunchy jokes fly. 
One may be taken aback by the sudden family aesthetic Wade encompasses, but rest assured it only increases the character’s lore. Chances are, it may very well ensure franchise stability as multiple confusing continuities are retired, in favor of something brand new, exciting, and worthy of extended replay value. Reynolds has proven once and for all he is the true personification of Deadpool, and vice versa, with all the smut, grit, and gore one could shake their literal moneymakers at. Bring on the X-Force movie, and let a new chapter begin. And of course, do stay for the glorious capper during the mid-credits; there was nothing at the tail end, at least for the screening audience I was a part of. (A-) Deadpool 2 is in most area theaters this weekend; rated R for strong violence and language throughout, sexual references and brief drug material; 119 minutes.
\begin{document} \title{Reflection functors and symplectic reflection algebras for wreath products} \author{Wee Liang Gan} \address{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \email{[email protected]} \begin{abstract} We construct reflection functors on categories of modules over deformed wreath products of the preprojective algebra of a quiver. These functors give equivalences of categories associated to generic parameters which are in the same orbit under the Weyl group action. We give applications to the representation theory of symplectic reflection algebras of wreath product groups. \end{abstract} \maketitle \section{\bf Introduction} Deformed preprojective algebras $\Pi_\la$ associated to a quiver were introduced by Crawley-Boevey and Holland in \cite{CBH}. A useful tool in their work is the reflection functors, which give an equivalence from the category of modules over $\Pi_\la$ to the category of modules over $\Pi_\mu$ when the parameters $\la$ and $\mu$ are in the same orbit under an action of the Weyl group of the quiver on the space of parameters. Recently, in \cite{GG}, a one-parameter deformation $\A_{n, \la, \nu}$ of the wreath product of $\Pi_\la$ with $S_n$ was constructed. The purpose of this paper is to generalize the construction of the reflection functors to the algebras $\A_{n, \la, \nu}$. Actually, we construct reflection functors $F_i$ for the simple reflections $s_i$ at vertices $i$ without edge-loop. The author does not know if compositions of the functors satisfy the Weyl group relations. However, we prove in Theorem \ref{equivalence} that if $\la, \nu$ are generic, then $F_i^2 \cong 1$. It is interesting that to each $\A_{n, \la, \nu}$-module $V$, there is a natural complex $\CC^\bullet(V)$, depending on $i$, with the property that $F_i(V)=H^0(\CC^\bullet(V))$. 
Assuming that $\la$ is generic and $\nu=0$, we prove $H^r(\CC^\bullet(V))=0$ for all $r>0$, and hence obtain a `dimension vector' formula for $F_i(V)$. When the quiver is affine Dynkin of type ADE, there is a finite subgroup $\Gamma \subset SL_2(\C)$ associated to it by the McKay correspondence. Let $\GG$ be the wreath product group $S_n\ltimes\Gamma^n$. In \cite{EG}, Etingof and Ginzburg introduced the symplectic reflection algebras $\hh_{t,k,c}(\GG)$ attached to $\GG$. A Morita equivalence between the algebras $\A_{n, \la, \nu}$ and $\hh_{t,k,c}(\GG)$ was constructed in \cite{GG}; in the case $n=1$, this was done in \cite{CBH}. As a consequence, we obtain reflection functors for the algebras $\hh_{t,k,c}(\GG)$. This paper is a step towards the classification of the symplectic reflection algebras of wreath product groups up to Morita equivalence. The reflection functors are defined when $\Gamma\neq\{1\}$. Let us mention that when $\Gamma=\{1\}$ or $\Z/2\Z$, the algebras $\hh_{t,k,c}(\GG)$ are the rational Cherednik algebras of type A or B. In these cases, Morita equivalences for the algebras were constructed in \cite{BEG1} using shift functors, and a complete classification in type A (which corresponds to the affine Dynkin quiver of type $\mathrm A_0$) was proved in \cite{BEG2} (for generic parameter). We are interested in the representation theory of $\hh_{t,k,c}(\GG)$ when the parameter $t$ is nonzero. When $n=1$, there is no parameter $k$, and the finite dimensional simple modules of $\hh_{t,c}(\Gamma)$ were classified in \cite{CBH}. When $n>1$, we have $\hh_{t,0,c}(\GG) =\hh_{t,c}(\Gamma)^{\ot n}\rtimes \C[S_n]$. Thus, there is also a classification of finite dimensional simple modules of $\hh_{t,0,c}(\GG)$. In \cite{M2} (which generalizes \cite{EM}), Montarani found sufficient conditions for the existence of a deformation of a finite dimensional simple $\hh_{1,0,c_0}(\GG)$-module to a $\hh_{1,k,c_0+c}(\GG)$-module for formal parameters $k,c$. 
The proofs in \cite{EM} and \cite{M2} are based on homological arguments. We shall give a new proof by constructing the deformation using reflection functors. Moreover, using the reflection functors, we will show that if a finite dimensional simple $\hh_{1,0,c_0}(\GG)$-module can be formally deformed to a $\hh_{1,k,c_0+c}(\GG)$-module, then the conditions in \cite{M2} must necessarily hold. We shall also use the reflection functors to prove the existence of certain flat families of finite dimensional simple $\hh_{t,k,c}(\GG)$-modules (for complex parameters). We expect that there will be other applications of the reflection functors. This paper is organized as follows. In Section 2, we will recall the definition of the algebras $\A_{n,\la,\nu}$, and construct the reflection functors. In Section 3 and Section 4, we give the proofs of several identities required in the construction of the reflection functors. In Section 5, we prove that the reflection functor is an equivalence of categories for generic parameters. We also construct the complex $\CC^\bullet(V)$, and prove some other properties of the reflection functors. In Section 6, we give the applications to the symplectic reflection algebras for wreath products. \section{\bf Construction of the reflection functors} \subsection{} We first recall some standard notions. Let $\k$ be a commutative ring with $1$. We shall work over $\k$. Let $Q$ be a quiver, and denote by $I$ the set of vertices of $Q$. The double $\QQ$ of $Q$ is the quiver obtained from $Q$ by adding a reverse edge $\stackrel{a^*}{j\to i}$ for each edge $\stackrel{a}{i\to j}$ in $Q$. We let $(a^*)^*:=a$ for any edge $a\in Q$. If $\stackrel{a}{i\to j}$ is an edge in $\QQ$, we call $t(a):=i$ its tail, and $h(a):=j$ its head. When $t(a)=h(a)$, we say that $a$ is an edge-loop. 
The Ringel form of $Q$ is the bilinear form on $\Z^I$ defined by $$\langle \al,\beta \rangle := \sum_{i\in I} \al_i\beta_i - \sum_{a\in Q} \al_{t(a)}\beta_{h(a)}, \quad\mbox{ where } \alpha=(\al_i)_{i\in I}, \ \beta=(\beta_i)_{i\in I}.$$ Let $(\al,\beta):= \langle \al,\beta \rangle +\langle \beta,\al \rangle$ be its symmetrization. We write $\epsilon_i\in \Z^I$ for the coordinate vector corresponding to the vertex $i\in I$. If there is no edge-loop at the vertex $i$, then there is a reflection $s_i:\Z^I\to\Z^I$ defined by $s_i(\al):= \al - (\al,\epsilon_i)\epsilon_i$. We call $s_i$ a simple reflection. The Weyl group $W$ is the group of automorphisms of $\Z^I$ generated by all the simple reflections. Let $B:=\bigoplus_{i\in I} \k$, and $E$ the free $\k$-module with basis formed by the set of edges $\{a\in \QQ\}$. Thus, $E$ is naturally a $B$-bimodule and $E=\bigoplus_{i,j\in I} E_{i,j}$, where $E_{i,j}$ is spanned by the edges $a\in\QQ$ with $h(a)=i$ and $t(a)=j$. The path algebra of $\QQ$ is $\k\QQ := T_B E = \bigoplus_{n\geq 0} T^n_B E$, where $T^n_B E = E\ot_B \cdots \ot_B E$ is the $n$-fold tensor product. The trivial path for the vertex $i$ is denoted by $e_i$, an idempotent in $B$. For any element $\la\in B$, we will write $\la= \sum_{i\in I}\la_i e_i$ where $\la_i\in\k$. If $w\in W$, $\la\in B$ and $\al\in \Z^I$, then $(w\la)\cdot\al := \la\cdot(w^{-1}\al)$. The reflection $r_i:B\to B$ dual to $s_i$ is defined by $(r_i\la)_j := \la_j-(\epsilon_i,\epsilon_j)\la_i$. \subsection{} In this subsection, we recall the definition of the algebra $\A_{n,\la,\nu}$ from \cite[Definition 1.2.3]{GG}. From now on, we fix a positive integer $n$. Denote by $S_n$ the permutation group of $[1,n] := \{1, \ldots, n\}$, and write $s_{ij}\in S_n$ for the transposition $i\leftrightarrow j$. Let $\B := B^{\ot n}$. 
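To illustrate the forms and reflections recalled above, consider the simplest nontrivial example (an elementary check of the definitions, included here for orientation). Let $Q$ be the quiver with vertex set $I=\{1,2\}$ and a single edge $\stackrel{a}{1\to 2}$. Then $\langle \epsilon_1,\epsilon_2 \rangle = -1$ while $\langle \epsilon_2,\epsilon_1 \rangle = 0$, so $(\epsilon_1,\epsilon_2)=-1$ and $(\epsilon_i,\epsilon_i)=2$. Hence $s_1(\epsilon_1)=-\epsilon_1$ and $s_1(\epsilon_2)=\epsilon_2-(\epsilon_2,\epsilon_1)\epsilon_1=\epsilon_1+\epsilon_2$, recovering the familiar reflections of the root system of type $\mathrm A_2$. Dually, $(r_1\la)_1=-\la_1$ and $(r_1\la)_2=\la_1+\la_2$, and one checks directly that $(r_1\la)\cdot(s_1\al)=\la\cdot\al$ for all $\la\in B$ and $\al\in\Z^I$, in accordance with the formula $(w\la)\cdot\al = \la\cdot(w^{-1}\al)$.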
For any $\ell\in [1,n]$, define the $\B$-bimodules $$ \E_{\ell}:= B^{\ot (\ell-1)}\ot E\ot B^{\ot (n-\ell)} \qquad \mathrm{and}\qquad \E := \bigoplus_{1\leq\ell\leq n} \E_\ell\,.$$ Given $\ell\in [1,n]$, $a\in \k\QQ$, and $\ii=(i_1, \ldots, i_n)\in I^n$, we write $$a_\ell\big|_{\ii} \quad\mbox{ for the element }\quad e_{i_1}\otot ae_{i_\ell}\otot e_{i_n} \in T_\B\E_{\ell}.$$ We shall simply write $\big|_{\ii}$ for the element $e_{i_1}\otot e_{i_n}$. If $a\in \QQ$ and $i_\ell=t(a)$, then let $$a_\ell(\ii):= (i'_1,\ldots,i'_n)\in I^n,\quad \mbox{ where }\quad i'_m = \left\{ \begin{array}{ll} i_m & \mbox{ if } m\neq\ell,\\ h(a) & \mbox{ if } m=\ell. \end{array} \right.$$ \begin{definition} \label{algebra} For any $\la\in B$ and $\nu\in \k$, define the $\B$-algebra $\A_{n,\la,\nu}$ to be the quotient of $T_{\B}\E\rtimes \k[S_n]$ by the following relations. \begin{itemize} \item[{\rm (i)}] For any $\ell\in [1,n]$ and $\ii=(i_1,\ldots,i_n)\in I^n$: $$ \Big(\sum_{a\in Q} [a, a^*]-\la \Big)_\ell\Big|_{\ii} = \nu \sum_{\{ m\neq\ell \mid i_m=i_\ell\}} s_{\ell m}\Big|_{\ii}. $$ \item[{\rm (ii)}] For any $\ell,m\in [1,n]$ ($\ell\neq m$), $a,b\in\QQ$, and $\ii=(i_1,\ldots,i_n)\in I^n$ with $i_\ell=t(a)$, $i_m=t(b)$: $$ a_\ell \big|_{b_m(\ii)} b_m\big|_{\ii} -b_m\big|_{a_\ell(\ii)} a_\ell\big|_{\ii} = \left\{ \begin{array}{ll} \nu s_{\ell m} \big|_{\ii} & \textrm{if $b\in Q$ and $a=b^*$},\\ - \nu s_{\ell m} \big|_{\ii} & \textrm{if $a\in Q$ and $b=a^*$},\\ 0 & \textrm{else}\,. \end{array} \right. $$ \end{itemize} \end{definition} If $n=1$, there is no parameter $\nu$, and $\A_{1,\la}$ is the deformed preprojective algebra $\Pi_\la$. Observe that $\A_{n,\la,0} = \Pi_\la^{\ot n}\rtimes \k[S_n]$. We shall denote by $\A_{n,\la,\nu}\mathrm{-mod}$ the category of left $\A_{n,\la,\nu}$-modules. \subsection{} Let $i$ be a vertex of $Q$ such that there is no edge-loop at $i$. 
We shall define the reflection functor $$ F_i : \A_{n,\la,\nu}\mathrm{-mod} \ \too\ \A_{n,r_i\la,\nu}\mathrm{-mod}. $$ In the case $n=1$, the reflection functors were constructed by Crawley-Boevey and Holland in \cite[Theorem 5.1]{CBH}. They were also constructed by Nakajima in the context of quiver varieties, see \cite[Remark 3.20]{Na}. They are similar to (but not the same as) the reflection functors of Bernstein, Gelfand and Ponomarev in \cite{BGP}. Let $V$ be a $\A_{n,\la,\nu}$-module. We will first define $F_i(V)$ as a $\B\rtimes \k[S_n]$-module. Up to isomorphism, the algebra $\A_{n,\la,\nu}$ does not depend on the orientation of $Q$, so we may assume that $i$ is a sink in $Q$; let $$R:=\{a\in Q\mid h(a)=i\}.$$ For any $\jj=(j_1,\ldots,j_n)\in I^n$, let $$V_{\jj} := \big|_\jj V, \quad\mbox{ and }\quad \Delta(\jj):=\{m\in [1,n]\mid j_m=i\}.$$ For any $D\subseteq \Delta(\jj)$, define $$\X (D) := \mbox{ the set of all maps }\xi:D\to R: m\mapsto \xi(m).$$ Given $\xi\in \X(D)$, let $$t(\jj,\xi) := (t_1,\ldots,t_n)\in I^n, \quad \mbox{ where }\quad t_m = \left\{ \begin{array}{ll} j_m & \mbox{ if } m\notin D,\\ t(\xi(m)) & \mbox{ if } m\in D. \end{array} \right.$$ Define $$V(\jj,D) := \bigoplus_{\xi\in \X(D)} V_{t(\jj,\xi)}.$$ In particular, $V(\jj,\emptyset) = V_\jj$. Write $$\pi_{\jj,\xi}:V(\jj,D)\too V_{t(\jj,\xi)},\qquad \mu_{\jj,\xi}:V_{t(\jj,\xi)}\too V(\jj,D)$$ for the projection map and inclusion map, respectively. If $\sigma\in S_n$, then let $\sigma(\jj) := (j_{\sigma^{-1}(1)}, \ldots, j_{\sigma^{-1}(n)})$. We have $\Delta(\sigma(\jj)) = \sigma(\Delta(\jj))$. If $\xi\in \X(D)$, then let $\sigma(\xi) \in \X(\sigma(D))$ be the map $m\mapsto \xi(\sigma^{-1}m)$. Let $$ \sigma\big|_\jj: V(\jj, D) \too V(\sigma(\jj), \sigma(D)), \qquad \sigma\big|_\jj := \sum_{\xi\in \X(D)} \mu_{\sigma(\jj), \sigma(\xi)} \sigma \pi_{\jj,\xi}.$$ Suppose $p\in D$. We have a restriction map $\rho_p: \X(D)\to \X(D\setminus\{p\})$. 
For each $\xi\in \X(D)$, we have a composition of maps $$ \xymatrix{V(\jj,D) \ar[rr]^{\pi_{\jj,\xi}} & & V_{t(\jj,\xi)} \ar[rr]^{\xi(p)_p\big|_{t(\jj,\xi)}} & & V_{t(\jj,\rho_p(\xi))} \ar@{^{(}->}[rr]^{\mu_{\jj,\rho_p(\xi)}} & & V(\jj, D\setminus\{p\})}. $$ We also have a composition of maps $$ \xymatrix{ V(\jj, D\setminus\{p\}) \ar[rr]^{\pi_{\jj,\rho_p(\xi)}} & & V_{t(\jj,\rho_p(\xi))} \ar[rr]^{\xi(p)^*_p\big|_{t(\jj,\rho_p(\xi))}} & & V_{t(\jj,\xi)} \ar@{^{(}->}[rr]^{\mu_{\jj,\xi}} & & V(\jj,D) }.$$ Define $$\pi_{\jj,p}: V(\jj,D)\too V(\jj,D\setminus\{p\}), \quad \pi_{\jj,p} := \sum_{\xi\in \X(D)} \mu_{\jj,\rho_p(\xi)} \xi(p)_p\big|_{t(\jj,\xi)} \pi_{\jj,\xi},$$ and $$\mu_{\jj,p}: V(\jj,D\setminus\{p\})\too V(\jj,D), \quad \mu_{\jj,p}:= \sum_{\xi\in \X(D)} \mu_{\jj,\xi} \xi(p)^*_p\big|_{t(\jj,\rho_p(\xi))} \pi_{\jj,\rho_p(\xi)}.$$ The maps $\pi_{\jj,p}$ and $\mu_{\jj,p}$ depend on $D$, but we suppress it from our notations. We state here the following lemma which will be used later. \begin{lemma} \label{firstlemma} For any $p,q\in D$ ($p\neq q$), we have the following. {\rm (i)} $\pi_{\sigma(\jj), \sigma(p)} \sigma\big|_\jj = \sigma\big|_{\jj} \pi_{\jj,p}$, and $\mu_{\sigma(\jj), \sigma(p)} \sigma\big|_\jj = \sigma\big|_{\jj} \mu_{\jj,p}$. {\rm (ii)} $\pi_{\jj,p}\mu_{\jj,p} = \lambda_i + \nu \sum_{m\in \Delta(\jj)\setminus D}s_{pm}\big|_\jj$. {\rm (iii)} $\pi_{\jj,p}\mu_{\jj,q} = \mu_{\jj,q}\pi_{\jj,p} -\nu s_{pq}\big|_{\jj}$. {\rm (iv)} $\pi_{\jj,p}\pi_{\jj,q}=\pi_{\jj,q}\pi_{\jj,p}$, and $\mu_{\jj,p}\mu_{\jj,q}=\mu_{\jj,q}\mu_{\jj,p}$. \end{lemma} The proof of Lemma \ref{firstlemma} will be given in Section 3. Let $$ V_\jj(D):=\left\{ \begin{array}{ll} \bigcap_{p\in D} \mathrm{Ker}(\pi_{\jj,p}) & \mbox{ if } D \neq\emptyset,\\ V_\jj & \mbox{ if } D=\emptyset. \end{array} \right.$$ Let $V'_\jj := V_\jj(\Delta(\jj))$. \begin{definition} Let $F_i(V) := V'= \bigoplus_{\jj\in I^n} V'_\jj$ as a $\B\rtimes \k[S_n]$-module. 
\end{definition} \subsection{} The proofs of Lemmas \ref{lemmaforcase1}, \ref{lemmaforcase2} and \ref{lemmaforcase3} of this subsection will be given in Section 3, and the proof of Proposition \ref{functor} in Section 4. Given any $\ell\in[1,n]$, $a\in \QQ$, and $\jj=(j_1,\ldots, j_n)\in I^n$ with $j_\ell=t(a)$, we shall define a map $a'_\ell\big|_\jj : V'_\jj \to V'_{a_\ell(\jj)}$. There are three cases. {\bf Case (I)}, $h(a),t(a)\neq i$: Then $\ell\notin\Delta(\jj)=\Delta(a_\ell(\jj))$. Let $D\subset \Delta(\jj)$. For any $\xi\in \X(D)$, we have a composition of maps $$ \xymatrix{ V(\jj,D) \ar[rr]^{\pi_{\jj,\xi}} && V_{t(\jj,\xi)} \ar[rr]^{a_\ell\big|_{t(\jj,\xi)}} && V_{t(a_\ell(\jj),\xi)} \ar@{^{(}->}[rr]^{\mu_{a_\ell(\jj),\xi}} && V(a_\ell(\jj),D)} .$$ Define $$a_\ell\big|_{\jj,D}: V(\jj,D)\too V(a_\ell(\jj),D),\qquad a_\ell\big|_{\jj,D}:= \sum_{\xi\in \X(D)} \mu_{a_\ell(\jj),\xi} a_\ell\big|_{t(\jj,\xi)} \pi_{\jj,\xi}.$$ Let $a'_\ell\big|_\jj := a_\ell\big|_{\jj,\Delta(\jj)}$. \begin{lemma} \label{lemmaforcase1} {\rm (i)} If $p\in D$, then $\pi_{a_\ell(\jj),p} a_\ell\big|_{\jj,D} = a_\ell\big|_{\jj,D\setminus\{p\}} \pi_{\jj,p}$. {\rm (ii)} If $p\notin D$, then $\mu_{a_\ell(\jj),p} a_\ell\big|_{\jj,D} = a_\ell\big|_{\jj,D\cup\{p\}} \mu_{\jj,p}$. \end{lemma} It follows from Lemma \ref{lemmaforcase1}(i) that $a'_\ell\big|_\jj$ defined in Case (I) sends $V'_\jj$ into $V'_{a_\ell(\jj)}$. {\bf Case (II)}, $t(a)=i$: Then $\ell\in\Delta(\jj)$, and $\Delta(a_\ell(\jj)) = \Delta(\jj)\setminus\{\ell\}$. Suppose $\ell\in D\subset \Delta(\jj)$. For any $r\in R$, we have an injective map $$ \tau_{r,\ell,D}: \X(D\setminus\{\ell\}) \hookrightarrow \X(D):\eta \mapsto \tau_{r,\ell,D}(\eta), $$ where $$ \tau_{r,\ell,D}(\eta)(m) := \left\{ \begin{array}{ll} \eta(m) & \mbox{ if } m\in D\setminus\{\ell\},\\ r & \mbox{ if } m=\ell.
\end{array} \right.$$ Since $t(\jj,\tau_{r,\ell,D}(\eta)) = t(r^*_\ell(\jj), \eta)$, there is a projection map $$ \tau_{r,\ell,\jj,D}^!: V(\jj,D) \too V(r^*_\ell(\jj),D\setminus\{\ell\}), \quad \tau_{r,\ell,\jj,D}^!:= \sum_{\eta\in\X(D\setminus\{\ell\})} \mu_{r^*_\ell(\jj), \eta} \pi_{\jj,\tau_{r,\ell,D}(\eta)},$$ and an inclusion map $${\tau_{r,\ell,\jj,D}}_!: V(r^*_\ell(\jj),D\setminus\{\ell\}) \too V(\jj,D), \quad {\tau_{r,\ell,\jj,D}}_!:= \sum_{\eta\in\X(D\setminus\{\ell\})} \mu_{\jj,\tau_{r,\ell,D}(\eta)}\pi_{r^*_\ell(\jj), \eta}.$$ Let $a'_\ell\big|_\jj := \tau_{a^*, \ell, \jj, \Delta(\jj)}^!$. \begin{lemma} \label{lemmaforcase2} {\rm (i)} $\sum_{r\in R} {\tau_{r,\ell,\jj,D}}_! \tau_{r,\ell,\jj,D}^! =1$. {\rm (ii)} If $p\in D\setminus\{\ell\}$, then $$\pi_{r^*_\ell(\jj),p} \tau_{r,\ell,\jj,D}^! = \tau_{r, \ell, \jj, D\setminus\{p\}}^! \pi_{\jj,p} \quad \mbox{ and } \quad \pi_{\jj,p} {\tau_{r,\ell,\jj,D}}_! = {\tau_{ r,\ell,\jj,D\setminus\{p\} }}_! \pi_{r^*_\ell(\jj),p}.$$ {\rm (iii)} If $p\notin D$, then $$ \mu_{r^*_\ell(\jj),p} \tau^!_{r,\ell,\jj,D} = \tau^!_{r,\ell,\jj,D\cup\{p\}} \mu_{\jj,p} \quad \mbox{ and } \quad \mu_{\jj,p} { \tau_{r,\ell,\jj,D} }_! = { \tau_{r,\ell,\jj,D\cup\{p\}} }_! \mu_{r^*_\ell(\jj),p}.$$ \end{lemma} It follows from Lemma \ref{lemmaforcase2}(ii) that $a'_\ell\big|_\jj$ defined in Case (II) sends $V'_\jj$ into $V'_{a_\ell(\jj)}$. {\bf Case (III)}, $h(a)=i$: Then $\ell\notin\Delta(\jj)$, and $\Delta(a_\ell(\jj))= \Delta(\jj)\cup\{\ell\}$. Let $D\subset \Delta(\jj)$. We have the inclusion map $$ { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}} }_! : V(\jj,D) \too V(a_\ell(\jj), D\cup\{\ell\}).$$ Define $$ \theta_{a,\ell,\jj, D} : V(\jj,D) \too V(a_\ell(\jj), D\cup\{\ell\})$$ by $$ \theta_{a,\ell,\jj, D} := \Big(-\lambda_i + \mu_{a_\ell(\jj), \ell} \pi_{a_\ell(\jj), \ell} + \nu\sum_{m\in D} s_{m\ell}\big|_{a_\ell(\jj)} \Big) { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}} }_!.$$ Let $a'_\ell\big|_\jj := \theta_{a,\ell,\jj, \Delta(\jj)}$. 
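Before stating the next lemma, it may help to unwind the preceding constructions in the smallest case $n=1$ (this is only an illustration, and is not used in the sequel). For $\jj=(i)$, we have $\Delta(\jj)=\{1\}$ and $\X(\{1\})=R$, so
$$ V(\jj,\{1\}) \;=\; \bigoplus_{r\in R} V_{t(r)}, \qquad \pi_{\jj,1}=\sum_{r\in R} r, \qquad \mu_{\jj,1}=\sum_{r\in R} r^*, $$
and hence
$$ V'_{(i)} \;=\; \mathrm{Ker}\Big( \pi_{\jj,1}: \bigoplus_{r\in R} V_{t(r)} \too V_i \Big), \qquad V'_{(j)} \;=\; V_j \quad \mbox{ for } j\neq i, $$
which recovers, on underlying vector spaces, the $n=1$ reflection functor of \cite{CBH}.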
\begin{lemma} \label{lemmaforcase3} {\rm (i)} If $p\in D$, then $\pi_{a_\ell(\jj),p} \theta_{a,\ell,\jj, D} = \theta_{a,\ell,\jj, D\setminus\{p\}} \pi_{\jj,p}$. {\rm (ii)} We have $$ \pi_{a_\ell(\jj), \ell} \theta_{a,\ell,\jj, \Delta(\jj)} = \nu\sum_{m\in \Delta(\jj)} s_{m\ell} \big|_{a_\ell(\jj)} {\tau_{ a, \ell, a_\ell(\jj), \Delta(a_\ell(\jj))\setminus\{m\} }}_! \pi_{\jj,m} .$$ {\rm (iii)} If $p\in D$, then $ \theta_{a,\ell,\jj, D} \mu_{\jj,p} = \mu_{a_\ell(\jj), p} \theta_{a,\ell,\jj, D\setminus\{p\}}$. \end{lemma} It follows from Lemma \ref{lemmaforcase3}(i)-(ii) that $a'_\ell\big|_\jj$ defined in Case (III) maps $V'_\jj$ into $V'_{a_\ell(\jj)}$. Thus, $F_i(V)$ is a $T_\B \E \rtimes \k[S_n]$-module, where $a_\ell\big|_\jj\in \E$ acts by $a'_\ell\big|_\jj$. \begin{proposition} \label{functor} With the above action, $F_i(V)$ is an $\A_{n, r_i \la, \nu}$-module. \end{proposition} It is clear from our construction that the assignment $V\mapsto F_i(V)$ is functorial. \section{\bf Proofs of lemmas} {\bf Proof of Lemma \ref{firstlemma}}: (i) \begin{align*} \pi_{\sigma(\jj), \sigma(p)} \sigma\big|_\jj =& \sum_{\xi\in \X(D)} \mu_{\sigma(\jj), \rho_{\sigma(p)} (\sigma(\xi))} \sigma(\xi)(\sigma(p))_{\sigma(p)} \big|_{t(\sigma(\jj), \sigma(\xi))} \sigma \pi_{\jj,\xi}\\ =& \sum_{\xi\in \X(D)} \mu_{\sigma(\jj), \rho_{\sigma(p)} (\sigma(\xi))} \sigma \xi(p)_p\big|_{t(\jj,\xi)} \pi_{\jj,\xi}\\ =& \sigma\big|_{\jj} \pi_{\jj,p}. \end{align*} \begin{align*} \mu_{\sigma(\jj), \sigma(p)} \sigma\big|_\jj =& \sum_{\xi\in\X(D)} \mu_{\sigma(\jj),\sigma(\xi)} \sigma(\xi)(\sigma(p))^*_{\sigma(p)} \big|_{t(\sigma(\jj), \rho_{\sigma(p)}(\sigma(\xi)))} \sigma \pi_{\jj, \rho_p(\xi)}\\ =& \sum_{\xi\in\X(D)} \mu_{\sigma(\jj),\sigma(\xi)} \sigma \xi(p)^*_p\big|_{t(\jj,\rho_p(\xi))} \pi_{\jj, \rho_p(\xi)}\\ =& \sigma\big|_{\jj} \mu_{\jj,p}.
\end{align*} (ii) \begin{align*} \pi_{\jj,p}\mu_{\jj,p} =& \sum_{\xi\in \X(D)} \mu_{\jj,\rho_p(\xi)} \xi(p)_p \big|_{t(\jj,\xi)} \xi(p)^*_p \big|_{t(\jj,\rho_p(\xi))} \pi_{\jj,\rho_p(\xi)} \\ =& \sum_{\eta\in \X(D\setminus\{p\})} \mu_{\jj,\eta}\Big( \sum_{a\in R} aa^* \Big)_p\Big|_{t(\jj,\eta)} \pi_{\jj,\eta} \\ =& \sum_{\eta\in \X(D\setminus\{p\})} \mu_{\jj,\eta}\Big( \lambda_i + \nu \sum_{m\in \Delta(\jj)\setminus D} s_{pm} \Big) \pi_{\jj,\eta} \\ =& \lambda_i+\nu\sum_{m\in \Delta(\jj)\setminus D}s_{pm}\big|_\jj. \end{align*} (iii) \begin{align*} \pi_{\jj,p}\mu_{\jj,q} =& \sum_{\xi\in \X(D)} \mu_{\jj,\rho_p(\xi)} \xi(p)_p \big|_{t(\jj,\xi)} \xi(q)^*_q \big|_{t(\jj,\rho_q(\xi))} \pi_{\jj,\rho_q(\xi)} \\ =& \sum_{\xi\in \X(D)} \mu_{\jj,\rho_p(\xi)} \xi(q)^*_q \big|_{t( \jj,\rho_p(\rho_q(\xi)))} \xi(p)_p \big|_{t(\jj,\rho_q(\xi))} \pi_{\jj,\rho_q(\xi)} \\ & - \nu \sum_{ \{ \xi\in\X(D) \mid \xi(p)=\xi(q) \} } \mu_{\jj,\rho_p(\xi)} s_{pq} \pi_{\jj,\rho_q(\xi)} \\ =& \mu_{\jj,q}\pi_{\jj,p} -\nu s_{pq}\big|_\jj. \end{align*} (iv) \begin{align*} \pi_{\jj,p} \pi_{\jj,q} =& \sum_{\xi \in \X(D)} \mu_{\jj, \rho_p(\rho_q(\xi))} \xi(p)_p \big|_{ t(\jj, \rho_q(\xi)) } \xi(q)_q \big|_{ t(\jj,\xi) } \pi_{\jj,\xi} \\ =& \sum_{\xi \in \X(D)} \mu_{\jj, \rho_p(\rho_q(\xi))} \xi(q)_q \big|_{ t(\jj, \rho_p(\xi)) } \xi(p)_p \big|_{ t(\jj,\xi) } \pi_{\jj,\xi} \\ =& \pi_{\jj,q} \pi_{\jj,p}. \end{align*} \begin{align*} \mu_{\jj,p} \mu_{\jj,q} =& \sum_{\xi\in\X(D)} \mu_{\jj,\xi} \xi(p)^*_p \big|_{t(\jj,\rho_p(\xi))} \xi(q)^*_q\big|_{t(\jj, \rho_q(\rho_p(\xi)))} \pi_{\jj, \rho_q(\rho_p(\xi))}\\ =& \sum_{\xi\in\X(D)} \mu_{\jj,\xi} \xi(q)^*_q \big|_{t(\jj,\rho_q(\xi))} \xi(p)^*_p\big|_{t(\jj, \rho_q(\rho_p(\xi)))} \pi_{\jj, \rho_q(\rho_p(\xi))}\\ =& \mu_{\jj,q} \mu_{\jj,p}.
\end{align*} \qed {\bf Proof of Lemma \ref{lemmaforcase1}}: (i) \begin{align*} \pi_{a_\ell(\jj),p} a_\ell\big|_{\jj,D} =& \sum_{\xi\in\X(D)} \mu_{a_\ell(\jj),\rho_p(\xi)} \xi(p)_p\big|_{t(a_\ell(\jj),\xi)} a_\ell\big|_{t(\jj,\xi)} \pi_{\jj,\xi}\\ =& \sum_{\xi\in\X(D)} \mu_{a_\ell(\jj),\rho_p(\xi)} a_\ell\big|_{t(\jj,\rho_p(\xi))} \xi(p)_p\big|_{t(\jj,\xi)} \pi_{\jj,\xi}\\ =& a_\ell\big|_{\jj,D\setminus\{p\}} \pi_{\jj,p}. \end{align*} (ii) \begin{align*} \mu_{a_\ell(\jj),p} a_\ell\big|_{\jj,D} =& \sum_{\xi\in \X(D\cup\{p\})} \mu_{a_\ell(\jj),\xi} \xi(p)^*_p \big|_{t(a_\ell(\jj), \rho_p(\xi))} a_\ell\big|_{t(\jj,\rho_p(\xi))} \pi_{\jj,\rho_p(\xi)} \\ =& \sum_{\xi\in \X(D\cup\{p\})} \mu_{a_\ell(\jj),\xi} a_\ell\big|_{t(\jj,\xi)} \xi(p)^*_p \big|_{t(\jj,\rho_p(\xi))} \pi_{\jj,\rho_p(\xi)} \\ =& a_\ell\big|_{\jj,D\cup\{p\}} \mu_{\jj,p}. \end{align*} \qed {\bf Proof of Lemma \ref{lemmaforcase2}}: (i) \begin{align*} \sum_{r\in R} {\tau_{r,\ell,\jj,D}}_! \tau_{r,\ell,\jj,D}^! =& \sum_{r\in R} \sum_{\eta\in \X(D\setminus\{\ell\})} \mu_{\jj, \tau_{r,\ell,D} (\eta)} \pi_{\jj, \tau_{r,\ell,D} (\eta)} \\ =& \sum_{ \xi \in \X(D) } \mu_{\jj,\xi} \pi_{\jj,\xi} \\ =& 1. \end{align*} (ii) \begin{align*} \pi_{r^*_\ell(\jj),p} \tau_{r,\ell,\jj,D}^! =& \sum_{\eta\in \X(D\setminus\{\ell\})} \mu_{r^*_\ell(\jj),\rho_p(\eta)} \eta(p)_p\big|_{t(r^*_\ell(\jj),\eta)} \pi_{\jj,\tau_{r,\ell,D}(\eta)} \\ =& \sum_{ \{ \xi\in\X(D)\mid \xi(\ell)=r \} } \mu_{r^*_\ell(\jj),\rho_p(\rho_\ell(\xi))} \xi(p)_p\big|_{t(\jj,\xi)} \pi_{\jj,\xi} \\ =& \Big( \sum_{ \{ \varepsilon \in \X(D\setminus\{p\}) \mid \varepsilon(\ell)=r \} } \mu_{r^*_\ell(\jj),\rho_\ell(\varepsilon)} \pi_{\jj,\varepsilon} \Big) \pi_{\jj,p} \\ =& \Big( \sum_{\zeta\in \X(D\setminus\{ \ell,p \})} \mu_{r^*_\ell(\jj),\zeta} \pi_{\jj, \tau_{r,\ell,D\setminus\{p\}} (\zeta)} \Big) \pi_{\jj,p} \\ =& \tau_{r, \ell,\jj, D\setminus\{p\}}^! \pi_{\jj,p}. \end{align*} \begin{align*} \pi_{\jj,p} {\tau_{r,\ell,\jj,D}}_!
=& \Big(\sum_{\xi\in\X(D)} \mu_{\jj,\rho_p(\xi)} \xi(p)_p\big|_{t(\jj,\xi)} \pi_{\jj,\xi} \Big) \Big( \sum_{\eta\in \X(D\setminus\{\ell\})} \mu_{\jj, \tau_{r,\ell,D} (\eta)} \pi_{r^*_\ell(\jj),\eta} \Big) \\ =& \sum_{\eta\in \X(D\setminus\{\ell\})} \mu_{\jj,\rho_p(\tau_{r,\ell,D} (\eta))} \eta(p)_p \big|_{t( \jj, \tau_{r,\ell,D} (\eta) )} \pi_{r^*_\ell(\jj),\eta} \\ =& \Big( \sum_{\zeta\in \X(D\setminus \{\ell,p\} )} \mu_{\jj, \tau_{r, \ell,D\setminus\{p\}} (\zeta) } \pi_{r^*_\ell(\jj),\zeta} \Big) \pi_{r^*_\ell(\jj),p} \\ =& { \tau_{r, \ell, \jj, D\setminus \{ p\}} }_! \pi_{r^*_\ell(\jj),p}. \end{align*} (iii) \begin{align*} \mu_{r^*_\ell(\jj),p} \tau^!_{r,\ell,\jj,D} =& \sum_{\xi\in \X(D\cup\{p\}\setminus\{\ell\})} \mu_{r^*_\ell(\jj), \xi} \xi(p)^*_p \big|_{t(r^*_\ell(\jj), \rho_p(\xi))} \pi_{\jj, \tau_{r,\ell,D}(\rho_p(\xi))} \\ =& \sum_{ \{\zeta\in\X(D\cup\{p\}) \mid \zeta(\ell)=r \} } \mu_{r^*_\ell(\jj),\rho_\ell(\zeta)} \zeta(p)^*_p \big|_{t(r^*_\ell(\jj),\rho_p(\rho_\ell(\zeta)))} \pi_{\jj, \rho_p(\zeta)} \\ =& \Big( \sum_{\eta\in \X(D\cup\{p\}\setminus\{\ell\})} \mu_{r^*_\ell(\jj),\eta} \pi_{\jj,\tau_{r,\ell,D\cup\{p\}}(\eta)} \Big) \mu_{\jj,p} \\ =& \tau^!_{r,\ell,\jj,D\cup\{p\}} \mu_{\jj,p}. \end{align*} \begin{align*} \mu_{\jj,p} { \tau_{r,\ell,\jj,D} }_! =& \sum_{\{ \xi\in\X(D\cup\{p\}) \mid \xi(\ell)=r \}} \mu_{\jj,\xi} \xi(p)^*_p \big|_{t(\jj,\rho_p(\xi))} \pi_{r^*_\ell(\jj), \rho_p(\rho_\ell(\xi))} \\ =& \sum_{\eta\in \X(D\cup\{p\}\setminus\{\ell\})} \mu_{\jj,\tau_{r,\ell,D\cup\{p\}}(\eta)} \eta(p)^*_p \big|_{ t(r^*_\ell(\jj), \rho_p(\eta)) } \pi_{r^*_\ell(\jj), \rho_p(\eta)} \\ =& { \tau_{r,\ell,\jj,D\cup\{p\}} }_! \mu_{r^*_\ell(\jj),p}.
\end{align*} \qed {\bf Proof of Lemma \ref{lemmaforcase3}}: (i) Using Lemma \ref{firstlemma}(iii), Lemma \ref{firstlemma}(iv), and Lemma \ref{lemmaforcase2}(ii), we have \begin{align*} \pi_{a_\ell(\jj),p} \theta_{a,\ell,\jj, D} =& \Big( -\lambda_i \pi_{a_\ell(\jj),p} + \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj), \ell} \pi_{a_\ell(\jj), p} -\nu s_{p\ell} \big|_{a_\ell(\jj)} \pi_{a_\ell(\jj), \ell} \\ & + \nu \sum_{m\in D} \pi_{a_\ell(\jj), p} s_{m\ell}\big|_{a_\ell(\jj)} \Big) { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}} }_! \\ =& -\lambda_i { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}\setminus\{p\}} }_! \pi_{\jj,p} + \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj), \ell} { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}\setminus\{p\}} }_! \pi_{\jj,p} \\ & + \nu \sum_{m\in D\setminus\{p\}} s_{m\ell} \big|_{a_\ell(\jj)} \pi_{a_\ell(\jj), p} { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}} }_! \\ =& \Big( -\lambda_i + \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj), \ell} + \nu \sum_{m\in D\setminus\{p\}} s_{m\ell} \big|_{a_\ell(\jj)} \Big) { \tau_{a, \ell, a_\ell(\jj), D\cup\{\ell\}\setminus\{p\}} }_! \pi_{\jj,p} \\ =& \theta_{a,\ell,\jj, D\setminus\{p\}} \pi_{\jj,p}. \end{align*} (ii) Using Lemma \ref{firstlemma}(ii) and Lemma \ref{lemmaforcase2}(ii), we have \begin{align*} \pi_{a_\ell(\jj),\ell} \theta_{a,\ell,\jj, \Delta(\jj)} =& \Big( -\lambda_i \pi_{a_\ell(\jj),\ell} + \lambda_i \pi_{a_\ell(\jj),\ell} + \nu \sum_{m\in \Delta(\jj)} s_{m\ell} \big|_{a_\ell(\jj)} \pi_{a_\ell(\jj),m} \Big) { \tau_{a, \ell, a_\ell(\jj), \Delta(\jj)\cup\{\ell\}} }_! \\ =& \nu \sum_{m\in \Delta(\jj)} s_{m\ell} \big|_{a_\ell(\jj)} { \tau_{a, \ell, a_\ell(\jj), \Delta(\jj)\cup\{\ell\}\setminus\{m\}} }_! \pi_{\jj,m} .
\end{align*} (iii) Using Lemma \ref{lemmaforcase2}(iii) and Lemma \ref{firstlemma}(iii), we have \begin{align*} \theta_{a,\ell,\jj,D}\mu_{\jj,p} =& \Big(-\la_i + \mu_{a_\ell(\jj),\ell}\pi_{a_\ell(\jj),\ell} +\nu \sum_{m\in D} s_{m\ell} \big|_{a_\ell(\jj)} \Big) { \tau_{a,\ell,a_\ell(\jj),D\cup\{\ell\}} }_! \mu_{\jj,p} \\ =& \Big( -\la_i + \mu_{a_\ell(\jj),\ell}\pi_{a_\ell(\jj),\ell} +\nu \sum_{m\in D} s_{m\ell} \big|_{a_\ell(\jj)} \Big) \mu_{a_\ell(\jj),p} { \tau_{a,\ell,a_\ell(\jj),D\cup\{\ell\}\setminus\{p\}} }_! \\ =& \Big( -\la_i \mu_{a_\ell(\jj),p} + \mu_{a_\ell(\jj),\ell}\big( \mu_{a_\ell(\jj),p} \pi_{a_\ell(\jj),\ell} - \nu s_{p\ell} \big|_{a_\ell(\jj)} \big) \\ & +\nu \sum_{m\in D} s_{m\ell} \big|_{a_\ell(\jj)} \mu_{a_\ell(\jj),p}\Big) { \tau_{a,\ell,a_\ell(\jj),D\cup\{\ell\}\setminus\{p\}} }_! \\ =& \mu_{a_\ell(\jj),p} \theta_{a,\ell,\jj,D\setminus\{p\}} . \end{align*} \qed \section{\bf Proof of Proposition \ref{functor}} To prove Proposition \ref{functor}, we have to check that the relations in Definition \ref{algebra} hold for the maps $a'_\ell\big|_\jj$. First, we check the relations of type (i). Suppose we are given $\ell\in [1,n]$ and $\jj=(j_1,\ldots,j_n)\in I^n$. There are two cases. {\bf Case 1}, $j_\ell=i$: Then $\ell\in\Delta(\jj)$. We have \begin{align*} \sum_{a\in R} a'_\ell \big|_{a^*_\ell(\jj)} a^{*'}_\ell \big|_\jj =& \sum_{a\in R} \theta_{a,\ell,a^*_\ell(\jj),\Delta(\jj)\setminus\{\ell\}} \tau_{a,\ell,\jj,\Delta(\jj)}^! \\ =& \sum_{a\in R} \Big( -\la_i + \mu_{\jj,\ell} \pi_{\jj,\ell} + \nu\sum_{m\in \Delta(\jj)\setminus\{\ell\}} s_{m\ell} \big|_\jj \Big) { \tau_{a,\ell,\jj,\Delta(\jj)} }_! \tau_{a,\ell,\jj,\Delta(\jj)}^! \\ =& -\la_i + \mu_{\jj,\ell} \pi_{\jj,\ell} + \nu\sum_{m\in \Delta(\jj)\setminus\{\ell\}} s_{m\ell} \big|_\jj \quad \mbox{ (by Lemma \ref{lemmaforcase2}(i)) } \\ =& (r_i\la)_i + \nu\sum_{m\in \Delta(\jj)\setminus\{\ell\}} s_{m\ell} \big|_\jj \quad \mbox{ (since $\pi_{\jj,\ell}$ vanishes on $V'_\jj$) }. 
\end{align*} {\bf Case 2}, $j_\ell\neq i$: Then $\ell\notin \Delta(\jj)$. If $a\in R$ and $t(a)=j_\ell$, then \begin{align*} a^{*'}_\ell \big|_{ a_\ell(\jj) } a'_\ell \big|_\jj =& \tau^!_{a,\ell, a_\ell(\jj), \Delta(\jj)\cup\{\ell\}} \theta_{ a,\ell,\jj,\Delta(\jj) } \\ =& \Big( \sum_{\xi\in\X(\Delta(\jj))} \mu_{\jj,\xi} \pi_{a_\ell(\jj), \tau_{a,\ell, \Delta(\jj)\cup\{\ell\}} (\xi)} \Big)\\ & \times \Big( -\la_i + \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj),\ell} + \nu\sum_{m\in \Delta(\jj)} s_{m\ell}\big|_{a_\ell(\jj)} \Big) \\ & \times \Big(\sum_{\varepsilon\in\X(\Delta(\jj))} \mu_{a_\ell(\jj), \tau_{a,\ell, \Delta(\jj)\cup\{\ell\}} (\varepsilon)} \pi_{\jj,\varepsilon} \Big) \\ =& -\la_i + \sum_{\xi\in \X(\Delta(\jj))} \mu_{\jj,\xi} (a^*a)_\ell\big|_{t(\jj,\xi)} \pi_{\jj,\xi} \\ & + \nu \sum_{m\in\Delta(\jj)} \sum_{ \{ \xi\in\X(\Delta(\jj)) \mid \xi(m)=a \} } \mu_{\jj,\xi} s_{m\ell} \pi_{\jj,\xi} . \end{align*} Hence, \begin{align*} & \sum_{ \{a\in Q\mid h(a)=j_\ell\} } a'_\ell\big|_{a^*_\ell(\jj)} a^{*'}_\ell\big|_\jj - \sum_{ \{a\in Q\mid t(a)=j_\ell\} } a^{*'}_\ell\big|_{a_\ell(\jj)} a'_\ell\big|_\jj \\ =& \sum_{\xi\in \X(\Delta(\jj))} \mu_{\jj,\xi} \Big( \sum_{ \{a\in Q\mid h(a)=j_\ell\} } aa^* -\sum_{ \{a\in Q\mid t(a)=j_\ell\} } a^*a \Big)_\ell \big|_{t(\jj,\xi)} \pi_{\jj,\xi} \\ & + \la_i \Big(\sum_{ \{a\in R\mid t(a)=j_\ell\} } 1\Big) - \nu \sum_{ \{a\in R\mid t(a)=j_\ell\} } \sum_{m\in\Delta(\jj)} \sum_{ \{ \xi\in\X(\Delta(\jj)) \mid \xi(m)=a \} } \mu_{\jj,\xi} s_{m\ell} \pi_{\jj,\xi} \\ =& \sum_{\xi\in \X(\Delta(\jj))} \mu_{\jj,\xi} \Big( \la_{j_\ell} + \nu\sum_{ \{m\neq\ell \mid j_m=j_\ell\} } s_{m\ell} \big|_{t(\jj,\xi)} + \nu\sum_{ \{ m\in\Delta(\jj) \mid t(\xi(m))=j_\ell \} } s_{m\ell} \big|_{t(\jj,\xi)} \Big) \pi_{\jj,\xi} \\ & -(\epsilon_{j_\ell},\epsilon_i)\la_i - \nu \sum_{ \{a\in R\mid t(a)=j_\ell\} } \sum_{m\in\Delta(\jj)} \sum_{ \{ \xi\in\X(\Delta(\jj)) \mid \xi(m)=a \} } \mu_{\jj,\xi} s_{m\ell} \pi_{\jj,\xi} \\ =& (r_i\la)_{j_\ell} + \nu\sum_{ 
\{m\neq\ell \mid j_m=j_\ell\} } s_{m\ell} \big|_\jj . \end{align*} Next, we check the relations of type (ii). Suppose we are given $\ell, m\in [1,n]$ ($\ell\neq m$), $a,b\in \QQ$, and $\jj=(j_1,\ldots,j_n)\in I^n$, such that $j_\ell=t(a)$, $j_m=t(b)$. There are six cases. {\bf Case 1}, $t(a),h(a),t(b),h(b)\neq i$: \begin{align*} & a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj - b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj \\ =& a_\ell\big|_{b_m(\jj), \Delta(\jj)} b_m\big|_{\jj,\Delta(\jj)} - b_m\big|_{a_\ell(\jj),\Delta(\jj)} a_\ell\big|_{\jj,\Delta(\jj)}\\ =& \sum_{\xi\in \X(\Delta(\jj))} \mu_{a_\ell(b_m(\jj)), \xi} \Big( a_\ell\big|_{t(b_m(\jj),\xi)} b_m\big|_{t(\jj,\xi)} - b_m\big|_{t(a_\ell(\jj),\xi)} a_\ell\big|_{t(\jj,\xi)} \Big) \pi_{\jj,\xi} \\ =& \left\{ \begin{array}{ll} \nu s_{\ell m} \big|_{\jj} & \textrm{if $b\in Q$ and $a=b^*$},\\ - \nu s_{\ell m} \big|_{\jj} & \textrm{if $a\in Q$ and $b=a^*$},\\ 0 & \textrm{else}\,. \end{array} \right. \end{align*} {\bf Case 2}, $t(a)=i$ and $t(b),h(b)\neq i$: \begin{align*} & a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj - b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj \\ =& \tau^!_{a^*, \ell, b_m(\jj), \Delta(\jj)} b_m\big|_{\jj, \Delta(\jj)} - b_m\big|_{a_\ell(\jj), \Delta(\jj)\setminus\{\ell\}} \tau^!_{a^*,\ell,\jj,\Delta(\jj)}\\ =& \sum_{\eta\in\X(\Delta(\jj)\setminus\{\ell\})} \mu_{a_\ell(b_m(\jj)),\eta} b_m\big|_{t(\jj, \tau_{a^*,\ell,\Delta(\jj)}(\eta) )} \pi_{\jj, \tau_{a^*,\ell,\Delta(\jj)}(\eta)} \\ & - \sum_{\eta\in\X(\Delta(\jj)\setminus\{\ell\})} \mu_{b_m(a_\ell(\jj)),\eta} b_m\big|_{t(a_\ell(\jj),\eta)} \pi_{\jj, \tau_{a^*,\ell,\Delta(\jj)}(\eta)} \\ =& 0. \end{align*} {\bf Case 3}, $h(a)=i$ and $t(b),h(b)\neq i$: We have \begin{align*} { \tau_{a, \ell, a_\ell(b_m(\jj)), \Delta(\jj)\cup\{\ell\}} }_! 
b_m\big|_{\jj,\Delta(\jj)} =& \sum_{\xi\in\X(\Delta(\jj))} \mu_{ a_\ell(b_m(\jj)), \tau_{a,\ell,\Delta(\jj)\cup\{\ell\}}(\xi) } b_m\big|_{t(\jj,\xi)} \pi_{\jj,\xi}\\ =& b_m\big|_{a_\ell(\jj), \Delta(\jj)\cup\{\ell\}} { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_!. \end{align*} By Lemma \ref{lemmaforcase1}, \begin{align*} \mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell} b_m\big|_{a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} =& \mu_{a_\ell(b_m(\jj)),\ell} b_m\big|_{a_\ell(\jj),\Delta(\jj)} \pi_{a_\ell(\jj),\ell} \\ =& b_m\big|_{a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj),\ell} . \end{align*} Hence, \begin{align*} & a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj - b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj \\ =& \theta_{a,\ell,b_m(\jj),\Delta(\jj)} b_m\big|_{\jj,\Delta(\jj)} - b_m\big|_{a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} \theta_{a,\ell,\jj,\Delta(\jj)} \\ =& \Big( -\la_i + \mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell} + \nu \sum_{p\in \Delta(\jj)} s_{p\ell} \big|_{a_\ell(b_m(\jj))} \Big) { \tau_{a, \ell, a_\ell(b_m(\jj)), \Delta(\jj)\cup\{\ell\}} }_! b_m\big|_{\jj,\Delta(\jj)} \\ & - b_m\big|_{a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} \Big( -\la_i + \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj),\ell} + \nu \sum_{p\in \Delta(\jj)} s_{p\ell} \big|_{a_\ell(\jj)} \Big) { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! \\ =& 0. \end{align*} {\bf Case 4}, $t(a)=t(b)=i$: \begin{align*} a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj =& \tau^!_{a^*,\ell,b_m(\jj),\Delta(\jj)\setminus\{m\}} \tau^!_{b^*,m,\jj,\Delta(\jj)} \\ =& \sum_{\eta\in\X( \Delta(\jj)\setminus \{\ell,m\} )} \mu_{a_\ell(b_m(\jj)), \eta} \pi_{ \jj, \tau_{b^*, m, \Delta(\jj)} (\tau_{a^*,\ell,\Delta(\jj)\setminus\{m\}} (\eta)) }\\ =& \tau^!_{b^*,m,a_\ell(\jj),\Delta(\jj)\setminus\{\ell\}} \tau^!_{a^*,\ell,\jj,\Delta(\jj)} \\ =& b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj .
\end{align*} {\bf Case 5}, $h(a)=t(b)=i$: \begin{align*} & b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj \\ =& \tau^!_{b^*,m,a_\ell(\jj), \Delta(\jj)\cup\{\ell\} } \theta_{a,\ell, \jj, \Delta(\jj)} \\ =& \tau^!_{b^*,m,a_\ell(\jj), \Delta(\jj)\cup\{\ell\} } \Big( -\la_i + \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj),\ell} + \nu \sum_{p\in \Delta(\jj)} s_{p\ell} \big|_{a_\ell(\jj)} \Big) { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! \\ =&\Big( -\la_i + \mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell} + \nu \sum_{p\in \Delta(\jj)\setminus\{m\}} s_{p\ell} \big|_{a_\ell(b_m(\jj))} \Big) \tau^!_{b^*,m,a_\ell(\jj), \Delta(\jj)\cup\{\ell\} } \\ &\times { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! + \nu s_{m\ell} \big|_{b_\ell(a_\ell(\jj))} \tau^!_{b^*,\ell,a_\ell(\jj), \Delta(\jj)\cup\{\ell\} } { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! \\ & \mbox{ (using Lemma \ref{lemmaforcase2}).} \end{align*} Now \begin{align*} & \tau^!_{b^*,m,a_\ell(\jj), \Delta(\jj)\cup\{\ell\} } { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! \\ =& \Big( \sum_{\xi\in \X(\Delta(\jj)\cup\{\ell\}\setminus\{m\})} \mu_{b_m(a_\ell(\jj)),\xi} \pi_{a_\ell(\jj), \tau_{ b^*,m,\Delta(\jj)\cup\{\ell\} } (\xi)} \Big) \Big( \sum_{\zeta\in \X(\Delta(\jj))} \mu_{ a_\ell(\jj), \tau_{ a,\ell,\Delta(\jj)\cup\{\ell\} } (\zeta) } \pi_{\jj,\zeta} \Big)\\ =& \sum_{\eta \in \X( \Delta(\jj)\setminus\{m\} )} \mu_{ a_\ell(b_m(\jj)), \tau_{ a,\ell,\Delta(\jj)\cup\{\ell\}\setminus\{m\} } (\eta) } \pi_{\jj, \tau_{b^*,m,\Delta(\jj)} (\eta)} \\ =& { \tau_{a,\ell,a_\ell(b_m(\jj)), \Delta(\jj)\cup\{\ell\}\setminus\{m\}} }_! \tau^!_{b^*,m,\jj,\Delta(\jj)}, \end{align*} and \begin{align*} & \tau^!_{b^*,\ell,a_\ell(\jj), \Delta(\jj)\cup\{\ell\} } { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! 
\\ =& \Big( \sum_{\xi\in \X(\Delta(\jj))} \mu_{b_\ell(a_\ell(\jj)),\xi} \pi_{a_\ell(\jj), \tau_{ b^*,\ell,\Delta(\jj)\cup\{\ell\} } (\xi)} \Big) \Big( \sum_{\zeta\in \X(\Delta(\jj))} \mu_{ a_\ell(\jj), \tau_{ a,\ell,\Delta(\jj)\cup\{\ell\} } (\zeta) } \pi_{\jj,\zeta} \Big)\\ =& \left\{ \begin{array}{ll} 0 & \mbox{ if } a\neq b^*,\\ 1 & \mbox{ if } a = b^*. \end{array}\right. \end{align*} Hence, $$ b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj = \left\{ \begin{array}{ll} a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj & \mbox{ if } a\neq b^*,\\ a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj +\nu s_{m\ell}\big|_{\jj} & \mbox{ if } a=b^*. \end{array}\right. $$ {\bf Case 6}, $h(a)=h(b)=i$: \begin{align*} & a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj \\ =& \theta_{a,\ell,b_m(\jj),\Delta(\jj)\cup\{m\}} \theta_{b,m,\jj,\Delta(\jj)} \\ =& \Big( -\la_i + \mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell} + \nu\sum_{p \in \Delta(\jj)\cup\{m\}} s_{p\ell}\big|_{a_\ell(b_m(\jj))} \Big) { \tau_{a,\ell,a_\ell(b_m(\jj)),\Delta(\jj)\cup\{\ell,m\}} }_! \\ & \times \Big( -\la_i + \mu_{b_m(\jj),m} \pi_{b_m(\jj),m} + \nu\sum_{q\in \Delta(\jj)} s_{qm}\big|_{b_m(\jj)} \Big) { \tau_{b,m,b_m(\jj),\Delta(\jj)\cup\{m\}} }_! \\ =& \Big( -\la_i + \mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell} + \nu\sum_{p \in \Delta(\jj)\cup\{m\}} s_{p\ell}\big|_{a_\ell(b_m(\jj))} \Big) \\ & \times \Big( -\la_i + \mu_{a_\ell(b_m(\jj)),m} \pi_{a_\ell(b_m(\jj)),m} + \nu\sum_{q \in \Delta(\jj)} s_{qm}\big|_{a_\ell(b_m(\jj))} \Big) \\ & \times { \tau_{a,\ell,a_\ell(b_m(\jj)),\Delta(\jj)\cup\{\ell,m\}} }_! { \tau_{b,m,b_m(\jj),\Delta(\jj)\cup\{m\}} }_! \quad \mbox{ (using Lemma \ref{lemmaforcase2}) }. \end{align*} Now \begin{align*} & { \tau_{a,\ell,a_\ell(b_m(\jj)),\Delta(\jj)\cup\{\ell,m\}} }_! { \tau_{b,m,b_m(\jj),\Delta(\jj)\cup\{m\}} }_!
\\ =& \sum_{\xi\in\X(\Delta(\jj))} \mu_{a_\ell(b_m(\jj)), \tau_{a,\ell, \Delta(\jj)\cup\{\ell,m\}} (\tau_{b,m,\Delta(\jj)\cup\{m\}} (\xi) )} \pi_{\jj,\xi}\\ =& { \tau_{b,m,a_\ell(b_m(\jj)),\Delta(\jj)\cup\{\ell,m\}} }_! { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! \end{align*} Using Lemma \ref{firstlemma}, we have \begin{align*} & \big(\mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell} +\nu s_{m\ell}\big|_{a_\ell(b_m(\jj))} \big) \mu_{a_\ell(b_m(\jj)),m} \pi_{a_\ell(b_m(\jj)),m} \\ =& \mu_{a_\ell(b_m(\jj)),\ell} \mu_{a_\ell(b_m(\jj)),m} \pi_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),m} \\ =& \big(\mu_{a_\ell(b_m(\jj)),m} \pi_{a_\ell(b_m(\jj)),m} +\nu s_{m\ell}\big|_{a_\ell(b_m(\jj))} \big) \mu_{a_\ell(b_m(\jj)),\ell} \pi_{a_\ell(b_m(\jj)),\ell}. \end{align*} Moreover, \begin{align*} \sum_{p\in\Delta(\jj)\cup\{m\}} s_{p\ell} \sum_{q \in \Delta(\jj)} s_{qm} =& \sum_{\substack{p,q\in\Delta(\jj) \\ p\neq q}} s_{p\ell}s_{qm} + \sum_{q\in\Delta(\jj)} s_{q\ell}s_{qm} + \sum_{q\in\Delta(\jj)} s_{m\ell}s_{qm}\\ =& \sum_{\substack{p,q\in\Delta(\jj) \\ p\neq q}} s_{qm}s_{p\ell} + \sum_{q\in\Delta(\jj)} s_{m\ell}s_{q\ell} + \sum_{q\in\Delta(\jj)} s_{qm}s_{q\ell}\\ =& \sum_{p\in\Delta(\jj)\cup\{\ell\}} s_{pm} \sum_{q \in \Delta(\jj)} s_{q\ell}. \end{align*} Hence, $$ a'_\ell\big|_{b_m(\jj)} b'_m\big|_\jj = b'_m\big|_{a_\ell(\jj)} a'_\ell\big|_\jj .$$ This completes the proof of Proposition \ref{functor}. \section{\bf Properties of the reflection functors} \subsection{} Let $i$ be a vertex of $Q$ such that there is no edge-loop at $i$. Define $\Lambda_i$ to be the set of all $(\la,\nu)\in B\times \k$ such that $\la_i \pm \nu \sum_{m=2}^r s_{1 m}$ are invertible in $\k[S_r]$ for all $r\in [1,n]$. \begin{theorem} \label{equivalence} If $(\la,\nu)\in\Lambda_i$, then the functor $$F_i: \A_{n,\la,\nu}\mathrm{-mod} \ \too\ \A_{n,r_i\la,\nu}\mathrm{-mod}$$ is an equivalence of categories with quasi-inverse functor $F_i$. 
\end{theorem} \begin{proof} Let $V\in\A_{n,\la,\nu}\mathrm{-mod}$ and $V'=F_i(V)\in\A_{n,r_i\la,\nu}\mathrm{-mod}$. Let $\jj\in I^n$. By Lemma \ref{firstlemma}(ii), for any $p\in \Delta(\jj)$, the composition $$ \xymatrix{ V(\jj, \Delta(\jj)\setminus\{p\}) \ar[rr]^{\mu_{\jj,p}} && V(\jj,\Delta(\jj)) \ar[rr]^{\pi_{\jj,p}} && V(\jj, \Delta(\jj)\setminus\{p\}) }$$ is equal to $\la_i$. Since $\la_i$ is invertible, we have a direct sum decomposition $$ V(\jj,\Delta(\jj)) = \mathrm{Ker} \pi_{\jj,p} \oplus \mathrm{Im} \mu_{\jj,p}.$$ Now $V'(\jj,\Delta(\jj))=V(\jj,\Delta(\jj))$, and $$ V'(\jj,\Delta(\jj)\setminus\{p\}) = \bigoplus_{\eta\in\X(\Delta(\jj)\setminus\{p\})} V'_{t(\jj,\eta)} \subset \bigoplus_{\eta\in\X(\Delta(\jj)\setminus\{p\})} \Big( \bigoplus_{r\in R} V_{t(\jj, \tau_{r,p,\Delta(\jj)}(\eta))} \Big) = V(\jj,\Delta(\jj)). $$ Observe that the kernel of the map $$ -\la_i + \mu_{\jj,p}\pi_{\jj,p}:V(\jj,\Delta(\jj)) \too V(\jj,\Delta(\jj)) $$ is $\mathrm{Im} \mu_{\jj,p} \subset V(\jj,\Delta(\jj))$. Hence, $$ F_i(F_i(V))_\jj = F_i(V')_\jj = \left\{ \begin{array}{ll} \bigcap_{p\in \Delta(\jj)} \mathrm{Im}(\mu_{\jj,p}) & \mbox{ if } \Delta(\jj) \neq\emptyset,\\ V_\jj & \mbox{ if } \Delta(\jj)=\emptyset. \end{array} \right.$$ Suppose $\Delta(\jj) = \{ p_1, \ldots, p_r \}$. We have a canonical map $$\mu_{\jj,p_1} \cdots \mu_{\jj,p_r}:V_\jj \too V(\jj,\Delta(\jj)).$$ By Lemma \ref{firstlemma}(iv), this map does not depend on the ordering of $p_1,\ldots,p_r$, and its image lies in $\bigcap_{p\in \Delta(\jj)} \mathrm{Im}(\mu_{\jj,p})$. We claim that it is an isomorphism from $V_\jj$ to $\bigcap_{p\in \Delta(\jj)} \mathrm{Im}(\mu_{\jj,p})$. By Lemma \ref{firstlemma}(ii), each $\mu_{\jj,p}$ is injective. Hence, we have to show that $\bigcap_{p\in \Delta(\jj)} \mathrm{Im}(\mu_{\jj,p}) \subset \mathrm{Im}(\mu_{\jj,p_1} \cdots \mu_{\jj,p_r})$. 
It suffices to prove that, for $h\in [2,r]$, we have $$ \mathrm{Im}(\mu_{\jj,p_1} \cdots \mu_{\jj,p_{h-1}}) \cap \mathrm{Im}(\mu_{\jj,p_h}) \subset \mathrm{Im}(\mu_{\jj,p_1} \cdots \mu_{\jj,p_h}) \subset V(\jj,\Delta(\jj)).$$ Suppose that $\mu_{\jj,p_1} \cdots \mu_{\jj,p_{h-1}}(v) = \mu_{\jj,p_h}(w)$. Then, by Lemma \ref{firstlemma}, \begin{gather*} \la_i w = \pi_{\jj,p_h}\mu_{\jj,p_h}(w) = \pi_{\jj,p_h} \mu_{\jj,p_1} \cdots \mu_{\jj,p_{h-1}}(v) \\ = \mu_{\jj,p_1} \cdots \mu_{\jj,p_{h-1}} \pi_{\jj,p_h} (v) -\nu\sum_{g=1}^{h-1} s_{p_g p_h} \mu_{\jj,p_1} \cdots \mu_{\jj,p_{g-1}}\mu_{\jj,p_{g+1}} \cdots \mu_{\jj,p_{h-1}} (v). \end{gather*} Hence, \begin{align*} \la_i \mu_{\jj,p_h}(w) =& \mu_{\jj,p_1} \cdots \mu_{\jj,p_{h}} \pi_{\jj,p_h} (v) -\nu\sum_{g=1}^{h-1} s_{p_g p_h} \mu_{\jj,p_1} \cdots \mu_{\jj,p_{g}} \cdots \mu_{\jj,p_{h-1}} (v)\\ =& \mu_{\jj,p_1} \cdots \mu_{\jj,p_{h}} \pi_{\jj,p_h} (v) -\nu\sum_{g=1}^{h-1} s_{p_g p_h} \mu_{\jj,p_h}(w). \end{align*} It follows that $$ \mu_{\jj,p_h}(w) = (\la_i+\nu\sum_{g=1}^{h-1} s_{p_g p_h})^{-1} \mu_{\jj,p_1} \cdots \mu_{\jj,p_{h}} \pi_{\jj,p_h} (v) \in \mathrm{Im}(\mu_{\jj,p_1} \cdots \mu_{\jj,p_h}).$$ It remains to show that our map $V\to F_i(F_i(V))$ commutes with the actions. Let $a_\ell\big|_\jj\in\E$, with $t(a)=j_\ell$. Let $\Delta(\jj)=\{p_1,\ldots,p_r\}$. If $h(a),t(a)\neq i$, then by Lemma \ref{lemmaforcase1}(ii), $$ a_\ell\big|_{\jj, \Delta(\jj)} \mu_{\jj,p_1} \cdots \mu_{\jj,p_r} = \mu_{a_\ell(\jj),p_1} \cdots \mu_{a_\ell(\jj),p_r} a_\ell\big|_\jj.$$ If $t(a)=i$, let $p_r=\ell$. Then by Lemma \ref{lemmaforcase2}(iii), \begin{align*} \tau^!_{a^*,\ell,\jj,\Delta(\jj)} \mu_{\jj,p_1} \cdots \mu_{\jj,p_r} =& \mu_{a_\ell(\jj),p_1} \cdots \mu_{a_\ell(\jj),p_{r-1}} \tau^!_{a^*,\ell,\jj,\{\ell\}} \mu_{\jj,p_r}\\ =& \mu_{a_\ell(\jj),p_1} \cdots \mu_{a_\ell(\jj),p_{r-1}} a_\ell\big|_\jj. 
\end{align*} If $h(a)=i$, then the map $(F_iF_iV)_\jj\to (F_iF_iV)_{a_\ell(\jj)}$ is \begin{gather*} \Big( \la_i +\big( -\la_i+ \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj),\ell} \big) + \nu \sum_{m\in \Delta(\jj)} s_{m\ell}\big|_{a_\ell(\jj)} \Big) { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! \\ = \la_i { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! +\theta_{a,\ell,\jj,\Delta(\jj)}. \end{gather*} By Lemma \ref{lemmaforcase2}(iii) and Lemma \ref{lemmaforcase3}(iii), we have \begin{align*} & \big(\la_i { \tau_{a,\ell,a_\ell(\jj),\Delta(\jj)\cup\{\ell\}} }_! +\theta_{a,\ell,\jj,\Delta(\jj)}\big) \mu_{\jj,p_1} \cdots \mu_{\jj,p_r} \\ =& \mu_{a_\ell(\jj),p_1} \cdots \mu_{a_\ell(\jj),p_r} \big( \la_i { \tau_{a,\ell,a_\ell(\jj), \{\ell\}} }_! + \theta_{a,\ell,\jj,\emptyset} \big) \\ =& \mu_{a_\ell(\jj),p_1} \cdots \mu_{a_\ell(\jj),p_r} \mu_{a_\ell(\jj),\ell} \pi_{a_\ell(\jj),\ell} { \tau_{a,\ell,a_\ell(\jj),\{\ell\}} }_! \\ =& \mu_{a_\ell(\jj),p_1} \cdots \mu_{a_\ell(\jj),p_r} \mu_{a_\ell(\jj),\ell} a_\ell\big|_\jj. \end{align*} \end{proof} It is easy to see that the functor $F_i: \A_{n,\la,\nu}\mathrm{-mod} \,\to\,\A_{n,r_i\la,\nu}\mathrm{-mod}$ is left exact. \begin{corollary} \label{exact} Suppose $(\la,\nu)\in \Lambda_i$. Then the functor $F_i:\A_{n,\la,\nu}\mathrm{-mod} \to \A_{n,r_i\la,\nu}\mathrm{-mod}$ is exact. \end{corollary} \begin{proof} By Theorem \ref{equivalence}, $F_i$ has a quasi-inverse functor. Since $F_i$ is left exact, it must also be right exact by \cite[Theorem 5.8.3]{We}. \end{proof} Let $\k'$ be a commutative $\k$-algebra. The following corollary will be used later in the proof of Proposition \ref{flat2}. \begin{corollary} \label{fiber} Suppose $(\la,\nu)\in \Lambda_i$. Let $V$ be any $\A_{n,\la,\nu}$-module. Then $F_i(V)\ot_\k \k'=F_i(V\ot_\k \k')$. \end{corollary} \begin{proof} There is a natural map $f:F_i(V)\ot_\k \k' \to F_i(V\ot_\k \k')$. We have $$ (F_i(F_iV))\ot_\k \k' \too F_i((F_iV)\ot_\k \k') \too F_i(F_i(V\ot_\k \k')).
$$ By Theorem \ref{equivalence}, the composition of these two maps is the identity map of $V\ot_\k \k'$. The injectivity of the first map for any $V$ implies that $f$ is injective. The surjectivity of the second map and the exactness of $F_i$ imply that $f$ is surjective. \end{proof} \begin{remark} Let $g$ be an automorphism of the graph $Q$. Then $g$ acts on $B$ by $(g\la)_j=\la_{g^{-1}(j)}$ for any $j$, and $g(r_i\la)=r_{g(i)}(g\la)$. It was pointed out to the author by Iain Gordon that $g$ induces an isomorphism of algebras $\A_{n,\la,\nu}\to\A_{n,g\la,\nu}$. Observe that the following diagram commutes: $$\xymatrix{ \A_{n,g\la,\nu}-\mathrm{mod} \ar[rr]^{} \ar[d]_{F_{g(i)}} && \A_{n,\la,\nu}-\mathrm{mod} \ar[d]^{F_i} \\ \A_{n,r_{g(i)}(g\la),\nu}-\mathrm{mod} \ar[rr]^{} && \A_{n,r_i\la,\nu}-\mathrm{mod} }$$ \end{remark} \subsection{} In this subsection, we recall the definitions of a commutative cube and its associated complex; see \cite[\S3]{Kh} for a more detailed discussion. Let $\Delta$ be a finite set. For any $J\subset\Delta$, we let $\Z J$ be the $\Z$-module freely generated by the elements of $J$, and write $\det(J)$ for $\det(\Z J)$. If $p\in \Delta\setminus J$, then we define an isomorphism $$\iota: \det(J) \too \det(J\cup\{p\}),\quad x\mapsto x\wedge p.$$ Let $\mathscr Z$ be the category of modules over a ring. \begin{definition} A commutative $\Delta$-cube $(Z,\psi)$ (over $\mathscr Z$) consists of the following data: \begin{itemize} \item an object $Z(J)\in \mathrm{Ob}(\mathscr Z)$ for each $J\subset\Delta$; \item a morphism $\psi_{J,p}: Z(J)\to Z(J\cup\{p\})$ for each $J\subset\Delta$ and $p\in \Delta\setminus J$. \end{itemize} These data are required to satisfy the following condition: for each $J\subset\Delta$ and $p,q\in \Delta\setminus J$ where $p\neq q$, we have $\psi_{J\cup\{p\},q}\psi_{J,p} =\psi_{J\cup\{q\},p}\psi_{J,q}$. \end{definition} Let $(Z,\psi)$ be a commutative $\Delta$-cube. 
We shall construct a complex $\CC^\bullet(Z)$ in the category $\mathscr Z$. For each integer $r$, let $$\CC^r(Z):= \bigoplus_{\substack{J\subset\Delta\\ |J|=r}} Z(J) \ot_\Z \det(J).$$ Define the map $$ d: \CC^r(Z)\to\CC^{r+1}(Z), \quad d:= \sum_{\substack{J\subset\Delta\\ |J|=r}} \sum_{p\in \Delta\setminus J} \psi_{J,p} \ot \iota.$$ It is easy to check that $d^2=0$, so that $(\CC^\bullet(Z), d)$ is a complex. Let $q\in\Delta$, and let $\Delta_q:=\Delta\setminus\{q\}$. Define $$ Z_0(J):=Z(J),\quad Z_1(J):=Z(J\cup\{q\}), \quad \mbox{ for all } J\subset\Delta_q.$$ Then both $(Z_0, \psi)$ and $(Z_1, \psi)$ are commutative $\Delta_q$-cubes. Let $$f: \CC^\bullet (Z_0) \too \CC^\bullet (Z_1), \quad f := \sum_{J\subset\Delta_q} \psi_{J,q} \ot \mathrm{Id}.$$ The map $f$ is a morphism of complexes. We note that the complex $\CC^\bullet (Z)$ is the cone of the morphism $f[-1]$, and we have a short exact sequence \begin{equation} \label{cone} 0\too \CC^\bullet (Z_1)[-1] \too \CC^\bullet (Z) \too \CC^\bullet (Z_0) \too 0. \end{equation} \begin{example} \label{idempotent} Let $Z'$ be an object of $\mathscr Z$, and let $\psi_1, \ldots, \psi_m$ be a set of commuting endomorphisms of $Z'$. Let $\Delta:=[1,m]$. For each $J\subset\Delta$, define $$ Z(J):= \left\{\begin{array}{ll} \mathrm{Im}(\psi_{q_1}\cdots\psi_{q_r}) &\mbox{ if } J = \{ q_1, \ldots, q_r \}, \\ Z' & \mbox{ if } J=\emptyset. \end{array}\right. $$ If $J\subset\Delta$ and $p\in\Delta\setminus J$, then define $\psi_{J,p} : Z(J)\to Z(J\cup\{p\})$ to be the restriction of the morphism $\psi_p$ to $Z(J)$. It is clear that $(Z,\psi)$ is a commutative $\Delta$-cube. {\it Claim}: If $\psi_1, \ldots, \psi_m$ are idempotents, then $H^r (\CC^\bullet(Z)) = 0$ for all $r>0$. {\it Proof of Claim}: This is clear if $m=1$. We shall prove the claim by induction on $m$. Let $q=1$. We have the commutative $\Delta_q$-cubes $(Z_0,\psi)$ and $(Z_1,\psi)$. 
By (\ref{cone}), we have the long exact sequence \begin{gather*} 0 \too H^0(\CC^\bullet (Z)) \too H^0(\CC^\bullet (Z_0)) \stackrel{\psi_1}{\too} H^0(\CC^\bullet (Z_1)) \too H^1(\CC^\bullet (Z)) \too \\ \too H^1(\CC^\bullet (Z_0)) \too H^1(\CC^\bullet (Z_1)) \too H^2(\CC^\bullet (Z)) \too H^2(\CC^\bullet (Z_0)) \too \cdots \end{gather*} By the induction hypothesis, we have $H^r(\CC^\bullet (Z_0))=H^r(\CC^\bullet (Z_1))=0$ for all $r>0$. Hence, $H^r(\CC^\bullet (Z))=0$ for all $r>1$, and $H^1(\CC^\bullet (Z))$ is isomorphic to the cokernel of $\psi_1: H^0(\CC^\bullet (Z_0))\to H^0(\CC^\bullet (Z_1))$. Suppose $z\in H^0(\CC^\bullet (Z_1))$. Then $z\in\mathrm{Im}(\psi_1)$ implies $\psi_1(z)=z$; and $z\in\mathrm{Ker}(\psi_p)$ for all $p>1$ implies $z\in H^0(\CC^\bullet (Z_0))$. Therefore, $H^1(\CC^\bullet(Z))=0$. \qed \end{example} \subsection{} Let $V$ be a $\A_{n,\la,\nu}$-module. For each $\jj\in I^n$, define $$Z_\jj(J):= V(\jj, \Delta(\jj)\setminus J) \quad \mbox{ and } \quad \psi_{J,p} := \pi_{\jj,p} \qquad \mbox{ for } J\subset \Delta(\jj),\ p\in \Delta(\jj)\setminus J.$$ By Lemma \ref{firstlemma}(iv), $(Z_\jj,\psi)$ is a commutative $\Delta(\jj)$-cube (over the category of $\B$-modules). Define the complex of $\B\rtimes \k[S_n]$-modules $$ \CC^\bullet(V) := \bigoplus_{\jj\in I^n} \CC^\bullet (Z_\jj). $$ Thus, $$ \CC^r (V) = \bigoplus_{\jj\in I^n} \bigoplus_{\substack{D\subset \Delta(\jj) \\ |D|=|\Delta(\jj)|-r}} V(\jj,D) \ot_\Z \det(\Delta(\jj)\setminus D), \qquad r=0, \ldots, n. $$ We remark that $S_n$ acts diagonally. Observe that \begin{equation} \label{h0} F_i(V) = H^0(\CC^\bullet(V)). \end{equation} We have the following results when $\nu=0$. \begin{proposition} \label{highercohomology} Let $\la\in B$ and assume $\la_i$ is invertible in $\k$. Let $V$ be a $\A_{n,\la, 0}$-module. Then $H^r (\CC^\bullet (V)) = 0$ for all $r>0$. \end{proposition} \begin{proof} Fix $\jj\in I^n$. Let $Z'=V(\jj,\Delta(\jj))$. 
For each $p\in \Delta(\jj)$, define an endomorphism $\psi_p$ of $Z'$ by $\psi_p = \la_i^{-1} \mu_{\jj,p}\pi_{\jj,p}$. Note that $\pi_{\jj,p}\mu_{\jj,p}=\la_i$ by Lemma \ref{firstlemma}, so each $\psi_p$ is an idempotent. Suppose $D=\Delta(\jj)\setminus \{q_1,\ldots,q_r\}$ and $p\in D$. Then by Lemma \ref{firstlemma}, the following diagram commutes: $$\xymatrix{ V(\jj, D) \ar[rrrr]^{\la_i^{-r}\mu_{\jj,q_1}\cdots\mu_{\jj,q_r}} \ar[d]_{\pi_{\jj,p}} &&&& \mathrm{Im}(\psi_{q_1}\cdots\psi_{q_r}) \ar[d]^{\psi_p} \\ V(\jj, D\setminus\{p\}) \ar[rrrr]^{\la_i^{-(r+1)}\mu_{\jj,p}\mu_{\jj,q_1}\cdots\mu_{\jj,q_r}} &&&& \mathrm{Im}(\psi_p\psi_{q_1}\cdots\psi_{q_r}) }$$ Moreover, the horizontal maps in the above diagram are isomorphisms. Hence, the proposition is immediate from the claim in Example \ref{idempotent}. \end{proof} The Grothendieck group of an abelian category $\mathscr Z$ is an abelian group with generators $[Z]$, for all objects $Z$ of $\mathscr Z$, and defining relations $[Z] = [Z']+[Z'']$ for all short exact sequences $0\to Z' \to Z \to Z'' \to 0$. \begin{corollary} \label{eulerpoincare} Let $\la\in B$ and assume $\la_i$ is invertible in $\k$. Let $V$ be a $\A_{n,\la, 0}$-module. Then in the Grothendieck group of the category of $\B\rtimes \k[S_n]$-modules, we have $$ F_i(V) = \sum_{r=0}^n (-1)^r \bigg[ \bigoplus_{\jj\in I^n} \bigoplus_{ \substack{D\subset\Delta(\jj) \\ |D|=|\Delta(\jj)|-r} } V(\jj, D) \ot_\Z \det(\Delta(\jj)\setminus D) \bigg] . $$ \end{corollary} \begin{proof} This follows from (\ref{h0}) and Proposition \ref{highercohomology} by the Euler-Poincar\'e principle. \end{proof} The author does not know what happens if $\nu\neq 0$; but see Proposition \ref{flat2} below. We conjecture that in general $H^r(\CC^\bullet(V))$ are the right derived functors of $F_i$. One can also similarly define a complex using the maps $\mu_{\jj,p}$ instead of $\pi_{\jj,p}$. \subsection{} In this subsection, we let $\k:=\C$. We shall determine the set $\Lambda_i$. 
First, we recall some standard results on the representation theory of symmetric groups; see \cite[\S2.2]{EM} and \cite[\S2.4]{M1}. For a Young diagram $\mu$ corresponding to a partition of $n$, we write $\pi_\mu$ for the associated irreducible representation of $S_n$. For a cell $j$ in $\mu$, we let $\mathbf c(j)$ be the signed distance from $j$ to the diagonal. The content $\mathbf c(\mu)$ of $\mu$ is the sum of $\mathbf c(j)$ over all cells $j$ in $\mu$. Denote by $S_{n-1}$ the subgroup of $S_n$ which fixes $1$. It is known that $\pi_\mu\big|_{S_{n-1}} = \bigoplus \pi_{\mu-j}$, where the direct sum is taken over all corners $j$ of $\mu$, and $\mu-j$ is the Young diagram obtained from $\mu$ by removing the corner $j$. \begin{lemma} \label{corner} Let $C=s_{12}+s_{13}+\cdots+s_{1n}$. {\rm (i)} The element $C$ acts on $\pi_{\mu-j}$ by the scalar $\mathbf c(j)$, for each corner $j$ of the Young diagram $\mu$. {\rm (ii)} The element $C$ acts as a scalar on $\pi_\mu$ if and only if $\mu$ is a rectangle. If the rectangle has height $a$ and width $b$, then the scalar is $b-a$. \end{lemma} We omit the proof of the lemma, which can be found in \cite[\S2.2]{EM} and \cite[\S2.4]{M1}. \begin{proposition} \label{lambdai} We have $$\Lambda_i = \{(\la,\nu)\in B\times \C \mid \la_i \pm p\nu \neq 0 \mbox{ for } p = 0, 1, \ldots, n-1\}.$$ \end{proposition} \begin{proof} Let $r\in [1,n]$. The element $\la_i + \nu\sum_{m=2}^r s_{1m}$ is invertible in $\C[S_r]$ if and only if its eigenvalues on the irreducible representations of $S_r$ are nonzero. By Lemma \ref{corner}(i), the eigenvalues are the numbers $\la_i + \nu\mathbf c(j)$, where $j$ is a corner in a Young diagram. The numbers $\mathbf c(j)$ which occur are $0,\pm 1, \ldots, \pm(n-1)$. \end{proof} \subsection{} In this subsection, we let $\k:=\C[[U]]$, where $U$ is a finite dimensional vector space over $\C$. Let $\m$ be the unique maximal ideal of $\k$. If $V$ is a $\k$-module, we write $\overline V$ for $V/\m V$. 
A $\A_{n,\la,\nu}$-module $V$ is a flat formal deformation of a $\overline{\A_{n,\la,\nu}}$-module $V_0$ if $V\cong V_0[[U]]$ as $\k$-modules, and there is a given isomorphism $\overline V \cong V_0$ of $\overline{\A_{n,\la,\nu}}$-modules. For any $\k$-module $V$, its $\m$-filtration is the decreasing filtration $V\supset \m V \supset \m^2 V \supset \ldots$. We define $$\mathrm{Gr}_\m V := \prod_{h=0}^{\infty} \frac{\m^h V}{\m^{h+1}V}.$$ Let us also introduce the following notations. We shall write $\widetilde \k$ for $\C[U]$, $\widetilde B$ for $\oplus_{i\in I} \widetilde \k$, and $\widetilde E$ for the free $\widetilde \k$-module with basis the set of edges $\{a\in \overline Q\}$. Furthermore, let $$ \widetilde\B := \widetilde B^{\otimes n}, \qquad \widetilde \E := \bigoplus_{1\leq \ell \leq n} \widetilde B^{\otimes (\ell-1)} \otimes \widetilde E \otimes \widetilde B^{\otimes (n-\ell)}. $$ For any $\la\in\widetilde B$ and $\nu\in\C[U]$, we write $\widetilde{\A_{n,\la,\nu}}$ for the $\widetilde \B$-algebra defined as the quotient of $T_{\widetilde \B}\widetilde \E \rtimes \widetilde \k[S_n]$ by the relations (i) and (ii) in Definition \ref{algebra}. \begin{lemma} \label{pbw} Let $Q$ be a connected quiver without edge-loop, such that $Q$ is not a finite Dynkin quiver. Assume $\la\in\widetilde B$ and $\nu\in \C[U]$. Then $\mathrm{Gr}_\m \A_{n,\la,\nu} \cong \overline{\A_{n,\la,\nu}}[[U]]$ as algebras over $\k$. \end{lemma} \begin{proof} The algebra $\A_{n,\la,\nu}$ has an increasing filtration defined by setting elements of $\B\rtimes \k[S_n]$ to be of degree $0$, and elements of $\E$ to be of degree $1$. Similarly, $\widetilde{\A_{n,\la,\nu}}$ and $\overline{\A_{n,\la,\nu}}$ have increasing filtrations. Let $S'$ be a basis for $\overline \E$, and let $S$ be a set of words in the elements of $S'$ such that $S$ is a basis for $\overline{\A_{n,0,0}}$ over $\overline\B\rtimes \C[S_n]$. 
It was proved in \cite[Theorem 2.2.1]{GG} (see also \cite[Remark 2.2.6]{GG}) that, for any $\la_0\in\overline B$ and $\nu_0\in \C$, the natural map $\overline{\A_{n,0,0}}\to \mathrm{gr}\overline{\A_{n,\la_0,\nu_0}}$ is an isomorphism of graded algebras. Hence, $S$ is a basis for $\overline{\A_{n,\la_0,\nu_0}}$ over $\overline\B\rtimes \C[S_n]$. We have the natural epimorphism $$ \overline{\A_{n,0,0}} \ot_\C \C[U] = \widetilde{\A_{n,0,0}} \too \mathrm{gr}\widetilde{\A_{n,\la,\nu}}.$$ Thus, $S$ spans $\widetilde{\A_{n,\la,\nu}}$ as a module over $\widetilde \B\rtimes\widetilde \k[S_n]$. If there is a linear relation over $\widetilde \B\rtimes\widetilde \k[S_n]$ among elements of $S$ in $\widetilde{\A_{n,\la,\nu}}$, then by evaluation at some point of $U$, we obtain a linear relation over $\overline \B\rtimes \C[S_n]$, a contradiction. Hence, $S$ is a basis for $\widetilde{\A_{n,\la,\nu}}$ over $\widetilde \B\rtimes\widetilde \k [S_n]$. It follows that $S$ is a basis for $\A_{n,\la,\nu} = \widetilde{\A_{n,\la,\nu}} \ot_{\C[U]}\k$ over $\B\rtimes\k[S_n]$. Therefore, $\A_{n,\la,\nu} \cong \overline{\A_{n,\la,\nu}}[[U]]$ as $\k$-modules, and $\mathrm{Gr}_\m \A_{n,\la,\nu} \cong \overline{\A_{n,\la,\nu}}[[U]]$ as algebras. \end{proof} \begin{proposition} \label{flat} Let $Q$ and $\la,\nu$ be as in Lemma \ref{pbw}. Let $i\in I$ and suppose $(\la,\nu)\in \Lambda_i$. Let the $\A_{n,\la,\nu}$-module $V$ be a flat formal deformation of a $\overline{\A_{n,\la,\nu}}$-module $V_0$. Then $F_i(V)$ is a flat formal deformation of $F_i(V_0)$. \end{proposition} \begin{proof} Observe that $\mathrm{Gr}_\m V = V_0[[U]]$ as $\overline{\A_{n,\la,\nu}}[[U]]$-modules. 
We have $$\mathrm{Gr}_\m (F_i(V)) \subset F_i(\mathrm{Gr}_\m V).$$ By Theorem \ref{equivalence}, $$ \mathrm{Gr}_\m V =\mathrm{Gr}_\m (F_i F_iV) \subset F_i(\mathrm{Gr}_\m(F_i V)) \subset F_i(F_i(\mathrm{Gr}_\m V)) = \mathrm{Gr}_\m V .$$ Hence, $\mathrm{Gr}_\m(F_i(V)) = F_i(\mathrm{Gr}_\m V) = (F_i V_0)[[U]]$ as $\overline{\A_{n,r_i\la,\nu}}[[U]]$-modules, which implies $F_i(V) \cong (F_iV_0)[[U]]$ as $\k$-modules, and $\overline{F_i(V)}= F_i(V_0)$ as $\overline{\A_{n,r_i\la,\nu}}$-modules. \end{proof} \subsection{} In this subsection, we let $\k:=\C[U]$, the ring of regular functions on a connected smooth affine variety $U$. For any point $u\in U$, we denote by $\m_u$ the maximal ideal of functions vanishing at $u$, and if $V$ is a $\k$-module, then let $V^u:= V/\m_u V$. We shall write $\overline\B$ for $\B^u$. \begin{proposition} \label{flat2} Assume $Q$ is a connected quiver without edge-loop, such that $Q$ is not a finite Dynkin quiver. Let $i\in I$ and $(\la,\nu)\in \Lambda_i$. Let $V$ be a $\A_{n,\la,\nu}$-module, finitely generated over $\k$. Suppose $V$ is a flat $\k$-module. Then we have the following. {\rm (i)} $F_i(V)$ is a flat $\k$-module. {\rm (ii)} If $\nu$ vanishes at a point $o\in U$, then for any point $u\in U$, we have $$ F_i(V^u) = \sum_{r=0}^n (-1)^r \bigg[ \bigoplus_{\jj\in I^n} \bigoplus_{ \substack{D\subset\Delta(\jj) \\ |D|=|\Delta(\jj)|-r} } V^u(\jj, D) \ot_\Z \det(\Delta(\jj)\setminus D) \bigg] $$ in the Grothendieck group of the category of $\overline\B \rtimes \C[S_n]$-modules. \end{proposition} \begin{proof} (i) By Corollary \ref{fiber} and Proposition \ref{flat}, $F_i(V)$ is locally flat at all the points of $U$. Hence, it is flat over $U$. (ii) Since $V$ and $F_i(V)$ are flat over $U$, we have isomorphisms of $\overline\B\rtimes \C[S_n]$-modules: $V^o \cong V^u$ and $F_i(V)^o \cong F_i(V)^u$. Hence, the required formula follows from Corollary \ref{fiber} and Corollary \ref{eulerpoincare}. 
\end{proof} \section{\bf Symplectic reflection algebras for wreath products} \subsection{} Let $L$ be a 2-dimensional complex vector space, and $\omega_L$ a nondegenerate symplectic form on $L$. Let $\Gamma$ be a finite subgroup of $Sp(L)$, and let $\GG:=S_n\ltimes\Gamma^n$. Let $\mathscr L:=L^{\oplus n}$. For any $\ell\in [1,n]$ and $\g\in\Gamma$, we will write $\g_\ell\in\GG$ for $\g$ placed in the $\ell$-th factor $\Gamma$. Similarly, for any $u\in L$, we will write $u_\ell\in \mathscr L$ for $u$ placed in the $\ell$-th factor $L$. Fix a symplectic basis $\{x,y\}$ for $L$. Let $t,k\in\C$. Denote by $Z\Gamma$ the center of the group algebra $\C[\Gamma]$. Let $$c = \sum_{\g \in \Gamma\smallsetminus \{1\}} c_{\g}\cdot\g \in Z\Gamma, \quad\mbox{ where } c_\g\in\C.$$ The symplectic reflection algebra $\hh_{t,k,c}(\GG)$, introduced in \cite{EG}, is the quotient of $T\mathscr{L}\rtimes \C[\GG]$ by the following relations: \begin{align*} [x_\ell, y_\ell] =& t\cdot 1+ \frac{k}{2} \sum_{m\neq \ell}\sum_{\g\in\Gamma} s_{\ell m}\g_{\ell}\g_{m}^{-1} + \sum_{\g\in\Gamma\smallsetminus\{1\}} c_{\g}\g_{\ell} , \qquad \forall \ell\in[1,n]; \\ [u_\ell,v_m]=& -\frac{k}{2} \sum_{\g\in\Gamma} \omega_{L}(\g u,v) s_{\ell m}\g_{\ell}\g_{m}^{-1}, \qquad \forall u,v\in L,\ \ell,m\in [1,n],\ \ell\neq m. \end{align*} Let $N_i$ be the irreducible representation of $\Gamma$ corresponding to the vertex $i\in I$ of $Q$ (where $Q$ is associated to $\Gamma$ by the McKay correspondence) and let $f_i\in \mathrm{End} N_i$ be a primitive idempotent. We have $\C\Gamma=\bigoplus_{i\in I} \mathrm{End} N_i$. Let $f:=\sum_{i\in I}f_i\in\C\Gamma$. The element $f^{\ot n}$ is an idempotent in $\C[\Gamma^n] =(\C\Gamma)^{\ot n}$. It was proved in \cite[Theorem 3.5.2]{GG} that the algebra $f^{\ot n} \hh_{t,k,c}(\GG) f^{\ot n}$ is isomorphic to the algebra $\A_{n,\la,\nu}$ for the quiver $Q$. In particular, $\hh_{t,k,c}(\GG)$ is Morita equivalent to $\A_{n,\la,\nu}$. 
The parameter $\la_i$ is the trace of $t\cdot 1+c$ on $N_i$, and the parameter $\nu$ is $\frac{k|\Gamma|}{2}$. We shall reformulate and prove the main results of \cite{EM}, \cite{M1} and \cite{M2} in terms of the algebra $\A_{n,\la,\nu}$ via this Morita equivalence. We believe the results are more transparent in our reformulation. \subsection{} In this subsection, we let $\k:=\C[[U]]$, where $U$ is a finite dimensional vector space over $\C$. Let $\m$ be the unique maximal ideal of $\k$. Recall that if $V$ is a $\k$-module, we write $\overline V$ for $V/\m V$. If $V=\bigoplus_{i\in I}V_i$ is an $I$-graded complex vector space, then its dimension vector is the element $(\dim V_i)_{i\in I}\in\Z^I$. Let $\N_i$ be the complex vector space with dimension vector $\epsilon_i$. Let $\vec n=(n_1, \ldots, n_r)$ be a partition of $n$. Let $X=X_1 \otot X_r$ be a simple module of $S_{\vec n}:=S_{n_1}\times \cdots \times S_{n_r} \subset S_n$. Let $\{i_1, \ldots, i_r\}$ be a set of $r$ \emph{distinct} vertices of $Q$, and let $\N=\N_{i_1}^{\ot n_1} \otot \N_{i_r}^{\ot n_r}$. Then $X\ot\N$ is a simple module of $\overline \B \rtimes \C[S_{\vec n}]$. We write $X\ot\N\uparrow$ for the induced module $\mathrm{Ind}_{\overline{\B} \rtimes \C[S_{\vec n}]} ^{\overline{\B} \rtimes \C[S_n]} (X\ot \N)$ of $\overline{\B} \rtimes \C[S_n]$. It is known that any simple $\overline{\B} \rtimes \C[S_n]$-module is of the form $X\ot \N\uparrow$ (see \cite{Mac}: paragraph after (A.5)). We have $$ X\ot\N\uparrow = \bigoplus_\sigma \sigma(X\ot\N),$$ where $\sigma$ runs over a set of left coset representatives of $S_{\vec n}$ in $S_n$. The following lemma is equivalent to \cite[Theorem 4.1]{M1}. \begin{lemma} \label{trivial} Assume $Q$ has no edge-loop. Let the $\A_{n,\la,\nu}$-module $V$ be a flat formal deformation of the $\overline{\A_{n,\la,\nu}}$-module $\overline V$. If $\overline V$ is simple as a $\overline\B\rtimes\C[S_n]$-module, then all elements of $\E$ must act by $0$ on $V$. 
\end{lemma} \begin{proof} Since the algebra $\overline\B\rtimes\C[S_n]$ is semisimple, $V$ must be of the form $(X\ot\N\uparrow)[[U]]$ (as $\B\rtimes\k[S_n]$-modules). Let $(j_1,\ldots, j_n)\in I^n$ and $\sigma\in S_n$. If $j_m = j_{\sigma(m)}$ for all $m\neq \ell$, then $j_\ell = j_{\sigma(\ell)}$. Since $Q$ has no edge-loop, it follows that $\E_\ell$ must act by $0$ on $\N$; hence $\E$ acts by $0$ on $V$. \end{proof} The next result is equivalent to \cite[Theorem 3.1]{M1}. \begin{theorem} \label{extend} Assume $Q$ has no edge-loop and $\nu\neq 0$. The $\B\rtimes \k[S_n]$-module $(X\ot\N\uparrow)[[U]]$ extends to a $\A_{n,\la,\nu}$-module if and only if the following conditions are satisfied: \begin{itemize} \item[{\rm (i)}] For all $\ell\in[1,r]$, the simple module $X_\ell$ of $S_{n_\ell}$ has a rectangular Young diagram, of size $a_\ell \times b_\ell$. \item[{\rm (ii)}] No two vertices in $\{i_1, \ldots, i_r\}$ are joined by an edge in $Q$. \item[{\rm (iii)}] For all $\ell\in[1,r]$, one has $\la_{i_\ell} = (a_\ell-b_\ell)\nu$. \end{itemize} \end{theorem} \begin{proof} Suppose $(X\ot\N\uparrow)[[U]]$ extends to a $\A_{n,\la,\nu}$-module. By Lemma \ref{trivial}, the elements of $\E$ must act by $0$. Hence, by Lemma \ref{corner}(ii) and the relations of type (i) in Definition \ref{algebra}, the Young diagram of each $X_\ell$ must be a rectangle, of size $a_\ell\times b_\ell$ say, and $\la_{i_\ell} = (a_\ell-b_\ell)\nu$. By the relations of type (ii) in Definition \ref{algebra}, no two vertices in $\{i_1, \ldots, i_r\}$ can be joined by an edge in $Q$. Conversely, suppose the conditions are satisfied. Then it is clear (using Lemma \ref{corner}(ii) again) that if we let the elements of $\E$ act by $0$, the relations in Definition \ref{algebra} hold. \end{proof} From now on, we assume that $Q$ is an affine Dynkin quiver of type ADE, but not of type $\mathrm{A}_0$. Let $\delta=(\delta_i)_{i\in I}\in \Z^I$ be the minimal positive imaginary root of $Q$. 
We have $\delta_i = \dim N_i$. Let $\la_0\in\overline B$, and assume $\la_0\cdot\delta\neq 0$. We shall write $\Pi_{\la_0}$ for $\overline{\Pi_{\la_0}}$, and $\A_{n,\la_0,0}$ for $\overline{\A_{n,\la_0,0}}$. Let $\Sigma_{\la_0}$ be the set of dimension vectors of finite dimensional simple $\Pi_{\la_0}$-modules. By \cite[Lemma 7.2]{CBH} and \cite[Theorem 7.4]{CBH}, there exists an element $\la^+\in\overline B$ and an element $w\in W$ such that: \begin{itemize} \item $w$ is an element of minimal length with $w(\la_0)=\la^+$; \item $w\Sigma_{\la_0} = \{\epsilon_i \mid \la^{+}_i=0\}$. \end{itemize} By the minimality of its length, we can write $w=s_{j_h}\cdots s_{j_1}$ for some $j_1, \ldots, j_h\in I$, such that $\big(r_{j_g} \cdots r_{j_1}(\la_0)\big)_{j_g}\neq 0$ for all $g\in [1,h]$. Let $F_w$ be the composition of functors $F_{j_h}\cdots F_{j_1}$, and $F_{w^{-1}}$ be the composition of functors $F_{j_1}\cdots F_{j_h}$. Let $Y_1, \ldots, Y_r$ be a collection of pairwise non-isomorphic finite dimensional simple modules of $\Pi_{\la_0}$, and let $Y=Y_1^{\ot n_1}\otot Y_r^{\ot n_r}$. Then $X\ot Y$ is a simple module of $\Pi_{\la_0}^{\ot n} \rtimes \C[S_{\vec n}]$. We write $X\ot Y\uparrow$ for the induced module $\mathrm{Ind}_{\Pi_{\la_0}^{\ot n} \rtimes \C[S_{\vec n}]} ^{\Pi_{\la_0}^{\ot n} \rtimes \C[S_n]} (X\ot Y)$ of $\A_{n,\la_0,0}$. By \cite{Mac} (paragraph after (A.5)), it is known that any finite dimensional simple $\A_{n,\la_0,0}$-module is of the form $X\ot Y\uparrow$. The following theorem and its proof were explained to the author by Pavel Etingof. \begin{theorem} \label{deform} Let $\la\in B$. Assume that $\la_i\in U$ for all $i\in I$, and $0\neq \nu\in U$. 
The $\A_{n,\la_0,0}$-module $X\ot Y\uparrow$ has a flat formal deformation to a $\A_{n,\la_0+\la,\nu}$-module if and only if the following conditions are satisfied: \begin{itemize} \item[{\rm (i)}] For all $\ell\in[1,r]$, the simple module $X_\ell$ of $S_{n_\ell}$ has a rectangular Young diagram, of size $a_\ell \times b_\ell$. \item[{\rm (ii)}] We have $\mathrm{Ext}^1_{\Pi_{\la_0}}(Y_\ell,Y_m)=0$ for any $\ell\neq m$. \item[{\rm (iii)}] For all $\ell\in[1,r]$, one has $\la \cdot \al_\ell = (a_\ell-b_\ell)\nu$, where $\alpha_\ell$ is the dimension vector of $Y_\ell$. \end{itemize} The deformation is unique when it exists. \end{theorem} \begin{proof} Let $\la^+$, $w$, $F_w$, and $F_{w^{-1}}$ be as defined above. We claim that $\big(r_{j_g}\cdots r_{j_1} (\la_0 +\la), \nu\big) \in \Lambda_{j_g}$ for $g=1,\ldots,h$. To see this, it is enough to show that $\big(r_{j_g}\cdots r_{j_1} (\la_0+\la)\big)_{j_g} + \nu C $ has an inverse in $\k[S_N]$, for any given $C\in \k[S_N]$. This is equivalent to solving a system of $N!$ linear equations in $N!$ variables, whose associated matrix is of the form $\big(r_{j_g}\cdots r_{j_1} (\la_0+\la)\big)_{j_g} Id_{N!} + \nu M$ for some matrix $M$. Since $\la$ and $\nu$ are $0$ modulo $\m$, the determinant of this matrix is nonzero modulo $\m$, and so it is invertible in $\k$. Hence, the matrix is invertible. This proves our claim. Now define $i_\ell\in I$ by $\epsilon_{i_\ell} = w(\al_\ell)$. We have $\la_0\cdot \al_\ell = \la^+ \cdot \epsilon_{i_\ell} =0$. By \cite[Theorem 5.1]{CBH}, we have $F_w(X\ot Y\uparrow)=X\ot\N\uparrow$. Suppose the conditions in the theorem are satisfied. Then by Theorem \ref{extend}, the $\B\rtimes \k[S_n]$-module $M := (X\ot\N\uparrow)[[U]]$ is a $\A_{n,\la^+ +w(\la),\nu}$-module (where elements of $\E$ act by $0$). Hence, by Proposition \ref{flat}, the $\A_{n,\la_0+\la,\nu}$-module $F_{w^{-1}} (M)$ is a flat formal deformation of $X\ot Y\uparrow$. 
Conversely, suppose a $\A_{n,\la_0+\la,\nu}$-module $V$ is a flat formal deformation of $X\ot Y\uparrow$. Then by Proposition \ref{flat}, the $\A_{n,\la^+ +w(\la),\nu}$-module $F_w(V)$ is a flat formal deformation of $X\ot\N\uparrow$. We have $F_w(V) = (X\ot\N\uparrow)[[U]]$ as $\B\rtimes\k[S_n]$-modules. It follows from Theorem \ref{extend} that the conditions in the theorem must hold. Moreover, by Lemma \ref{trivial}, the elements of $\E$ must act by $0$ on $F_w(V)$, so $F_w(V)$ is the unique flat formal deformation of $X\ot\N\uparrow$. This implies that $V$ is the unique flat formal deformation of $X\ot Y\uparrow$. \end{proof} The sufficiency of the conditions in Theorem \ref{deform} was first proved in \cite[Theorem 1.3(i)]{M2}; in the special case where the partition is $\vec n=(n)$, it was first proved in \cite[Theorem 3.1(i)]{EM}. \subsection{} Let $\la_0$, $\la^+$, $w$, $F_{w^{-1}}$, $j_1, \ldots, j_h$, and $X\ot Y\uparrow$ be as defined in the previous subsection. In particular, $\la_0\cdot\delta\neq 0$. Assume that conditions (i) and (ii) of Theorem \ref{deform} hold. Let $U$ be a finite dimensional complex vector space. Let $\la$ and $\nu$ be regular functions on $U$ such that condition (iii) of Theorem \ref{deform} holds. Moreover, assume there is a point $o\in U$ such that $\la$ specializes to $\la_0$ at $o$, and $\nu$ vanishes at $o$. Define $i_\ell\in I$ by $\epsilon_{i_\ell} = w(\al_\ell)$, and let $X\ot\N\uparrow$ be as defined in the previous subsection. We have $\la^+_{i_\ell}=\la_0\cdot\alpha_\ell =0$. Let $U'$ be the Zariski open subset of $U$ defined by $\big(r_{j_g} \cdots r_{j_1}(\la)\big)_{j_g} \pm p\nu \neq 0$ for all $g\in [1,h]$ and $p=0, \ldots, n-1$. Since $o\in U'$, the set $U'$ is nonempty. Let $\k:=\C[U']$ be the ring of regular functions on $U'$. For any point $u\in U'$, let $\m_u$ denote the maximal ideal of $\k$ consisting of functions vanishing at $u$. If $V$ is a $\k$-module, we write $V^u$ for $V/\m_u V$. 
We write $\overline\B$ for $\B^u$. The proof of the following theorem is similar to the proof of Theorem \ref{deform}. \begin{theorem} \label{deform2} There exists a $\A_{n,\la,\nu}$-module $V$ such that: \begin{itemize} \item[{\rm (i)}] $V^o=X\ot Y\uparrow$ as $\A_{n,\la_0,0}$-modules, and $V$ is flat over $U'$. \item[{\rm (ii)}] For any point $u\in U'$, $V^u$ is a finite dimensional simple $\A_{n,\la,\nu}^u$-module, isomorphic to $X\ot Y\uparrow$ as a $\overline\B \rtimes \C[S_n]$-module. \end{itemize} \end{theorem} \begin{proof} Let the elements of $\E$ act by $0$ on the $\B\rtimes\k[S_n]$-module $(X\ot \N \uparrow)\ot_\C \k$. It follows from Lemma \ref{corner}(ii) that $(X\ot \N \uparrow)\ot_\C \k$ is a $\A_{n,w(\la),\nu}$-module. By Proposition \ref{lambdai}, we have $\big( r_{j_g} \cdots r_{j_1}(\la) ,\nu \big) \in\Lambda_{j_g}$ for $g=1,\ldots,h$. Let $V$ be the $\A_{n,\la,\nu}$-module $F_{w^{-1}}\big((X\ot \N \uparrow)\ot_\C \k\big)$. By Corollary \ref{fiber}, we have $V^o=X\ot Y \uparrow$. Moreover, by Theorem \ref{equivalence}, $V^u$ is a simple $\A_{n,\la,\nu}^u$-module for any $u\in U'$. By Proposition \ref{flat2}(i), $V$ is flat over $U'$. Hence, $V^u$ is isomorphic to $V^o$ as $\overline\B\rtimes\C[S_n]$-modules. \end{proof} We remark that the set $U'$ was not specified precisely in \cite[Theorem 3.1(iii)]{EM} and \cite[Theorem 1.3(iii)]{M2}. Let us also mention that there may exist finite dimensional simple modules of $\hh_{t,k,c}(\GG)$ (for complex parameters) which cannot be deformed to a flat family as $k$ varies; see \cite[\S4]{Ch} where this happens. The assumption that $\la_0\cdot\delta\neq 0$ is equivalent to the condition that $t\neq 0$ for the symplectic reflection algebra $\hh_{t,k,c}(\GG)$. When $t=0$, the representation theory of the symplectic reflection algebra is remarkably different; see \cite{CBH}, \cite{EG}, and \cite{GS}. 
\section*{\bf Acknowledgments} I am very grateful to Pavel Etingof for patiently explaining Theorem \ref{deform} and its proof to me, and for other useful discussions. I also thank Iain Gordon for his comments. This work was partially supported by NSF grant DMS-0401509.
Henry Pickering, 1781-1838, third son of Col. Timothy Pickering of the Revolution, and a native of Newburg, New York, was for some time a merchant in Salem, Mass., and subsequently removed to the city of New York, in which place, and in various portions of the State, he resided until his decease. He was the author of a number of poetical compositions, specimens of which will be found in Duyckinck's Cyc. of Amer. Lit., ii. 26.
Committing to writing involves more than just working away at a new Word file. It also requires a commitment to promote and market the eventual work. This does not come naturally to everyone, but the dedication to spreading the word about a work is just as important as the content itself. Whether it is a journal article, a monograph, a textbook, or some other form of academic communication, marketing is essential to the success of the material.
A unique synthesis of contrastive linguistics and discourse analysis, providing a core text for undergraduates and postgraduates taking courses in language, applied linguistics, translation and cultural studies. Also of interest to language teachers, applied linguists, translators and interpreters.

This book sets out to examine why the world regards the Gulf as important. Chapters either treat the way in which individual countries view their vital interest in the Gulf, or deal with specific themes such as the question of militarization and the international arms-trade.
TITLE: Evaluating the (complex) integral $\int_\gamma \frac{e^{z+z^{-1}}}{z}\mathrm dz$ using residues. QUESTION [6 upvotes]: I am trying to evaluate the following integral. $$\int_\gamma \frac{e^{z+z^{-1}}}{z}\mathrm dz$$ where $\gamma$ is the path $\cos(t)+2i\sin(t)$ for $0\leq t <4\pi$. So, $\gamma$ is an ellipse running twice counterclockwise around $0$, which is where the function has a singularity. I'm sure I need to use the residue theorem to evaluate this. (for homework) I'm not good with the Residue theorem yet. Can I get a road map for the canonical solution to this problem? (i.e. the way I'm "probably supposed to" do it.) I can work through the details myself. (non-homework) Is it possible to solve this problem with the Laurent series approach from this answer using the residues for $e^z/z$ and $e^{-z}$? (or $e^z/\sqrt{z}$ and $e^{-z}/\sqrt{z}$, if that would be better.) To be clear about where I'm confused for part (1): I see that the hypothesis for the Residue theorem is met: the above function is analytic with an isolated singularity at $0$, we're going around it twice, so $\int_\gamma f=4\pi i \operatorname{Res}(0,f)$. But from here I don't know how to perform the computations. REPLY [7 votes]: The only singularity for the integrand is at $z=0$, which is within the contour of integration. The integral is nothing but $$2 \pi i \cdot \left(\text{Residue at } z=0 \text{ of }\left(\dfrac{e^{z+1/z}}z \right) \right) \cdot \text{Number of times the closed curve goes about the origin}$$ Let us write the Laurent series about $z=0$. We then get $$e^{z+1/z} = e^z \cdot e^{1/z} = \sum_{k=0}^{\infty} \dfrac{z^k}{k!} \cdot \sum_{m=0}^{\infty} \dfrac1{z^m \cdot m!}$$ Hence, $$\dfrac{e^{z+1/z}}z = \dfrac{e^z \cdot e^{1/z}}z = \sum_{k=0}^{\infty} \sum_{m=0}^{\infty} \dfrac{z^{k-m-1}}{k! m!}$$ The term $z^{-1}$ in the series is when $k=m$. 
Hence, the coefficient of $\dfrac1z$ is $$\sum_{k=0}^{\infty} \dfrac1{(k!)^2}$$ Hence, your answer is $$4 \pi i \sum_{k=0}^{\infty} \dfrac1{(k!)^2} = 4 \pi i \, I_0(2)$$ where $I_{\alpha}(z)$ is the modified Bessel function of the first kind, given by $$I_{\alpha}(z) = \sum_{m=0}^{\infty} \dfrac1{m! \Gamma(m+\alpha+1)} \left(\dfrac{z}2 \right)^{2m+\alpha}$$
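The closed form is easy to check numerically. The sketch below (an addition to the answer, using only the standard library) integrates $f(z)=e^{z+1/z}/z$ along the stated ellipse and compares the result with $4\pi i \sum_{k\ge 0} 1/(k!)^2 = 4\pi i\,I_0(2)$:

```python
# Numerical check of the closed form: integrate f(z) = exp(z + 1/z)/z along
# gamma(t) = cos(t) + 2i*sin(t), 0 <= t < 4*pi, and compare with
# 4*pi*i * sum_{k>=0} 1/(k!)^2 = 4*pi*i * I_0(2).
import cmath
import math

def f(z):
    return cmath.exp(z + 1 / z) / z

def contour_integral(n=20000):
    # left Riemann sum; for a smooth periodic integrand this converges
    # spectrally fast, so this n is far more than enough
    dt = 4 * math.pi / n
    total = 0j
    for k in range(n):
        t = k * dt
        gamma = math.cos(t) + 2j * math.sin(t)    # the ellipse, twice around 0
        dgamma = -math.sin(t) + 2j * math.cos(t)  # gamma'(t)
        total += f(gamma) * dgamma * dt
    return total

series = sum(1 / math.factorial(k) ** 2 for k in range(25))  # = I_0(2)
expected = 4j * math.pi * series
numeric = contour_integral()
print(abs(numeric - expected))  # should be at machine-precision level
```

The winding number 2 is supplied automatically by letting $t$ run over $[0,4\pi)$, so no separate factor is needed.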
Devine looks for good Derry finish Declan Devine hopes Derry City can continue their push for a place in the Europa League by beating Bohemians at Dalymount Park on Friday. Derry ended a disappointing run by beating Bray Wanderers 2-0 at home last time out. Devine said injuries meant he would continue to put his faith in young, up-and-coming players at the club.
\begin{document} \title{Yang-Yang functions, monodromy and knot polynomials} \author{Peng Liu \and Wei-Dong Ruan} \institute{Peng Liu \at School of Applied Science, Beijing Information Science and Technology University, Beijing, 10010, China\\ \email{[email protected]} \and Wei-Dong Ruan \at Institute of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China } \date{Received: date / Accepted: date} \maketitle \begin{abstract} We derive the structure of a $\mathbb{Z}[t,t^{-1}]$-module bundle from a family of Yang-Yang functions. For the fundamental representation of a complex simple Lie algebra of classical type, we give an explicit wall-crossing formula and prove that the monodromy representation of the $\mathbb{Z}[t,t^{-1}]$-module bundle is equivalent to the braid group representation induced by the universal R-matrices of $U_{h}(g)$. We show that the two transformations induced on the fiber by the symmetry-breaking deformation and by the rotation of the two complex parameters commute with each other. \keywords{Yang-Yang functions \and monodromy \and wall-crossing formula \and knot polynomials} \end{abstract} \section{Introduction} \label{sec:intro} The Yang-Yang function was named by N. Nekrasov, A. Rosly and S. Shatashvili in \cite{Nekrasov}. Originating in C. N. Yang and C. P. Yang's papers \cite{Yang66one}\cite{Yang69}, it was used for the analysis of the non-linear Schr\"odinger model. Behind this function hides a quantum integrable system \cite{Nekrasov}\cite{V11}, which has attracted much interest. The Yang-Yang function can also be realized as the exponent in the correlation function of the free-field realization of Virasoro vertex operators \cite{GW}. D. Gaiotto and E. Witten used this realization to derive the Jones polynomial for knots from four-dimensional Chern-Simons gauge theory. Our interest is to uncover the structure underlying the derivation of knot invariants from the Yang-Yang function. 
Similar to the work of V. G. Drinfeld and T. Kohno \cite{Chari} \cite{Drinfeld1989c} \cite{Kohno87}, where the monodromy of the Knizhnik-Zamolodchikov system was proved to be equivalent to the braid group representation induced by the universal R-matrices of the quantum enveloping algebra $U_{h}(g)$ of a semisimple Lie algebra $g$, we prove that the monodromy representation of the $\mathbb{Z}[t,t^{-1}]$-module bundle constructed from a family of Yang-Yang functions associated with the fundamental representation of a classical complex simple Lie algebra is equivalent to the braid group representation induced by the universal R-matrices of $U_{h}(g)$. Furthermore, the transformations induced on the fiber by two parameter deformations, namely the symmetry-breaking parameter $c$ moving from $c=0$ to $c\rightarrow \infty$ and the rotation of the two singular complex parameters $z_{1}$ and $z_{2}$, commute with each other. By studying the monodromy, creation and annihilation matrices arising from these parameter deformations, one can derive the HOMFLY-PT polynomial and the Kauffman polynomial for knots. We expect the existence of new knot invariants, different from the HOMFLY-PT and Kauffman polynomials, arising from general representations of Lie algebras; this will be investigated elsewhere. This paper is organized as follows. In section 2 we give the definition of the Yang-Yang function in the general case, derive from it the $\mathbb{Z}[t,t^{-1}]$-module bundle structure, and state the two main theorems. In section 3, the parameter rotation for the Yang-Yang function is considered. Four types of variations of the critical points of the Yang-Yang function are studied in detail. For the fundamental representation of a classical complex simple Lie algebra, the general wall-crossing formula and the monodromy representation are derived; the first main theorem follows. In section 4, we discuss the relation between the two monodromy representations at $c=0$ and $c\rightarrow +\infty$, which leads to a proof of the second main theorem. 
\section{Yang-Yang function, monodromy of its associated bundle and main theorems} \label{sec:main} \subsection{Yang-Yang function and its critical point} Let $g$ be a finite dimensional complex simple Lie algebra with Chevalley generators $\{E_{i},F_{i},H_{i}\},i=1,...,n$ and Cartan matrix $C_{ij}=\frac{2(\alpha_{i},\alpha_{j})}{(\alpha_{j},\alpha_{j})}$, where the $\alpha_{i}$ are the primary roots of $g$ and $(\,,)$ is the inner product induced from the Killing form. Let $P^{+}$ be the positive Weyl chamber in the weight space of $g$. The fundamental weights $\omega_{i}$ form a basis of the weight space satisfying $\frac{2(\omega_{i},\alpha_{j})}{(\alpha_{j},\alpha_{j})}=\delta_{ij}$. The Weyl vector is defined as $\rho=\sum^{n}_{i=1}\omega_{i}$, so that $\frac{2(\rho,\alpha_{i})}{(\alpha_{i},\alpha_{i})}=1$ for every $i$ $(1\leq i\leq n)$. Let ${\bm \lambda}=(\lambda_{1},...,\lambda_{m})$ be a sequence of weights $\lambda_{a}\in P^{+}$. Denote by $V_{{\bm\lambda}}$ the representation $V_{\lambda_{1}}\otimes...\otimes V_{\lambda_{m}}$ and by $\Omega_{{\bm \lambda}}$ the set of weights in $V_{{\bm \lambda}}$. A sequence of nonnegative integers ${\bm l}=(l_{1},...,l_{n})$ is an $n$-partition of $l=\sum_{i=1}^{n}l_{i}$, and ${\bm 0}=(0,...,0)$. The Yang-Yang function is defined by \begin{equation} \begin{split} {\bm W}({\bm w},{\bm z},{\bm\lambda},{\bm l})=&\sum_{j=1}^{l}\sum _{a=1}^{m}(\alpha_{i_{j}},\lambda _{a})\log(w_{j}-z_{a}) -\sum _{1\leq j<k\leq l}(\alpha_{i_{j}},\alpha_{i_{k}})\log(w_{j}-w_{k})\\ &-\sum _{1\leq a< b\leq m}(\lambda_{a},\lambda_{b})\log(z_{a}-z_{b}). \end{split} \end{equation} It is a function of the complex variables ${\bm w}=(w_{1},...,w_{l})$, the distinct complex parameters ${\bm z}=(z_{1},...,z_{m})$, the weights ${\bm\lambda}$ and the $n$-partition ${\bm l}$ of $l$. To each $z_{a}$ and each $w_{j}$, a dominant integral weight $\lambda_{a}\in P^{+}$ and a primary root $\alpha_{i_{j}}$ are associated respectively, where $i_{j}\in \{ 1,...,n\}$. 
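As a minimal worked example (ours, not from the paper, using the $A_{1}$ normalization $(a,b)=\frac{a\cdot b}{2}$ that appears later in section \ref{subsec:ex}), the simplest nontrivial Yang-Yang function has an explicit critical point:

```python
# Minimal worked example: g = sl(2,C) with the normalization (a, b) = a*b/2,
# so (alpha, omega_1) = 1 and (omega_1, omega_1) = 1/2.  Take m = 2,
# lambda = (omega_1, omega_1), l = 1.  Then
#   W(w) = log(w - z1) + log(w - z2) - (1/2) log(z1 - z2),
# and the critical point equation 1/(w - z1) + 1/(w - z2) = 0 gives
# w = (z1 + z2)/2.
import cmath

z1, z2 = 1.0 + 0j, 3.0 + 0j

def dW(w):
    # derivative in w of the Yang-Yang function above
    return 1 / (w - z1) + 1 / (w - z2)

w_crit = (z1 + z2) / 2
assert abs(dW(w_crit)) < 1e-12

# cross-check with a central finite difference of W itself
def W(w):
    return cmath.log(w - z1) + cmath.log(w - z2) - 0.5 * cmath.log(z1 - z2)

h = 1e-6
assert abs((W(w_crit + h) - W(w_crit - h)) / (2 * h)) < 1e-6
```

This single critical point sits midway between the two parameters, a pattern worth keeping in mind for the wall-crossing example later in the paper.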
Let $l_{k}=\#\{j\mid 1\leq j\leq l, i_{j}=k\}$ be the number of $\alpha_{k}$ in $\{\alpha_{i_{j}}\}_{j=1,...,l}$. Denote by $\alpha({\bm l})$ the sum of all these primary roots, $\alpha({\bm l})=l_{1}\alpha_{1}+...+l_{n}\alpha_{n}$. Then ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm 0})$ is a constant function of ${\bm w}$. The critical points of ${\bm W}({\bm w},{\bm z},{\bm\lambda},{\bm l})$ satisfy \begin{equation} \frac{\partial {\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\partial w_{j}}=0, j=1,...,l \end{equation} equivalently, \begin{equation} \sum_{a}\frac{(\alpha_{i_{j}},\lambda_{a})}{w_{j}-z_{a}}=\sum_{s\neq j}\frac{(\alpha_{i_{j}},\alpha_{i_{s}})}{w_{j}-w_{s}}, j=1,...,l. \end{equation} By definition, if ${\bm w}$ is a solution of $\frac{\partial {\bm W}}{\partial w_{j}}=0$ and $(\alpha_{i_{j}},\alpha_{i_{s}})\neq 0$, then $w_{j}\neq w_{s}$. Also, if $(\alpha_{i_{j}},\lambda_{a})\neq 0$, then $w_{j}\neq z_{a}$. The critical point equation above is invariant under permutations of the coordinates $w_{j}$ associated with the same primary root; thus we do not distinguish such critical points. The critical points of the Yang-Yang function have a close relation with the singular vectors of the tensor product space $V_{{\bm \lambda}}=V_{\lambda_{1}}\otimes V_{\lambda_{2}}\otimes ...\otimes V_{\lambda_{m}}$. Let $Sing V=\{v\in V \mid E_{i} v=0, \forall i \}$ be the subspace of singular vectors in $V$. The following fact is known. \begin{theorem}\label{LMV} If $\lambda_{1}+...+\lambda_{m}-\alpha({\bm l})\in P^{+}$, then to each nondegenerate critical point of the Yang-Yang function ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ there corresponds a singular vector in $Sing V_{\bm \lambda}$. \end{theorem} If $\lambda_{1}+...+\lambda_{m}-\alpha({\bm l})\in P^{+}$, then for each critical point of the Yang-Yang function a Bethe vector can be constructed \cite{MV05norm}. 
The Shapovalov form of the Bethe vector is proved to be equal to the Hessian of the Yang-Yang function at the critical point \cite{V11}. Thus nondegeneracy of the critical point guarantees that the Bethe vector is nonzero. By theorem 11.1 in \cite{RV94}, each nonzero Bethe vector belongs to $Sing V_{{\bm \lambda}}$. For details of the proofs, we refer to \cite{V11}\cite{MV05norm}\cite{RV94}. The master function in these references is just the exponential of the Yang-Yang function ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})$. Formally denote by $w_{0}$ the critical point of the constant function ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm 0})$. \begin{theorem}\label{1sing} Let $g$ be a classical complex simple Lie algebra. If ${\bm z}=(z_{1},z_{2})$ and ${\bm \lambda}=(\omega_{1},\omega_{1})$, the nondegenerate critical points of the family of Yang-Yang functions $$\{{\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})\mid V_{2\omega_{1}-\alpha({\bm l})} \subset V_{{\bm \lambda}}\}$$ together with the formal degenerate critical point $w_{0}$ of ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm 0})$ correspond one-to-one to the weight vectors in $SingV_{{\bm \lambda}}$. \end{theorem} \begin{proof} By the Littelmann-Littlewood-Richardson rule \cite{littelmann97}, the following direct sum decompositions can be derived for each classical Lie algebra. 
\begin{equation} \begin{split} A_{n}:&\quad V_{\omega_{1}}\otimes V_{\omega_{1}}=V_{2\omega_{1}}\oplus V_{2\omega_{1}-\alpha_{1}}\\ B_{n}:&\quad V_{\omega_{1}}\otimes V_{\omega_{1}}=V_{2\omega_{1}}\oplus V_{2\omega_{1}-\alpha_{1}}\oplus V_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n}}\\ C_{n}:&\quad V_{\omega_{1}}\otimes V_{\omega_{1}}=V_{2\omega_{1}}\oplus V_{2\omega_{1}-\alpha_{1}}\oplus V_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n-1}-\alpha_{n}}\\ D_{n}:&\quad V_{\omega_{1}}\otimes V_{\omega_{1}}=V_{2\omega_{1}}\oplus V_{2\omega_{1}-\alpha_{1}}\oplus V_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n-2}-\alpha_{n-1}-\alpha_{n}}\\ \end{split} \end{equation} For the $B_{n}$ Lie algebra, the singular vectors of $Sing V_{\omega_{1}}\otimes V_{\omega_{1}}$ are $$v_{2\omega_{1}},v_{2\omega_{1}-\alpha_{1}},v_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n}}.$$ By theorem \ref{LMV}, there is an injective map between the set of nondegenerate critical points and $SingV_{\omega_{1}}\otimes V_{\omega_{1}}$. Corresponding to each singular vector, the explicit solution of the critical point equation can be derived by the lemma in section \ref{subsct:monodromy}. The solutions are nondegenerate except when $l=0$, so the map is bijective onto $SingV_{\omega_{1}}\otimes V_{\omega_{1}}\backslash v_{2\omega_{1}}$. When $l=0$, the formal degenerate critical point $w_{0}$ corresponds to the singular vector $v_{2\omega_{1}}$ in $SingV_{\omega_{1}}\otimes V_{\omega_{1}}$. Therefore, the one-to-one correspondence is proved. For $A_{n}$, $C_{n}$, $D_{n}$, the proofs are similar. \end{proof} With an additional deformation parameter $c\in \mathbb{R}$, the symmetry-breaking Yang-Yang function is defined by \begin{equation} \begin{split} {\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})=&\sum _{j,a}(\alpha_{i_{j}},\lambda _{a})\log(w_{j}-z_{a}) -\sum _{j<k}(\alpha_{i_{j}},\alpha_{i_{k}})\log(w_{j}-w_{k})\\ &-\sum _{a< b}(\lambda_{a},\lambda_{b})\log(z_{a}-z_{b})-c\sum _{j}(\rho,\alpha_{i_{j}})w_{j}+c\sum _{a}(\rho,\lambda_{a})z_{a}. 
\end{split} \end{equation} It is clear that ${\bm W}_{0}({\bm w},{\bm z},{\bm \lambda},{\bm l})={\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})$. In fact, when $c\in \mathbb{Z}_{\geq0}$, ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ is just the $\eta\rightarrow 0$ limit of a parameter deformation with an additional parameter $z_{\infty}=1$ and the dominant integral weight $\lambda_{\infty}=c\rho\in P^{+}$ associated with it. \begin{equation}\label{lpt} \begin{split} &{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})\\ =&\lim_{\eta\rightarrow0} [\sum _{j,a}(\alpha_{i_{j}},\lambda _{a})\log(w_{j}-z_{a}) +\sum _{j}(\frac{c\rho}{\eta},\alpha_{i_{j}})\log(1-\eta w_{j})\\ &-\sum _{j<k}(\alpha_{i_{j}},\alpha_{i_{k}})\log(w_{j}-w_{k})-\sum _{0<a< b}(\lambda_{a},\lambda_{b})\log(z_{a}-z_{b})\\ &-\sum _{a}(\frac{c\rho}{\eta},\lambda_{a})\log(1-\eta z_{a}) ]. \end{split} \end{equation} The critical point equation of ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ is \begin{equation}\label{CE2} \sum^{m}_{a=1}\frac{(\alpha_{i_{j}},\lambda_{a})}{w_{j}-z_{a}}=\sum_{s\neq j}\frac{(\alpha_{i_{j}},\alpha_{i_{s}})}{w_{j}-w_{s}}+c(\rho,\alpha_{i_{j}}), j=1,...,l. \end{equation} \begin{lemma}\label{split} If $w_{j}$ is a coordinate of the critical point of ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$, then $$\lim_{c\rightarrow+\infty} w_{j}\in \{z_{a}\}_{a=1,...,m}.$$ \end{lemma} \begin{proof} Because $(\rho,\alpha_{i_{j}})>0$, it is clear that for any $j$, $$\lim_{c\rightarrow +\infty} w_{j}\in \{z_{a}\}_{a=1,...,m}\cup \{w_{k}\}_{k\neq j}.$$ Divide the set $\{w_{1},...,w_{l}\}$ into a disjoint union of $Z_{1},...,Z_{m}$ and $M$, where $Z_{a}, a=1,...,m$ contains the coordinates $w_{j}$ which tend to $z_{a}$, and $M$ contains the rest. Assume $M=\{w_{m_{1}},...,w_{m_{p}}\}\neq \emptyset$; then $p>1$. 
Sum the equations for $j\in M$: \begin{equation}\label{sum} \sum_{j=m_{1}}^{m_{p}} \sum _{a}\dfrac {(\alpha_{i_{j}},\lambda_{a})} {w_{j}-z_{a}}=\sum_{j=m_{1}}^{m_{p}} \sum _{s\neq j}\dfrac {(\alpha_{i_{j}},\alpha_{i_{s}})} {w_{j}-w_{s}}+c\sum_{j=m_{1}}^{m_{p}}(\rho,\alpha_{i_{j}}). \end{equation} When $c\rightarrow +\infty$, the left-hand side of \eqref{sum} is bounded. On the right-hand side, $$\sum_{j=m_{1}}^{m_{p}} \sum _{s\neq j}\dfrac {(\alpha_{i_{j}},\alpha_{i_{s}})} {w_{j}-w_{s}}=\sum_{w_{j}\in M} \sum _{w_{s}\in Z_{1}\cup...\cup Z_{m}}\dfrac {(\alpha_{i_{j}},\alpha_{i_{s}})} {w_{j}-w_{s}},$$ since the summation within $M$ cancels, so this term is also bounded, while $c\sum_{j=m_{1}}^{m_{p}}(\rho,\alpha_{i_{j}})$ is unbounded. Thus \eqref{sum} leads to a contradiction. Therefore $M=\emptyset$, and the lemma follows. \end{proof} Denote also by $w_{0}$ the critical point of the constant function ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm 0})$. For ${\bm z}=z_{1}$ and ${\bm \lambda}=\omega_{1}$, we have the following theorem. \begin{theorem}\label{1prm} Let $g$ be a classical complex simple Lie algebra. If $c\in \mathbb{Z}_{\geq0}$ and $c\geq2$, the nondegenerate critical points of the family of Yang-Yang functions $$\{{\bm W}_{c}({\bm w},z_{1},\omega_{1},{\bm l})\}_{ \omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$$ together with the formal degenerate critical point $w_{0}$ correspond one-to-one to the weight vectors in $V_{\omega_{1}}$. \end{theorem} \begin{proof} By the Littelmann-Littlewood-Richardson rule \cite{littelmann97}, if $c\in \mathbb{Z}_{\geq0}$ and $c\geq2$, the following direct sum decompositions can be derived. 
\begin{equation} \begin{split} A_{n}: V_{\omega_{1}}\otimes V_{c\rho}=&V_{\omega_{1}+c\rho}\oplus V_{\omega_{1}+c\rho-\alpha_{1}}\oplus ...\oplus V_{\omega_{1}+c\rho-\alpha_{1}-...-\alpha_{n}}\\ B_{n}: V_{\omega_{1}}\otimes V_{c\rho}=&V_{\omega_{1}+c\rho}\oplus V_{\omega_{1}+c\rho-\alpha_{1}}\oplus ...\oplus V_{\omega_{1}+c\rho-\alpha_{1}-...-\alpha_{n}}\oplus \\ &V_{\omega_{1}+c\rho-\alpha_{1}-...-2\alpha_{n}}\oplus V_{\omega_{1}+c\rho-\alpha_{1}-...-2\alpha_{n-1}-2\alpha_{n}} \oplus \\&...\oplus V_{\omega_{1}+c\rho-2\alpha_{1}-...-2\alpha_{n-1}-2\alpha_{n}}\\ C_{n}: V_{\omega_{1}}\otimes V_{c\rho}=&V_{\omega_{1}+c\rho}\oplus V_{\omega_{1}+c\rho-\alpha_{1}}\oplus ...\oplus V_{\omega_{1}+c\rho-\alpha_{1}-...-\alpha_{n}}\oplus \\ &V_{\omega_{1}+c\rho-\alpha_{1}-...-2\alpha_{n-1}-\alpha_{n}} \oplus ...\oplus V_{\omega_{1}+c\rho-2\alpha_{1}-...-2\alpha_{n-1}-\alpha_{n}}\\ D_{n}: V_{\omega_{1}}\otimes V_{c\rho}=&V_{\omega_{1}+c\rho}\oplus V_{\omega_{1}+c\rho-\alpha_{1}}\oplus V_{\omega_{1}+c\rho-\alpha_{1}-\alpha_{2}}\oplus ...\oplus \\ &V_{\omega_{1}+c\rho-\alpha_{1}-...-\alpha_{n}} \oplus V_{\omega_{1}+c\rho-\alpha_{1}-...-2\alpha_{n-2}-\alpha_{n-1}-\alpha_{n}} \oplus\\& ...\oplus V_{\omega_{1}+c\rho-2\alpha_{1}-...-2\alpha_{n-2}-\alpha_{n-1}-\alpha_{n}}\\ \end{split} \end{equation} For $A_{n}$, it is clear that the singular vectors of $SingV_{\omega_{1}}\otimes V_{c\rho}$, $$v_{\omega_{1}+c\rho},v_{\omega_{1}+c\rho-\alpha_{1}},...,v_{\omega_{1}+c\rho-\alpha_{1}-...-\alpha_{n}},$$ correspond one-to-one to the weight vectors $$v_{\omega_{1}},v_{\omega_{1}-\alpha_{1}},...,v_{\omega_{1}-\alpha_{1}-...-\alpha_{n}},$$ in $V_{\omega_{1}}$. By theorem \ref{LMV}, there is an injective map between the set of nondegenerate critical points and $SingV_{\omega_{1}}\otimes V_{c\rho}$. Corresponding to each singular vector, ${\bm l}$ satisfies the admissible condition, and the explicit solutions of the critical points with the two parameters $z_{1}=0, z_{2}=1$ are given in \cite{LMV16}. 
They are nondegenerate except when $l=0$, so the map is bijective onto $SingV_{\omega_{1}}\otimes V_{c\rho}\backslash v_{\omega_{1}+c\rho}$. When $l=0$, the formal degenerate critical point $w_{0}$ corresponds to the singular vector $v_{\omega_{1}+c\rho}$ in $SingV_{\omega_{1}}\otimes V_{c\rho}$. By \eqref{lpt}, the critical points of ${\bm W}_{c}({\bm w},z_{1},\omega_{1},{\bm l})$ are the limits of the parameter deformations of those solutions. We give their explicit expressions in lemma \ref{lem:exsol} in section \ref{subsubsct:B}. Therefore, the one-to-one correspondence is proved. For $B_{n}$, $C_{n}$, $D_{n}$, the proofs are similar. \end{proof} \begin{remark} Except for $B_{n}$, the direct sum decompositions above also hold for $c=1$. \end{remark} \subsection{$V_{\lambda}\otimes V_{\lambda}$ realized as the fiber of a $\mathbb{Z}[t,t^{-1}]$-module bundle} For any dominant integral weight $\lambda \in P^{+}$, there exists a linear automorphism of $V_{\lambda}\otimes V_{\lambda}$ called the R-matrix: $$R:V_{\lambda}\otimes V_{\lambda}\rightarrow V_{\lambda}\otimes V_{\lambda}$$ satisfying the Yang-Baxter equation \cite{Kassel} $$(R\otimes id_{V_{\lambda}})(id_{V_{\lambda}}\otimes R)(R\otimes id_{V_{\lambda}})=(id_{V_{\lambda}}\otimes R)(R\otimes id_{V_{\lambda}})(id_{V_{\lambda}}\otimes R).$$ To study the R-matrix, taking $m=2$ is sufficient. In the following sections, we consider the fundamental representation $\lambda=\omega_{1}$ of a complex simple Lie algebra of classical type. Assume that $z_{1}$ and $z_{2}$ have the same real part and $Im z_{1}> Im z_{2}$. \subsubsection{$V_{\lambda}\otimes V_{\lambda}$ and the thimble space of Yang-Yang functions} By lemma \ref{split}, when $c\rightarrow +\infty$, the equations \eqref{CE2} split into two separate sets involving only $z_{1}$ or $z_{2}$, and the nondegenerate solution space is just the tensor product of the two nondegenerate solution spaces with the single parameter $z_{1}$ or $z_{2}$. 
Therefore, by theorem \ref{1prm}, when $c\rightarrow +\infty$, the set of nondegenerate critical points of \eqref{CE2} together with the formal critical point $w_{0}$ corresponds one-to-one to the weight vectors in $V_{\omega_{1}}\otimes V_{\omega_{1}}$. The thimble of ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$, defined in \cite{HL1} as the cycle formed by the gradient flow of the real part of the Yang-Yang function starting from the critical point, is an $l$-dimensional manifold. Therefore, for every vector $v_{\omega_{1}-\alpha({\bm k})}\otimes v_{\omega_{1}-\alpha({\bm l})}\in V_{\omega_{1}}\otimes V_{\omega_{1}}$, there exists a unique thimble of ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm k+\bm l})$, denoted by $J_{{\bm k},{\bm l}}$, and $J_{{\bm k},{\bm l}}=J_{{\bm k}}\times J_{{\bm l}}$, where $J_{{\bm k}}$ and $J_{{\bm l}}$ are the thimbles of ${\bm W}_{c}({\bm w},z_{1},\omega_{1},{\bm k})$ and ${\bm W}_{c}({\bm w},z_{2},\omega_{1},{\bm l})$ respectively. Therefore, we have \begin{theorem}\label{thmsb2c}When $c\rightarrow +\infty$, the basis of $V_{\omega_{1}}\otimes V_{\omega_{1}}$ corresponds one-to-one to the thimbles generated from the family of Yang-Yang functions $$\{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm k+\bm l})\}_{ \omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}.$$ \end{theorem} Note that the solution of the critical point equation \eqref{CE2} is not unique. This means that there exist different thimbles corresponding to different critical solutions of the same Yang-Yang function. We will see this in the example of section \ref{subsec:ex}. 
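As a concrete numerical illustration of lemma \ref{split} (our own sketch, not part of the paper), consider the simplest $A_{1}$ case with $m=2$ and $l=1$, where \eqref{CE2} reduces to $\frac{1}{w-z_{1}}+\frac{1}{w-z_{2}}=c$, a quadratic in $w$ whose roots approach $z_{1}$ and $z_{2}$ as $c\rightarrow+\infty$:

```python
# Numerical illustration of the splitting lemma in the simplest A_1 case:
# the critical point equation 1/(w - z1) + 1/(w - z2) = c is a quadratic in w;
# as c -> +infinity its two roots approach z1 and z2.
import cmath

z1, z2 = 2 + 1j, 2 - 1j   # same real part, Im z1 > Im z2, as assumed above

def critical_points(c):
    # (w - z2) + (w - z1) = c (w - z1)(w - z2)  expands to
    # c w^2 - (c (z1 + z2) + 2) w + (c z1 z2 + z1 + z2) = 0
    A = c
    B = -(c * (z1 + z2) + 2)
    C = c * z1 * z2 + z1 + z2
    d = cmath.sqrt(B * B - 4 * A * C)
    return (-B + d) / (2 * A), (-B - d) / (2 * A)

for c in [10.0, 100.0, 1000.0]:
    for w in critical_points(c):
        # each root lies within O(1/c) of one of the parameters
        assert min(abs(w - z1), abs(w - z2)) < 2 / c
```

The $O(1/c)$ rate matches the heuristic in the proof of the lemma: the divergent term $c(\rho,\alpha_{i_{j}})$ must be balanced by a pole term $1/(w_{j}-z_{a})$.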
\subsubsection{The thimble space as a fiber bundle of $\mathbb{Z}[t,t^{-1}]$-modules} As in \cite{ATY}, \cite{FFR} and \cite{Frenkel95}, the Yang-Yang function appears naturally as an exponent in the correlation function of the Wakimoto realization of a Kac-Moody algebra at arbitrary level $\kappa$: \begin{equation} \begin{split} &\int _{\Gamma }\prod_{j,a}(w_{j}-z_{a})^{-\frac{(\alpha_{i_{j}},\lambda_{a})}{\kappa+h^{\vee}}}\prod_{j<s}(w_{j}-w_{s})^{\frac{(\alpha_{i_{j}},\alpha_{i_{s}})}{\kappa+h^{\vee}}}(z_{1}-z_{2})^{\frac{(\lambda,\lambda)}{\kappa+h^{\vee}}}\prod _{j}dw_{j}\\ =&\int _{\Gamma }e^{-\frac{{\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}\prod _{j}dw_{j}, \end{split} \end{equation} where $h^{\vee}$ is the dual Coxeter number. The crucial problem is to determine the transformation induced on the thimble space by the parameter deformation of $e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}$, which leads us to study the extra structure of the thimble space. The thimbles of the real part of the Yang-Yang function ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ are the same as the thimbles of $\mid e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}\mid$, except that there are infinitely many pre-images with difference $2\pi \mathbbm{i}$ in their imaginary parts. Along the thimble of its real part, the conservation law \cite{HL1} of the imaginary part of the Yang-Yang function implies global invariance of the phase factor $e^{- \frac{Im {\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}\mathbbm{i}}$. 
By theorem \ref{thmsb2c}, when $c\rightarrow +\infty$, the basis of $V_{\omega_{1}}\otimes V_{\omega_{1}}$ corresponds one-to-one to the family of thimbles $\{J_{{\bm k},{\bm l}}\}_{ \omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$ of $\{\mid e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm k+\bm l})}{\kappa+h^{\vee}}}\mid\}_{ \omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$. Each of them has infinitely many pre-images with different global phases; choose a branch among the pre-images for each thimble. Define $q=e^{\frac{2\pi \mathbbm{i}}{\kappa+h^{\vee}}}$. Continuous deformation of the parameters in $e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}$ induces a global phase factor variation along the thimble of $\mid e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}\mid$. $\{J_{{\bm k},{\bm l}}\}_{ \omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$ are elements of a certain relative homology group and have a natural integral structure \cite{GW}. Let $ \mathbb{Z}[t,t^{-1}]$ be the ring of Laurent polynomials in $t$ over $\mathbb{Z}$; these variations naturally make the thimble space a module over the ring $ \mathbb{Z}[t,t^{-1}]$, where $t$ is some minimal power of $q$ arising during the variation. From now on, we consider $V_{\omega_{1}}\otimes V_{\omega_{1}}$ as a $\mathbb{Z}[t,t^{-1}]$-module generated by the basis $\{v_{\omega_{1}-\alpha({\bm k})}\otimes v_{\omega_{1}-\alpha({\bm l})}\}_{ \omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$. 
Let $\mathfrak{J}=\mathbb{Z}[t,t^{-1}]\{J_{{\bm k},{\bm l}}\}_{ \omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$ be the module over $\mathbb{Z}[t,t^{-1}]$ generated by the corresponding thimbles; then we have a natural decomposition of $V_{\omega_{1}}\otimes V_{\omega_{1}}$ as a direct sum of $\mathbb{Z}[t,t^{-1}]$-submodules. \begin{lemma} \begin{equation}\label{11crspd} V_{\omega_{1}}\otimes V_{\omega_{1}}\cong \mathfrak{J}=\underset{\mathbbm{q}}{\oplus} E^{\mathbbm{q}}, \end{equation} where $E^{\mathbbm{q}}=\mathbb{Z}[t,t^{-1}]\{J_{{\bm k},{\bm l}}\}_{k+l=\mathbbm{q}}$ is the $\mathbb{Z}[t,t^{-1}]$-submodule generated by all the $\mathbbm{q}$-dimensional thimbles of $\mathfrak{J}$. \end{lemma} Because $\mathfrak{J}$ depends on the parameters $(z_{1},z_{2})$, there exists a bundle $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$, where \begin{equation}\label{Confi} X_{2}=\{(z_{1},z_{2})\in \mathbb{C}^{2}\mid z_{1}\neq z_{2}\}/(z_{1},z_{2})\sim(z_{2},z_{1}) \end{equation} is the configuration space of the two complex parameters $z_{1}$ and $z_{2}$. The fiber $\pi^{-1}(z_{1},z_{2})=\mathfrak{J}({\bm z})$ is defined to be the $\mathbb{Z}[t,t^{-1}]$-module generated by all the thimbles of $\{\mid e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm k+\bm l})}{\kappa+h^{\vee}}}\mid\}_{\omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$. Let $E^{\mathbbm{q}}\overset{\pi^{\mathbbm{q}}}{\rightarrow} X_{2}$ be the $\mathbb{Z}[t,t^{-1}]$-module sub-bundle of $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$. Because of \eqref{11crspd}, we have the following decomposition of the $\mathbb{Z}[t,t^{-1}]$-module bundle. 
\begin{lemma} $$\pi=\underset{\mathbbm{q}}{\oplus}\pi^{\mathbbm{q}}.$$ \end{lemma} In sum, the thimble space of $\{e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm k+\bm l})}{\kappa+h^{\vee}}}\}_{\omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$, as the fiber of the $\mathbb{Z}[t,t^{-1}]$-module bundle $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$, gives $V_{\omega_{1}}\otimes V_{\omega_{1}}$ a geometric realization. We will use it to derive the $R$-matrix. \subsection{Monodromy representation of the $\mathbb{Z}[t,t^{-1}]$-module \\fiber bundle}\label{subsct:bundle} Fixing a point $P=(z_{1},z_{2})\in X_{2}$, we consider a continuous parameter transformation $T(s): X_{2}\rightarrow X_{2}$ defined by \begin{equation}\label{equ:T} T(s)\left( \begin{array}{c} z_{1} \\ z_{2} \\ \end{array} \right)=\left( \begin{array}{cc} \frac{1+e^{-\mathbbm{i}\pi s}}{2} & \frac{1-e^{-\mathbbm{i}\pi s}}{2} \\ \frac{1-e^{-\mathbbm{i}\pi s}}{2} & \frac{1+e^{-\mathbbm{i}\pi s}}{2} \\ \end{array} \right)\left( \begin{array}{c} z_{1} \\ z_{2} \\ \end{array} \right),s\in [0,1]. \end{equation} It is a clockwise rotation around the midpoint of $z_{1}$ and $z_{2}$: $$T(1)P=T(1)\left( \begin{array}{c} z_{1} \\ z_{2} \\ \end{array} \right)=\left( \begin{array}{c} z_{2} \\ z_{1} \\ \end{array} \right)=P.$$ Thus, $T:S^{1}\rightarrow X_{2}$ generates the fundamental group $\pi_{1}(X_{2},P)$ of $X_{2}$ with base point $P$ and also induces a $\mathbb{Z}[t,t^{-1}]$-module transformation $$\sigma\in End(\mathfrak{J}(P),\mathbb{Z}[t,t^{-1}])=End(V_{\omega_{1}}\otimes V_{\omega_{1}},\mathbb{Z}[t,t^{-1}])$$ on the fiber $\mathfrak{J}(P)\cong V_{\omega_{1}}\otimes V_{\omega_{1}}$. $\sigma$ is called the monodromy of the bundle, and it generates the monodromy group. Its representation on the fiber space is called the monodromy representation. 
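The stated properties of $T(s)$ can be verified directly. The sketch below (our addition) checks that $T(s)$ fixes the midpoint of $z_{1}$ and $z_{2}$, acts on them as a rigid rotation, and that $T(1)$ exchanges the two parameters:

```python
# Direct check of the rotation T(s): it fixes the midpoint (z1+z2)/2,
# is a rigid rotation about that midpoint, and T(1) swaps z1 and z2.
import cmath
import math

def T(s, z1, z2):
    e = cmath.exp(-1j * math.pi * s)
    a, b = (1 + e) / 2, (1 - e) / 2
    return a * z1 + b * z2, b * z1 + a * z2

z1, z2 = 2 + 1j, 2 - 1j   # same real part, Im z1 > Im z2, as assumed in the text
mid = (z1 + z2) / 2
for s in [0, 0.25, 0.5, 0.75, 1]:
    w1, w2 = T(s, z1, z2)
    assert abs((w1 + w2) / 2 - mid) < 1e-12            # midpoint is fixed
    assert abs(abs(w1 - mid) - abs(z1 - mid)) < 1e-12  # rigid rotation
w1, w2 = T(1, z1, z2)
assert abs(w1 - z2) < 1e-12 and abs(w2 - z1) < 1e-12   # T(1) exchanges z1, z2
```

Since $T(1)$ returns the unordered pair $\{z_{1},z_{2}\}$ to itself, the path is a loop in the configuration space $X_{2}$, which is what makes the monodromy well defined.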
Let $\boldsymbol{B_{YY}}:V_{\omega_{1}}\otimes V_{\omega_{1}}\rightarrow V_{\omega_{1}}\otimes V_{\omega_{1}}$ be the monodromy representation of the bundle $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$ induced by $T:S^{1}\rightarrow X_{2}$, and $\boldsymbol{B_{U_{h}(g)}}$ the braid group representation induced by the universal R-matrices of the quantum enveloping algebra $U_{h}(g)$. Our first main theorem is as follows: \begin{theorem}For the fundamental representation $V_{\omega_{1}}$ of a classical complex simple Lie algebra $g$, the monodromy representation $\boldsymbol{B_{YY}}$ of the $\mathbb{Z}[t,t^{-1}]$-module fiber bundle $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$ generated by the family of functions $$\{e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm k+\bm l})}{\kappa+h^{\vee}}}\}_{\omega_{1}-\alpha({\bm k}),\omega_{1}-\alpha({\bm l})\in \Omega_{{\omega_{1}}}}$$ is equivalent to $\boldsymbol{B_{U_{h}(g)}}$ by a diagonal transformation $Q$, i.e. $$\boldsymbol{B_{YY}}=Q \boldsymbol{B_{U_{h}(g)}}Q^{-1}, Q\in End(V_{\omega_{1}}\otimes V_{\omega_{1}},\mathbb{Z}[t,t^{-1}]).$$ \end{theorem} \begin{remark} The advantage of taking thimbles as the basis for the $\mathbb{Z}[t,t^{-1}]$-module $V_{\omega_{1}}\otimes V_{\omega_{1}}$ is that the imaginary part of a holomorphic function is conserved along the thimble defined by the gradient flow of its real part \cite{HL1}. When the parameters vary, there is a global variation of the thimble, which can be extracted from the critical point associated with it. Therefore, it is convenient to calculate the monodromy of $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$ valued in $\mathbb{Z}[t,t^{-1}]$ by considering only the variation of the critical point of the Yang-Yang function. 
\end{remark} Denote by $\mathfrak{J}_{0}\overset{\mu}{\rightarrow} X_{2}$ the thimble space generated by the family $$\{e^{-\frac{{\bm W}_{0}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}\mid V_{2\omega_{1}-\alpha({\bm l})} \subset V_{\omega_{1}}\otimes V_{\omega_{1}}\}.$$ By theorem \ref{1sing}, it is a fiber bundle of $\mathbb{Z}[t,t^{-1}]$-modules with its fiber $\mathfrak{J}_{0}(P)$ isomorphic to $Sing V_{\omega_{1}}\otimes V_{\omega_{1}}$. Denote also by $\boldsymbol{B_{YY}}$ the monodromy representation on it induced by the clockwise rotation $T(1)$ in equation \eqref{equ:T}. In section \ref{sec:conclusion}, we will define the transformation $\boldsymbol{S}:\mathfrak{J}_{0}(P)\rightarrow\mathfrak{J}(P)$ induced by the parameter deformation of $c$ from $0$ to $+\infty$. Our second main theorem is as follows: \begin{theorem}\label{thm2} $\boldsymbol{S}$ commutes with $\boldsymbol{B_{YY}}$, i.e. the following diagram commutes. $$\begin{array}[c]{ccc}\mathfrak{J}_{0}(P)&\stackrel{\boldsymbol{S}}{\rightarrow}&\mathfrak{J}(P)\\\downarrow\scriptstyle{\boldsymbol{B_{YY}}}&&\downarrow\scriptstyle{\boldsymbol{B_{YY}}}\\\mathfrak{J}_{0}(P)&\stackrel{\boldsymbol{S}}{\rightarrow}&\mathfrak{J}(P)\end{array}$$ \end{theorem} \section{Wall-crossing formula and monodromy representation}\label{sec:formula} In this section, we study the monodromy representation $\boldsymbol{B_{YY}}$ of $\mathfrak{J}\overset{\pi}{\rightarrow} X_{2}$ and prove the first main theorem. To simplify notation, we use $l$ to label each Yang-Yang function in the following sections. For the $D_{n}$ Lie algebra, there are two Yang-Yang functions, with ${\bm l}=(1,...,1,1,0)$ and ${\bm l}=(1,...,1,0,1)$, having the same $l=n-1$. We label $(1,...,1,1,0)$ by $n-1$ and $(1,...,1,0,1)$ by $n-1'$ and define an order for them. \subsection{A simple example of wall-crossing phenomena}\label{subsec:ex} The key to the derivation of the monodromy representation is the wall-crossing formula. 
To illustrate it, we start from an example for the $A_{1}$ Lie algebra $g=sl(2,\mathbb{C})$, with $\dim V_{\omega_{1}}=2$ and $\Omega_{\omega_{1}}=\{\omega_{1},\omega_{1}-\alpha\}$. The inner product on the weight space is $(a,b)=\frac{a\cdot b}{2}$. For $v_{1}\otimes v_{0},v_{0}\otimes v_{1}\in V_{\omega_{1}}\otimes V_{\omega_{1}}$, the two Yang-Yang functions corresponding to them are equal: \begin{equation} \begin{split} &{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},1+0)\\=&{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},0+1)\\ =&\sum_{a}(\alpha,\omega_{1})\log\left( w-z_{a}\right) -(\omega_{1},\omega_{1})\log \left( z_{1}-z_{2}\right) -c(w-\frac{1}{2}\left( z_{1}+z_{2}\right)). \end{split} \end{equation} The critical point equation is \begin{equation} \frac{1}{w-z_{1}}+ \frac{1}{w-z_{2}}=c, \end{equation} which has two solutions $w^{1}(c)$ and $w^{2}(c)$ for $c\geq 2$. Assume $$\lim_{c\rightarrow +\infty}w^{1}(c)=z_{1},\lim_{c\rightarrow +\infty}w^{2}(c)=z_{2}.$$ For large $c$, let $J_{1,0}$ and $J_{0,1}$ be the two thimbles associated respectively with $w^{1}(c)$ and $w^{2}(c)$. The continuous clockwise transformation $T(s)$ induces a continuous deformation of the thimble $J_{1,0}$: $$J_{1,0}\rightarrow q^{-\frac{1}{2}(\omega_{1}-\alpha,\omega_{1})}J_{0,1}=q^{\frac{1}{4}}J_{0,1},$$ where $q^{-\frac{1}{2}(\omega_{1}-\alpha,\omega_{1})}$ comes from the phase factor difference of the critical values, and it equals the phase factor difference of $(z_{1}-z_{2})^{\frac{(\omega_{1}-\alpha,\omega_{1})}{\kappa+h^{\vee}}}$ under the half clockwise rotation. The transformation of $J_{0,1}$ is more interesting. 
In the process of the clockwise rotation $T(s)$, $z_{1}$ will pass through $J_{0,1}$ from the right hand side of $z_{2}$; when $s=\frac{1}{2}$, the imaginary parts of the two critical values are equal: $$Im {\bm W}_{c}(w^{1},T(\frac{1}{2})P,{\bm \lambda},1)=Im {\bm W}_{c}(w^{2},T(\frac{1}{2})P,{\bm \lambda},1).$$ There is a gradient flow connecting $w^{2}$ to $w^{1}$ as shown in figure 14 of \cite{GW}. The homotopic class of $J_{0,1}$ after the deformation is equivalent to a zig-zag $\mathbb{Z}[t,t^{-1}]$-linear combination of $J_{1,0}$ and $J_{0,1}$: \begin{equation}\label{equ:a1wc} \boldsymbol{B}J_{0,1}=aJ_{1,0}+(c-b)J_{0,1}. \end{equation} Because in the general situation a thimble is of high dimension, it is convenient to draw only the variation of its critical point along the homotopic class of the thimble after the rotation, rather than the thimble itself. Here, the zig-zag thimble has three parts, and the relations of their critical points are shown in figure \ref{fig:t10}. In the following, whenever we draw the figure of the variation of the critical point, we are showing the relations between the critical points of the different thimbles in the homotopic class of the thimble after the deformation. The coefficient $a$ in \eqref{equ:a1wc} comes from the phase factor of $e^{-\frac{{\bm W}_{c}(w^{2},T(\frac{1}{2})P,{\bm \lambda},1)}{\kappa+h^{\vee}}}\sim (z_{1}-z_{2})^{\frac{(\omega_{1}-\alpha,\omega_{1})}{\kappa+h^{\vee}}}$. As shown in figure \ref{fig:t10}, $b$ differs from $a$ by an additional movement of the critical point from the right of $z_{2}$ to the right of $z_{1}$, and the negative sign before $b$ comes from the orientation reversal of the thimble. $c$ differs from $b$ by an anti-clockwise rotation around $z_{1}$. Thus, $a=q^{-\frac{1}{2}(\omega_{1},\omega_{1}-\alpha)}$, $b=q^{\frac{1}{2}(\omega_{1},\alpha)}\cdot a$ and $c=q^{-(\omega_{1},\alpha)}\cdot b$.
\begin{equation}\label{equ:a1wc2} \boldsymbol{B}J_{0,1}=q^{\frac{1}{4}}J_{1,0}+(q^{-\frac{1}{4}}-q^{\frac{3}{4}})J_{0,1}. \end{equation} The minimal phase factor under $T$ is $t=q^{\frac{1}{4}}$ and therefore the monodromy is valued in $\mathbb{C}[q^{\frac{1}{4}},q^{-\frac{1}{4}}]$. In sum, \begin{equation} \boldsymbol{B}\left( \begin{array}{c} J_{1,0} \\ J_{0,1} \\ \end{array} \right)=\left( \begin{array}{cc} 0 & q^{\frac{1}{4}} \\ q^{\frac{1}{4}} & q^{-\frac{1}{4}}-q^{\frac{3}{4}} \\ \end{array} \right) \left( \begin{array}{c} J_{1,0} \\ J_{0,1} \\ \end{array} \right). \end{equation} \begin{figure} \centering\includegraphics[width=6cm]{mono-t10}\\ \caption{Variation of the critical point on the ${\bm W}$ plane.} \label{fig:t10} \end{figure} This phenomenon is called the wall-crossing phenomenon, and the formula \eqref{equ:a1wc2} describing it is called the wall-crossing formula. The terms with coefficients $b$ and $c$ are the wall-crossing terms. The following properties are clear. Firstly, because the critical points of the thimbles involved in wall-crossing are different solutions of the same Yang-Yang function, the types and total number of primary roots are neither created nor annihilated in the process of wall-crossing; they are only transferred from one point to another. We call this property the conservation law of wall-crossing. Secondly, the clockwise transformation $T$ puts a constraint on the direction of the primary root transfer: primary roots only move in the direction of the positive real axis, from $z_{2}$ to $z_{1}$. Based on the above two properties, $E^{\mathbbm{q}}$ is a $\sigma$-invariant sub-module. The monodromy representation $\boldsymbol{B}$ naturally decomposes into a direct sum of sub-representations on the $E^{\mathbbm{q}}$, and all matrices of these sub-representations are triangular and valued in $\mathbb{Z}[t,t^{-1}]$.
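As a consistency check, the monodromy matrix above satisfies the Hecke-type quadratic relation $(\boldsymbol{B}-q^{-\frac{1}{4}})(\boldsymbol{B}+q^{\frac{3}{4}})=0$: this is just the Cayley-Hamilton theorem applied to its trace $q^{-\frac{1}{4}}-q^{\frac{3}{4}}$ and determinant $-q^{\frac{1}{2}}$. A small numerical sketch (the sample value of $t=q^{\frac{1}{4}}$ is arbitrary and ours):

```python
# Monodromy matrix of the A_1 example, written in terms of t = q^(1/4).
t = 0.7  # arbitrary nonzero sample value of q^(1/4)

B = [[0.0, t],
     [t, 1.0 / t - t ** 3]]  # [[0, q^{1/4}], [q^{1/4}, q^{-1/4} - q^{3/4}]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def plus_scalar(X, s):
    """X + s * identity."""
    return [[X[i][j] + (s if i == j else 0.0) for j in range(2)] for i in range(2)]

# (B - q^{-1/4} I)(B + q^{3/4} I) vanishes identically
rel = matmul(plus_scalar(B, -1.0 / t), plus_scalar(B, t ** 3))
print(max(abs(rel[i][j]) for i in range(2) for j in range(2)))  # essentially zero
```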
\subsection{Variations of critical points, wall-crossing formula and monodromy representation}\label{subsct:monodromy} To derive the wall-crossing formula, it is necessary to analyze the variation of critical points under the transformation $T$. As in the previous example, we focus on the homotopic class of the thimble after the deformation $T$ and see how the primary roots move from $z_{2}$ to $z_{1}$. For the fundamental representation $V_{\omega_{1}}$ of $g\in A_{n},B_{n},C_{n},D_{n}$, we summarize the following four types of variations of critical points during wall-crossing; the details of the derivation can be found case by case in sections \ref{subsubsct:A}, \ref{subsubsct:B}, \ref{subsubsct:C} and \ref{subsubsct:D}. \begin{description} \item[Type I] \begin{figure} \centering\includegraphics[width=3.5cm]{mono-t1}\\ \caption{Type I.} \label{fig:t1} \end{figure} As shown in figure \ref{fig:t1}, the coordinates of the critical point have the same real parts but different imaginary parts. This variation appears in the case of the finite dimensional irreducible representation of $A_{1}$ or in the case of the $B_{n}$ Lie algebra. \item[Type II] \begin{figure} \centering\includegraphics[width=3.5cm]{mono-t2}\\ \caption{Type II.} \label{fig:t2} \end{figure} The coordinates of the critical point near $z_{2}$ have the same imaginary parts and lie on a straight line parallel to the real axis. The variation is just a translation, as shown in figure \ref{fig:t2}. The homotopic class of the thimble after the deformation is equivalent to three parts. Only one dimension of the sub-thimble reverses its orientation, so the sign before the coefficient $b$ is always minus in this case. \item[Type III] \begin{figure} \centering\includegraphics[width=4cm]{mono-t3}\\ \caption{Type III.} \label{fig:t3} \end{figure} The coordinates of the critical point near $z_{2}$ have the same imaginary parts and lie on a straight line parallel to the real axis.
The variation from $z_{2}$ to $z_{1}$, as shown in figure \ref{fig:t3}, consists of a clockwise self-rotation by $\pi$ followed by a translation. In this case, the orientation of each dimension of the sub-thimble connecting $z_{2}$ to $z_{1}$ is reversed. Thus, the sign before $b$ is $(-1)^{j}$, where $j$ is the dimension of the sub-thimble, i.e. the number of primary roots moving from $z_{2}$ to $z_{1}$. Opposite to the sign before $b$, the sign before $c$ is $(-1)^{j+1}$. \item[Type IV] \begin{figure} \centering\includegraphics[width=4cm]{mono-t41} \centering\includegraphics[width=4cm]{mono-t42} \centering\includegraphics[width=4cm]{mono-t43}\\ \caption{The origin of $c_{1}$, $c_{2}$ and $c_{3}$ of type IV.} \label{fig:t4} \end{figure} The coordinates of the critical point near $z_{2}$ have the same imaginary parts and lie on a straight line parallel to the real axis. The variation leaves a trace along the homotopic class of the integration cycle like a "snake", keeping the relative order of the coordinates invariant during the motion. The coefficient $b$ comes from the variation of primary roots moving from $z_{2}$ to $z_{1}$. $c_{1}$, $c_{2}$ and $c_{3}$ differ from $b$ respectively by $1$, $j-1$ and $j$ primary roots rotating around $z_{1}$ and the other primary roots near $z_{1}$ in a specific manner, as shown in the pictures of figure \ref{fig:t4}. Thus, the signs before $b$, $c_{1}$, $c_{2}$ and $c_{3}$ are $(-1)^{j}$, $(-1)^{j+1}$, $(-1)^{2j-1}=-1$ and $(-1)^{2j}=1$ respectively. \end{description} In the following, we derive the monodromy representations for $A_{n}$, $B_{n}$, $C_{n}$ and $D_{n}$ respectively. \subsubsection{$A_{n}$}\label{subsubsct:A} Denote by $\{\lambda^{i}\}_{i=0,1,...,n}$ the weights of $V_{\omega_{1}}$, where $\lambda^{i}=\omega_{1}-\sum_{j=1}^{i}\alpha_{j}$.
For each weight vector $v_{\lambda^{j}}\in V_{\omega_{1}}$, the corresponding Yang-Yang function ${\bm W}_{c}({\bm w},z,\omega_{1},j)$ with one parameter $z$ is \begin{equation} \begin{split}\label{equ:1zGYY in SB} {\bm W}_{c}({\bm w},z,\omega_{1},j) =\sum _{i=1}^{j}(\alpha_{i},\omega_{1})\log\left( w_{i}-z\right) &-\sum_{1\leq i< s\leq j}(\alpha_{i},\alpha_{s})\log \left( w_{i}-w_{s}\right)\\ &-c(\sum_{i=1}^{j}(\alpha_{i},\rho)w_{i}- (\omega_{1},\rho)z). \end{split} \end{equation} Since $(\omega_{1},\alpha_{i})=\delta_{i,1}$ and $$(\alpha_{i},\alpha_{k})=\left\{ \begin{array}{ll} 2, & \hbox{$i=k$;} \\ -1, &\hbox{$\mid i-k\mid=1$;}\\ 0, & \hbox{otherwise,} \end{array} \right.$$ \begin{equation} \begin{split} {\bm W}_{c}({\bm w},z,\omega_{1},j) =\log\left( w_{1}-z\right) &+\sum_{i=1}^{j-1}\log \left( w_{i}-w_{i+1}\right)-c(\sum_{i=1}^{j}w_{i}- \frac{n}{2}z). \end{split} \end{equation} Its critical point equation is as follows: \begin{equation}\label{ceA} \left\{ \begin{aligned} \frac{1}{w_{1}-z}&=\frac{-1}{w_{1}-w_{2}}+c \\ 0&=\frac{-1}{w_{2}-w_{1}}+\frac{-1}{w_{2}-w_{3}}+c \\ ... \\ 0&=\frac{-1}{w_{j-1}-w_{j-2}}+\frac{-1}{w_{j-1}-w_{j}}+c \\ 0&=\frac{-1}{w_{j}-w_{j-1}}+c. \end{aligned} \right. \end{equation} It is obvious that $z$ and the solution $\{w_{i}=z+\sum_{k=0}^{i-1}\frac{1}{(j-k)c}\}_{i=1,..,j}$ are on the same horizontal line of the ${\bm W}$ plane. \begin{figure} \centering\includegraphics[width=7.5cm]{mono-t0}\\ \caption{Coefficient $a$ from the basic clockwise rotation.} \label{fig:t0} \end{figure} Therefore, for $v_{\lambda^{i}}\otimes v_{\lambda^{j}}\in V_{\omega_{1}}\otimes V_{\omega_{1}}$, the coordinates of the critical point are distributed respectively along two horizontal straight lines starting from $z_{1}$ and $z_{2}$ on the ${\bm W}$ plane, as shown in the first picture of figure \ref{fig:t0}. When $i\geq j$, there is no wall-crossing under the transformation $T$.
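The closed-form critical point just given can be checked directly against \eqref{ceA}: plugging $w_{i}=z+\sum_{k=0}^{i-1}\frac{1}{(j-k)c}$ into each equation makes the terms telescope, so all residuals vanish. A numerical sketch (the function name and sample values are ours; it requires $j\geq 2$ so that the first and last equations are distinct):

```python
def ce_residuals(z, c, j):
    """Plug the closed form w_i = z + sum_{k=0}^{i-1} 1/((j-k)c) into the
    critical point equations and return the absolute residuals."""
    w = [z + sum(1.0 / ((j - k) * c) for k in range(i)) for i in range(1, j + 1)]
    res = [1.0 / (w[0] - z) + 1.0 / (w[0] - w[1]) - c]          # first equation
    for i in range(1, j - 1):                                    # middle equations
        res.append(-1.0 / (w[i] - w[i - 1]) - 1.0 / (w[i] - w[i + 1]) + c)
    res.append(-1.0 / (w[-1] - w[-2]) + c)                       # last equation
    return [abs(r) for r in res]

print(max(ce_residuals(z=0.3, c=2.5, j=5)))  # essentially zero
```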
$$\boldsymbol{B}J_{i,j}=q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}J_{j,i}.$$ When $i<j$, the variation moves the primary roots $\{\alpha_{i+1},\alpha_{i+2},...,\alpha_{j}\}$ from $z_{2}$ to $z_{1}$. Let $w^{a}_{k}, a=1,2$ be the coordinates of $\alpha_{k}$. The type of variation can be seen by deforming the parameter $c\rightarrow+\infty$ from $c=0$. The critical point equation with two singularities $w^{1}_{i}$ and $w^{2}_{i}$ is as follows: \begin{equation}\label{equ:ANSB} \left\{ \begin{aligned} \frac{1}{w_{i+1}-w^{1}_{i}}+\frac{1}{w_{i+1}-w^{2}_{i}}&=\frac{-1}{w_{i+1}-w_{i+2}}\\ ... \\ 0&=\frac{-1}{w_{k}-w_{k-1}}+\frac{-1}{w_{k}-w_{k+1}} \\ ... \\ 0&=\frac{-1}{w_{j}-w_{j-1}}+c, \end{aligned} \right. \end{equation} where $i+2\leq k\leq j-1$. For $c\neq 0$, $$\begin{aligned} w_{i+1}&=\frac{2 + c w^{1}_{i} + c w^{2}_{i} \pm \sqrt{4 + c^{2} (w^{1}_{i}-w^{2}_{i})^{2}}}{2c},\\ w_{k}&=w_{i+1}+\frac{k-i-1}{c},\quad k\geq i+2.\\ \end{aligned}$$ $$\lim_{c\rightarrow 0}w_{i+1}=\frac{w^{1}_{i} + w^{2}_{i}}{2}\quad \hbox{and}\quad \lim_{c\rightarrow 0}w_{k}=+\infty, \quad i+2\leq k\leq j.$$ $$\lim_{c\rightarrow +\infty}w_{i+1}=w^{1}_{i} \quad \hbox{or} \quad w^{2}_{i}.$$ \begin{figure} \centering\includegraphics[width=7cm]{mono2-va}\\ \caption{Variation of critical points as $c\rightarrow +\infty$ from $0$.} \label{fig:va} \end{figure} Note that the process above is independent of the positions of $w^{1}_{i}$ and $w^{2}_{i}$. As shown in figure \ref{fig:va}, when $c\rightarrow +\infty$, the coordinates of the critical point at $c=0$ move continuously to $w^{1}_{i}$ or $w^{2}_{i}$ within the same horizontal line on the ${\bm W}$ plane. In the process of the continuous transformation $T(s)$, when $s=1/2$, the imaginary parts of the two critical values are equal.
Thus there will be two thimbles connecting $w^{2}_{i}$ to $w^{1}_{i}$, from above and below $z_{1}$, and they are homotopically equivalent to the thimbles at $c=0$ with $Im w^{1}_{i}<Im w^{2}_{i}$ and $Im w^{1}_{i}>Im w^{2}_{i}$ respectively. The variation of $\{w_{i+1},...,w_{j}\}$ is just a translation and thus a type $II$ variation. As shown in figure \ref{fig:a1}, the homotopic class of $\boldsymbol{B}J_{i,j}$ is equivalent to a $\mathbb{Z}[t,t^{-1}]$-linear combination of three parts $$\boldsymbol{B}J_{i,j}=aJ_{j,i}+(c-b)J_{i,j},$$ with the differences among them coming from the translation of the primary roots $\{\alpha_{i+1},\alpha_{i+2},...,\alpha_{j}\}$. In fact, under the basic clockwise rotation shown in figure \ref{fig:t0}, $a$ comes from the phase factor difference of $e^{-\frac{{\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},i+j)}{\kappa+h^{\vee}}}$, i.e. that of $(z_{1}-z_{2})^{\frac{(\lambda^{i},\lambda^{j})}{\kappa+h^{\vee}}}$. Compared with $a$, $b$ has an additional phase factor $q^{\frac{1}{2}(\lambda^{i},\lambda^{i}-\lambda^{j})}$ from translating $\{\alpha_{i+1},\alpha_{i+2},...,\alpha_{j}\}$ from $z_{2}$ to $z_{1}$. $c$ differs from $b$ by the translation of $\{\alpha_{i+1},\alpha_{i+2},...,\alpha_{j}\}$ around $z_{1}$ and $\{\alpha_{1},\alpha_{2},...,\alpha_{i}\}$. Thus $$c=b\cdot q^{-(\lambda^{i},\lambda^{i}-\lambda^{j})}.$$ Therefore, we get the following wall-crossing formula for $V_{\omega_{1}}$: $$\boldsymbol{B}J_{i,j}=\left\{ \begin{array}{ll} q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}J_{j,i}, & \hbox{$i\geq j$;} \\ q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}(J_{j,i}+q^{\frac{1}{2}(\lambda^{i},\lambda^{i}-\lambda^{j})}(q^{-(\lambda^{i},\lambda^{i}-\lambda^{j})}-1)J_{i,j}), & \hbox{$j>i$.} \end{array} \right.
$$ \begin{figure} \centering\includegraphics[width=4.5cm]{mono-a1}\\ \caption{In the fundamental representation of the $A_{n}$ Lie algebra, the wall-crossing coefficient $b$ differs from $a$ by the translation of $\{\alpha_{i+1},\alpha_{i+2},...,\alpha_{j}\}$ from $z_{2}$ to $z_{1}$, and $c$ differs from $b$ by an anti-clockwise translation around $z_{1}$ and $\{\alpha_{1},\alpha_{2},...,\alpha_{i}\}$.} \label{fig:a1} \end{figure} \subsubsection{$B_{n}$}\label{subsubsct:B} Denote the weights of the fundamental representation of the $B_{n}$ Lie algebra by $$\lambda^{i}=\left\{ \begin{array}{ll} \omega_{1}-\sum_{j=1}^{i}\alpha_{j}, & \hbox{$i\leq n$;} \\ \omega_{1}-\sum_{j=1}^{n}\alpha_{j}-\sum_{j=2n+1-i}^{n}\alpha_{j}, & \hbox{$n<i\leq 2n$.} \end{array} \right. $$ Their inner products are: \begin{equation} (\lambda^{s},\lambda^{t})=\left\{ \begin{array}{ll} 1, & \hbox{$s+t\neq2n,s=t$;} \\ 0, & \hbox{$s+t\neq2n,s\neq t$;} \\ 0, & \hbox{$s+t=2n,s=t $;} \\ -1, & \hbox{$s+t=2n,s\neq t $,} \end{array} \right. \end{equation}where $s,t=0,1,..,2n$. For any weight vector $v_{\lambda^{l}}\in V_{\omega_{1}}$, the corresponding Yang-Yang function is as follows: \begin{equation} \begin{split}\label{equ:BGYY in SB} {\bm W}_{c}({\bm w},z,\omega_{1},l) =\sum _{j=1}^{l}(\alpha_{i_{j}},\omega_{1})\log\left( w_{j}-z\right) -&\sum_{1\leq j< k\leq l}(\alpha_{i_{j}},\alpha_{i_{k}})\log \left( w_{j}-w_{k}\right)\\ -&c(\sum_{j=1}^{l}(\alpha_{i_{j}},\rho)w_{j}-(\omega_{1},\rho) z), \end{split} \end{equation}where $$i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j\leq n$;} \\ 2n-j+1, & \hbox{$n<j\leq 2n$.} \end{array} \right. $$ By this notation, when $l\geq n+1$, $\{w_{k}, w_{2n+1-k}\}_{2n+1-l\leq k\leq n}$ are pairs of symmetric coordinates of $\alpha_{k}$ in the function. Define $\bar{w}_{k}=w_{k}+w_{2n+1-k}$ and $\Delta_{k}=(w_{k}-w_{2n+1-k})^{2}$, $2n+1-l\leq k\leq n$.
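The inner product table above can be verified in the standard orthonormal realization of $B_{n}$, in which $\alpha_{i}=e_{i}-e_{i+1}$ for $i<n$, $\alpha_{n}=e_{n}$ and $\omega_{1}=e_{1}$, so that $\lambda^{i}=e_{i+1}$ for $i<n$, $\lambda^{n}=0$ and $\lambda^{i}=-e_{2n+1-i}$ for $i>n$. A small sketch (the coordinate realization is standard; the code itself is ours):

```python
def weights(n):
    """lambda^0, ..., lambda^{2n} of the B_n fundamental representation,
    as coordinate vectors in R^n."""
    def e(i):
        # standard basis vector e_i (1-indexed); e(0) is the zero vector
        return [1.0 if k == i - 1 else 0.0 for k in range(n)]
    lams = [e(i + 1) for i in range(n)]                      # lambda^0 .. lambda^{n-1}
    lams.append(e(0))                                        # lambda^n = 0
    lams += [[-x for x in e(2 * n + 1 - i)] for i in range(n + 1, 2 * n + 1)]
    return lams

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = 3
lam = weights(n)
for s in range(2 * n + 1):
    for t in range(2 * n + 1):
        # the table: 1 if s=t (s+t != 2n); 0 if s != t (s+t != 2n);
        #            0 if s=t (s+t = 2n); -1 if s != t (s+t = 2n)
        expected = (1 if s == t else 0) if s + t != 2 * n else (0 if s == t else -1)
        assert dot(lam[s], lam[t]) == expected
print("inner product table verified for n =", n)
```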
\begin{lemma}\label{lem:exsol} For the fundamental representation $V_{\omega_{1}}$ of the $B_{n}$ Lie algebra, the solutions of the critical point equation \eqref{CE2} of the corresponding Yang-Yang functions ${\bm W}_{c}({\bm w},0,\omega_{1},l)$ are as follows: When $l<n$, \begin{equation} w_{j}=\sum_{i=1}^{j}\frac{1}{c(l-i+1)}\quad j=1,...,l.\end{equation} When $l=n$, \begin{equation}w_{j}=\sum_{i=1}^{j}\frac{1}{c(l-i+1/2)}\quad j=1,...,l.\end{equation} When $l\geq n+1$, \begin{equation}w_{k}=\sum_{i=1}^{k}\frac{1}{c(l-i)}\end{equation} for $k=1,...,2n-l.$ For $2n+1-l\leq k\leq n$, \begin{equation}\begin{split}\bar{w}_{k}=&\frac{1}{c(l-n-1)}+\sum_{j=1}^{2n-l}\frac{2}{c(l-j)}+\sum_{j=2n-l+1}^{k-1}\frac{1}{c(l-j-1)}\\&+\sum_{j=2n-l+1}^{2n-k-1}\frac{1}{c(l-j-1)},\\ \Delta_{k}=&[\sum_{j=k}^{2n-k-1}\frac{1}{c(l-j-1)}]^2-\frac{1}{c^2(l-n-1)^2}.\end{split}\end{equation} \end{lemma} \begin{proof} The explicit solutions of the critical point equation of the Yang-Yang function ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ with ${\bm z}=(0,1)$, ${\bm \lambda}=(\lambda,\omega_{1})$ are already known in \cite{LMV16}. If we denote them by $w_{1}(\lambda),...,w_{l}(\lambda)$, then the solutions $\tilde{w}_{1},...,\tilde{w}_{l}$ of the critical point equation \eqref{CE2} of the Yang-Yang functions ${\bm W}_{c}({\bm w},0,\omega_{1},l)$ are just the limits of the critical solutions of ${\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ with the data ${\bm z}=(0,\mu)$, ${\bm \lambda}=(\mu c\rho,\omega_{1})$: $$\tilde{w}_{j}=\lim_{\mu\rightarrow +\infty}\mu(1-w_{j}(\mu c\rho)),\quad j=1,...,l.$$ Taking this limit, the explicit solutions of equation \eqref{CE2} follow straightforwardly. \end{proof} By this lemma, the following property is clear. \begin{lemma} $\Delta_{k}>0$ for $k=2n+1-l,...,n-1$ and $\Delta_{n}<0$.
The coordinates of the critical solutions satisfy the following order: When $l\leq n$, \begin{equation}0<w_{1}<w_{2}<...<w_{l};\end{equation} When $l\geq n+1$, assume $w_{k}<w_{2n+1-k}$ for $k=2n+1-l,...,n-1$; then \begin{equation}0<w_{1}<...<w_{2n-l}<w_{2n+1-l}<...<w_{n-1}<\frac{\bar{w}_{n}}{2}<w_{n+2}<...<w_{l}.\end{equation} \end{lemma} If the critical points of ${\bm W}_{c}({\bm w},0,\omega_{1},l)$ are ${\bm w}$, then the critical points of ${\bm W}_{c}({\bm w},z,\omega_{1},l)$ are $z+{\bm w}$. By the previous lemma, it is obvious that $z$ and $\{w_{k}\}_{k\neq n,n+1}$ are on the same horizontal line of the ${\bm W}_{c}$ plane, while $w_{n}$ and $w_{n+1}$ are vertically symmetric about the center point $\frac{\bar{w}_{n}}{2}$. In sum, we draw the distribution of the critical point near $z$ corresponding to $v_{\lambda^{i}}\in V_{\omega_{1}}$ in the different cases in figure \ref{fig:db}. \begin{figure} \centering\includegraphics[width=2.5cm]{mono-db1} \centering\includegraphics[width=3cm]{mono-db2}\\ \centering\includegraphics[width=6cm]{mono-db3}\\ \caption{Coordinate distribution of the $B_{n}$ critical point on the ${\bm W}_{c}$ plane near $z$.} \label{fig:db} \end{figure} In the following, we derive the monodromy representation for $v_{\lambda^{i}}\otimes v_{\lambda^{j}}\in V_{\omega_{1}}\otimes V_{\omega_{1}}$. It is convenient to consider the case $i+j\neq 2n$ first. i) $i+j\neq 2n\& i\geq j$ There is no wall-crossing: $$\boldsymbol{B}J_{i,j}=q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}J_{j,i}.$$ ii) $i+j\neq 2n\& i<j \& i\neq n$ By the same method used in the case of $A_{n}$, the variation is of type $II$.
\begin{equation} \begin{split} &\boldsymbol{B}J_{i,j}\\ =&q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}(J_{j,i}+q^{\frac{1}{2}(\lambda^{i},\lambda^{i}-\lambda^{j})}(q^{-(\lambda^{i},\lambda^{i}-\lambda^{j})}-1)J_{i,j})\\ =&J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}.\\ \end{split} \end{equation} \begin{figure} \centering \centering\includegraphics[width=3cm]{mono2-vanf}\\ \caption{Thimble with $c=0$ and its critical points $w^{1}_{n}$ and $w^{2}_{n}$.} \label{fig:vanf} \end{figure} \begin{figure} \centering\includegraphics[width=4.5cm]{mono2-vanf2}\\ \caption{Homotopic class of the thimble after rotation.} \label{fig:vanf2} \end{figure} \begin{figure} \centering\includegraphics[width=4.5cm]{mono-dbt1}\\ \caption{Type $I$ wall-crossing of $B_{n}$.} \label{fig:dbt1} \end{figure} \begin{figure} \centering\includegraphics[width=5.5cm]{mono-dbt12}\\ \caption{Combination of types $I$ and $II$.} \label{fig:dbt12} \end{figure} iii) $i+j\neq 2n\& i=n\&j=n+1$ \begin{lemma}\label{rlemma} Let $\bar{w}_{i}=w^{1}_{i}+w^{2}_{i}$, $i=1,2$.
Assume that $w^{1}_{1}\neq w^{2}_{1}$, $$\frac{2w^{1}_{1}-\bar{w}_{2}}{(w^{1}_{1})^{2}-\bar{w}_{2}w^{1}_{1}+w^{1}_{2}w^{2}_{2}}=A,\quad\frac{2w^{2}_{1}-\bar{w}_{2}}{(w^{2}_{1})^{2}-\bar{w}_{2}w^{2}_{1}+w^{1}_{2}w^{2}_{2}}=-A$$ and $$A(w^{1}_{1}-w^{2}_{1})\neq 2.$$ Then $$\bar{w}_{2}=\bar{w}_{1},\quad w^{1}_{2}w^{2}_{2}=w^{1}_{1}w^{2}_{1}+A^{-1}(w^{1}_{1}-w^{2}_{1}).$$ \end{lemma} \begin{proof} The equations imply $$A(w^{1}_{1}-w^{2}_{1})(\bar{w}_{1}-\bar{w}_{2})=2(\bar{w}_{1}-\bar{w}_{2}).$$ Then $A(w^{1}_{1}-w^{2}_{1})\neq 2$ implies $$\bar{w}_{2}=\bar{w}_{1},\quad w^{1}_{2}w^{2}_{2}=w^{1}_{1}w^{2}_{1}+A^{-1}(w^{1}_{1}-w^{2}_{1}).$$ \end{proof} By this lemma, the following critical point equation with $c=0$ can be solved: \begin{equation}\label{tsceB1} \left\{ \begin{aligned} \frac{1}{w^{1}_{n}-w^{1}_{n-1}}+\frac{1}{w^{1}_{n}-w^{2}_{n-1}}&=\frac{1}{w^{1}_{n}-w^{2}_{n}}\\ \frac{1}{w^{2}_{n}-w^{1}_{n-1}}+\frac{1}{w^{2}_{n}-w^{2}_{n-1}}&=\frac{1}{w^{2}_{n}-w^{1}_{n}}, \end{aligned} \right. \end{equation} where $w^{1}_{n},w^{2}_{n}$ are the coordinates of $\alpha_{n}$ and $w^{1}_{n-1},w^{2}_{n-1}$ those of $\alpha_{n-1}$. The solution is $\bar{w}_{n}=\bar{w}_{n-1},\quad w^{1}_{n}w^{2}_{n}=\frac{(\bar{w}_{n-1})^{2}-w^{1}_{n-1}w^{2}_{n-1}}{3}. $ If $Re w^{1}_{n-1}=Re w^{2}_{n-1}$, then $\Delta_{n}>0$. The thimble and its critical point are shown in figure \ref{fig:vanf}. When $c\rightarrow +\infty$, the condition $i=n\&j=n+1$ corresponds to the case where $w^{1}_{n},w^{2}_{n}$ tend to $w^{2}_{n-1}$. After the rotation of $w^{1}_{n-1}$ and $w^{2}_{n-1}$, the homotopic class of the thimble connecting $w^{2}_{n-1}$ with $\infty$ is shown in figure \ref{fig:vanf2}. Therefore, as shown in figure \ref{fig:dbt1}, the variation here is of type $I$, with only one primary root $\alpha_{n}$ moving.
\begin{equation} \boldsymbol{B}J_{i,j} =aJ_{j,i}+(c_{1}+c_{2}-b_{1}-b_{2})J_{i,j}, \end{equation} where $$b_{1}+b_{2}=a\cdot q^{\frac{1}{2}(\lambda^{n-1},\alpha_{n})}(1+q^{-\frac{1}{2}(\alpha_{n},\alpha_{n})})$$ is the sum of the phase factors of moving one of the two $\alpha_{n}$ to the right of $z_{1}$, and $$c_{1}+c_{2}=(b_{1}+b_{2})\cdot q^{-(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\alpha_{n},\alpha_{n})}$$ comes from the anti-clockwise rotation of $\alpha_{n}$: by $2\pi$ around $\{z_{1}, \alpha_{1}, ... , \alpha_{n-1}\}$ and by $\pi$ around $\alpha_{n}$. The minus sign before $b_{1}$ and $b_{2}$ comes from the orientation reversal of the one-dimensional sub-thimble connecting $z_{2}$ to $z_{1}$. Thus, \begin{equation} \boldsymbol{B}J_{i,j} =J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}. \end{equation} iv) $i+j\neq 2n\& i=n\&j>n+1$ The variation is a combination of types $I$ and $II$. As shown in figure \ref{fig:dbt12}, similar to the previous case, there are two possible ways of moving $\alpha_{n}$ from $z_{2}$ to $z_{1}$, but $\alpha_{n}$ is now accompanied horizontally by $\{\alpha_{n-1}, ..., \alpha_{2n+1-j}\}$. \begin{equation} \begin{split} a&=q^{-\frac{1}{2}(\lambda^{i},\lambda^{i})};\\ b_{1}+b_{2}&=a\cdot q^{\frac{1}{2}(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\lambda^{n},\lambda^{n+1}-\lambda^{j})}(1+q^{-\frac{1}{2}(\alpha_{n},\alpha_{n})});\\ c_{1}+c_{2}&=(b_{1}+b_{2})\cdot q^{-(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\alpha_{n},\alpha_{n})-(\lambda^{n},\lambda^{n+1}-\lambda^{j})};\\ \boldsymbol{B}J_{i,j}&=aJ_{j,i}+(c_{1}+c_{2}-b_{1}-b_{2})J_{i,j}\\ &=J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}.\\ \end{split} \end{equation} The minus sign always comes from the orientation reversal of the one-dimensional sub-thimble. In sum, when $i+j\neq 2n$, \begin{equation} \boldsymbol{B}J_{i,j}=\left\{ \begin{array}{ll} q^{-\frac{1}{2}}J_{j,i}, & \hbox{$i+j\neq 2n\&i=j$;} \\ J_{j,i}, & \hbox{$i+j\neq 2n\&i> j$;} \\ J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}, & \hbox{$i+j\neq 2n\&i<j$.} \end{array} \right.
\end{equation} When $i+j=2n$, the situation is more interesting. The critical point equation for two singularities $z_{1}$ and $z_{2}$ with $c=0$ is as follows: \begin{equation}\label{equ:BNSB} \left\{ \begin{aligned} 0&=\frac{-1}{w^{1}_{1}-z_{1}}+\frac{-1}{w^{1}_{1}-z_{2}}+\frac{2}{w^{1}_{1}-w^{2}_{1}}+\frac{-1}{w^{1}_{1}-w^{1}_{2}}+\frac{-1}{w^{1}_{1}-w^{2}_{2}}\\ 0&=\frac{-1}{w^{2}_{1}-z_{1}}+\frac{-1}{w^{2}_{1}-z_{2}}+\frac{2}{w^{2}_{1}-w^{1}_{1}}+\frac{-1}{w^{2}_{1}-w^{1}_{2}}+\frac{-1}{w^{2}_{1}-w^{2}_{2}}\\ ... \\ 0&=\frac{2}{w^{1}_{k}-w^{2}_{k}}+\frac{-1}{w^{1}_{k}-w^{1}_{k-1}}+\frac{-1}{w^{1}_{k}-w^{2}_{k-1}}+\frac{-1}{w^{1}_{k}-w^{1}_{k+1}}+\frac{-1}{w^{1}_{k}-w^{2}_{k+1}}\\ 0&=\frac{2}{w^{2}_{k}-w^{1}_{k}}+\frac{-1}{w^{2}_{k}-w^{1}_{k-1}}+\frac{-1}{w^{2}_{k}-w^{2}_{k-1}}+\frac{-1}{w^{2}_{k}-w^{1}_{k+1}}+\frac{-1}{w^{2}_{k}-w^{2}_{k+1}}\\ ... \\ 0&=\frac{1}{w^{1}_{n}-w^{2}_{n}}+\frac{-1}{w^{1}_{n}-w^{1}_{n-1}}+\frac{-1}{w^{1}_{n}-w^{2}_{n-1}}\\ 0&=\frac{1}{w^{2}_{n}-w^{1}_{n}}+\frac{-1}{w^{2}_{n}-w^{1}_{n-1}}+\frac{-1}{w^{2}_{n}-w^{2}_{n-1}},\\ \end{aligned} \right. \end{equation} where $2\leq k\leq n-1$ and $w^{1}_{i},w^{2}_{i}$ are the coordinates of $\alpha_{i}$, $1\leq i\leq n$. Summing up all the equations except the first pair gives $$\frac{1}{w^{1}_{1}-w^{1}_{2}}+\frac{1}{w^{1}_{1}-w^{2}_{2}}+\frac{1}{w^{2}_{1}-w^{1}_{2}}+\frac{1}{w^{2}_{1}-w^{2}_{2}}=0.$$ Let $A=\frac{2}{w^{1}_{1}-w^{2}_{1}}+\frac{-1}{w^{1}_{1}-w^{1}_{2}}+\frac{-1}{w^{1}_{1}-w^{2}_{2}}$; then $\frac{2}{w^{2}_{1}-w^{1}_{1}}+\frac{-1}{w^{2}_{1}-w^{1}_{2}}+\frac{-1}{w^{2}_{1}-w^{2}_{2}}=-A$. Using lemma \ref{rlemma} inductively, $\bar{w}_{1}=...=\bar{w}_{n}=z_{1}+z_{2}$. Then substituting $\bar{w}_{l}$ into \eqref{equ:BNSB}, we obtain the following solution: $$\bar{w}_{1}=...=\bar{w}_{n}=z_{1}+z_{2},\quad w^{1}_{l}w^{2}_{l}=z_{1}z_{2}+\frac{(z_{1}-z_{2})^{2}l(2n-l)}{4n^{2}-1},1\leq l\leq n.$$ It is clear that $\Delta_{k}>0$, $k=1,...,n-1$ and $\Delta_{n}<0$.
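The closed-form solution above can be spot-checked numerically against \eqref{equ:BNSB}. A sketch for $n=2$, where the system reduces to its first and last pairs of equations (the specialization and the sample values $z_{1}=0$, $z_{2}=1$ are ours):

```python
def pair(l, n, z1, z2):
    """w^1_l, w^2_l recovered from the closed form: their sum is z1 + z2 and
    their product is z1*z2 + (z1 - z2)^2 * l*(2n - l)/(4n^2 - 1)."""
    s = z1 + z2
    p = z1 * z2 + (z1 - z2) ** 2 * l * (2 * n - l) / (4.0 * n * n - 1.0)
    d = complex(s * s - 4.0 * p) ** 0.5  # purely imaginary when Delta_l < 0
    return (s + d) / 2.0, (s - d) / 2.0

n, z1, z2 = 2, 0.0, 1.0
(w11, w21), (w12, w22) = pair(1, n, z1, z2), pair(2, n, z1, z2)

# critical point equations for n = 2, c = 0 (w^a_1 for alpha_1, w^a_2 for alpha_2)
eqs = [
    -1/(w11 - z1) - 1/(w11 - z2) + 2/(w11 - w21) - 1/(w11 - w12) - 1/(w11 - w22),
    -1/(w21 - z1) - 1/(w21 - z2) + 2/(w21 - w11) - 1/(w21 - w12) - 1/(w21 - w22),
    1/(w12 - w22) - 1/(w12 - w11) - 1/(w12 - w21),
    1/(w22 - w12) - 1/(w22 - w11) - 1/(w22 - w21),
]
print(max(abs(e) for e in eqs))  # essentially zero
```

The pair for $l=1$ comes out real (on the segment between $z_{1}$ and $z_{2}$), while the pair for $l=n=2$ is a complex conjugate pair, matching the distribution described below.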
The distribution of the coordinates of the critical point is shown in figure \ref{fig:nsbb}. Similar to the case of one singularity in figure \ref{fig:db}, the coordinates are symmetric along the line connecting $z_{1}$ and $z_{2}$, except for $w^{1}_{n}$ and $w^{2}_{n}$, which sit vertically in the middle. Assume $Im w^{1}_{i}>Im w^{2}_{i},\quad i=1,...,n-1$. The different imaginary parts of $\{w^{1}_{i},w^{2}_{i}\}$ and $z_{1}, z_{2}$ give the coordinates a partial order: $$z_{1}<w^{1}_{1}<...<w^{1}_{n-1}<w^{1}_{n},w^{2}_{n}<w^{2}_{n-1}<...<w^{2}_{1}<z_{2}.$$ Because of the existence of the thimble above, when $c\rightarrow +\infty$, there is a wall-crossing for every $i\neq 0$ during the transformation $T$, and its homotopic class keeps this order. To be precise, we redefine the indices $i$ and $j$: let $i$ be the number of primary roots near $z_{2}$ and $j$ the number of primary roots crossing the wall. Assume that \begin{equation}\label{equ:cof} \boldsymbol{B}J_{a,b}=\sum_{c,d}\boldsymbol{B}^{c,d}_{a,b}J_{c,d}, \end{equation} then from the properties of wall-crossing, \begin{equation}\label{equ:Bwc} \boldsymbol{B}J_{2n-i,i}=\sum_{j=0}^{i}\boldsymbol{B}^{i-j,2n-i+j}_{2n-i,i}J_{i-j,2n-i+j}. \end{equation} The phase factor from the rotation gives $$\boldsymbol{B}^{i,2n-i}_{2n-i,i}=q^{-\frac{1}{2}(\lambda^{i},\lambda^{2n-i})}.$$ The coefficients $\boldsymbol{B}_{2n-i,i}^{i-j,2n-i+j} (j>0)$ of the wall-crossing formula can be computed in the following cases. i) $0<j\leq i\leq n-1$ Keeping the order of the coordinates, the motion of $\{\alpha_{i-j+1}, ..., \alpha_{i}\}$ from $z_{2}$ to $z_{1}$ is accompanied by a clockwise self-rotation by $\pi$; thus the variation is of type $III$. \begin{equation} \boldsymbol{B}J_{2n-i,i} =aJ_{i,2n-i}+((-1)^{j}b+(-1)^{j+1}c)J_{i-j,2n-i+j}+ ... . \end{equation} $a=q^{-\frac{1}{2}(\lambda^{i},\lambda^{2n-i})}$ comes from the rotation of $z_{1}$ and $z_{2}$.
$$b=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{2}(j-1)}$$ comes from the motion of $\{\alpha_{i-j+1}, ..., \alpha_{i}\}$ from $z_{2}$ to $z_{1}$. $c=b\cdot q^{-(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})}$ comes from the anti-clockwise rotation around $z_{1}$ and the $2n-i$ primary roots near $z_{1}$. Thus, \begin{equation}\label{eqbr1} \boldsymbol{B}_{2n-i,i}^{i-j,2n-i+j}=(-1)^{j}(b-c) =(-1)^{j}q^{\frac{j}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}}). \end{equation} \begin{figure} \centering\includegraphics[width=4.5cm]{cw-b1}\\ \caption{$0<j\leq i=n$.} \label{fig:cw-b1} \end{figure} ii) $0<j\leq i=n$ \begin{equation} \boldsymbol{B}J_{n,n} =aJ_{n,n}+((-1)^{j}b+(-1)^{j+1}c)J_{n-j,n+j}+ ... . \end{equation} $a=q^{-\frac{1}{2}(\lambda^{n},\lambda^{2n-n})}$ comes from the rotation of $z_{1}$ and $z_{2}$. $$b=a\cdot q^{\frac{1}{4}[(\lambda^{n-j},\lambda^{n-j}-\lambda^{n})+(\lambda^{2n-n},\lambda^{n-j}-\lambda^{n})]+\frac{1}{4}(\alpha_{n},\alpha_{n})+\frac{1}{2}(j-1)}$$ comes from the motion of $\{\alpha_{n-j+1}, ..., \alpha_{n}\}$ from $z_{2}$ to the position indicated by the green dotted circles in figure \ref{fig:cw-b1}. An additional anti-clockwise rotation to the position indicated by the red dotted circles gives $$c=b\cdot q^{-(\lambda^{2n-n},\lambda^{n-j}-\lambda^{n})-\frac{1}{2}(\alpha_{n},\alpha_{n})}.$$ This variation differs from type $III$ by the positions of the two $\alpha_{n}$. The variation of $\alpha_{n}$ itself is of type $I$. In this sense, we call the variation a combination of types $I$ and $III$. Thus, \begin{equation} \boldsymbol{B}_{n,n}^{n-j,n+j}=(-1)^{j}(b-c) =(-1)^{j}q^{\frac{j}{2}}(1-q^{-\frac{1}{2}}). \end{equation} \begin{figure} \centering\includegraphics[width=6cm]{cw-b2}\\ \caption{ $i>n\& i-j=n$.} \label{fig:cw-b2} \end{figure} iii) $i>n\& i-j=n$ As shown in figure \ref{fig:cw-b2}, because of the two $\alpha_{n}$, there are two possible ways of moving, which give the coefficients $b_{1}$ and $b_{2}$.
\begin{equation} \boldsymbol{B}J_{2n-i,i} =aJ_{i,2n-i}+((-1)^{j}(b_{1}+b_{2})+(-1)^{j+1}(c_{1}+c_{2}))J_{n,n}+ .... \end{equation} $b_{1}+b_{2}=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{2}(j-1)}(q^{\frac{1}{4}(\alpha_{n},\alpha_{n})}+q^{-\frac{1}{4}(\alpha_{n},\alpha_{n})})$. An additional anti-clockwise rotation around $z_{1}$ and the other $2n-i$ primary roots gives $c_{1}+c_{2}=(b_{1}+b_{2})\cdot q^{-(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})}$. The variation is also a combination of types $I$ and $III$. Thus, \begin{equation} \boldsymbol{B}_{2n-i,i}^{n,n} =(-1)^{j}q^{\frac{j}{2}+\frac{1}{4}}(q^{\frac{1}{4}}+q^{-\frac{1}{4}})(1-q^{-1}). \end{equation} iv) $i>n\& i-j>n$ The variation is of type $III$, and $\boldsymbol{B}_{2n-i,i}^{i-j,2n-i+j}$ is exactly the same as in \eqref{eqbr1}: $$\boldsymbol{B}_{2n-i,i}^{i-j,2n-i+j} =(-1)^{j}q^{\frac{j}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}}). $$ v) $i>n\& i-j<n\&i-j\neq 2n-i$ The variation is of type $III$, with $b=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{2}(j-2)}$ and $c=b\cdot q^{-(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})}$. \begin{equation} \boldsymbol{B}_{2n-i,i}^{i-j,2n-i+j} =(-1)^{j}q^{\frac{j}{2}}(1-q^{-1}). \end{equation} vi) $i>n\& i-j=2n-i\& i\neq n+1$ The variation is of type $IV$. $i-j=2n-i$ means that the moving primary roots have double multiplicity and are symmetrically distributed. As in type $III$, moving them from $z_{2}$ to $z_{1}$ is accompanied by a clockwise self-rotation by $\pi$. But because of the double multiplicity and symmetric distribution, the order of these primary roots gives three additional terms in the wall-crossing formula, as shown in figure \ref{fig:t4}.
\begin{equation} \begin{split} \boldsymbol{B}J_{2n-i,i} &=aJ_{i,2n-i}+((-1)^{j}b+(-1)^{j+1}c_{1}-c_{2}+c_{3})J_{2n-i,i}+ ...;\\ b&=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})]+n-i+j-\frac{3}{2}};\\ c_{1}&=b\cdot q^{-(\lambda^{2n-i},\lambda^{2n-i}-\lambda^{2n-i+1})};\\ c_{2}&=c_{3}\cdot q^{-(\lambda^{2n-i},\lambda^{2n-i}-\lambda^{2n-i+1})};\\ c_{3}&=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})]-(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})};\\ \boldsymbol{B}_{2n-i,i}^{2n-i,i} &=((-1)^{j}q^{n-i+j-\frac{1}{2}}-1)(q^{\frac{1}{2}}-q^{-\frac{1}{2}}).\\ \end{split} \end{equation} vii) $i=n+1\& j=2$ The variation is of type $I$. \begin{equation} \begin{split} \boldsymbol{B}J_{n-1,n+1}=&aJ_{n+1,n-1}+((-1)^{2}b+(-1)^{3}(c_{1}+c_{2})+(-1)^{4}c_{3})J_{n-1,n+1}+ ...;\\ b=&a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-i},\lambda^{i-j}-\lambda^{i})]-\frac{1}{2}};\\ c_{1}+c_{2}=&b\cdot q^{-(\lambda^{2n-i},\lambda^{2n-i}-\lambda^{2n-i+1})}(1+q^{\frac{1}{2}(\alpha_{n},\alpha_{n})});\\ c_{3}=&b\cdot q^{-2(\lambda^{2n-i},\lambda^{2n-i}-\lambda^{2n-i+1})+\frac{1}{2}(\alpha_{n},\alpha_{n})};\\ \boldsymbol{B}_{n-1,n+1}^{n-1,n+1} =&q-1-q^{\frac{1}{2}}+q^{-\frac{1}{2}}.\\ \end{split} \end{equation} In summary, \begin{equation}\label{Bn1} \begin{split} &\boldsymbol{B}_{2n-i,i}^{i-j,2n-i+j}\\ =&\left\{ \begin{array}{ll} (-1)^{j}q^{\frac{j}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}}) , & \hbox{$i>n\& i-j>n$;} \\ & \hbox{$0<j\leq i\leq n-1$;} \\ (-1)^{j}q^{\frac{j}{2}}(1-q^{-\frac{1}{2}}), & \hbox{$0<j\leq i= n$;} \\ (-1)^{j}q^{\frac{j}{2}+\frac{1}{4}}(q^{\frac{1}{4}}+q^{-\frac{1}{4}})(1-q^{-1}), & \hbox{$i>n\& i-j=n$;} \\ (-1)^{j}q^{\frac{j-1}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}}), & \hbox{$i>n\& i-j<n\&i-j\neq2n-i$;} \\ (q^{\frac{1}{2}}-q^{-\frac{1}{2}})((-1)^{j}q^{n-i+j-\frac{1}{2}}-1), & \hbox{$i>n\& i-j=2n-i$.} \end{array} \right.
\end{split} \end{equation} The monodromy representation is as follows: \begin{equation}\label{Bn2} \begin{split} &\boldsymbol{B}_{2n-a,a}^{2n-b,b}\\ =&\left\{ \begin{array}{ll} (-1)^{a+n}q^{\frac{a-n}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}})(1+q^{-\frac{1}{2}}), & \hbox{$a>n\&b=n$;} \\ (-1)^{a+b}(q^{\frac{1}{2}}-q^{-\frac{1}{2}})q^{-n+\frac{a+b}{2}}, & \hbox{$a<n\&b>n \hbox{ or } a>n\&b<n$;} \\ (q^{\frac{1}{2}}-q^{-\frac{1}{2}})((-1)^{a+b}q^{-n+\frac{a+b-1}{2}}-\delta_{a,b}), & \hbox{$a>n\&b>n$;} \\ (-1)^{n+b}q^{\frac{b-n}{2}}(1-q^{-\frac{1}{2}}) , & \hbox{$a=n\&b>n$.} \end{array} \right. \end{split} \end{equation} We have described the four types of variations in detail. The proofs for the following cases of $C_{n}$ and $D_{n}$ use similar methods, so we omit them except for some exceptional cases. \subsubsection{$C_{n}$}\label{subsubsct:C} Denote the weights of the fundamental representation of the $C_{n}$ Lie algebra by $$\lambda^{i}=\left\{ \begin{array}{ll} \lambda-\sum_{j=1}^{i}\alpha_{j}, & \hbox{$i\leq n$;} \\ \lambda-\sum_{j=1}^{n}\alpha_{j}-\sum_{j=2n-i}^{n}\alpha_{j}, & \hbox{$i> n$.} \end{array} \right.$$ Their inner products are: \begin{equation} (\lambda^{s},\lambda^{t})=\left\{ \begin{array}{ll} \frac{1}{2}, & \hbox{$s=t$;} \\ 0, & \hbox{$s+t\neq2n-1,s\neq t$;} \\ -\frac{1}{2}, & \hbox{$s+t=2n-1$,} \end{array} \right. \end{equation}where $s,t=0,1,..,2n-1$. For any weight vector $v_{\lambda^{l}}\in V_{\omega_{1}}$, the corresponding Yang-Yang function ${\bm W}_{c}({\bm w},z,\omega_{1},l)$ is as follows: \begin{equation} \begin{split} {\bm W}_{c}({\bm w},z,\omega_{1},l) =\sum _{j=1}^{l}(\alpha_{i_{j}},\omega_{1})\log \left( w_{j}-z\right) -&\sum_{1\leq j< k\leq l}(\alpha_{i_{j}},\alpha_{i_{k}}) \log \left( w_{j}-w_{k}\right)\\ -&c(\sum_{j=1}^{l}(\alpha_{i_{j}},\rho)w_{j}-(\omega_{1},\rho)z), \end{split} \end{equation}where $$i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j\leq n$;} \\ 2n-j, & \hbox{$n<j\leq 2n-1$.} \end{array} \right.
$$ By this notation, when $l\geq n+1$, $\{w_{k}, w_{2n-k}\}_{2n-l\leq k\leq n-1}$ are pairs of symmetric coordinates of $\alpha_{k}$ in the function. Define $\bar{w}_{k}=w_{k}+w_{2n-k}$ and $\Delta_{k}=(w_{k}-w_{2n-k})^{2}$, $2n-l\leq k\leq n-1$. \begin{lemma} For the fundamental representation $V_{\omega_{1}}$ of the $C_{n}$ Lie algebra, the solutions of the critical point equation \eqref{CE2} of the corresponding Yang-Yang functions ${\bm W}_{c}({\bm w},0,\omega_{1},l)$ are as follows: When $l<n$, $$w_{j}=\sum_{i=1}^{j}\frac{1}{c(l-i+1)}\quad j=1,...,l.$$ When $l=n$, $$w_{j}=\sum_{i=1}^{j}\frac{1}{c(n-i+2)}\quad j=1,...,n-1,$$ $$w_{n}=\frac{1}{c}+\sum_{i=1}^{n-1}\frac{1}{c(n-i+2)}.$$ When $l\geq n+1$, $$w_{k}=\sum_{i=1}^{k}\frac{1}{c(l+2-i)}$$ for $k=1,...,2n-l-1,$ $$w_{n}=\sum_{i=1}^{2n-l-1}\frac{1}{c(l+2-i)}+\sum_{i=2n-l}^{n}\frac{1}{c(l+1-i)}.$$ For $2n-l\leq k\leq n-1$, \begin{equation} \begin{split} \bar{w}_{k}=&\sum_{i=1}^{2n-l-1}\frac{2}{c(l-i+2)}+\sum_{i=2n-l}^{k-1}\frac{2}{c(l-i+1)}+\sum_{i=k}^{n}\frac{1}{c(l-i+1)}\\&+\sum_{i=k}^{n}\frac{1}{c(l-2n+i+1)}, \end{split}\end{equation} \begin{equation} \begin{split} \Delta_{k}=&[\sum_{i=k}^{n}\frac{1}{c(l-i+1)}+\sum_{i=k}^{n}\frac{1}{c(l-2n+i+1)}]\\ &\times[\sum_{i=k}^{n}\frac{1}{c(l-i+1)}+\sum_{i=k}^{n}\frac{1}{c(l-2n+i+1)}-\frac{4}{c(2l-2n+2)}]. \end{split}\end{equation} \end{lemma} \begin{lemma} For any $2n-l\leq k\leq n-1$, $\Delta_{k}>0$. Assume that $w_{k}<w_{2n-k}$ for $k=2n-l,...,n-1$; then for any $l>0$, the coordinates of critical solutions satisfy the following order: $$0<w_{1}<w_{2}<...<w_{l}.$$ \end{lemma} By this lemma, $z$ and the critical coordinates $\{w_{j}\}_{j=1,...,l}$ of ${\bm W}_{c}({\bm w},z,\omega_{1},l)$ are on the same horizontal line of the ${\bm W}$ plane as shown in figure \ref{fig:dc}.
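As an independent sanity check of the lemma (a sketch, not part of the proof), the closed-form coordinates for the branches $l<n$ and $l=n$ can be substituted back into the critical point equations with exact rational arithmetic. The Gram data below encode the inner products in the normalization above: $(\alpha_{i},\alpha_{i})=1$ for $i<n$, $(\alpha_{n},\alpha_{n})=2$, $(\alpha_{i},\alpha_{i+1})=-\frac{1}{2}$ for $i+1<n$, $(\alpha_{n-1},\alpha_{n})=-1$, $(\omega_{1},\alpha_{i})=\frac{1}{2}\delta_{i,1}$ and $(\alpha_{i},\rho)=\frac{1}{2}(\alpha_{i},\alpha_{i})$.

```python
from fractions import Fraction as F

def cn_gram(n):
    # Gram matrix (α_i, α_j) of the C_n simple roots, normalization (λ^s, λ^s) = 1/2.
    A = [[F(0)] * (n + 1) for _ in range(n + 1)]  # 1-based indexing
    for i in range(1, n + 1):
        A[i][i] = F(2) if i == n else F(1)
    for i in range(1, n):
        A[i][i + 1] = A[i + 1][i] = F(-1) if i == n - 1 else F(-1, 2)
    return A

def lemma_solution(n, l):
    # Closed-form critical coordinates from the lemma (branches l < n and l = n), c = 1.
    if l < n:
        return [sum(F(1, l - i + 1) for i in range(1, j + 1)) for j in range(1, l + 1)]
    w = [sum(F(1, n - i + 2) for i in range(1, j + 1)) for j in range(1, n)]
    return w + [F(1) + w[-1]]

def residuals(n, l):
    # Exact residuals of the critical point equations of W_c(w, 0, ω_1, l), c = 1:
    # (α_j, ω_1)/w_j - Σ_{k≠j} (α_j, α_k)/(w_j - w_k) - (α_j, ρ).
    A, w = cn_gram(n), lemma_solution(n, l)
    out = []
    for j in range(1, l + 1):
        r = F(1, 2) / w[0] if j == 1 else F(0)  # (α_j, ω_1) = δ_{j1}/2
        for k in range(1, l + 1):
            if k != j and A[j][k] != 0:
                r -= A[j][k] / (w[j - 1] - w[k - 1])
        r -= A[j][j] / 2  # (α_j, ρ) = (α_j, α_j)/2
        out.append(r)
    return out
```

With exact fractions every residual vanishes identically, e.g. for `residuals(5, 3)` and `residuals(5, 5)`.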
\begin{figure} \centering\includegraphics[width=3cm]{mono-dc1} \centering\includegraphics[width=6cm]{mono-dc2}\\ \caption{Coordinates distribution of the $C_{n}$ critical point on the ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ plane near $z$.} \label{fig:dc} \end{figure} To derive the monodromy representation, we first consider the case $v_{\lambda^{i}}\otimes v_{\lambda^{j}}\in V_{\omega_{1}}\otimes V_{\omega_{1}}, \quad i+j\neq 2n-1$. i) When $i+j\neq 2n-1\& i\geq j $, there is no wall-crossing, $$\boldsymbol{B}J_{i,j}=q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}J_{j,i}.$$ ii) When $i+j\neq 2n-1\& i<j$, the variation of the critical point in wall-crossing is of type $II$. \begin{equation} \begin{split} &\boldsymbol{B}J_{i,j}\\ =&q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}(J_{j,i}+q^{\frac{1}{2}(\lambda^{i},\lambda^{i}-\lambda^{j})}(q^{-(\lambda^{i},\lambda^{i}-\lambda^{j})}-1)J_{i,j})\\ =&J_{j,i}+(q^{-\frac{1}{4}}-q^{\frac{1}{4}})J_{i,j}.\\ \end{split} \end{equation} In sum, \begin{equation} \boldsymbol{B}J_{i,j}=\left\{ \begin{array}{ll} q^{-\frac{1}{4}}J_{j,i}, & \hbox{$i+j\neq 2n-1\&i=j$;} \\ J_{j,i}, & \hbox{$i+j\neq 2n-1\&i> j$;} \\ J_{j,i}+(q^{-\frac{1}{4}}-q^{\frac{1}{4}})J_{i,j}, & \hbox{$i+j\neq 2n-1\&i<j$.} \end{array} \right. \end{equation} When $i+j=2n-1$, the critical point equation of two singularities $z_{1}$ and $z_{2}$ with $c=0$ is as follows: \begin{equation}\label{equ:CNSB} \left\{ \begin{aligned} 0&=\frac{-\frac{1}{2}}{w^{1}_{1}-z_{1}}+\frac{-\frac{1}{2}}{w^{1}_{1}-z_{2}}+\frac{1}{w^{1}_{1}-w^{2}_{1}}+\frac{-\frac{1}{2}}{w^{1}_{1}-w^{1}_{2}}+\frac{-\frac{1}{2}}{w^{1}_{1}-w^{2}_{2}}\\ 0&=\frac{-\frac{1}{2}}{w^{2}_{1}-z_{1}}+\frac{-\frac{1}{2}}{w^{2}_{1}-z_{2}}+\frac{1}{w^{2}_{1}-w^{1}_{1}}+\frac{-\frac{1}{2}}{w^{2}_{1}-w^{1}_{2}}+\frac{-\frac{1}{2}}{w^{2}_{1}-w^{2}_{2}}\\ ...
\\ 0&=\frac{1}{w^{1}_{k}-w^{2}_{k}}+\frac{-\frac{1}{2}}{w^{1}_{k}-w^{1}_{k-1}}+\frac{-\frac{1}{2}}{w^{1}_{k}-w^{2}_{k-1}}+\frac{-\frac{1}{2}}{w^{1}_{k}-w^{1}_{k+1}}+\frac{-\frac{1}{2}}{w^{1}_{k}-w^{2}_{k+1}}\\ 0&=\frac{1}{w^{2}_{k}-w^{1}_{k}}+\frac{-\frac{1}{2}}{w^{2}_{k}-w^{1}_{k-1}}+\frac{-\frac{1}{2}}{w^{2}_{k}-w^{2}_{k-1}}+\frac{-\frac{1}{2}}{w^{2}_{k}-w^{1}_{k+1}}+\frac{-\frac{1}{2}}{w^{2}_{k}-w^{2}_{k+1}}\\ ... \\ 0&=\frac{1}{w^{1}_{n-1}-w^{2}_{n-1}}+\frac{-\frac{1}{2}}{w^{1}_{n-1}-w^{1}_{n-2}}+\frac{-\frac{1}{2}}{w^{1}_{n-1}-w^{2}_{n-2}}+\frac{-1}{w^{1}_{n-1}-w_{n}}\\ 0&=\frac{1}{w^{2}_{n-1}-w^{1}_{n-1}}+\frac{-\frac{1}{2}}{w^{2}_{n-1}-w^{1}_{n-2}}+\frac{-\frac{1}{2}}{w^{2}_{n-1}-w^{2}_{n-2}}+\frac{-1}{w^{2}_{n-1}-w_{n}}\\ 0&=\frac{-1}{w_{n}-w^{1}_{n-1}}+\frac{-1}{w_{n}-w^{2}_{n-1}},\\ \end{aligned} \right. \end{equation} where $2\leq k\leq n-2$. By lemma \ref{rlemma}, its solution is as follows: $$\bar{w}_{1}=\bar{w}_{k}=\bar{w}_{n-1}=z_{1}+z_{2},\quad w_{n}=\frac{z_{1}+z_{2}}{2}$$ and $$w^{1}_{l}w^{2}_{l}=z_{1}z_{2}+\frac{(z_{1}-z_{2})^{2}l(2n+1-l)}{4n(n+1)},\quad 1\leq l\leq n-1.$$ The distribution of the coordinates of the critical point is shown in figure \ref{fig:nsbc}. Similar to the case of one singularity in figure \ref{fig:dc}, the coordinates are symmetric on the same line connecting $z_{1}$ and $z_{2}$, which gives the coordinates of the thimble and the singularities $z_{1}$ and $z_{2}$ a total order:$$z_{1}<w^{1}_{1}<...<w^{1}_{n-1}<w_{n}<w^{2}_{n-1}<...<w^{2}_{1}<z_{2}.$$ Because of the existence of the thimble above, there is a wall-crossing whenever $i\neq 0$ and its homotopy class will keep this order. For $v_{\lambda^{i}}\otimes v_{\lambda^{j}}\in V_{\omega_{1}}\otimes V_{\omega_{1}}, \quad i+j=2n-1$, the coefficients of equation \eqref{equ:cof} are $$\boldsymbol{B}_{2n-1-i,i}^{i,2n-1-i}=q^{-\frac{1}{2}(\lambda^{2n-1-i},\lambda^{i})}=q^{\frac{1}{4}}$$ and $\boldsymbol{B}_{2n-1-i,i}^{i-j,2n-1-i+j} (j>0)$ can be computed as in the following cases.
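The closed-form solution can also be checked numerically (a standalone sketch): the code below rebuilds the pairs $(w^{1}_{l},w^{2}_{l})$ from their sum $z_{1}+z_{2}$ and the product above and substitutes them into every equation of \eqref{equ:CNSB}.

```python
import math

def cn_critical_point(n, z1=-1.0, z2=1.0):
    # Closed-form critical point: symmetric pairs (w^1_l, w^2_l) with sum z1+z2
    # and product z1*z2 + (z1-z2)^2 l(2n+1-l)/(4n(n+1)), plus the single w_n.
    s = z1 + z2
    pairs = []
    for l in range(1, n):
        p = z1 * z2 + (z1 - z2) ** 2 * l * (2 * n + 1 - l) / (4 * n * (n + 1))
        d = math.sqrt(s * s / 4 - p)  # half-distance of the symmetric pair
        pairs.append((s / 2 - d, s / 2 + d))
    return pairs, s / 2

def cn_residuals(n, z1=-1.0, z2=1.0):
    # Plug the closed form into every equation of \eqref{equ:CNSB} (c = 0).
    pairs, wn = cn_critical_point(n, z1, z2)
    coords = {}
    for l, (a, b) in enumerate(pairs, start=1):
        coords[(l, 1)], coords[(l, 2)] = a, b
    res = []
    for l in range(1, n):
        for sheet in (1, 2):
            w = coords[(l, sheet)]
            r = 1.0 / (w - coords[(l, 3 - sheet)])
            if l == 1:
                r += -0.5 / (w - z1) - 0.5 / (w - z2)
            else:
                r += -0.5 / (w - coords[(l - 1, 1)]) - 0.5 / (w - coords[(l - 1, 2)])
            if l == n - 1:
                r += -1.0 / (w - wn)
            else:
                r += -0.5 / (w - coords[(l + 1, 1)]) - 0.5 / (w - coords[(l + 1, 2)])
            res.append(r)
    res.append(-1.0 / (wn - coords[(n - 1, 1)]) - 1.0 / (wn - coords[(n - 1, 2)]))
    return res
```

All residuals vanish up to floating-point round-off for the sampled ranks.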
i) $0<j\leq i<n$ The variation is of type $III$. \begin{equation}\label{eqbr1C} \begin{split} b&=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{4}(j-1)};\\ c&=b\cdot q^{-(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})};\\ \boldsymbol{B}_{2n-1-i,i}^{i-j,2n-1-i+j} &=(-1)^{j}q^{\frac{j+1}{4}}(1-q^{-\frac{1}{2}}).\\ \end{split} \end{equation} ii)$1<j\leq i=n$ The variation is of type $III$. \begin{equation}\label{eqbr2C} \begin{split} b&=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{4}j};\\ c&=b\cdot q^{-(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})};\\ \boldsymbol{B}_{2n-1-i,i}^{i-j,2n-1-i+j} &=(-1)^{j}q^{\frac{j+2}{4}}(1-q^{-\frac{1}{2}}).\\ \end{split} \end{equation} iii) $i=n\& j=1$ The variation is of type $III$. \begin{equation} \begin{split} b&=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]};\\ c&=b\cdot q^{-(\lambda^{n-1},\lambda^{n-1}-\lambda^{n})};\\ \boldsymbol{B}_{n-1,n}^{n-1,n} &=q^{\frac{3}{4}}(-1+q^{-1}).\\ \end{split} \end{equation} iv) $n<i\leq 2n-1\& i-j\geq n$ Same as \eqref{eqbr1C}. v) $n<i\leq 2n-1\& i-j=n-1$ Same as \eqref{eqbr2C}. vi) $i>n\& i-j<n-1\& i-j\neq 2n-1-i$ The variation is of type $III$. \begin{equation} \begin{split} b&=a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{4}j};\\ c&=b\cdot q^{-(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})};\\ \boldsymbol{B}_{2n-1-i,i}^{i-j,2n-1-i+j} &=(-1)^{j}q^{\frac{j+2}{4}}(1-q^{-\frac{1}{2}}).\\ \end{split} \end{equation} vii) $i>n \& i-j<n-1\&i-j=2n-1-i$ The variation is of type $IV$. 
\begin{equation} \begin{split} b=&a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]+\frac{1}{2}(n-i+j-1)};\\ c_{1}=&b\cdot q^{-(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i-j+1})};\\ c_{2}=&a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]}\\ &\cdot q^{-(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i-j+1})};\\ c_{3}=&a\cdot q^{\frac{1}{4}[(\lambda^{i-j},\lambda^{i-j}-\lambda^{i})+(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})]-(\lambda^{2n-1-i},\lambda^{i-j}-\lambda^{i})};\\ \boldsymbol{B}_{2n-1-i,i}^{i-j,2n-1-i+j} =&(-1)^{j}q^{\frac{1}{2}(n-i+j)+\frac{1}{4}}(1-q^{-\frac{1}{2}})+q^{-\frac{1}{4}}(1-q^{\frac{1}{2}}).\\ \end{split} \end{equation} In sum, \begin{equation}\label{Cn1} \begin{split} &\boldsymbol{B}_{2n-1-i,i}^{i-j,2n-1-i+j}\\ =&\left\{ \begin{array}{ll} q^{\frac{j+1}{4}}((-1)^{j}+(-1)^{j+1}q^{-\frac{1}{2}}), & \hbox{$0<j\leq i<n$;} \\ q^{\frac{3}{4}}(q^{-1}-1), & \hbox{$1=j\leq i=n$;} \\ q^{\frac{j+2}{4}}((-1)^{j}+(-1)^{j+1}q^{-\frac{1}{2}}), & \hbox{$1<j\leq i=n$;} \\ q^{\frac{j+1}{4}}((-1)^{j}+(-1)^{j+1}q^{-\frac{1}{2}}), & \hbox{$n<i\leq 2n-1\&j>0\& i-j\geq n$;} \\ q^{\frac{j+2}{4}}((-1)^{j}+(-1)^{j+1}q^{-\frac{1}{2}}), & \hbox{$n<i\leq 2n-1\& i-j=n-1$;} \\ q^{\frac{j+2}{4}}((-1)^{j}+(-1)^{j+1}q^{-\frac{1}{2}}), & \hbox{$n<i\leq 2n-1$} \\ & \hbox{$\& 2n-1-i<i-j<n-1$;} \\ (1-(-1)^{j}q^{\frac{n-i+j}{2}})(q^{-\frac{1}{4}}-q^{\frac{1}{4}}), & \hbox{$n<i\leq 2n-1$} \\ & \hbox{$\&2n-1-i=i-j<n-1$;} \\ q^{\frac{j+2}{4}}((-1)^{j}+(-1)^{j+1}q^{-\frac{1}{2}}), & \hbox{$n<i\leq 2n-1\& i-j<2n-1-i$.} \end{array} \right. 
\end{split} \end{equation} The monodromy representation is as follows: \begin{equation}\label{Cn2} \begin{split} &\boldsymbol{B}_{2n-1-a,a}^{2n-1-b,b}\\ =&\left\{ \begin{array}{ll} (-1)^{a+b+1}q^{-\frac{2n-a-b-1}{4}}(q^{\frac{1}{4}}-q^{-\frac{1}{4}}), & \hbox{$a\leq n-1\& b\geq n$;} \\ &\hbox{$a\geq n\& b\leq n-1$;} \\ (-1)^{a+b+1}q^{-\frac{2n-a-b-2}{4}}(q^{\frac{1}{4}}-q^{-\frac{1}{4}})\\ -\delta_{a,b}(q^{\frac{1}{4}}-q^{-\frac{1}{4}}), & \hbox{$a\geq n\& b\geq n$.} \end{array} \right. \end{split} \end{equation} \subsubsection{$D_{n}$}\label{subsubsct:D} Denote the weights of the fundamental representation of the $D_{n}$ Lie algebra by \begin{align} &\lambda^{i}=\left\{ \begin{array}{ll} \lambda-\sum_{j=1}^{i}\alpha_{j}, & \hbox{$i\leq n$;} \\ \lambda-\sum_{j=1}^{n}\alpha_{j}-\sum_{j=2n-i}^{n}\alpha_{j}, & \hbox{$i> n$.}\\ \end{array} \right.\\ &\lambda^{{n-1}^{'}}=\lambda-\sum_{i=1}^{n-2}\alpha_{i}-\alpha_{n}. \end{align} For convenience of discussion, we define the order of $i=0,1,..,n-2,n-1,n-1',n,..2n-2$ as follows: \begin{equation} o(i)=\left\{ \begin{array}{ll} i, & \hbox{$i=0,1,..,n-2,n-1$;} \\ n, & \hbox{$i=n-1'$;} \\ i+1, & \hbox{$i=n,n+1,..,2n-2$.} \end{array} \right. \end{equation} Their inner products are: \begin{equation} (\lambda^{s},\lambda^{t})=\left\{ \begin{array}{ll} 1, & \hbox{$o(s)+o(t)\neq 2n-1,s=t$;} \\ 0, & \hbox{$o(s)+o(t)\neq 2n-1,s\neq t$;} \\ -1, & \hbox{$o(s)+o(t)=2n-1$,} \end{array} \right. \end{equation}where $s,t=0,1,..,n-2,n-1,n-1^{'},n,n+1,..,2n-2$. Every weight vector $v_{\lambda^{l}}\in V_{\omega_{1}}$ has a corresponding Yang-Yang function ${\bm W}_{c}({\bm w},z,\omega_{1},l)$.
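The stated inner products can be double-checked from the usual $\varepsilon$-basis realization of the vector representation (a sketch under the assumption $(\varepsilon_{i},\varepsilon_{j})=\delta_{ij}$, which reproduces the normalization $(\lambda^{s},\lambda^{s})=1$): $\lambda^{i}=\varepsilon_{i+1}$ for $i=0,...,n-1$, $\lambda^{n-1^{'}}=-\varepsilon_{n}$ and $\lambda^{i}=-\varepsilon_{2n-1-i}$ for $i\geq n$.

```python
def dn_weight_data(n):
    # Labels of the 2n weights in the order 0, ..., n-1, n-1', n, ..., 2n-2.
    labels = list(range(n)) + ["n-1'"] + list(range(n, 2 * n - 1))
    def o(lbl):  # the order o(i) defined above
        if lbl == "n-1'":
            return n
        return lbl if lbl <= n - 1 else lbl + 1
    def vec(lbl):  # ε-basis realization of λ^lbl as a length-n vector
        v = [0] * n
        if lbl == "n-1'":
            v[n - 1] = -1
        elif lbl <= n - 1:
            v[lbl] = 1
        else:
            v[2 * n - 2 - lbl] = -1
        return v
    return labels, o, vec

def check_inner_products(n):
    # Verify: (λ^s, λ^t) = -1 iff o(s)+o(t) = 2n-1, = 1 iff s = t, else 0.
    labels, o, vec = dn_weight_data(n)
    for s in labels:
        for t in labels:
            ip = sum(a * b for a, b in zip(vec(s), vec(t)))
            if o(s) + o(t) == 2 * n - 1:
                expected = -1
            elif s == t:
                expected = 1
            else:
                expected = 0
            if ip != expected:
                return False
    return True
```

The check passes for every rank tried, confirming the table above.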
When $l\neq n-1^{'}$, \begin{equation} \begin{split} {\bm W}_{c}({\bm w},z,\omega_{1},l) =\sum _{j=1}^{l}(\alpha_{i_{j}},\omega_{1})\log \left( w_{j}-z\right) -&\sum_{1\leq j< k\leq l}(\alpha_{i_{j}},\alpha_{i_{k}}) \log \left( w_{j}-w_{k}\right)\\ -&c(\sum_{j=1}^{l}(\alpha_{i_{j}},\rho)w_{j}-(\omega_{1},\rho) z), \end{split} \end{equation}where $$i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j\leq n$;} \\ 2n-1-j, & \hbox{$n+1\leq j\leq 2n-2$.} \end{array} \right. $$ When $l= n-1^{'}$, \begin{equation} \begin{split} {\bm W}_{c}({\bm w},z,\omega_{1},l) =\sum _{j=1}^{n-1}(\alpha_{i_{j}},\omega_{1})\log \left( w_{j}-z\right) -&\sum_{ j< k\leq n-1}(\alpha_{i_{j}},\alpha_{i_{k}}) \log \left( w_{j}-w_{k}\right)\\ -&c(\sum_{j=1}^{n-1}(\alpha_{i_{j}},\rho)w_{j}-(\omega_{1},\rho) z), \end{split} \end{equation}where $$i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j=1,...,n-2$;} \\ n, & \hbox{$j=n-1$.} \end{array} \right. $$ By this notation, when $l\geq n+1$, $\{w_{k}, w_{2n-1-k}\}_{2n-1-l\leq k\leq n-2}$ are pairs of symmetric coordinates of $\alpha_{k}$ in the function. Define $\bar{w}_{k}=w_{k}+w_{2n-1-k}$ and $\Delta_{k}=(w_{k}-w_{2n-1-k})^{2}$, $2n-1-l\leq k\leq n-2$. 
\begin{lemma} For the fundamental representation $V_{\omega_{1}}$ of the $D_{n}$ Lie algebra, the solutions of the critical point equation \eqref{CE2} of the corresponding Yang-Yang functions ${\bm W}_{c}({\bm w},0,\omega_{1},l)$ are as follows: When $l\leq n-1$, $$ w_{j}=\sum_{i=1}^{j}\frac{1}{c(l-i+1)}\quad j=1,...,l;$$ When $l=n-1^{'}$, $$w_{j}=\sum_{i=1}^{j}\frac{1}{c(n-i)}\quad j=1,...,n-1;$$ When $l=n$, $$w_{j}=\sum_{i=1}^{j}\frac{1}{c(n+1-i)}\quad j=1,...,n-2,$$ and $$w_{n-1}=w_{n}=\frac{1}{c}+\sum_{i=1}^{n-2}\frac{1}{c(n+1-i)};$$ When $l\geq n+1$, $$w_{k}=\sum_{i=1}^{k}\frac{1}{c(l+1-i)}$$ for $k=1,...,2n-2-l,$ $$w_{n-1}=w_{n}=\sum_{i=1}^{2n-2-l}\frac{1}{c(l+1-i)}+\sum_{i=2n-1-l}^{n-2}\frac{1}{c(l-i)}+\frac{1}{c(l+1-n)},$$ \begin{equation}\begin{split} \bar{w}_{k}=&\sum_{i=1}^{2n-l-2}\frac{2}{c(l+1-i)}+\sum_{i=2n-1-l}^{k-1}\frac{1}{c(l-i)}+\sum_{i=2n-1-l}^{n-2}\frac{1}{c(l-i)}\\ &+\frac{2}{c(l-n+1)}+\sum_{i=k}^{n-2}\frac{1}{c(l+i+2-2n)},\end{split}\end{equation} and \begin{equation}\begin{split}\Delta_{k}=&[\sum_{i=k}^{n-2}\frac{1}{c(l-i)}+\frac{2}{c(l-n+1)}+\sum_{i=k}^{n-2}\frac{1}{c(l+i+2-2n)}]\\&\times[\sum_{i=k}^{n-2}\frac{1}{c(l-i)} +\frac{2}{c(l-n+1)}+\sum_{i=k}^{n-2}\frac{1}{c(l+i+2-2n)}-\frac{4}{c(2l-2n+2)}],\end{split}\end{equation} for $2n-1-l\leq k\leq n-2$. \end{lemma} \begin{lemma} $\Delta_{k}>0$ for $k=2n-1-l,...,n-2$. Assume that $w_{k}<w_{2n-1-k}$; then the coordinates of critical solutions satisfy the following order: When $l\leq n-1$, $$0<w_{1}<w_{2}<...<w_{l};$$ When $l=n-1^{'}$, $$0<w_{1}<w_{2}<...<w_{n-1};$$ When $l\geq n$, $$0<w_{1}<...<w_{n-2}<w_{n-1}=w_{n}<w_{n+1}<...<w_{l}.$$ \end{lemma} By this lemma, $z$ and the critical coordinates $\{w_{j}\}_{j=1,...,l}$ of ${\bm W}_{c}({\bm w},z,\omega_{1},l)$ are on the same horizontal line of the ${\bm W}$ plane as shown in figure \ref{fig:dd}.
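As for $C_{n}$, the branches $l\leq n-1$ and $l=n$ of the lemma can be verified by direct substitution with exact rational arithmetic (a sketch; the $l=n-1^{'}$ branch is the same computation with $\alpha_{n-1}$ replaced by $\alpha_{n}$). The Gram data encode $(\alpha_{i},\alpha_{i})=2$, $(\alpha_{i},\alpha_{i+1})=-1$ for $i\leq n-2$, $(\alpha_{n-2},\alpha_{n})=-1$, $(\alpha_{n-1},\alpha_{n})=0$, $(\omega_{1},\alpha_{i})=\delta_{i,1}$ and $(\alpha_{i},\rho)=1$.

```python
from fractions import Fraction as F

def dn_gram(n):
    # Gram matrix (α_i, α_j) of the D_n simple roots, normalization (λ^s, λ^s) = 1;
    # α_{n-1} and α_n are both attached to α_{n-2}.
    A = [[F(0)] * (n + 1) for _ in range(n + 1)]  # 1-based
    for i in range(1, n + 1):
        A[i][i] = F(2)
    for i in range(1, n - 1):
        A[i][i + 1] = A[i + 1][i] = F(-1)
    A[n - 2][n] = A[n][n - 2] = F(-1)
    return A

def dn_solution(n, l):
    # Closed-form critical coordinates from the lemma, c = 1 (l <= n-1 and l = n).
    if l <= n - 1:
        return [sum(F(1, l - i + 1) for i in range(1, j + 1)) for j in range(1, l + 1)]
    w = [sum(F(1, n + 1 - i) for i in range(1, j + 1)) for j in range(1, n - 1)]
    return w + [F(1) + w[-1], F(1) + w[-1]]

def dn_residuals(n, l):
    # Exact residuals of the critical point equations at z = 0, c = 1;
    # zero Gram entries are skipped, which also handles the coincident
    # pair w_{n-1} = w_n (their inner product is 0).
    A, w = dn_gram(n), dn_solution(n, l)
    out = []
    for j in range(1, l + 1):
        r = F(1) / w[0] if j == 1 else F(0)  # (α_j, ω_1) = δ_{j1}
        for k in range(1, l + 1):
            if k != j and A[j][k] != 0:
                r -= A[j][k] / (w[j - 1] - w[k - 1])
        r -= F(1)  # (α_j, ρ) = 1
        out.append(r)
    return out
```

Every residual is exactly zero, e.g. for `dn_residuals(5, 3)` and `dn_residuals(5, 5)`.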
When $l=2n-2$, the critical point equation of two singularities $z_{1}$ and $z_{2}$ with $c=0$ is as follows: \begin{equation}\label{equ:DNSB} \left\{ \begin{aligned} 0&=\frac{-1}{w^{1}_{1}-z_{1}}+\frac{-1}{w^{1}_{1}-z_{2}}+\frac{2}{w^{1}_{1}-w^{2}_{1}}+\frac{-1}{w^{1}_{1}-w^{1}_{2}}+\frac{-1}{w^{1}_{1}-w^{2}_{2}}\\ 0&=\frac{-1}{w^{2}_{1}-z_{1}}+\frac{-1}{w^{2}_{1}-z_{2}}+\frac{2}{w^{2}_{1}-w^{1}_{1}}+\frac{-1}{w^{2}_{1}-w^{1}_{2}}+\frac{-1}{w^{2}_{1}-w^{2}_{2}}\\ ... \\ 0&=\frac{2}{w^{1}_{k}-w^{2}_{k}}+\frac{-1}{w^{1}_{k}-w^{1}_{k-1}}+\frac{-1}{w^{1}_{k}-w^{2}_{k-1}}+\frac{-1}{w^{1}_{k}-w^{1}_{k+1}}+\frac{-1}{w^{1}_{k}-w^{2}_{k+1}}\\ 0&=\frac{2}{w^{2}_{k}-w^{1}_{k}}+\frac{-1}{w^{2}_{k}-w^{1}_{k-1}}+\frac{-1}{w^{2}_{k}-w^{2}_{k-1}}+\frac{-1}{w^{2}_{k}-w^{1}_{k+1}}+\frac{-1}{w^{2}_{k}-w^{2}_{k+1}}\\ 0&=\frac{2}{w^{1}_{n-2}-w^{2}_{n-2}}+\frac{-1}{w^{1}_{n-2}-w^{1}_{n-3}}+\frac{-1}{w^{1}_{n-2}-w^{2}_{n-3}}\\&+\frac{-1}{w^{1}_{n-2}-w_{n-1}}+\frac{-1}{w^{1}_{n-2}-w_{n}}\\ 0&=\frac{2}{w^{2}_{n-2}-w^{1}_{n-2}}+\frac{-1}{w^{2}_{n-2}-w^{1}_{n-3}}+\frac{-1}{w^{2}_{n-2}-w^{2}_{n-3}}\\&+\frac{-1}{w^{2}_{n-2}-w_{n-1}}+\frac{-1}{w^{2}_{n-2}-w_{n}}\\ 0&=\frac{-1}{w_{n-1}-w^{1}_{n-2}}+\frac{-1}{w_{n-1}-w^{2}_{n-2}}\\ 0&=\frac{-1}{w_{n}-w^{1}_{n-2}}+\frac{-1}{w_{n}-w^{2}_{n-2}},\\ \end{aligned} \right. \end{equation} where $2\leq k\leq n-3$. By lemma \ref{rlemma}, the solution is as follows: $$\bar{w}_{1}=\bar{w}_{k}=\bar{w}_{n-2}=z_{1}+z_{2},w_{n-1}=w_{n}=\frac{z_{1}+z_{2}}{2}$$ and $$w^{1}_{l}w^{2}_{l}=z_{1}z_{2}+\frac{(z_{1}-z_{2})^{2}l(2n-1-l)}{4n(n-1)},\quad 1\leq l\leq n-2.$$ The distribution of the coordinates of the critical point is shown in figure \ref{fig:nsbd}.
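As in the $C_{n}$ case, this closed form can be checked by direct substitution (a numerical sketch): the pairs $(w^{1}_{l},w^{2}_{l})$ are rebuilt from their sum $z_{1}+z_{2}$ and the product above and plugged into every equation of \eqref{equ:DNSB}.

```python
import math

def dn_two_sing_residuals(n, z1=-1.0, z2=1.0):
    # Pairs (w^1_l, w^2_l), l = 1, ..., n-2, with sum z1+z2 and product
    # z1*z2 + (z1-z2)^2 l(2n-1-l)/(4n(n-1)); w_{n-1} = w_n = (z1+z2)/2.
    s = z1 + z2
    lo, hi = {}, {}
    for l in range(1, n - 1):
        p = z1 * z2 + (z1 - z2) ** 2 * l * (2 * n - 1 - l) / (4 * n * (n - 1))
        d = math.sqrt(s * s / 4 - p)
        lo[l], hi[l] = s / 2 - d, s / 2 + d
    wm = s / 2  # w_{n-1} = w_n
    res = []
    for l in range(1, n - 1):
        for w, other in ((lo[l], hi[l]), (hi[l], lo[l])):
            r = 2.0 / (w - other)
            if l == 1:
                r += -1.0 / (w - z1) - 1.0 / (w - z2)
            else:
                r += -1.0 / (w - lo[l - 1]) - 1.0 / (w - hi[l - 1])
            if l == n - 2:
                r += -2.0 / (w - wm)  # the two coincident coordinates w_{n-1}, w_n
            else:
                r += -1.0 / (w - lo[l + 1]) - 1.0 / (w - hi[l + 1])
            res.append(r)
    # row for w_{n-1}; the row for w_n is identical
    res.append(-1.0 / (wm - lo[n - 2]) - 1.0 / (wm - hi[n - 2]))
    return res
```

All residuals vanish up to floating-point round-off for the sampled ranks.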
Similar to the case of one singularity in figure \ref{fig:dd}, the coordinates are symmetric on the same line connecting $z_{1}$ and $z_{2}$, which gives the coordinates of the thimble and the singularities $z_{1}$ and $z_{2}$ a total order:$$z_{1}<w^{1}_{1}<...<w^{1}_{n-2}<w_{n-1}=w_{n}<w^{2}_{n-2}<...<w^{2}_{1}<z_{2}.$$ Because of the existence of the thimble above, there is a wall-crossing whenever $o(i)\neq 0$ and its homotopy class will keep this order. \begin{figure} \centering\includegraphics[width=3cm]{mono-dd1} \centering\includegraphics[width=3.9cm]{mono-dd2}\\ \centering\includegraphics[width=6.5cm]{mono-dd3}\\ \caption{Coordinates distribution of the $D_{n}$ critical point on the ${\bm W}_{c}({\bm w},{\bm z},{\bm \lambda},{\bm l})$ plane near $z$.} \label{fig:dd} \end{figure} First, we consider the case $v_{\lambda^{i}}\otimes v_{\lambda^{j}}\in V_{\omega_{1}}\otimes V_{\omega_{1}}, \quad o(i)+o(j)\neq 2n-1$. i) $o(i)+o(j)\neq 2n-1\& o(i)\geq o(j)$ There is no wall-crossing, $$\boldsymbol{B}J_{i,j}=q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}J_{j,i}.$$ ii) $o(i)+o(j)\neq 2n-1\& i<j \& i\neq n$ The variation is of type $II$. \begin{equation} \begin{split} &\boldsymbol{B}J_{i,j}\\ =&q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}(J_{j,i}+q^{\frac{1}{2}(\lambda^{i},\lambda^{i}-\lambda^{j})}(q^{-(\lambda^{i},\lambda^{i}-\lambda^{j})}-1)J_{i,j})\\ =&J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}.\\ \end{split} \end{equation} iii) $o(i)+o(j)\neq 2n-1\& i=n\&j=n+1$ The variation is of type $I$. As is shown in figure \ref{fig:dbt1}, $$b_{1}+b_{2}=a\cdot q^{\frac{1}{2}(\lambda^{n-1},\alpha_{n})}(1+q^{-\frac{1}{2}(\alpha_{n},\alpha_{n})})$$ is the sum of phase factors of moving one of the two $\alpha_{n}$ to the right of $z_{1}$. $$c_{1}+c_{2}=(b_{1}+b_{2})\cdot q^{-(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\alpha_{n},\alpha_{n})}$$ comes from the anti-clockwise rotation of $\alpha_{n}$: $2\pi$ around $\{z_{1},\alpha_{1}, ... , \alpha_{n-1}\}$ and $\pi$ around $\alpha_{n}$.
\begin{equation} \begin{split} &\boldsymbol{B}J_{i,j}\\ =&aJ_{j,i}+(c_{1}+c_{2}-b_{1}-b_{2})J_{i,j}\\ =&q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})}(J_{j,i}+q^{\frac{1}{2}(\lambda^{n-1},\alpha_{n})}(1+q^{-\frac{1}{2}(\alpha_{n},\alpha_{n})})(q^{-(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\alpha_{n},\alpha_{n})}-1)J_{i,j})\\ =&J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}.\\ \end{split} \end{equation} iv) $o(i)+o(j)\neq 2n-1\& i=n\&j>n+1$ The variation is a combination of types $I$ and $II$. As shown in figure \ref{fig:dbt12}, there are two possible ways of moving $\alpha_{n}$ from $z_{2}$ to $z_{1}$. But at this time, $\alpha_{n}$ is followed by $\{\alpha_{n-1}, ..., \alpha_{2n+1-j}\}$. \begin{equation} \begin{split} a=&q^{-\frac{1}{2}(\lambda^{i},\lambda^{j})};\\ b_{1}+b_{2}=&a\cdot q^{\frac{1}{2}(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\lambda^{n},\lambda^{n+1}-\lambda^{j})}(1+q^{-\frac{1}{2}(\alpha_{n},\alpha_{n})});\\ c_{1}+c_{2}=&(b_{1}+b_{2})\cdot q^{-(\lambda^{n-1},\alpha_{n})+\frac{1}{2}(\alpha_{n},\alpha_{n})-(\lambda^{n},\lambda^{n+1}-\lambda^{j})};\\ \boldsymbol{B}J_{i,j} =&aJ_{j,i}+(c_{1}+c_{2}-b_{1}-b_{2})J_{i,j}\\ =&J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}.\\ \end{split} \end{equation} In sum, \begin{equation} \boldsymbol{B}J_{i,j}=\left\{ \begin{array}{ll} q^{-\frac{1}{2}}J_{j,i}, & \hbox{$o(i)+o(j)\neq 2n-1\&i=j$;} \\ J_{j,i}, & \hbox{$o(i)+o(j)\neq 2n-1\&o(i)> o(j)$;} \\ J_{j,i}+(q^{-\frac{1}{2}}-q^{\frac{1}{2}})J_{i,j}, & \hbox{$o(i)+o(j)\neq 2n-1\&o(i)< o(j)$.} \end{array} \right. \end{equation} For $v_{\lambda^{i}}\otimes v_{\lambda^{j}}\in V_{\omega_{1}}\otimes V_{\omega_{1}}, \quad o(i)+o(j)= 2n-1$, the coefficients of equation \eqref{equ:cof} are $$\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{i,o^{-1}(2n-1-o(i))}=q^{-\frac{1}{2}(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{i})}=q^{\frac{1}{2}}.$$ Let $j>0$ be the difference of the orders; then $\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)}$ can be computed as in the following cases.
i) $0<j\leq o(i)<n$ or $o(i)>n\&o(i)-j\geq n$ The variation is of type $III$ and $j$ is the number of moving primary roots. \begin{equation} \begin{split} a=&q^{-\frac{1}{2}(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{i})};\\ b=&a\cdot q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]+\frac{j-1}{2}};\\ c=&b\cdot q^{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})}.\\ \end{split} \end{equation} $$\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)} =(-1)^{j}q^{\frac{j+1}{2}}(1-q^{-1}).$$ ii) $n<o(i)\leq 2n-1\& o(i)-j<n-1\& 2o(i)-j\neq 2n-1$ The variation is of type $III$. In this case, the number of primary roots moving from $z_{2}$ to $z_{1}$ is not $j$, but $j-1$. The sign before $b$ is $(-1)^{j-1}$ and the phase factor from the $-\pi$ self-rotation of the $j-1$ primary roots equals $q^{\frac{j-2}{2}}$. Thus, \begin{equation} \begin{split} b=&a\cdot q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]+\frac{j-2}{2}};\\ c=&b\cdot q^{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})}.\\ \end{split} \end{equation} $$\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)} =(-1)^{j-1}q^{\frac{j}{2}}(1-q^{-1}).$$ iii) $n<o(i)\& o(i)-j<n-1\& 2o(i)-j= 2n-1$ The variation is of type $IV$. The number of primary roots moving from $z_{2}$ to $z_{1}$ is $j-1$.
Thus, \begin{equation} \begin{split} b=&a q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]+n-o(i)+j-2};\\ c_{1}=&b\cdot q^{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{o^{-1}(o(i)-j+1)})};\\ c_{2} =&a\cdot q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]}\\ &q^{{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{o^{-1}(o(i)-j+1)})}};\\ c_{3}=&a\cdot q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]}\\ &q^{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})}.\\ \end{split} \end{equation} $$\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)} =(-1)^{j-1}q^{\frac{j}{2}}(1-q^{-1})+q^{-\frac{1}{2}}(1-q).$$ iv)$n<o(i)\& o(i)-j=n-1$ The variation is of type $III$. The number of primary roots moving from $z_{2}$ to $z_{1}$ is $j-1$. Thus, \begin{equation} \begin{split} b=&a\cdot q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]+\frac{j-2}{2}};\\ c=&b\cdot q^{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})}.\\ \end{split} \end{equation} \begin{equation} \begin{split} &\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)}\\ =&\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{n-1,n}\\ =&(-1)^{j-1}q^{\frac{j}{2}}(1-q^{-1}).\\ \end{split} \end{equation} v)$o(i)=n\& j\geq 2$ The variation is of type $III$. The number of primary roots moving from $z_{2}$ to $z_{1}$ is $j-1$. 
Thus, \begin{equation} \begin{split} b=&a\cdot q^{\frac{1}{4}[(\lambda^{o^{-1}(o(i)-j)},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})+(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})]+\frac{j-2}{2}};\\ c=&b\cdot q^{-(\lambda^{o^{-1}(2n-1-o(i))},\lambda^{o^{-1}(o(i)-j)}-\lambda^{i})}.\\ \end{split} \end{equation} $$\boldsymbol{B}_{o^{-1}(2n-1-o(i)),i}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)} =(-1)^{j-1}q^{\frac{j}{2}}(1-q^{-1}).$$ \begin{equation}\label{Dn1} \begin{split} &\boldsymbol{B}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)}_{o^{-1}(2n-1-o(i)),i}\\ =&\left\{ \begin{array}{ll} q^{\frac{j+1}{2}}(-1)^{j}(1-q^{-1}), & \hbox{$o(i)>n\&o(i)-j\geq n\hbox{ or }0<j\leq o(i)<n$;} \\ q^{\frac{j}{2}}(-1)^{j-1}(1-q^{-1}), & \hbox{$o(i)>n\&o(i)-j<n-1\&2o(i)-j\neq 2n-1$;} \\ q^{\frac{j}{2}}(-1)^{j-1}(1-q^{-1})\\ +q^{-\frac{1}{2}}-q^{\frac{1}{2}}, & \hbox{$o(i)>n\&o(i)-j<n-1\&2o(i)-j= 2n-1$;} \\ q^{\frac{j}{2}}(-1)^{j-1}(1-q^{-1}), & \hbox{$o(i)>n\&o(i)-j=n-1$;} \\ q^{\frac{j}{2}}(-1)^{j-1}(1-q^{-1}), & \hbox{$o(i)=n\&j\geq 2$.} \end{array} \right. \end{split} \end{equation} \begin{equation}\label{Dn2} \begin{split} &\boldsymbol{B}^{o^{-1}(o(i)-j),o^{-1}(2n-1-o(i)+j)}_{o^{-1}(2n-1-o(i)),i}\\ =&\left\{ \begin{array}{ll} q^{\frac{j+1}{2}}(-1)^{j}(1-q^{-1}), & \hbox{$o(i)>n\&o(i)-j\geq n\hbox{ or }0<j\leq o(i)<n$;} \\ q^{\frac{j}{2}}(-1)^{j-1}(1-q^{-1})\\ +\delta_{2o(i)-j,2n-1}(q^{-\frac{1}{2}}-q^{\frac{1}{2}}), & \hbox{$o(i)\geq n\&o(i)-j\leq n-1$;} \end{array} \right. 
\end{split} \end{equation} The monodromy representation is as follows: \begin{equation}\label{Dn3} \begin{split} &\boldsymbol{B}_{o^{-1}(2n-1-o(a)),a}^{o^{-1}(2n-1-o(b)),b}\\ =&\left\{ \begin{array}{ll} (-1)^{o(a)+o(b)-1}q^{-n+\frac{o(a)+o(b)+1}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}}), & \hbox{$o(a)\leq n-1\& o(b)\geq n$;} \\ &\hbox{$o(a)> n\& o(b)\leq n-1$;} \\ (-1)^{o(a)+o(b)-2}q^{-n+\frac{o(a)+o(b)}{2}}(q^{\frac{1}{2}}-q^{-\frac{1}{2}})\\ -\delta_{o(a),o(b)}(q^{\frac{1}{2}}-q^{-\frac{1}{2}}), & \hbox{$o(a)\geq n\& o(b)\geq n$.} \end{array} \right. \end{split} \end{equation} \subsection{Similarity transformation $Q$} By comparing the monodromy representation with the braid group representation induced by the universal R-matrices (see 7.3B and 7.3C of \cite{Chari}) of the quantum group $U_{h}(g)$, we have $\boldsymbol{B_{YY}}=Q\cdot \boldsymbol{B_{U_{h}(g)}}\cdot Q^{-1}$, $Q\in End(V_{\omega_{1}}\otimes V_{\omega_{1}})$. For $A_{n}$, $Q=Id$; For $B_{n}$, $Q=\left( \begin{array}{ccc} Id & & \\ & Q_{2n+1} & \\ & & Id \\ \end{array} \right) $, where \begin{equation} Q_{2n+1} =\left( \begin{array}{ccccccccc} (-1)^{0}& & & & & & \\ & ...& & & & & \\ & & (-1)^{n-1} & & & & \\ & & & \frac{(-1)^{n}}{q^{\frac{1}{4}}+q^{-\frac{1}{4}}} & & & \\ & & & & (-1)^{n+1}& & \\ & & & & &... & \\ & & & & & & (-1)^{2n} \\ \end{array} \right); \end{equation} For $C_{n}$ and $D_{n}$, $Q=\left( \begin{array}{ccc} Id & & \\ & Q_{2n} & \\ & & Id \\ \end{array} \right) $, where \begin{equation} Q_{2n} =\left( \begin{array}{ccccccccc} (-1)^{0}& & & & & & \\ & ...& & & & & \\ & & (-1)^{n-1} & & & & \\ & & & (-1)^{n-1}& & \\ & & & & ... & \\ & & & & & (-1)^{2n-2} \\ \end{array} \right). \end{equation} This proves the first main theorem. \section{Commutation of two parameter deformations}\label{sec:conclusion} In the previous section, we have studied the transformation $T(1)$ of parameters $z_{1}$ and $z_{2}$.
In this section, we consider the continuous deformation of the real parameter $c$ and its relation with $T(1)$. As shown in theorem \ref{1sing}, for the fundamental representation $V_{\omega_{1}}$ of each classical complex simple Lie algebra, the lowest weight vectors in $Sing V_{\omega_{1}}\otimes V_{\omega_{1}}$ are \begin{equation} \begin{split} A_{n}:&\quad v_{2\omega_{1}-\alpha_{1}}\\ B_{n}:&\quad v_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n}}\\ C_{n}:&\quad v_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n-1}-\alpha_{n}}\\ D_{n}:&\quad v_{2\omega_{1}-2\alpha_{1}-...-2\alpha_{n-2}-\alpha_{n-1}-\alpha_{n}}. \end{split} \end{equation} For $A_{n}$, the Yang-Yang function with $c=0$ corresponding to the singular vector $v_{2\omega_{1}-\alpha_{1}}$ is $${\bm W}({\bm w},{\bm z},{\bm \lambda},1)=\sum_{a=1}^{2}(\alpha_{1},\omega_{1})\log(w_{1}-z_{a})-(\omega_{1},\omega_{1})\log(z_{1}-z_{2}).$$ Its unique critical point is $w_{1}=\frac{z_{1}+z_{2}}{2}$. For $B_{n}$, $C_{n}$ and $D_{n}$, the corresponding critical point equations \eqref{equ:BNSB} \eqref{equ:CNSB} \eqref{equ:DNSB} are already solved. Thus, corresponding to the lowest weight vector in $Sing V_{\omega_{1}}\otimes V_{\omega_{1}}$ of each classical Lie algebra, there exists a unique thimble $J\subset \mathfrak{J}_{0}(P)$ for $\|e^{-\frac{{\bm W}}{k+h^{\vee}}}\|$ connecting $z_{1}$ and $z_{2}$. The distributions of the coordinates of the critical point on the ${\bm W}$ plane are shown in figures \ref{fig:nsba}, \ref{fig:nsbb}, \ref{fig:nsbc} and \ref{fig:nsbd} respectively. As shown in section \ref{subsct:bundle}, $\mathfrak{J}_{0}(P)$ is a $\mathbb{Z}[t,t^{-1}]$-module. Thus, $J$ naturally generates a one dimensional $\mathbb{Z}[t,t^{-1}]$-submodule of $\mathfrak{J}_{0}(P)$. We also denote by $\boldsymbol{B_{YY}}$ the monodromy representation induced by $T(1)$ on the one dimensional $\mathbb{Z}[t,t^{-1}]$-submodule generated by $J$.
\begin{figure} \centering\includegraphics[width=2.5cm]{cw-nsba}\\ \caption{Coordinates distribution of the $A_{n}$ critical point on the ${\bm W}$ plane at $c=0$.} \label{fig:nsba} \end{figure} \begin{figure} \centering\includegraphics[width=3.5cm]{cw-nsbb}\\ \caption{Coordinates distribution of the $B_{n}$ critical point on the ${\bm W}$ plane at $c=0$.} \label{fig:nsbb} \end{figure} \begin{figure} \centering\includegraphics[width=3cm]{cw-nsbc}\\ \caption{Coordinates distribution of the $C_{n}$ critical point on the ${\bm W}$ plane at $c=0$.} \label{fig:nsbc} \end{figure} \begin{figure} \centering\includegraphics[width=4cm]{cw-nsbd}\\ \caption{Coordinates distribution of the $D_{n}$ critical point on the ${\bm W}$ plane at $c=0$.} \label{fig:nsbd} \end{figure} Let $\alpha_{I}=\{\alpha_{i_{1}},\alpha_{i_{2}}, ...,\alpha_{i_{m}}\}$ be the multiset of all the primary roots in the lowest weight $2\omega_{1}-\alpha_{i_{1}}-\alpha_{i_{2}}...-\alpha_{i_{m}}$ and $I=\{i_{1},i_{2},...,i_{m}\}$ the sequence of their indices. Define $SI(\alpha_{I})=\sum_{i_{j},i_{s}\in I\&j<s}(\alpha_{i_{j}},\alpha_{i_{s}})$ as the sum of the pairwise inner products of the primary roots in $\alpha_{I}$. By a direct calculation for the phase factor difference from the rotation $T(1)$, we have \begin{lemma} \begin{equation}\label{equ:c0} \boldsymbol{B_{YY}}J={\bm d} J, \quad {\bm d}=q^{-\frac{1}{2}[(\omega_{1},\omega_{1})-2(\omega_{1},\sum_{i_{j}\in I} \alpha_{i_{j}})+SI(\alpha_{I})]}. \end{equation} For the $A_{n}$ Lie algebra, $m=1$, $i_{1}=1$, $$SI(\alpha_{I})=0.$$ For the $B_{n}$ Lie algebra, $m=2n$, $i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j\leq n$;} \\ 2n+1-j, & \hbox{$j>n$,} \end{array} \right.
$ $$SI(\alpha_{I})=-2(n-1)+1=-2n+3.$$ For the $C_{n}$ Lie algebra, $m=2n-1$, $i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j\leq n$;} \\ 2n-j, & \hbox{$j>n$,} \end{array} \right.$ $$SI(\alpha_{I})=-(n-2)-1=-n+1.$$ For the $D_{n}$ Lie algebra, $m=2n-2$, $i_{j}=\left\{ \begin{array}{ll} j, & \hbox{$j\leq n$;} \\ 2n-1-j, & \hbox{$j>n$,} \end{array} \right.$ $$SI(\alpha_{I})=-2(n-2)=-2n+4.$$ \end{lemma} Although theorem \ref{1prm} demands that $c\in \mathbb{Z}_{\geq0}$, the thimble structure can be defined continuously on $c\geq0$, thus we can consider the continuous deformation of $c\rightarrow+\infty$ from $c=0$. For any $J\subset \mathfrak{J}_{0}(P)$, when $c\rightarrow +\infty$, by lemma \ref{split}, the coordinates of its critical point tend either to $z_{1}$ or to $z_{2}$, thus it gives several possible thimbles $J_{a,b}\subset \mathfrak{J}(P)$. Multiply each thimble $J_{a,b}$ by the phase factor generated from the deformation and sum them up. Define the symmetry breaking transformation $\boldsymbol{S}$ by $$\boldsymbol{S}:\mathfrak{J}_{0}(P)\rightarrow\mathfrak{J}(P)$$ \begin{equation}\label{equ:bt} \boldsymbol{S} J=\sum_{a,b}e^{a,b}J_{a,b}, \end{equation} where the coefficients $e^{a,b}$ are the phase factor difference of $e^{-\frac{{\bm W}({\bm w},{\bm z},{\bm \lambda},{\bm l})}{\kappa+h^{\vee}}}$ in the process of $c\rightarrow +\infty$ from $0$. For $J\subset \mathfrak{J}_{0}(P)$ corresponding to the lowest weight vector in $Sing V_{\omega_{1}}\otimes V_{\omega_{1}}$, by a direct calculation, we have \begin{lemma} For $A_{n}$, $$\boldsymbol{S} J=q^{-\frac{1}{4}(\omega_{1},\alpha_{1})}J_{n,0}+q^{\frac{1}{4}(\omega_{1},\alpha_{1})}J_{0,n}=q^{-\frac{1}{4}}J_{n,0}+q^{\frac{1}{4}}J_{0,n}.$$ For $B_{n}$, \begin{equation} \begin{split} \boldsymbol{S} J=&\sum_{i<n}(-1)^{i}q^{-\frac{n-i}{2}+\frac{1}{4}}J_{2n-i,i}+(-1)^{n}(q^{-\frac{1}{4}}+q^{\frac{1}{4}})J_{n,n}\\ &+\sum_{i>n}(-1)^{i}q^{-\frac{n-i}{2}-\frac{1}{4}}J_{2n-i,i}.
\end{split} \end{equation} For $C_{n}$, $$\boldsymbol{S} J=\sum_{i<n}(-1)^{i}q^{-\frac{n-i}{4}}J_{2n-1-i,i}+\sum_{i\geq n}(-1)^{i}q^{-\frac{n-i-1}{4}}J_{2n-1-i,i}.$$ For $D_{n}$, \begin{equation} \begin{split} \boldsymbol{S} J=&\sum_{i<n-1}(-1)^{i}q^{-\frac{n-i-1}{2}}J_{2n-2-i,i}+\sum_{i=n}^{2n-2}(-1)^{i}q^{-\frac{n-1-i}{2}}J_{2n-2-i,i}\\ &+(-1)^{n-1}(J_{n-1,n-1'}+J_{n-1',n-1}). \end{split} \end{equation} \end{lemma} By the monodromy representation $\boldsymbol{B}_{a,b}^{c,d}$ in Section \ref{subsct:monodromy} and the lemma above, it is straightforward to prove that $(e^{a,b})$ is an eigenvector of the monodromy representation. \begin{lemma} \begin{equation}\label{eigenvector} \sum_{a,b}e^{a,b}\boldsymbol{B}^{c,d}_{a,b}={\bm d}e^{c,d}, \end{equation} where ${\bm d}$ is defined in \eqref{equ:c0}. \end{lemma} By this lemma, $$\boldsymbol{B_{YY}} \boldsymbol{S} J=\boldsymbol{B_{YY}}\sum_{a,b}e^{a,b}J_{a,b}=\sum_{a,b,c,d}e^{a,b}\boldsymbol{B}^{c,d}_{a,b}J_{c,d}={\bm d}\sum_{c,d}e^{c,d}J_{c,d}=\boldsymbol{S}\boldsymbol{B_{YY}}J.$$ The second main theorem, Theorem \ref{thm2}, follows. From equation \eqref{equ:bt}, it is natural to define a $\mathbb{Z}[t,t^{-1}]$-linear operator $$M: \mathbb{Z}[t,t^{-1}]\rightarrow V_{\omega_{1}}\otimes V_{\omega_{1}},$$ satisfying $$M(1)=\sum_{a,b}e^{a,b}v_{\lambda^{a}}\otimes v_{\lambda^{b}}.$$ We can also define a creation matrix associated with $M$ as $$M^{a,b}=e^{a,b}.$$ The annihilation matrix $M_{a,b}$ is its inverse, satisfying $$\sum_{b}M_{a,b}M^{b,c}=\delta_{a}^{c}.$$ With these two matrices, the quantum trace of any $(m,m)$ tensor $T_{i_{1}i_{2}...i_{m}}^{j_{1}j_{2}...j_{m}}$ is given by $$Tr_{q}T=\sum_{i_{1},i_{2},...i_{m},j_{1},j_{2},...,j_{m}}T_{i_{1}i_{2}...i_{m}}^{j_{1}j_{2}...j_{m}}\eta_{j_{1}}^{i_{1}}\eta_{j_{2}}^{i_{2}}...\eta_{j_{m}}^{i_{m}},$$ where $\eta^{i_{k}}_{j_{k}}=\sum_{l}M^{i_{k},l}M_{l,j_{k}}$.
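To make the index bookkeeping of this contraction concrete, here is a toy numeric sketch for a $(1,1)$ tensor. The matrices are hypothetical placeholders rather than the actual phase factors $e^{a,b}$; note that because $M_{a,b}$ is taken here as an exact matrix inverse, $\eta$ reduces to the identity in this toy setup.

```python
import numpy as np

# Hypothetical creation matrix M^{a,b}; in the paper its entries are the
# phase factors e^{a,b}, which we replace by placeholder numbers.
M_up = np.array([[2.0, 1.0],
                 [1.0, 1.0]])
# Annihilation matrix M_{a,b}: the inverse, so sum_b M_{a,b} M^{b,c} = delta_a^c.
M_down = np.linalg.inv(M_up)

# eta^{i}_{j} = sum_l M^{i,l} M_{l,j}; with an exact inverse this is the identity.
eta = M_up @ M_down

# Quantum trace of a (1,1) tensor T_i^j: Tr_q T = sum_{i,j} T_i^j eta_j^i.
# Storage convention: T[i, j] holds T_i^j and eta[i, j] holds eta^i_j.
T = np.array([[1.0, 3.0],
              [0.0, 5.0]])
tr_q = np.einsum('ij,ij->', T, eta)
print(tr_q)  # here equals the ordinary trace, 6.0
```

This only fixes the index conventions of the contraction; the actual invariants use the $q$-dependent phase factors $e^{a,b}$ from the lemma above.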
By Alexander's theorem (p. 91, I.7 of \cite{Kauffman}), the ambient knot invariants defined by contraction of $\boldsymbol{B}_{a,b}^{c,d}$, $M_{a,b}$ and $M^{a,b}$ in the decomposition of a knot projection diagram coincide with the quantum trace of the $(m,m)$ tensor associated with the braid. In \cite{HL1} and \cite{HL2}, the knot invariants associated with the fundamental representation of the $A_{n}$ Lie algebra and of the $B_{n},C_{n},D_{n}$ Lie algebras are proved to be the HOMFLY-PT polynomial and the Kauffman polynomial, respectively. \begin{remark} The knot invariant depends on the representation of the Lie algebra; different representations may give different knot invariants. In general, the corresponding knot invariant need not be the HOMFLY-PT or the Kauffman polynomial. We will focus on this point elsewhere. \end{remark} \bibliography{Yang-Yang} \bibliographystyle{unsrt} \end{document}
Jan Paul ten Bruggencate, one of our Dutch readers/contributors (see 1209) and a frequent pilgrim to the Holy Mountain, just came back from a 13-day visit (!) to Athos during Pentecost. He was willing to share some of his pictures with us, and in the coming days I will show them to you. Today I will start with the most remarkable photos. Novices (?) from the Athonias school at the Serail/Ag. Andreou playing a mini soccer game! And why not? School kids need exercise! As long as they're not planning to compete in the European or World Cup! One of the places Mr. Ten Bruggencate visited was the kellion Marouda. Here is a picture of him (80 years old!) with the monk/Elder Makarios of the small settlement (thanks Giannis). Wim, 27/6
@Joe -- Happy New Year to you. @Joanna -- there is so much wisdom built into children stories. I hope we will not stop reading them, even if there are no children to read them to. @Ricardo -- thank you for reading and linking. There will be a conversation on real estate here.
Posted by: Valeria Maltoni | January 03, 2008 at 11:42 AM
Happy New Year Valeria!
Posted by: Ricardo Bueno | January 01, 2008 at 02:46 PM
Happy New Year Valeria! Thanks for kicking it off with one of my favourite quotes, from one of my favourite books Joanna
Posted by: Joanna Young | January 01, 2008 at 02:28 PM
Today I photographed Jack and his new sister Ava, all of 7 weeks old. They were both a delight to work with, Jack especially being so well behaved. At times there seemed to be a wisdom beyond his years in Jack’s expression, which is what I love so much about the first image of him, the contemplative look as he addresses the camera. Ava looks as cute as a button wrapped up in her pink blankie; she reminds me of a rosebud. Can’t wait to show you the rest of the images next week. Craig
I believe our lives have soundtracks. I know a few scratchy vinyl albums from the 70s did more to shape my world view than 12 years of Catholic school. Walking through the Best Buy parking lot, CD in hand, I smile down at a familiar scene—laughing, red-bearded man drinking tea, silhouette of a woman with outstretched arms, big orange sun, distant thunderstorm. When I insert the disc and press Play, I’m no longer 45 years old in a Toyota Land Cruiser; I’m ten years old, in my cool older sister’s bedroom, listening to the first words of a new album she just bought from Korvettes. (Well I think it’s fine/Building jumbo planes) Turning onto I-95N, I’m in 5th grade, nestled under my Bambi quilt, gazing out at a familiar scene—bamboo curtains from Pier 1 in the doorway, Breyer horses arranged neatly in their barn, guinea pigs in a cage, Harriet the Spy on my nightstand. I stare off into space, just listening. (So on and on we go/As seconds tick the time out) At the Route 100 Exit, I’m in 6th grade, playing my album for the class at lunch. No one really likes it; they’re all into bubble-gummy bands like the Osmonds and the Jackson Five. I feel very groovy and sophisticated. The boys are sniggering at the lyrics. (Mary dropped her pants by the sand/And let a Parson come and take her hand) Turning left on Meadowridge Road, I’m in 8th grade, home from a dance at St. Johns. I’m buzzed from the Boone’s Farm Strawberry Hill I swigged from a mason jar in the church bathroom. I sit on the floor and pet the dog and stare at my lava lamp. (Miles from nowhere, guess I’ll take my time/Oh yeah, to reach there) Turning right on Landing Road, I’m in college, watching Harold and Maude for the first time in a bar with my best friend and her boyfriend. Later that year she’ll turn to me suddenly, on a beach in Mexico, pupils huge and black from LSD, and ask me why I slept with him. 
(Seagulls sing your hearts away/Cause while the sinners sin, the children play) Pulling into the Ilchester Elementary school parking lot, I’m back in the present. My nine-year old jumps in the car, casually tossing the CD case onto the floor, eager to show me her latest gimp creation. On the drive home she listens to the unfamiliar songs, shyly singing along with the easy parts. I smile—half ruefully, half hopefully—wondering what the soundtrack of her life will be. (Oooh, baby, baby, it’s a wild world….) Donate If you enjoyed this essay, please consider making a tax-deductible contribution to This I Believe, Inc.
Mills and Grinding
Quick, efficient pulverization and homogenization. High sample throughput due to short grinding times and two grinding stations. Reproducible results by digital preselection of grinding time and vibrational frequency. Large range of grinding jars.
Item No. Description Quantity
MU-RMM400 Mixer Mill MM400 for 100-240V, 50/60Hz 1
CGOLDENWALL 700g Safety Upgraded Electric Grain Grinder Mill, High-speed Spice/Herb Mill, Commercial Powder Machine for Dry Cereals, Grains and Herbs, CE (700g Swing Type). 4.3 out of 5 stars (405 ratings), 2 offers from $155.00.
Retsch Ultra Centrifugal Mill ZM 200.
If you have any problems with our product or service, please feel free to submit your inquiry in the form below. We will reply within 24 hours. Thank you!
\section{Deferred Proofs from Section~\ref{sec:sym}}\label{sec:deferred} \begin{proof}[Proof of Lemma~\ref{lemma:lengths}] Let $k_a$ be the length of the bitstring at node $a$. For any edge $e = (a, b)$ where $a$ is the parent, $k_b$ is equal to $k_a$ plus the difference between two binomial variables with $k_a$ trials and probability of success $\pID(e)$. Applying Azuma's inequality shows that $|k_b - k_a|$ is at most $2 \log n \sqrt{k_a}$, with probability $1 - n^{-\Omega(\log n)}$. Then fixing any node $a$ and applying this high probability statement to the at most $\log n$ edges on the path from the root to $a$ gives that $k < k_a < 4k$ with probability $1 - n^{-\Omega(\log n)}$. The lemma then follows from a union bound over all $n$ nodes. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:bitshifts}] Note that conditioned on the $j$th bit of $a$ not being deleted on an edge $e = (a, b)$ where $a$ is the parent, the number of bits by which it shifts on the edge $(a, b)$ is the difference between two binomial variables with $j$ trials and probability of success $\pID(e)$, which is at most $2 \log n\sqrt{j}$ with probability $1 - n^{-\Omega(\log n)}$. By Lemma~\ref{lemma:lengths}, with probability $1 - n^{-\Omega(\log n)}$ we know that $j \leq 4k$, so this is at most $4\log n \sqrt{k}$. Then, fixing any path and applying this observation to the at most $2 \log n$ edges on the path, a union bound gives that the sum of shifts is at most $4 \log^2 n \sqrt{k}$ with probability $1 - n^{-\Omega(\log n)}$. Applying a union bound to all $O(n^2)$ paths in the tree gives the lemma. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:uniformblocks}] For a fixed node $a$ and any consecutive sequence of length $m$, note that each bit in the bitstring is equally likely to be 0 or 1, by symmetry, and furthermore, the bits are independent since they cannot be inherited from the same bit in ancestral bitstrings.
The number of zeros in the sequence, $S_0$, can be expressed as a sum of $m$ i.i.d. Bernoulli variables. Therefore, by Azuma's inequality, we have $$\prob(S_0 - m/2 \geq t\sqrt{m}) \leq \exp(-\Omega(t^2))$$ Therefore, setting $t = O(\log n)$ and applying a union bound over all polynomially many nodes and consecutive sequences of at most $O(\plog)$ length by using Lemma~\ref{lemma:lengths} gives the result. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:unbiasedestimator}] Fixing any block $i$ of nodes $a, b$ and letting $\sigma_{a,j}$ denote the $j$th bit of the bitstring at $a$: \todo{Efe: Use $\pm 1$ instead of $0-1$ bits to simplify presentation} $$\ex[s_{a, i}s_{b, i}] = \frac{1}{l}\ex\left[\left(\sum_{j = (i-1)l+1}^{il} \sigma_{a,j} - \frac{l}{2}\right)\left(\sum_{j' = (i-1)l+1}^{il} \sigma_{b,j'} - \frac{l}{2}\right)\right]$$ $$= \frac{1}{l}\ex\left[\left(\sum_{j = (i-1)l+1}^{il} \left(\sigma_{a,j} - \frac{1}{2}\right)\right)\left(\sum_{j' = (i-1)l+1}^{il} \left(\sigma_{b,j'} - \frac{1}{2}\right)\right)\right]$$ $$ = \frac{1}{l}\ex\left[\sum_{j = (i-1)l+1}^{il}\sum_{j' = (i-1)l+1}^{il}\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] $$ $$ = \frac{1}{l}\sum_{j = (i-1)l+1}^{il}\sum_{j' = (i-1)l+1}^{il}\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] $$ Note that if bit $j$ of $a$'s bitstring and bit $j'$ of $b$'s bitstring are not shared, then their values are independent and in particular, since $\ex[\sigma_{a,j}] = 1/2$ for any $a, j$: $$\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] = \ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\right]\ex\left[\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] = 0$$ So we get: $$\ex[s_{a, i}s_{b, i}] = \frac{1}{l}\sum_{j,j'\textnormal{ shared by }a,b}\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right]$$ Now, note that $\left(\sigma_{a,j} - 
\frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)$ is $1/4$ if $\sigma_{a,j}, \sigma_{b,j'}$ are the same and $-1/4$ otherwise. When bit $j$ of $a$ and bit $j'$ of $b$ descended from the same bit, it is straightforward to show (\cite{warnow-textbook}) that the probability that the two bits agree is $\frac{1}{2}(1 + \prod_{e \in P_{a,b}} (1 - 2\pS(e)))$, giving that $\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] = \frac{1}{4}\prod_{e \in P_{a,b}} (1 - 2\pS(e))$ if $j, j'$ are shared bits of $a, b$. By the law of total probability, $\ex[s_{a, i}s_{b, i}]$ is the expected number of shared bits in block $i$ of $a$ and block $i$ of $b$ times $\frac{1}{4l}\prod_{e \in P_{a,b}} (1 - 2\pS(e))$. So all we need to do is compute the expected number of shared bits which are in block $i$ of $a$ and $b$. The $j$th bit in $lca(a,b)$ is in block $i$ of $a$ and $b$ if and only if two conditions are met: \begin{itemize} \item The $j$th bit is not deleted on the path from $lca(a,b)$ to $a$ or the path from $lca(a,b)$ to $b$. \item If the $j$th bit is not deleted on the paths from $lca(a,b)$ to $a$ and $b$, one of bits $(i-1)l+1$ to $il$ of both $a$'s bitstring and $b$'s bitstring is inherited from the $j$th bit of $lca(a,b)$. \end{itemize} Note that the second condition can be expressed independently of the $j$th bit being deleted, by considering the indel process on the paths from $lca(a,b)$ to $a$ and $b$, restricted to the first $j-1$ bits of $lca(a,b)$'s bitstring. If the restricted process produces a bitstring of length between $(i-1)l$ and $il-1$ (inclusive) at both $a$ and $b$, then this guarantees that if the $j$th bit is not deleted, it appears in a position between $(i-1)l+1$ and $il$. Thus the two conditions occur independently, and so the probability of both occurring is the product of the probabilities they occur individually.
The probability of the $j$th bit not being deleted on the path from $lca(a,b)$ to $a$ or the path from $lca(a,b)$ to $b$ is $\prod_{e \in P_{a,b}} (1 - \pD(e))$. We claim that the probability of the second condition occurring is close to 1 for most bits. \todo{Efe: Including a figure might help} For our fixed block $i$, call the $j$th bit of $lca(a,b)$ a \textit{good} bit if $j$ is between $(i-1)l+ 4\log^2 n\sqrt{k}$ and $il-4\log^2 n\sqrt{k}$ inclusive. Call the $j$th bit an \textit{okay} bit if $j$ is between $(i-1)l-4\log^2 n\sqrt{k}$ and $il+4\log^2 n\sqrt{k}$ inclusive but is not a good bit. If the $j$th bit is not good or okay, call it a \textit{bad} bit. Note that $4\log^2 n\sqrt{k} \leq l \cdot O(\log^{-\kappa\zeta+2} n)$, which is $o(l)$ if $\kappa$ is sufficiently large and $\zeta$ is chosen appropriately. Then, there are $l \cdot (1-O(\log^{-\kappa\zeta+2} n))$ good bits and $l \cdot O(\log^{-\kappa\zeta+2} n)$ okay bits for block $i$. Lemma~\ref{lemma:bitshifts} gives that with probability at least $1-n^{-\Omega(\log n)}$, the second condition holds for all good bits. Similarly, the second condition holds for bad bits with probability at most $n^{-\Omega(\log n)}$. For okay bits, we can lazily upper and lower bound the probability the second condition holds by $[0, 1]$. Together, the expected number of shared bits is $l \cdot (1 \pm O(\log^{-\kappa\zeta+2} n))\cdot \prod_{e \in P_{a,b}} (1 - \pD(e))$. Combining this with the previous analysis gives that $$ \ex[s_{a, i}s_{b, i}]=\frac{1}{4}(1 \pm O(\log^{-\kappa\zeta+2} n))\prod_{e \in P_{a,b}}(1-2\pS(e))(1-\pD(e)) $$ Rewriting this in exponential form and using the definition of $\lambda(e)$ and $d(a,b) = \sum_{e \in P_{a,b}} \lambda(e)$ concludes our proof.
By rearranging terms and applying Lemma~\ref{lemma:unbiasedestimator} we get: \begin{equation}\label{eq:sdtest1} \begin{aligned} \prob&[-\ln(4\Dt(u, v)) > d(u,v) + \epsilon] \\ =\prob&[4\Dt(u, v) < e^{- d(u,v) - \epsilon}] \\ =\prob&[\Dt(u, v) < \frac{1}{4}e^{- d(u,v)} - \frac{1}{4}(1 - e^{- \epsilon})e^{- d(u,v)}] \\ =\prob&[\Dt(u, v) < \frac{1}{4}(1 \pm O(\log^{-\kappa\zeta+2} n))e^{- d(u,v)} - \frac{1}{4}(1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))e^{- d(u,v)}] \\ =\prob&[\Dt(u, v) < \ex[\Dt(u,v)] - \frac{1}{4}(1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))e^{- d(u,v)}] \\ \end{aligned} \end{equation} We now want to apply Azuma's inequality to the martingale $X_{i'} = \ex[\Dt(u, v) | \{s_{a,2i+1}s_{b,2i+1}\}_{0 \leq i < i'}]$ (note that $X_0 = \ex[\Dt(u, v)]$ and $X_{L/2+1} = \Dt(u, v)$). However, the values e.g. $s_{a, i}s_{b, i}$ and $s_{a, i'}s_{b, i'}$ are not independent since block $i$ of $a$ and block $i'$ of $b$ might share a bit (or vice-versa). Furthermore, the trivial upper bound on the difference in $\ex[\Dt(u, v)]$ induced by conditioning on one extra value of $s_{a, 2j+1}s_{b, 2j+1}$ is $l/2L$, which is too large for Azuma's to give the desired bound. Instead, note that conditioned on $\cond_{reg}$, the values $s_{a, i}s_{b, i}$ and $s_{a, i'}s_{b, i'}$ are independent if $|i - i'| \geq 2$, since block $i$ of any node and block $i'$ of any node cannot share a bit, i.e. the value of any bit in block $i$ of any node is independent of the value of any bit in block $i'$ of any node conditioned on $\cond_{reg}$. In addition, conditioned on $\cond_{reg}$ no $s_{a, i}s_{b, i}$ exceeds $O(\log^2 n)$ in absolute value. 
Then, since $\Dt(u, v)$ never exceeds $\log^{O(1)} n$ in magnitude, $\ex[\Dt(u, v)]$ and $\ex[\Dt(u, v)|\cond_{reg}]$ differ by $n^{-\Omega(\log n)}$, giving: \begin{equation}\label{eq:sdtest2} \begin{aligned} &\prob[\Dt(u, v) < \ex[\Dt(u, v)] - (1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))(\frac{1}{4}e^{- d(u,v)})]\\ =&\prob[\Dt(u, v) < \ex[\Dt(u, v) | \cond_{reg}] - (1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))(\frac{1}{4}e^{- d(u,v)}) ] \\ \leq &\prob[\Dt(u, v) < \ex[\Dt(u, v) | \cond_{reg}] - (1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))(\frac{1}{4}e^{- d(u,v)}) |\cond_{reg}] + \prob[\lnot \cond_{reg}]\\ \leq &\prob[\Dt(u, v) < \ex[\Dt(u, v) | \cond_{reg}] - (1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))(\frac{1}{4}e^{- d(u,v)}) |\cond_{reg}] + n^{-\Omega(\log n)} \end{aligned} \end{equation} Now conditioned on $\cond_{reg}$, the difference in $\ex[\Dt(u, v)]$ induced by conditioning on an additional value of $s_{a, 2i+1}s_{b, 2i+1}$ is $O(\log^2 n/L)$, and now Azuma's and an appropriate choice of $\kappa$ gives: \begin{equation}\label{eq:sdtest3} \begin{aligned} & \prob[\Dt(u, v) < \ex[\Dt(u, v) | \cond_{reg}] - (1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))\frac{1}{4}e^{- d(u,v)} |\cond_{reg}] \\ \leq &\exp\left(-\frac{((1 - e^{- \epsilon} \pm O(\log^{-\kappa\zeta+2} n))\frac{1}{4}e^{- d(u,v)} )^2}{(L/ 2 - 1)O(\log^2 n/L)^2}\right) \\ = & \exp(-\Omega(L \epsilon e^{-2 d(u,v)}/\log^4 n))\\ \leq & \exp(-\Omega(\epsilon \log^{\kappa(1/2-\zeta)-2 \delta - 4}n)) \\ \leq& n^{-\Omega(\log n)} \end{aligned} \end{equation} Combining \eqref{eq:sdtest1}, \eqref{eq:sdtest2}, and \eqref{eq:sdtest3} gives the desired bound. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:recursiveestimator}] We will implicitly condition on $\cond_{reg}$ in all expectations in the proof of the lemma. Let $S_1$ be the set of bits in block $i$ of $a$ which appear in $x$'s bitstring. 
Let $S_2$ be the set of bits which appeared anywhere in $a$'s bitstring except block $i$ of $a$'s bitstring and which are in block $i$ of $x$'s bitstring. Let $S_3$ be the set of bits from block $i$ of $a$ which appear in $x$'s bitstring outside of block $i$ (note that $S_3$ is a subset of $S_1$). For $j \in \{1, 2, 3\}$, consider the values of the bits in $S_j$ in $x$'s bitstring. Let $s_{x, i}^{(j)}$ denote the number of zeroes in the bits in $S_j$ minus $|S_j|/2$, all times $1/\sqrt{l}$. Note that because all bits which are present in $x$ but not in $a$ are uniformly random conditioned on $\mathcal{E}$, $\ex[s_{x,i}] = \ex[s_{x,i}^{(1)} + s_{x,i}^{(2)} - s_{x,i}^{(3)}]$. Informally, this equality says that the bits determining $s_{x,i}$ are going to be those in block $i$ of $a$ that survive the deletion process, except those that are moved out of block $i$ by the indel process, and also including bits moved into block $i$ by the indel process. By a similar argument to that in Lemma~\ref{lemma:unbiasedestimator}, $\ex[s_{x,i}^{(1)}]$ is exactly $s_{a,i}e^{- d(a,x)}$, by taking into account the probability a bit survives $h$ levels of the indel process times the decay in the expectation of every bit's contribution induced by the substitution process. Now, consider the bits in $S_2$. For bit $j$ of $a$'s bitstring, such that bit $j$ is not in the $i$th block, let $\sigma_{a,j}$ be the value of this bit in $a$'s bitstring as before. Let $\rho_j$ denote the probability that this bit appears in block $i$ of $x$, conditioned on $j$ not being deleted between $a$ and $x$. The expected contribution of the bit $j$ to $s_{x,i}^{(2)}$ is then $(\sigma_{a,j}-1/2)\rho_j e^{-d(a,x)} / \sqrt{l}$ (the $e^{-d(a,x)}$ is again due to decay in expected contribution induced by substitution and deletion).
Now, by linearity of expectation: $$\ex[s_{x,i}^{(2)}] = \frac{e^{-d(a,x)}}{\sqrt{l}}\left[\sum_{j < (i-1)l+1} (\sigma_{a,j}-1/2)\rho_j + \sum_{j > il} (\sigma_{a,j}-1/2)\rho_j\right]$$ We will restrict our attention to showing that the sum $\sum_{j > il} (\sigma_{a,j}-1/2)\rho_j$ is sufficiently small; the analysis of the other sum is symmetric. Since $\prob(\cond_{reg}|\cond) > 1- O(n^{-\Omega(\log n)})$, we know that for $j > il + 4\log^{2}(n)\sqrt{k} =: j^*$, $\rho_j = n^{-\Omega(\log n)}$. So: $$\sum_{j > il} (\sigma_{a,j}-1/2)\rho_j = \sum_{j = il+1}^{j^*} (\sigma_{a,j}-1/2)\rho_j + n^{-\Omega(\log n)}$$ Then define $d_j := \rho_j - \rho_{j+1}$, $\sigma^*_{j'} = \sum_{j = il+1}^{j'} (\sigma_{a,j} - 1/2)$, and note that by the regularity assumptions $\cond_{reg}$, $\sigma_{j}^*$ is at most $\log n \sqrt{j - il + 1} \leq \log n \sqrt{j^* - il + 1} \leq 2 \log^2 n k^{1/4}$ for all $j$ in the above sum. Also note that for all $j$ in the above sum, $\rho_j$ is decreasing in $j$, so the $d_j$ are all positive and their sum is at most 1. Then: $$\sum_{j > il} (\sigma_{a,j}-1/2)\rho_j = \sum_{j = il+1}^{j^*} d_j \sigma^*_{j} + n^{-\Omega(\log n)}$$ $$\implies |\sum_{j > il} (\sigma_{a,j}-1/2)\rho_j| \leq \max_{il+1 \leq j \leq j^*} |\sigma^*_{j}| + n^{-\Omega(\log n)} \leq 2 \log^2 n k^{1/4}$$ Thus $|\ex[s_{x,i}^{(2)}]| \leq 4 \log^2 n k^{1/4}/\sqrt{l}$. A similar argument shows $|\ex[s_{x,i}^{(3)}]| \leq 4 \log^2 n k^{1/4}/\sqrt{l}$, completing the proof. \end{proof} \section{Deferred Proofs from Section~\ref{sec:assym}}\label{sec:deferred-assym} \todo{Efe: Avoid rewrites} \begin{proof}[Proof of Lemma~\ref{lemma:unbiasedestimator-asym}] Fix any block $i$ of nodes $a, b$ and let $\sigma_{a,j}$ denote the $j$th bit of the bitstring at $a$.
Then analogously to the symmetric case: $$\ex[s_{a, i}s_{b, i}] = \frac{1}{\sqrt{l_a}\sqrt{l_b}}\sum_{j = (i-1)l_a+1}^{il_a}\sum_{j' = (i-1)l_b+1}^{il_b}\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] $$ And if the $j$th bit of $a$, $j'$th bit of $b$ are not shared, $\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] = 0$, so we get: $$\ex[s_{a, i}s_{b, i}] = \frac{1}{\sqrt{l_a}\sqrt{l_b}}\sum_{j,j'\textnormal{ shared by }a,b}\ex\left[\left(\sigma_{a,j} - \frac{1}{2}\right)\left(\sigma_{b,j'} - \frac{1}{2}\right)\right] =$$ $$\frac{1}{\sqrt{l_a}\sqrt{l_b}}\sum_{j,j'\textnormal{ shared by }a,b}\frac{1}{4}\prod_{e \in P_{a,b}} (1 - 2\pS(e))$$ So as before, we compute the expected number of shared bits which are in block $i$ of $a$ and $b$ and then apply the law of total probability. The probability that the $j$th bit in the least common ancestor of $a, b$, $a \wedge b$, is in block $i$ of $a$ and $b$ is the probability that it is not deleted on the paths from $a \wedge b$ to $a, b$ times the probability that the indels on the paths from $a \wedge b$ to $a, b$ cause it to appear in the $i$th block of $a, b$. Recall that the $j$th bit in $a \wedge b$ is in block $i$ of $a$ and $b$ if and only if the following two conditions are met: \begin{itemize} \item The $j$th bit is not deleted on the path from $a \wedge b$ to $a$ or the path from $a \wedge b$ to $b$. \item If the $j$th bit is not deleted on the paths from $a \wedge b$ to $a$ and $b$, one of bits $(i-1)l_a+1$ to $il_a$ of $a$'s bitstring and one of bits $(i-1)l_b+1$ to $il_b$ of $b$'s bitstring is inherited from the $j$th bit of $a \wedge b$'s bitstring. \end{itemize} The probability of the $j$th bit not being deleted on the path from $a \wedge b$ to $a$ or the path from $a \wedge b$ to $b$ is again $\prod_{e \in P_{a,b}} (1 - \pD(e))$. We again classify bits in the bitstring at $a \wedge b$ as good, okay, or bad.
For block $i$, call the $j$th bit of $a \wedge b$ a \textit{good} bit if $j$ is between $(i-1)l_{a \wedge b}+ 4\log^2 n\sqrt{k}\eta(a \wedge b)$ and $il_{a \wedge b}- 4\log^2 n\sqrt{k}\eta(a \wedge b)$ inclusive. Call the $j$th bit an \textit{okay} bit if $j$ is between $(i-1)l_{a \wedge b}- 4\log^2 n\sqrt{k}\eta(a \wedge b)$ and $il_{a \wedge b}+ 4\log^2 n\sqrt{k}\eta(a \wedge b)$ inclusive but is not a good bit. If the $j$th bit is not good or okay, call it a \textit{bad} bit. Note that $4\log^2 n \sqrt{k}\eta(a \wedge b) \leq l_{a \wedge b} \cdot O(\log^{-\kappa\zeta+2} n)$, which is $o(l_{a \wedge b})$ if $\kappa$ is sufficiently large and $\zeta$ is chosen appropriately. Then, there are $l_{a \wedge b} \cdot (1-O(\log^{-\kappa\zeta+2} n))$ good bits and $l_{a \wedge b} \cdot O(\log^{-\kappa\zeta+2} n)$ okay bits for block $i$. Lemma~\ref{lemma:bitshifts-asym} gives that with probability at least $1-n^{-\Omega(\log n)}$, the second condition holds for all good bits. Similarly, the second condition holds for bad bits with probability at most $n^{-\Omega(\log n)}$. For okay bits, we can lazily upper and lower bound the probability the second condition holds by $[0, 1]$. Together, the expected number of shared bits is $l_{a \wedge b} \cdot (1 \pm O(\log^{-\kappa\zeta+2} n))\cdot \prod_{e \in P_{a,b}} (1 - \pD(e))$. Note that since the multiset union of $P_{r, a}$ and $P_{r, b}$ contains every edge in $P_{r, a \wedge b}$ twice and every edge in $P_{a, b}$ once: $$l_{a \wedge b} =\sqrt{l_{a} l_{b} \cdot \prod_{e \in P_{a,b}} (1 + \pI(e) - \pD(e))^{-1}}$$ Then combined with the previous analysis this gives: $$ \ex[s_{a, i}s_{b, i}]=\frac{1}{4}(1 \pm O(\log^{-\kappa\zeta+2} n))\prod_{e \in P_{a,b}}(1-2\pS(e))(1-\pD(e))(1 + \pI(e) - \pD(e))^{-1/2} $$ Rewriting this in exponential form and using the definition of $\lambda(e)$ and $d(a,b) = \sum_{e \in P_{a,b}} \lambda(e)$ concludes our proof.
\end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:recursiveestimator-asym}] We will implicitly condition on $\mathcal{E}$ in all expectations in the proof of the lemma. Analogously to the proof of Lemma~\ref{lemma:recursiveestimator}, let $S_1$ be the set of bits in block $i$ of $a$ which appear in $x$'s bitstring, let $S_2$ be the set of bits which appeared anywhere in $a$'s bitstring except block $i$ of $a$'s bitstring and which are in the $i$th pseudo-block of $x$'s bitstring, and let $S_3$ be the set of bits from block $i$ of $a$ which appear in $x$'s bitstring outside of the $i$th pseudo-block (note that $S_3$ is a subset of $S_1$). For $j \in \{1, 2, 3\}$, consider the values of the bits in $S_j$ in $x$'s bitstring. Let $s_{x, i}^{(j)}$ denote the number of zeroes in the bits in $S_j$ minus $|S_j|/2$, all times $1/\sqrt{l_x}$. Note that because all bits which are present in $x$ but not in $a$ are uniformly random conditioned on $\mathcal{E}$, $\ex[\st_{x,i}] = \frac{\sqrt{l_x}}{\sqrt{l_x'}} \ex[s_{x,i}^{(1)} + s_{x,i}^{(2)} - s_{x,i}^{(3)}]$. Informally, this equality says that the bits determining $s_{x,i}$ are going to be those in block $i$ of $a$ that survive the deletion process, except those that do not appear in pseudo-block $i$ because of the indel process, and also including bits moved into pseudo-block $i$ by the indel process. By Lemma~\ref{lemma:lengths-asym}, $\frac{\sqrt{l_x}}{\sqrt{l_x'}} = (1 \pm O(\log^{1/2-\kappa/4} n))$. So it suffices to prove $\ex[s_{x,i}^{(1)} + s_{x,i}^{(2)} - s_{x,i}^{(3)}] = e^{-d(x,a)}(s_{a,i} + \delta_{a,i}')$ where $|\delta_{a,i}'| = O(\log^{5/2-\kappa\zeta/2} (n))$, since, conditioned on $\cond_{reg}$, $s_{a,i}$ is at most $\log n$ in absolute value, so $ \frac{\sqrt{l_x}}{\sqrt{l_x'}}(s_{a,i} + \delta_{a,i}') = (s_{a,i} + \delta_{a,i})$ where $|\delta_{a,i}| = O(\log^{5/2-\kappa\zeta/2} (n))$.
By a similar argument to Lemma~\ref{lemma:unbiasedestimator-asym}, $\sqrt{l_x}\ex[s_{x,i}^{(1)}] = \sqrt{l_a} s_{a,i}\prod_{e \in P_{a,x}}(1-2\pS(e))(1-\pD(e))$ and thus $\ex[s_{x,i}^{(1)}] = s_{a,i}e^{-d(x,a)}$. Now, consider the bits in $S_2$. For bit $j$ of $a$'s bitstring, such that bit $j$ is not in the $i$th block, let $\sigma_{a,j}$ be the value of this bit in $a$'s bitstring as before. Let $\rho_j$ denote the probability that this bit appears in pseudo-block $i$ of $x$, conditioned on $j$ not being deleted between $a$ and $x$. The expected contribution of the bit $j$ to $\sqrt{l_x}s_{x,i}^{(2)}$ is then $(\sigma_{a,j}-1/2)\rho_j \prod_{e \in P_{a,x}}(1-2\pS(e))(1-\pD(e))$. Now, by linearity of expectation: $$\ex[s_{x,i}^{(2)}] = \frac{\prod_{e \in P_{a,x}}(1-2\pS(e))(1-\pD(e))}{\sqrt{l_x}}\left[\sum_{j < (i-1)l_a+1} (\sigma_{a,j}-1/2)\rho_j + \sum_{j > il_a} (\sigma_{a,j}-1/2)\rho_j\right]$$ $$= e^{-d(x,a)} \cdot \frac{1}{\sqrt{l_a}} \left[\sum_{j < (i-1)l_a+1} (\sigma_{a,j}-1/2)\rho_j + \sum_{j > il_a} (\sigma_{a,j}-1/2)\rho_j\right]$$ We will restrict our attention to showing that the sum $\sum_{j > il_a} (\sigma_{a,j}-1/2)\rho_j$ is sufficiently small; the analysis of the other sum is symmetric. Since $\prob(\cond_{reg}|\cond) > 1- O(n^{-\Omega(\log n)})$, and using Lemma~\ref{lemma:lengths-asym}, we know that for $j > il_a + 4\log^{2}(n)\sqrt{k_r}\eta(a) + L \cdot l_a \cdot \frac{\log^{\beta/2+2} n}{\log^{\kappa/2 - 2} n} =: j^*$, $\rho_j = n^{-\Omega(\log n)}$ (recall from the proof of Lemma~\ref{lemma:signatureestimation} that the term $L \cdot l_x \cdot \frac{\log^{\beta/2+2} n}{\log^{\kappa/2 - 2} n}$ is an upper bound on the offset between the $i$th block and pseudo-block of $x$ conditioned on $\cond_{reg}$. $L \cdot l_a \cdot \frac{\log^{\beta/2+2} n}{\log^{\kappa/2 - 2} n}$ is this offset, rescaled by the ratio of block lengths $l_a/l_x$).
So: $$\sum_{j > il_a} (\sigma_{a,j}-1/2)\rho_j = \sum_{j = il_a+1}^{j^*} (\sigma_{a,j}-1/2)\rho_j + n^{-\Omega(\log n)}$$ Then define $d_j := \rho_j - \rho_{j+1}$ and $\sigma^*_{j'} := \sum_{j = il_a+1}^{j'} (\sigma_{a,j} - 1/2)$, and note that by the regularity assumptions $\cond_{reg}$, $\sigma_{j}^*$ is at most $\log n \sqrt{j - il_a + 1} \leq \log n \sqrt{j^* - il_a + 1} \leq O(\log^{\beta/4+3} (n) k_r^{1/4} \sqrt{\eta(a)})$ for all $j$ in the above sum. Also note that for all $j$ in the above sum, we can assume $\rho_j$ is decreasing in $j$: otherwise, we can reorder the values of $j$ in the sum so that $\rho_j$ is decreasing in this order, and redefine the $d_j$ values accordingly. By this assumption, the $d_j$ are all positive and their sum is at most 1. Then: $$\sum_{j > il_a} (\sigma_{a,j}-1/2)\rho_j = \sum_{j = il_a+1}^{j^*} d_j \sigma^*_{j} + n^{-\Omega(\log n)}$$ $$\implies \left|\sum_{j > il_a} (\sigma_{a,j}-1/2)\rho_j\right| \leq \max_{il_a+1 \leq j \leq j^*} |\sigma^*_{j}| + n^{-\Omega(\log n)} \leq O(\log^{\beta/4+3} (n) k_r^{1/4} \sqrt{\eta(a)})$$ Thus $|\ex[s_{x,i}^{(2)}]| \leq e^{-d(a,x)} \cdot O(\log^{\beta/4+3} (n) k_r^{1/4} \sqrt{\eta(a)})/\sqrt{l_a}$. Note that $\eta(a)/l_a = 1/l_r = O(\log^{-\kappa/2-\kappa\zeta} (n))$, giving that $|\ex[s_{x,i}^{(2)}]| = e^{-d(a,x)} \cdot O(\log^{\beta/4+3-\kappa\zeta/2} (n))$. A similar argument shows $|\ex[s_{x,i}^{(3)}]| = e^{-d(a,x)} \cdot O(\log^{\beta/4+3-\kappa\zeta/2} (n))$, completing the proof. \end{proof}
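The rearrangement from $\sum_j (\sigma_{a,j}-1/2)\rho_j$ to $\sum_j d_j \sigma^*_j$ used in the proof above is Abel summation (summation by parts). For the reader's convenience, here is the generic identity; the placeholder symbols $u_j$, $U_{j'}$, $m$, $M$ are ours, with $u_j$ playing the role of $\sigma_{a,j}-1/2$ and $U_{j'}$ that of $\sigma^*_{j'}$:

```latex
% Abel summation: for sequences (u_j), (rho_j), with partial sums
% U_{j'} := \sum_{j=m}^{j'} u_j and differences d_j := \rho_j - \rho_{j+1},
\[
  \sum_{j=m}^{M} u_j \rho_j
  \;=\; \sum_{j=m}^{M} U_j \left(\rho_j - \rho_{j+1}\right) \;+\; U_M \rho_{M+1}
  \;=\; \sum_{j=m}^{M} d_j U_j \;+\; U_M \rho_{M+1}.
\]
```

In the proof, $m = il_a+1$ and $M = j^*$; the boundary term $U_M \rho_{M+1}$ is absorbed into the $n^{-\Omega(\log n)}$ error since $\rho_j = n^{-\Omega(\log n)}$ for $j > j^*$, and $\sum_j |d_j| \leq 1$ yields the bound by $\max_j |U_j|$.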
Batman versus the Evil Definitions

Batman once said, “It’s not who I am underneath, but what I do that defines me.” Can we apply this same reasoning when defining key terms of our data models? It can be a daunting task to define a simple term, such as “Customer,” but it could be a lot easier (and maybe just as effective) to instead define what these terms do. So can we define something in terms of what that something does, instead of what something “is”? For example, someone recently shared with me that because his project could not come up with an agreed-upon definition for Customer, he defined Customer as any organization that opens a contract with his company. So any organization with a date in the Contract Open Date field is a customer. (I am simplifying this example, but you get the idea.)

The Challenge

Does defining the actions something performs solve our definition issues? Or are we instead adding complexities, for example, assigning more than one meaning to the same data element (e.g., Contract Open Date is used both for the actual contract date and to distinguish customers from other organization roles)? Have you ever defined a term by what that term does instead of what that term “is”? If yes, were you satisfied with the outcome?

The Response

The responses from the Design Challengers can be grouped into three categories: those that recommend defining something by its actions, those that recommend defining something by its actions as part of the solution, and those that do not recommend defining something by its actions. Below I chose two responses from each of these three categories, followed by a summary of what I learned from this challenge.

Defining Something by its Actions is an Effective Technique

Madhu Sumkarpalli, business intelligence consultant, says, “I think defining the term based on its action is a better idea. That way we can be specific about it or at least close to specific, rather than being generic and abstract.
I think the ‘thing’ is what it does. Of course, one can define ‘bird’ based on just its general characteristics, which would put it in the animal kingdom. However, based on its action, we can arrive at a specific definition that would define it appropriately and paint the proper picture, even if others haven’t seen it.” Vikas S. Rajput, database specialist, says, “Yes, there are certain times when we have to take an approach where not the ‘defined role’ or definition, but the action of the actor, defines the term in our data model. I will give you an example. At the front desk at a hospital when a patient is admitted, somebody needs to log in the patient details. That person could be anyone when you have kiosks all over the place (as happened with one of our clients). Here you don’t need to define the role, per se, you only need to capture the ‘first attendant’ details, which could be an employee ID.”

Defining Something by its Actions is Part of the Solution

Amarjeet Virdi, data architect, says, “Aren’t data entities meant to represent real life objects? And don’t real life objects perform functions? The questions then are: Is the entity in question defined entirely by the ‘function’ it performs? Without the function, does the entity cease to exist? In this case, the object is a customer, which is an organization or person. If the action of signing a contract makes them a customer, what happens when the contract ends? What does that entity mean to your business? Does it have no existence outside the function it performs? Will it pass into a new lifecycle stage? Does it change names? When the function is completed, will the entity cease to exist or have no value to the business anymore? So, going back to Batman, when the bat suit is off, does Batman cease to exist for Gotham City? Batman and Bruce Wayne are entirely different entities, but will we never need to see an integrated view?
Don't Bruce's life and actions help us understand the structure and function of Batman?” Raymond McGirt says, “I'm going with the good old standby: it depends. Today, I went into a local hardware store. I consider myself a customer, but today, I did not buy anything. I returned something. There are different business rules to be fulfilled when I buy something, as opposed to when I return something. But, either way, am I not a customer? A different role is being performed when I return something, but I'm still the same person. Either way, complexity increases. Either a single Customer performs two or more unique activities or there is a different term for each type of activity a non-employee could perform in the store.” Defining Something by its Actions is not Recommended Wade Baskin, senior database architect, says, “I've always taken the approach that mixing process with data is a dangerous practice. Data should have one, and only one, definition, regardless of the process. If the data changes as it matures or the process moves along, then the change is reflected as either a status code or a different data element; never change the current definition of an element based on process or location. An even more dangerous practice is to change the definition of an element based on the presence or absence of another data element. For the purist, this breaks the laws of normality, where the element is no longer relying on the key and nothing but the key for its presence/definition.” David P Reynevich, chief architect, says, “The ‘is’ part is (relatively) permanent; the ‘do’ part, if tied in with ‘how we do it,’ changes regularly. Allowing fields with multiple meanings is dangerous and should be avoided in principle. 
Whatever time savings look good now will almost certainly be lost many times over as future designers and systems builders stumble over the hidden meanings in the data, to say nothing of the additional complexity that this will bring to straightforward data queries.”

Summary

Defining a term by what it does is effective, at least as a starting point, because most business professionals define things by the roles they play (e.g., a person playing the role of a customer). However, taking such an approach may eventually lead to data integration issues (such as whether a Customer and a Prospect can ever be the same Person), hidden business logic (e.g., Contract Open Date has multiple definitions), and uncertainty about what happens to the thing when the activity it performs ceases. Great thoughts!

Note: If you’re interested in chiming in on the discussion by becoming a Design Challenger, sign up at stevehoberman.com.
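The hidden-logic risk discussed in this challenge, a single field like Contract Open Date doubling as both a date and a customer flag, can be sketched in code. This is a hypothetical illustration (the `Organization` class and its field names are ours, not from any respondent), contrasting the action-based definition with an explicit status attribute:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Organization:
    name: str
    contract_open_date: Optional[date] = None  # date the contract opened, if any
    is_customer: bool = False                  # explicit status, set by a business rule

def is_customer_by_action(org: Organization) -> bool:
    # Action-based definition: "a customer is any organization that has
    # opened a contract." The date field now silently carries two meanings.
    return org.contract_open_date is not None

# With an explicit flag, the date keeps its single meaning and the
# customer status can outlive (or predate) the contract:
acme = Organization("Acme", contract_open_date=date(2010, 1, 15), is_customer=True)
lapsed = Organization("Globex")   # contract ended and the date was purged...
lapsed.is_customer = True         # ...but the business still calls them a customer

print(is_customer_by_action(acme))    # True under both definitions
print(is_customer_by_action(lapsed))  # False: the hidden logic disagrees
print(lapsed.is_customer)             # True: the explicit status survives
```

The disagreement on the second record is exactly the "what happens when the contract ends" question raised above.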
Kiribati, South Tarawa – Cost of Living, January 2010. LOCATION: Kiribati is in Oceania, a group of 33 coral atolls in the Pacific Ocean, straddling the Equator; the capital, Tarawa, is about one-half of the way from Hawaii to Australia. CAPITAL CITY: South Tarawa. LARGEST CITY: South Tarawa. CURRENCY: …
In my 40+ years working in the real estate industry, I thought I had seen it all – competitive buyers’ markets and hot sellers’ markets. But what we are now witnessing in Nebraska is very concerning – national wholesalers and investors aggressively marketing themselves with bogus claims to buy properties at top dollar, for cash, and with no inspections. This too-good-to-be-true offer often takes advantage of sellers while creating an unstable local housing market. Up to 30% of home sales are going to these firms, which have spent countless thousands of dollars on a twisted message that “real estate agents’ fees are expensive and home inspections cause sellers trouble.” What is happening is that wholesalers are purchasing homes from local people for very low prices and then immediately reselling them on the market for $20,000 or more in profit per home. Investors add to their rental portfolios, thereby removing these all-important entry-level homes from our market. While I am not a real estate agent, I firmly believe people thinking of selling their home should get a second opinion from an experienced agent to discuss the current market, the realistic value of their property, and how to get the most money (and less stress) during the sale. When people ask me my opinion, I always recommend trying to sell to local people through a reputable real estate agent. Doing so has many benefits – including supporting the local economy.

Steve Vacha
President, Home Standards Inspection Services
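A rough, back-of-the-envelope comparison of what a seller might net from a wholesaler's cash offer versus a conventional listing. All figures here are hypothetical assumptions, not data from the letter; only the "$20,000 or more" resale margin echoes the claim above:

```python
def net_from_listing(sale_price: float, commission_rate: float = 0.06) -> float:
    """Seller's proceeds from a conventional sale, net of an assumed agent commission."""
    return sale_price * (1 - commission_rate)

def net_from_wholesaler(market_value: float, resale_margin: float = 20_000) -> float:
    """Seller's proceeds if the wholesaler buys low enough to resell
    at market value while keeping the quoted $20,000+ margin."""
    return market_value - resale_margin

market_value = 180_000  # hypothetical entry-level home
listing = net_from_listing(market_value)        # after a 6% commission
cash_offer = net_from_wholesaler(market_value)  # the "no fees" cash offer

# Even under these charitable assumptions, the commission-free cash
# offer leaves the seller thousands of dollars short of a listed sale.
print(listing - cash_offer)
```

With these numbers the listed sale nets about $9,200 more, which is the letter's point: "no fees" is not the same as "more money."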
Innovation, climate and policy discussed at the BPC 2021

The Baltic Ports Conference 2021, hosted by the Port of Tallinn, concluded yesterday. Among the topics discussed were the future shape of the maritime sector, as well as ports’ role as entrepreneurs, innovation drivers and alternative energy hubs. The event also marked the Baltic Ports Organization’s (BPO) 30th anniversary, celebrated by looking at the evolution of the port sector over the past decades. It wasn’t an easy choice to move BPO’s most important annual event to the online realm, but the event proved a great success, gathering a broad audience and staying true to its tradition of providing a stage for some of the port sector’s most influential representatives.

Towards a green future

It will not come as a surprise that topics related to environmental issues were very present throughout the day. As the European Commission continues to push for the whole transport industry to become greener via initiatives such as the “Fit for 55” proposal, it becomes increasingly important for ports to plan accordingly. Measures expected to be implemented, such as shore power facilities, can’t be a burden that the ports have to bear alone. Smaller ports especially may struggle in this regard, as the proposed regulations apply to core and comprehensive ports equally. One word was mentioned by multiple speakers and panelists participating in the BPC 2021: cooperation. Ports may take the role of facilitators of change, but the involvement of all parties making up the port ecosystem is crucial if the industry wants to meet the ambitious goals set by the European Commission. The role policymakers play in the process is also an incredibly important one. The proposed emission cuts need to be achievable not only technically but also economically, taking into account the diversity that characterizes the port industry. No two ports are the same, differing when it comes to size, location and the types of traffic they attract.
Moreover, all stakeholders acting in ports should be involved. Geopolitical aspects also become more and more impactful as the EU moves forward with its green policies. Taking the Baltic Sea region as an example, the fact that not all countries in the area are obliged to adhere to the rules proposed by the EU’s policymakers may have a direct impact on the overall competitiveness of Baltic ports in the decades to come. A possible shift of business towards ports from outside the EU is a valid concern and, as such, the idea of equal rules for everyone must be considered in a broader, geopolitical context.

Driving innovation

Thankfully, the face of ports, traditionally seen as rather stagnant and conservative when it comes to change, is today a wholly different one. While the complex nature of a port ecosystem brings with it a set of challenges that make rapid shifts in development plans difficult, it is also one of their strengths. It is becoming more and more common for ports to take up the role of innovation drivers and orchestrators of actions that in the end transform not just the port itself, but also the surrounding body of companies. As the crossroads between land and sea, ports are the perfect testing ground for new technologies that may help them, and the whole maritime transport sector, prepare for and achieve the goals set before them. The focus put by policymakers on shore power and alternative fuels, such as hydrogen, doesn’t come as a surprise to an industry that has been implementing the former for a number of years in the Baltic Sea region alone. The case is similar with gas-based alternative fuels. Experience and knowledge gathered during the still ongoing development of the Baltic liquefied natural gas (LNG) bunkering network will come in very handy during the green transition. And then there is also the digitalization process.
The conference’s host, the Port of Tallinn, serves as a great example of utilizing digital tools to increase efficiency and, in turn, reduce the impact the port has on the climate and environment. The crucial point is that legislation needs to be future-proof and stand the test of time. Technology is moving forward and the maritime industry is ready to make use of it, but port development and investment plans are long-term affairs and must be understood as such by all parties involved in sketching the future of the transport sector.

See you in Gdynia!

The Baltic Ports Conference will return on 7-8 September 2022, taking us to Gdynia, Poland. Hopefully, with a bit of luck, it will offer the opportunity to finally meet in person and raise a glass to commemorate BPO’s 30th birthday, which the Organization is celebrating this year. Next year will also mark an important date for the host of the BPC 2022, the Port of Gdynia, which will celebrate its 100th birthday. Please head to BPO’s LinkedIn channel, where you can see how its members changed and evolved over the years, as well as re-watch the BPC 2021 stream for some wonderful visual impressions of the history of the Baltic port sector. There is still more to come this year, so make sure to keep an eye out for future publications commemorating BPO’s history. The recording of the BPC 2021 can be accessed on BPO’s official YouTube channel.
CE: 30 contact hours (3.0 CEUs) are required each renewal period. For purposes of this rule, “renewal period” means the 27-month period commencing April 1st prior to the previous license expiration and ending June 30th, the date of current license expiration. A minimum of each pharmacist's 30 contact hours (3.0 CEUs) is required in the topics specified below. Training for Identifying and Reporting Abuse: Iowa-licensed pharmacists shall complete an approved training program in accordance with 657—2.16 (235B,272C). An Iowa-licensed pharmacist who regularly provides primary health care to children shall complete: An Iowa-licensed pharmacist who regularly provides primary health care to adults shall complete: An Iowa-licensed pharmacist
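The 27-month arithmetic in the rule can be checked directly. A minimal sketch, assuming a license that expires June 30, 2023 (the example year is ours; the rule itself states only the April 1st and June 30th anchors):

```python
from datetime import date

def renewal_period(expiration: date) -> tuple:
    """Renewal period per the rule: April 1st prior to the *previous*
    license expiration (two years before the current one) through
    June 30th, the current expiration date."""
    start = date(expiration.year - 2, 4, 1)
    return start, expiration

def months_inclusive(start: date, end: date) -> int:
    # Count calendar months from the start month through the end month.
    return (end.year - start.year) * 12 + (end.month - start.month) + 1

start, end = renewal_period(date(2023, 6, 30))
print(start, end, months_inclusive(start, end))  # 2021-04-01 2023-06-30 27
```

April 2021 through June 2023 is indeed 27 calendar months (9 + 12 + 6), matching the rule's definition.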
TITLE: Does direct current exist in an infinite straight thin wire? QUESTION [0 upvotes]: Suppose an infinitely long thin wire is placed along the $z$ axis in 3D space, with current density $\textbf{J}$ and static magnetic field $\textbf{H}$ satisfying Ampère's law: $\nabla\times\textbf{H}=\textbf{J}$. By integrating both sides of the equation over the surface $z=0$, we have \begin{equation} \int_{z=0}dxdy~\nabla\times\textbf{H}=\int_{z=0}dxdy~\textbf{J}. \end{equation} With a finite magnetic field, the left-hand side of the equation is mathematically zero (think about the Fourier transform), forcing the right-hand side, the current flux, to be zero as well. In this sense, there should be no current in this thin wire. One interesting point is the infinite inductance of the wire: \begin{equation} L=\frac{2~\text{magnetic energy}}{\text{current flux}^2}=\frac{\int_\text{3D space} dV~\mu_0 |\textbf{H}|^2}{|\int_{z=0}dxdy~\textbf{J}|^2}=\frac{\text{non-zero value}}{0}=+\infty \end{equation} Perhaps explaining $L$ can help with understanding. After all, it is merely a thought experiment to transport electrons endlessly through the universe. But I am still wondering which setting is unphysical in this scenario: is it the infinitely long wire, the infinitely large magnetic field, or something else? REPLY [0 votes]: Mathematically, the electromagnetic energy per unit length is divergent for an infinite wire supporting a finite electromagnetic field. The characteristic impedance is infinite. Thus, in principle, you cannot have current on an infinite wire. However, the divergence is logarithmic, so even in the case of an extremely long wire with the return current far away, the wire can support a substantial current. Practical single-wire transmission lines have characteristic impedances of ~600Ω, not terribly high. Edit in response to comment: I don't know of a reference, but it's implicit in the usual calculation of coaxial cable impedance, as $D\rightarrow\infty$.
Or, you may notice that the electric field of a linear charge is proportional to $1/r$. Integrating this with respect to $r$ yields the potential, and that diverges. So, any finite charge density on the wire should, theoretically, yield an infinite potential. The puzzle to me in my student days was that I knew that real wires don't behave like that, even if they are very long. The solution is that real wires aren't infinite, and they aren't infinitely far from other conductors. The idealization fails: a long wire suspended in space cannot be treated as infinitely long in an empty vacuum.
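The "logarithmic divergence" in the answer can be made concrete with the standard external inductance per unit length of a wire whose return path sits a distance $D$ away, $L' \approx \frac{\mu_0}{2\pi}\ln(D/d)$ (the coax-like idealization the answer alludes to; the specific radii below are made-up illustration values):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def inductance_per_meter(wire_radius_m: float, return_distance_m: float) -> float:
    """External inductance per unit length of a wire with its return
    path at distance D: L' = mu0 / (2*pi) * ln(D / d)."""
    return MU_0 / (2 * math.pi) * math.log(return_distance_m / wire_radius_m)

# A 1 mm radius wire with the return current 1 m away:
L_near = inductance_per_meter(1e-3, 1.0)
# Now push the return current out to a light year (~9.46e15 m):
L_far = inductance_per_meter(1e-3, 9.46e15)

# The divergence is only logarithmic: moving the return path from
# 1 m to a light year multiplies L' by roughly 6.3; it does not blow up.
print(L_near, L_far, L_far / L_near)
```

This is the answer's point in numbers: $L'$ only becomes infinite in the strict $D\rightarrow\infty$ limit, so a very long but finite wire still supports a substantial current.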
Slingshot Auto Repair

Do you own a Slingshot vehicle, or are you thinking about buying one? They’re really cool looking and get tons of attention when they go down the road, sorta like the Bat Mobile! Nick, the owner of Revive Auto Repair, owns one, too. It’s a thrilling drive, and our team of auto mechanics fights over who gets to perform the Slingshot vehicle service and repairs. Yes, the Slingshot is really that cool! The Slingshot looks like a motorcycle with two wheels up front and one wheel in the back. Slingshots typically have a five-speed transmission and a 4-cylinder Ecotec engine made by General Motors. They’re classified as a motorcycle, but they seat two people side by side and have a steering wheel! The only places you can get these repaired are the Polaris dealership and Revive Auto Repair. There aren’t that many shops in the Michigan region offering Slingshot vehicle repair and service. The auto mechanics at Revive Auto Repair are unique because we look at things outside the box. We are analytical, yes, and we are master problem solvers. On a Slingshot, we see an opportunity to serve, because many other repair shops are scared of what they don’t understand. Because we are so familiar with Slingshot vehicles, we’re able to service and repair all Slingshot models. Slingshots are belt driven, just like a Harley. They also have adjustable shocks, and the suspension up front is the same as a Corvette’s. There are other things you can do with this vehicle, too. You can add performance parts, navigation, composite body parts, and display screens that show the vital signs of the vehicle. Plus, you can add bigger back tires, modify it, and more. Slingshots are expensive and fascinating vehicles, and we love working on them because they’re so cool!
Our auto mechanics bring integrity, honesty, and amazing auto repair skills to every vehicle, which is why we are the preferred auto repair shop for Slingshot repairs for drivers throughout Michigan and its surrounding states. You can find us at 2888 E. Maple Rd, Troy, MI 48083. Give us a call to schedule your appointment. You’ll be glad you chose Revive Auto Repair to service your Slingshot vehicle!

Accurate diagnostics
General safety checks
Pre-purchase inspections
Tune-ups
Safety inspections and certificates
Brakes
Tires
Modifications
Courtesy pickup and drop-off
As you may have noticed, I think way too much about Mad Men, to the point sometimes that it becomes a crippling obsession. After I spotted Room 503 in Sunday night’s episode, in fact, I spent the better part of the next two hours trying to figure out what Weiner was referencing. I had a half-baked theory in my 25 Fun Facts, Callback, and Theories piece, but it wasn’t satisfying. So, I’ve spent maybe another hour or two each day trying to figure it out, and, after a ton of Internet research, I think we’ve found a winner. But before we get there, let’s look at four other theories that the Internet has come up with. 1. 503. In HTTP terminology, it’s a Service Unavailable error (via Reddit) and in SMTP terminology, it refers to Bad Command Sequence (via Salon). Cute, but no. 2. With all the theories about the impending death of Don Draper, though it doesn’t work, I like this theory, mentioned by a Redditor. J.D. Salinger’s A Perfect Day for a Bananafish takes place in Room 507 of a hotel with a lot of “New York advertising men.” At the end of it, Seymour — psychologically traumatized by the war, much like J.D. Salinger and Don Draper — goes to his room, pulls out a gun, and kills himself, while his wife is napping on the bed. Maybe Room 503 is a spiritual sideways sequel of sorts to A Perfect Day for a Bananafish? It’s hard to tie 507 to 503 with just the New York advertising men as a through line, but as a member of the Cult of Catcher, I like to tie everything to J.D. Salinger. 3. Someone on Facebook suggested that 503 (5+3=8) might refer to the 8th Circle of Hell, which is reserved for “panderers and seducers” and “sins that involve conscious fraud” (Draper, obviously, is a fraud, because he assumed someone else’s identity). I like this theory, especially with the earlier reference to Dante, but it’s not satisfying, either. 4.
This theory has some validity: It could be a reference to season five, episode 03 (i.e., 503), which concerned Betty’s cancer scare, much like Joan’s scare with the ovarian cyst in this episode. In both episodes there was some concern about the future of the children. More than anything, this suggests to me that — if we see the number 503 again — we should probably expect the death of a parent. But in neither case did it concern Don Draper, so I’m dismissing this one, as well. 5. This one — from the extensive comments section over on Tom and Lorenzo’s recap — is on the nose, and the most likely of the five, assuming you do the math again (5 + 3 = 8 = BUtterfield 8). There was something very familiar about the scenes with Sylvia in the hotel room 503 in bed with nothing on but a sheet and then it dawned on me. BUtterfield 8! The 1960 movie with Elizabeth Taylor! The opening scenes of that movie have Liz Taylor as prostitute Gloria in bed, wrapped in a sheet, waking up after a night of partying with a client. The scene with Sylvia is almost a dead ringer for Elizabeth Taylor – from Sylvia’s hair style, to her undressed state, to the color of the walls in the hotel room, to the camera angles – it calls back to the movie. I’ve never been able to grasp Sylvia’s hairstyle in all her scenes with Don until I saw this episode; it all came together for me. Matt Weiner had foreshadowed it when Betty dyed her hair dark brown and said, “Elizabeth Taylor” to Henry. Don treating Sylvia as a prostitute, Sylvia being styled as Liz Taylor in BUtterfield 8, and Liz Taylor playing a prostitute in BUtterfield 8. That theory is further bolstered by the fact that a friend of Betty Draper’s has made a reference to BUtterfield 8 on another episode of Mad Men. Moreover, if you Google “Butterfield 8″ and “503” you get this image of a record with Gloria’s theme from the movie. Note the “E-503″ on the right. Does it add up perfectly?
If you do a little stretching, but clearly the themes between this episode and BUtterfield 8 are similar, and while the reference may be both obscure and oblique, I think this is exactly what Weiner was referring to, assuming he was referring to anything at all. Or 503 could be in reference to 503 lbs or Fat Batty’s current weight. Occam’s razor wins! This is amazing. Don had a secretary named Dawn. And Lincoln was shot before dawn. By Kennedy. Something something, log cabin syrup. 503 refers to the Illuminati obviously. 5+0+3 = 8 8+3 = 11, 11-2=9… 9/11 It aired on 5/12/2013… 5 + (1+2)+(2+0+1+3) = 5+3+6 = 14 = 1+4 =5!!! Those with a Life Path of 5 seek freedom above all else. They are adventurers, having a restless nature, and being on the go, constantly seeking change and variety in life. They have a free spirit and need to have variety in their day. If they do not live the adventure, their lives become way too dramatic. MIND BLOWN Great now Glenn Beck is going to show up. And that guy never leaves. I wonder what room the Royal Suite was at the Ambassador Hotel where B. Kennedy was shot. He stayed on the 5th floor at the hotel and he died on the 5th floor at the Hospital. All I got. Dustin keep obsessing, I appreciate these posts my friends don’t watch Mad Men so I enjoy reading your posts and find happiness that someone else analyzes this show as much as I do. MLK was shot on 4/4/68. RFK was shot on 6/6/68. Since the next logical point would be 8/8/68, well: – that’s a lot of 8s – 8 is a lucky number in certain cultures, but unlucky in others (western), so…death? – it is the day Nixon gets nominated by the Republicans – there were race riots in Miami Or we have all gone mad. I have this theory about #503….I can’t go into details at the moment, but it involves a room number on the 5th floor of a hotel………..no, wait, that’s it. [takes the rest of day off] Some thoughts on things I probably won’t properly research on my own: What is the a police cod 503e in NY? 
As in, “There’s a 503 in progress at…” Dick and Don were in the army. Could their unit number be 503? Has anything happened at 5:03 previously on the show? A death, a birth, a suicide, Roger having his one-millionth drink? *Let’s try that sentence as “police code 503″ I would watch a show called “Police Cod”. In a sleepy New England town with so many secrets, only one man can find the truth. One man…or one fish! 503 is the area code for Portland, Oregon. Portland is the home of Nike Nike’s famous ad (see ad as in marketing as in Mad Men?) campaign is “Just do it” Don is “doing” Sylvia in Room 503. IT ALL MAKES SENSE NOW. Wait. Don was having sex with Sylvia?!? Boy am I naive. Weiner owns your MIND! 503 can refer to the Watson 503 pill. That pill is Hydrocodone, a pain killer. Don is trying to kill the existential pain he is suffering. Watson could be Dr. Watson, from “Sherlock Holmes”. “Mad Men’s” Jared Harris was in the film version of Sherlock Holmes! Anyone else think of this…? 303 is Troy and Abed’s apartment number in Community; Annie later moved into apartment 303, and “Remedial Chaos Theory” aired 2 years ago. Alison Brie is on Community and Mad Men. Therefore, Don is thinking about nailing Trudy Campbell the whole time, perhaps as a revenge fantasy against Pete. This was season 6 episode 8. Subtract 5 seasons and 3 episodes from this one and you get season 1 episode 5. In that episode “5G” at the very end Don visits his brother Adam in the hotel room and says goodbye for good. Also 5G is the next major phase and predicted future of mobile communications. Perhaps it symbolizes Don will reconnect with his Whitman past and enter the next major phase of his life in the near future. Jesus fucking Christ. We really have a lot of free time on our hands. For what it’s worth, since I’ve seen chatter here about foreshadowed airplane crashes, note that on May 3, 1968 (stretch from 503 referring to May 3) there was an airplane crash in Texas which killed 85.
I dunno, is it too easy to make any connection to the 503 reference? 5 + 0 + 3 = 8 8 is a number, and Don has likely has definitely had a NUMBER of STDs by now. Wait I think I did this wrong. Even the furniture is the same colour. Great job, Dustin. I’m pretty sure 503 refers to the protagonist in the Russian novel We. The book influenced Huxley’s Brave New World and Orwell’s 1984. We takes many themes from the bible, one being that 503 is following the path of Adam. It’s an easy jump from there to see how much Don enjoys tasting the forbidden fruit. I just watched Butterfield 8 and I had so many ah-ha moments. Obviously Matt Weiner has watched this movie closely! Betty’s shooting the pigeons in season 1 is extremely similar to Dina Merrill’s shooting scene in this movie. So keep thinking…it’s interesting to see how deep you can go with analyzing Mad Men. If this theory is correct, the BUtterfield 8 reference….it does not look good for Sylvia’s future. AND…What product was the team working on in this particular episode? Margarine, a substitute for butter. OR Weiner said “I need a hotel room” and the Art Director simply stuck a 508 on the door. Not everything in life has a deeper significance. I thought it was because the main character in the book We is named D-503: [yinews.wordpress.com] This wouldn’t be totally unheard of. Shakespeare did it (to hilarious effect). Might liven up the season. What about the Apollo 8 mission? [en.wikipedia.org] It was the first manned rocket that orbited around the moon. The serial number of the rocket was SA-503. This would be an extremely obscure reference if it weren’t for the fact that Don and Ted both flew through turbulent clouds in a bad thunderstorm to go see Henry Lamott from Mohawk; right after Don left Sylvia to stay in room 503 one last time. I believe Apollo 8 had complications in testing all the way to the point of the launch.
What’s interesting about this possible connection is that the story of Icarus is often used as a display of someone’s hubris. Don’s hubris is shown when he and Ted finally level off in the clouds and he grabs for a book to read. Ted says “Don’t you want to look at the grandeur of God’s majesty?” Don essentially replies “No”; implying that he’s got better things to do than to ‘humbly display God’s wonder’. Nothing new about who Don is. I mean he’s an over-ambitious ass. And an arrogant one at that too. At which point the Greek myth of Icarus flying too close to the sun is an obvious reference to me. Reinforced by the fact that Sylvia ended things with Don when he “fell from the sky”… if you will. Obviously, Don hasn’t fallen to his ruin, yet, but others certainly have in this season and throughout the series: whether they were historical figures woven throughout the plot, or actual characters in the show. Nonetheless, this seems to me a foreshadow of things to come for Don. It would be interesting if, through this connection to Icarus, an argument could be made for another connection. This time to James Joyce’s A Portrait of the Artist as a Young Man. The main character is Stephen Daedelus. Stephen’s last name “Daedelus” is the name of Icarus’ father in the Greek myth. At which point the connections for daddy issues with Don are easy enough to make. If anyone is familiar with the Don = D B Cooper theory, the hijacking took place on Northwest Orient Airlines, Flight 305. backwards yeah, but.. maybe! Dustin, the obsession you have must be contagious, because I caught it. These are great theories, but I’ve got one that fits like a glove. Plus, it’s not complicated or obscure. Why did Sirhan Sirhan shoot Robert Kennedy? Here is the direct quote from Wikipedia: “Sirhan testified that he had killed Kennedy ‘with 20 years of malice aforethought’.
He explained in an interview with David Frost in 1989 that this referred to the time since the creation of the State of Israel.” Israel’s declaration of independence was signed by Golda Meir, who was born on May 3 (5/03). Below is support for this theory, if you want additional reading. -Dan S.

ADDITIONAL INFO: Golda Meir is not an obscure figure, even to non-religious Jews like myself, or Matt Weiner, or Sylvia’s husband, Dr. Arnold Rosen. Meir is in fact well known outside the Jewish community, given that the State of Israel was created in direct response to the horrors of WWII. People sometimes over-analyze Mad Men, assigning meanings to things that simply have no meaning. But this time, it’s clear that “503” means something, considering how many times they zoomed in on it, held the camera there for an extra 2 or 3 seconds, etc. The Robert Kennedy killing was the episode’s emotional climax, the devastating ending, and its symbolism permeates numerous characters. It’s the heart of the episode (which we are only able to see in hindsight). The “503” symbolism is not easy for a viewer to figure out, but it’s not insanely complicated either. This fits the mindset of Mad Men’s creator Matt Weiner, who has a history of wanting to challenge his audience, but not outright torture them. Of the dozen or so theories that have been proposed elsewhere on the web, this is by far the least complicated and least harebrained.

REFERENCES [en.wikipedia.org] [en.wikipedia.org]
Search in Web Directory for "SMS Solution" — 1056 results:

- Reseller SMS (smsstamp.com): SMS Stamp provides bulk SMS services in India, a Hindi SMS facility, an SMPP SMS gateway, Unicode messaging, high-priority SMS and promotional SMS. Keywords: bulk SMS service.
- Latest Hindi SMS 2012, Love SMS, Funny SMS, Friendship SMS, Navratri SMS, April Fool SMS (hindisms.info): Latest Hindi SMS collection 2012, free Hindi SMS, friendship SMS, birthday SMS, SMS jokes, anniversary SMS, shayari, Navratri SMS, April Fool SMS. Keywords: free hindi sms, hindi adult sms, santa banta sms, hindi mobile sms, hindi birthday sms, marathi sms, love hindi sms, new hindi sms, romantic hindi sms, hindi sms jokes, hindi sms shayari, hindi sms friendship, short hindi sms, funny hindi sms, sad hindi sms, navratri sms, april fool sms.
- Hindi SMS, Love SMS, Funny SMS, Marathi SMS, Friendship SMS, Navratri SMS, April Fool SMS. Keywords: hindi sms, free hindi sms, hindi adult sms, santa banta sms, hindi birthday sms, marathi sms, love hindi sms.
- Bulk SMS Package Djibouti (bulksmsinternational.com): Bulk SMS Package Djibouti offers SMS solutions for any business need, with multiple packages at rates more competitive than other providers in Djibouti; it also covers most service providers in Djibouti, giving better customer reach and almost zero delivery failure. Keywords: sms donation.
- Elitpay (elitpay.com): Elitpay offers an SMS gateway service, SMS lock, premium SMS, SMS donations, SMS banking, SMS billing and SMS content services.
- Mass SMS application with an interactive graphical user interface that makes the messaging process easy and smooth. Keywords: send text online, bulk SMS, bulk SMS software, mass SMS software, mass SMS, SMS gateways, bulk SMS in India.
- Bulk SMS Reseller (smshesms.com): RedHatinfotech is a large SMS portal for bulk SMS, online bulk SMS, bulk mailing, bulk SMS reselling, premium bulk SMS, mobile SMS, SMS gateways and bulk SMS services at low prices. Keywords: bulk sms.
- SMS Payment (txtnation.com): Mobile wireless SMS and MMS messaging, online billing and digital content delivery solutions. Keywords: mobile billing, sms payment, sms billing, pay with sms, mobile payment, premium sms, pay by sms, sms gateway.
- Bulk SMS, international SMS, email marketing, SMS advertising, mobile marketing, SMS campaigns, short codes, web hosting, long codes, mobile campaigns, SMS alerts, text messages, email-to-SMS, SMS India, bulk SMS India, SMS gateways, MMS and MMS advertising.
- New Year SMS and text messages for mobile phones: free SMS on YouMint.com.
- SMS Marketing (smstick.com): SMS Tick is a comprehensive desktop tool for sending SMS from your computer to mobile phones across the world, under your own brand name.
TITLE: Big etale topos vs small etale topos

QUESTION [5 upvotes]: Are they equivalent? That is, given a sheaf of sets $\mathscr{F}$ defined on the small etale site on $X$, is there an essentially unique way to extend it to a sheaf on the big etale site on $X$? If not, what is an example of a sheaf which cannot be extended? What about for sheaves of abelian groups?

REPLY [4 votes]: There is a site morphism $$i: X_{\rm \acute{e}t}\to {\rm \acute{E}t}(X),$$ giving an adjunction (indeed, a geometric morphism of topoi, by abstract nonsense) $${\rm EXT}=i^*: \mathsf{Shv}(X_{\rm \acute{e}t})\rightleftarrows\mathsf{Shv}({\rm \acute{E}t}(X)): i_*={\rm Res},$$ where ${\rm EXT}=i^*$ is given by a Kan extension and $i_*={\rm Res}$ is simply restriction. Standard textbooks on étale cohomology show that the unit $F\to{\rm Res}({\rm EXT}(F))$ of the adjunction $({\rm EXT}, {\rm Res})$ is an isomorphism. So every sheaf on the small étale site always extends to the big étale site and restricts back to the original sheaf; in this direction the question has a positive answer, and no counterexample exists. On the other hand, the counit ${\rm EXT}({\rm Res}(G))\to G$ is usually not an isomorphism: as in the above comments, different big étale sheaves can restrict to isomorphic small étale sheaves. The correct question is to ask for such examples, already given by the above comments. What is important in étale cohomology theory is that, for any $A\in\mathsf{ShvAb}(X_{\rm \acute{e}t})$, $B\in\mathsf{ShvAb}({\rm \acute{E}t}(X))$, $U\in X_{\rm \acute{e}t}$, there are canonical isomorphisms $${\rm H}^n_{\rm \acute{e}t}(X; A)\cong{\rm H}^n_{{\rm \acute{E}t}(X)}(X; {\rm EXT}(A)),\qquad {\rm H}^n_{\rm \acute{e}t}(X; {\rm Res}(B))\cong{\rm H}_{{\rm \acute{E}t}(X)}^n(X; B); $$ $${\rm H}^n_{\rm \acute{e}t}(U; A)\cong{\rm H}^n_{{\rm \acute{E}t}(X)}(U; {\rm EXT}(A)).$$
\begin{document} \raggedbottom \tableofcontents \mainmatter \ChapterAuthor[Where VLMC and PRW meet]{Variable Length Markov Chains, Persistent Random Walks: a close encounter}{Peggy \Name{C\'enac}, Brigitte \Name{Chauvin}, Fr\'ed\'eric \Name{Paccaut}, Nicolas \Name{Pouyanne}} \label{chap-struct} \markboth {Variable Length Markov Chains, persistent random walks: a close encounter} {Variable Length Markov Chains, persistent random walks: a close encounter} \section{Introduction} This is the story of the encounter between two worlds: the world of random walks and the world of Variable Length Markov Chains (VLMC). The meeting point turns around the semi-Markov property of the underlying processes. In a VLMC, unlike in fixed order Markov chains, the probability used to predict the next symbol depends on a possibly unbounded part of the past, the length of which depends on the past itself. These relevant parts of pasts are called \emph{contexts}. They are stored in a \emph{context tree}. With each context is associated a probability distribution prescribing the conditional probability of the next symbol, given this context. Variable length Markov chains are now widely used as random models for character strings. They were introduced in \cite{rissanen/83} to perform data compression. When they have a finite memory, they provide a parsimonious alternative to fixed order Markov chain models, in which the number of parameters to estimate grows exponentially fast with the order; they are also able to capture finer properties of character sequences. When they have infinite memory -- this will be our case of study in this chapter -- they are a tractable way to build non-Markov models and they may be considered as a subclass of ``cha\^ines à liaisons compl\`etes'' (\cite{doeblin/fortet/37}) or ``chains with infinite order'' (\cite{harris/55}). Variable length Markov chains are used in bioinformatics, linguistics or coding theory to model how words grow or to classify words. 
In bioinformatics, both for protein families and DNA sequences, identifying patterns that have a biological meaning is a crucial issue. Using a VLMC as a model makes it possible to quantify the influence of a meaningful pattern by giving a transition probability on the following letter of the sequence. In this way, these patterns appear as contexts of a context tree. Notice that their length may be unbounded (\cite{bejerano/yona/01}). In addition, if the context tree is recognised to be a signature of a family (of proteins, say), this gives an efficient statistical method to test whether or not two samples belong to the same family (\cite{busch/etc/09}). Therefore, estimating a context tree is an issue of interest, and many authors (statisticians or not, applied or not) stress the fact that the height of the context tree should not be supposed to be bounded. This is the case in \cite{galves/leonardi/08}, where the algorithm \texttt{CONTEXT} is used to estimate an unbounded context tree, and in \cite{garivier/leonardi/11}. Furthermore, as explained in \cite{csiszar/talata/06}, the height of the estimated context tree grows with the sample size, so that estimating a context tree by assuming \emph{a priori} that its height is bounded is not realistic. There is a large literature on constructing efficient estimators of context trees, for finite as well as infinite context trees. This chapter is not a review of statistical issues, which would already be relevant for finite memory VLMC. This is a study of the probabilistic properties of infinite memory VLMC as random processes, and more specifically of the main property of interest for such processes: existence and uniqueness of a stationary measure. As already said, VLMC are a natural generalisation to infinite memory of Markov chains. It is usual to index a sequence of random variables forming a Markov chain with positive integers and to make the process grow to the right. 
The main drawback of this habit for infinite memory processes is that the sequence of the process is read from left to right, whereas the (possibly infinite) sequence giving the past needed to predict the next symbol is read in the context tree from right to left, thus giving rise to confusion and lack of readability. For this reason, in this chapter, the VLMC grows to the left. In this way, both the process sequence and the memory in the context tree are read from left to right. Classical random walks have \emph{independent} and identically distributed increments. In the literature, \emph{Persistent} Random Walks, also called \emph{Goldstein-Kac Random Walks} or \emph{Correlated Random Walks}, refer to random walks having a Markov chain of finite order as an increment process. For such walks, the dynamics of trajectories has a short memory of given length and the random walk itself is not Markovian any more. What happens whenever the increments depend on an \emph{unbounded} past memory? Consider a walker on $\g Z$, allowed to increment its trajectory by $-1$ or $1$ at each step of time. Assume that the probability to keep the current direction $\pm 1$ depends on the time already spent in the said direction -- the distribution of increments thus acts as a reinforcement of the dependency on the past. More precisely, the process of increments of such a $1$-dimensional random walk is a Markov chain on the set of (right-)infinite words, with variable -- and unbounded -- length memory: a VLMC. The VLMC concerned is defined in Section~\ref{subsec:PRWdim1}. It is based on a context tree called \emph{double comb}. Besides, Section~\ref{subsec:PRWdim2} deals with a $2$-dimensional persistent random walk defined in an analogous manner on $\g Z^2$ by a VLMC based on a context tree called \emph{quadruple comb}. 
These random walks, which have an unbounded past memory, can be seen as a generalization of the ``Directionally Reinforced Random Walks (DRRW)'' introduced by \cite{Mauldin1996}, in the sense that the persistence times are anisotropic ones. For a $1$-dimensional random walk associated with a double comb, a complete characterization of recurrence and transience, in terms of changing (or not) direction probabilities, is given in Section~\ref{subsec:PRWdim1}. More precisely, when one of the random times spent in a given direction (the so-called \emph{persistence times}) is an integrable random variable, the recurrence property is equivalent to a classical drift-vanishing. In all other cases, the walk is transient unless the weights of the tail distributions of both persistence times are equal. In dimension $2$, sufficient conditions for transience or recurrence are given in Section~\ref{subsec:PRWdim2}. Actually, because of the very specific form of the underlying driving VLMC, these persistent random walks turn out to be in one-to-one correspondence with so-called \emph{Markov Additive Processes}. Section~\ref{sec:meetic} is devoted to the close links between persistent random walks, Markov additive processes, semi-Markov chains and VLMC. In Section~\ref{sec:VLMCdef}, the definition of a general VLMC and a couple of examples are given. In Section~\ref{sec:PRW}, the persistent random walks (PRW) are defined and known results on their recurrence properties are collected. In view of the final Section~\ref{sec:meetic}, where we show how PRW and VLMC meet through the world of semi-Markov chains, Section~\ref{sec:VLMC} is devoted to results -- together with a heuristic approach -- on the existence and uniqueness of stationary measures for a VLMC. \section{Variable Length Markov Chains: definition of the model} \label{sec:VLMCdef} Let $\CA$ be a finite set, called the \emph{alphabet}. 
In this text, $\CA$ will most often be the standard alphabet $\CA=\set{0,1}$, but also $\CA=\set{d,u}$ (for \emph{down} and \emph{up}) or $\CA=\set{\ttN,\ttE,\ttW,\ttS}$ (for the cardinal directions). Let \[ \CR=\set{\alpha\beta\gamma\cdots : \alpha ,\beta ,\gamma ,\cdots\in\CA} \] be the set of \emph{right-infinite} words over $\CA$, written by simple concatenation. A VLMC on~$\CA$, defined below and most often denoted by $\left( U_n\right) _{n\in\Nset}$, is a particular type of $\CR$-valued discrete time Markov chain where: \vskip -8pt - the process evolves between time $n$ and time $n+1$ by adding one letter \emph{on the left} of $U_n$; \vskip -8pt - the transition probabilities between time $n$ and time $n+1$ depend on a finite - but not bounded - prefix\footnote{In fact, an infinite prefix might be needed in a denumerable number of cases. } of the current word $U_n$. Giving a formal frame of such a process leads to the following definitions. For a complete presentation of VLMC, one can also refer to~\cite{cenac/chauvin/paccaut/pouyanne/12}. \begin{definition}[Context tree] A \emph{context tree} on $\CA$ is a saturated tree on $\CA$ having an at most countable set of infinite branches. \end{definition} As usual, a \emph{tree on $\CA$} is a set $\CT$ of finite words -- namely a subset of $\cup _{n\in\Nset}\CA^n$ -- which contains the empty word $\emptyset$ (the \emph{root} of $\CT$) and which is prefix-stable: for all finite words $u,v$, $uv\in\CT\Longrightarrow u\in\CT$. The tree $\CT$ is \emph{saturated} whenever any internal node has $\#\left(\CA\right)$ children: for any finite word $u$ and for any $\alpha\in\CA$, $u\alpha\in\CT\Longrightarrow\left( \forall\beta\in\CA,~u\beta\in\CT\right)$. A right-infinite word on $\CA$ is an \emph{infinite branch} of $\CT$ when all its finite prefixes belong to $\CT$. 
\vskip -8pt A context tree is therefore made of \emph{internal nodes} ($u\in\CT$ is internal when $\exists\alpha\in\CA$, $u\alpha\in\CT$) and of \emph{leaves} ($u\in\CT$ is a leaf when it has no child: $\forall\alpha\in\CA$, $u\alpha\notin\CT$). Following the vocabulary introduced by Rissanen, a \emph{context} of the tree is a leaf or an infinite branch. A finite or right-infinite word on $\CA$ is an \emph{external node} when it is neither internal nor a context. See Figure~\ref{fig:exampleArbre} below, which illustrates these definitions as well as the $\pref$ function defined hereunder. \begin{definition}[$\pref$ function] Let $\CT$ be a context tree. If $w$ is any external node or any context, the symbol $\pref w$ denotes the longest (finite or infinite) prefix of $w$ that belongs to $\CT$. \end{definition} In other words, $\pref w$ is the only context $c$ for which $w=c\cdots$. For a more visual presentation, hang $w$ by its head (its left-most letter) and insert it into the tree; the only context through which the word goes out of the tree is its $\pref$. 
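Computationally, the $\pref$ lookup is just a walk down a trie: read $w$ letter by letter from the root until a leaf (a finite context) is reached. Here is a minimal Python sketch of this procedure; the dict-based encoding and the names `TREE` and `pref` are illustrative choices, not notation from the chapter.

```python
# A context tree on the alphabet {'0', '1'} as a nested dict: an internal
# node maps each letter to a child subtree; a leaf (= finite context) is an
# empty dict.  The contexts of this small saturated tree are 1, 01, 001, 000.
TREE = {'1': {}, '0': {'1': {}, '0': {'1': {}, '0': {}}}}

def pref(tree, w):
    """Longest prefix of the (right-infinite) word w that is a context.

    Hang w by its head and walk down from the root, reading w from left to
    right, until a leaf is reached; the letters consumed form pref(w).
    """
    node, out = tree, []
    for letter in w:
        if not node:            # reached a leaf: out is the context
            break
        node = node[letter]     # saturated tree: every internal node has all children
        out.append(letter)
    return ''.join(out)
```

For instance, any word starting `011...` exits this tree through the leaf `01`, so `pref(TREE, '011010')` returns `'01'`, mirroring how the word `1000…` in Figure~\ref{fig:exampleArbre} admits $1000$ as its $\pref$.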
\begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.5] \newcommand{\segGeom}[5]{\draw [#5] (0,0) plot[domain=0:#3]({#1+\x*cos(#4)},{#2+\x*sin(#4)});} \newcommand{\pointCart}[4]{\fill [#4] (0,0) plot [domain=0:360]({#1+#3*cos(\x)},{#2+#3*sin(\x)});} \newcommand{\pointGeom}[6]{\fill [#6] (0,0) plot [domain=0:360]({#1+#3*cos(#4)+#5*cos(\x)},{#2+#3*sin(#4)+#5*sin(\x)});} \newcommand{\etA}{1} \newcommand{\etB}{0.95} \newcommand{\etC}{0.9} \newcommand{\etD}{0.85} \newcommand{\etE}{0.8} \newcommand{\etF}{0.75} \newcommand{\etG}{0.7} \newcommand{\etH}{0.65} \newcommand{\penteA}{0.2} \newcommand{\penteB}{0.4} \newcommand{\penteC}{0.6} \newcommand{\penteD}{0.8} \newcommand{\penteE}{1} \newcommand{\penteF}{2} \newcommand{\penteG}{4} \newcommand{\penteH}{12} \newcommand{\noeudB}[2]{\draw [#2] (0,0)--(#1\etA/\penteA,-\etA);} \newcommand{\noeudC}[3]{\draw [#3] (#1\etA/\penteA,-\etA)--(#1\etA/\penteA#2\etB/\penteB,-\etA-\etB);} \newcommand{\noeudD}[4]{\draw [#4] (#1\etA/\penteA#2\etB/\penteB,-\etA-\etB)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC,-\etA-\etB-\etC);} \newcommand{\noeudE}[5]{\draw [#5] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC,-\etA-\etB-\etC)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD,-\etA-\etB-\etC-\etD);} \newcommand{\noeudF}[6]{\draw [#6] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD,-\etA-\etB-\etC-\etD)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE,-\etA-\etB-\etC-\etD-\etE);} \newcommand{\noeudG}[7]{\draw [#7] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE,-\etA-\etB-\etC-\etD-\etE)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF,-\etA-\etB-\etC-\etD-\etE-\etF);} \newcommand{\noeudH}[8]{\draw [#8] 
(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF,-\etA-\etB-\etC-\etD-\etE-\etF)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG,-\etA-\etB-\etC-\etD-\etE-\etF-\etG);} \newcommand{\noeudI}[9]{\draw [#9] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG,-\etA-\etB-\etC-\etD-\etE-\etF-\etG)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG#8\etH/\penteH,-\etA-\etB-\etC-\etD-\etE-\etF-\etG-\etH);} \newcommand{\rayon}{0.13} \newcommand{\feuilleB}[2]{\draw [#2] (0,0)--(#1\etA/\penteA,-\etA);\fill (#1\etA/\penteA,-\etA) circle(\rayon);} \newcommand{\feuilleC}[3]{\draw [#3] (#1\etA/\penteA,-\etA)--(#1\etA/\penteA#2\etB/\penteB,-\etA-\etB);\fill (#1\etA/\penteA#2\etB/\penteB,-\etA-\etB) circle(\rayon);} \newcommand{\feuilleD}[4]{\draw [#4] (#1\etA/\penteA#2\etB/\penteB,-\etA-\etB)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC,-\etA-\etB-\etC);\fill (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC,-\etA-\etB-\etC) circle(\rayon);} \newcommand{\feuilleE}[5]{\draw [#5] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC,-\etA-\etB-\etC)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD,-\etA-\etB-\etC-\etD);\fill (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD,-\etA-\etB-\etC-\etD) circle(\rayon);} \newcommand{\feuilleF}[6]{\draw [#6] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD,-\etA-\etB-\etC-\etD)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE,-\etA-\etB-\etC-\etD-\etE);\fill (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE,-\etA-\etB-\etC-\etD-\etE) circle(\rayon);} \newcommand{\feuilleG}[7]{\draw [#7] 
(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE,-\etA-\etB-\etC-\etD-\etE)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF,-\etA-\etB-\etC-\etD-\etE-\etF);\fill (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF,-\etA-\etB-\etC-\etD-\etE-\etF) circle(\rayon);} \newcommand{\feuilleH}[8]{\draw [#8] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF,-\etA-\etB-\etC-\etD-\etE-\etF)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG,-\etA-\etB-\etC-\etD-\etE-\etF-\etG);\fill (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG,-\etA-\etB-\etC-\etD-\etE-\etF-\etG) circle(\rayon);} \newcommand{\feuilleI}[9]{\draw [#9] (#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG,-\etA-\etB-\etC-\etD-\etE-\etF-\etG)--(#1\etA/\penteA#2\etB/\penteB#3\etC/\penteC#4\etD/\penteD#5\etE/\penteE#6\etF/\penteF#7\etG/\penteG#8\etH/\penteH,-\etA-\etB-\etC-\etD-\etE-\etF-\etG-\etH);} \noeudB-{} \noeudB+{} \noeudC--{} \noeudC-+{} \noeudC+-{} \noeudC++{} \noeudD---{} \feuilleD--+{} \noeudD-+-{} \feuilleD-++{} \noeudD+--{} \feuilleD+-+{} \feuilleD++-{} \noeudD+++{} \noeudE----{} \feuilleE---+{} \feuilleE-+--{} \noeudE-+-+{} \feuilleE+---{}\noeudF+---+{dashed,line width=0.8pt}\noeudG+---+-{dashed,line width=0.8pt}\noeudH+---+--{dashed,line width=0.8pt}\noeudI+---+--+{dashed,line width=0.8pt} \noeudE+--+{} \noeudE+++-{} \noeudE++++{} \noeudF-----{} \feuilleF----+{} \noeudF-+-+-{} \feuilleF-+-++{} \feuilleF+--+-{} \noeudF+--++{} \feuilleF+++--{} \feuilleF+++-+{} \noeudF++++-{} \noeudF+++++{} \noeudG------{} \feuilleG-----+{} \feuilleG-+-+--{} \noeudG-+-+-+{} \feuilleG+--++-{} \noeudG+--+++{} \noeudG++++--{} \noeudG++++-+{} \noeudG+++++-{} \noeudG++++++{} \noeudH-------{} \feuilleH------+{} \noeudH-+-+-+-{} \feuilleH-+-+-++{} 
\noeudH+--+++-{} \feuilleH+--++++{} \feuilleH++++---{} \feuilleH++++--+{} \feuilleH++++-+-{} \feuilleH++++-++{} \noeudH+++++--{} \noeudH+++++-+{} \noeudH++++++-{} \noeudH+++++++{} \noeudI--------{dotted, line width = 0.6pt} \noeudI-+-+-+--{dotted, line width = 0.6pt} \noeudI+--+++--{dotted, line width = 0.6pt} \noeudI+++++---{dotted, line width = 0.6pt} \noeudI+++++-++{dotted, line width = 0.6pt} \noeudI++++++--{dotted, line width = 0.6pt} \noeudI++++++++{dotted, line width = 0.6pt} \draw (-10,-0.6) node{{\small An internal node}}; \draw [->,>=latex,blue,line width=1pt] (-7.7,-0.7) ..controls +(1,0) and +(-1,1).. (-4.2,-2.7); \draw (8,-0.4) node{{\small A context}}; \draw [->,>=latex,blue,line width=1pt] (6.5,-0.5) ..controls +(-1,0) and +(0.5,1.5).. (4.2,-2.7); \draw (-0.8,-1.2) node{{\small $1000$}}; \draw [->,>=latex,blue,line width=1pt] (-0.8,-1.6) ..controls +(0.2,-1) and +(0,1).. (0,-3.5); \draw (-2.5,-0.1) node{{\small $0$}};\draw (2.5,-0.1) node{{\small $1$}}; \end{tikzpicture} \end{center} \caption{\label{fig:exampleArbre}A context tree on the alphabet $\CA=\set{0,1}$. The dotted lines are possibly the beginning of infinite branches. Any word that writes $1000\cdots$, as the one drawn dashed, admits $1000$ as a $\pref$.} \end{figure} With these definitions, it is now possible to define a VLMC. \begin{definition}[VLMC] \label{defVLMC} Let $\CT$ be a context tree. For every context $c$ of $\CT$, let $q_c$ be a probability measure on $\CA$. The \emph{variable length Markov chain (VLMC)} defined by $\CT$ and by the $\left( q_c\right) _c$ is the $\CR$-valued discrete-time Markov chain $\left( U_n\right) _{n\in\Nset}$ defined by the following transition probabilities: $\forall n\in\Nset$, $\forall\alpha\in\CA$, \begin{equation} \label{probaTransition} \PP\left( U_{n+1}=\alpha U_n | U_n\right) =q_{\pref\left(U_n\right)}\left(\alpha\right). 
\end{equation} \end{definition} To get a realisation of a VLMC as a process on $\CR$, take a (random) right-infinite word \[ U_0=X_{0}X_{-1}X_{-2}X_{-3}\cdots \] At each step of time $n\geq 0$, one gets $U_{n+1}$ by adding a random letter $X_{n+1}$ on the left of $U_n$: \[ \setlength{\jot}{50pt} \begin{array}{rl} U_{n+1}&=X_{n+1}U_n\\[5pt] &=X_{n+1}X_{n}\cdots X_{1}X_{0}X_{-1}X_{-2}\cdots \end{array} \] under the conditional distribution~(\ref{probaTransition}). \begin{remark} \emph{Probabilizing} a context tree consists, as in Definition~\ref{defVLMC}, in endowing it with a family of probability measures on the alphabet, indexed by the set of contexts. This vocabulary is used below. \end{remark} \begin{remark} Assume that the context tree is finite and denote its height by $h$; in that case, the VLMC is just a Markov chain of order $h$ on $\CA$. On the contrary, when the context tree is infinite, and this is mainly our case of interest, the VLMC is generally \emph{not} a Markov process on~$\CA$. \end{remark} \begin{example} Take $\CA=\set{\ttN,\ttE,\ttW,\ttS}$ as an (ordered) alphabet, so that the daughters of an internal node are represented as at the left side of Figure~\ref{fig:news}. Making the transition probabilities $\PP\left( U_{n+1}=\alpha U_n | U_n\right)$ depend only on the length of the largest prefix of the form $\ttN^k$ ($k\geq 0$) of $U_n$ amounts to taking a comb as a context tree, as drawn at the right side of Figure~\ref{fig:news}. Its finite contexts are the $\ttN^k\alpha$ where $k\geq 0$ and $\alpha\in\CA\setminus\set\ttN$. 
\begin{figure}[h] \begin{center} \begin{minipage}{150pt} \begin{tikzpicture}[scale=1.6] \newcommand{\haut}{0.8} \newcommand{\ray}{0.05} \draw (0,0)--(-1/3,-\haut/3);\draw (-2/3,-2*\haut/3)--(-1,-\haut);\fill (-1,-\haut) circle(\ray);\draw (-0.5,-\haut/2) node{$\ttN$}; \draw (0,0)--(-0.3/3,-\haut/3);\draw (-2*0.3/3,-2*\haut/3)--(-0.3,-\haut);\fill (-0.3,-\haut) circle(\ray);\draw (-0.5*0.3,-\haut/2) node{$\ttE$}; \draw (0,0)--(0.3/3,-\haut/3);\draw (2*0.3/3,-2*\haut/3)--(0.3,-\haut);\fill (0.3,-\haut) circle(\ray);\draw (0.5*0.3,-\haut/2) node{$\ttW$}; \draw (0,0)--(1/3,-\haut/3);\draw (2/3,-2*\haut/3)--(1,-\haut);\fill (1,-\haut) circle(\ray);\draw (0.5,-\haut/2) node{$\ttS$}; \end{tikzpicture} \end{minipage} \begin{minipage}{100pt} \includegraphics{dessinPeigneGauche4-crop-N.pdf} \end{minipage} \end{center} \caption{\label{fig:news} On the left: how one can represent trees on $\CA=\set{\ttN,\ttE,\ttW,\ttS}$. On the right, the so-called \emph{left comb} on $\CA=\set{\ttN,\ttE,\ttW,\ttS}$.} \end{figure} \end{example} \begin{example} \label{ex:comb} Take again $\CA=\set{\ttN,\ttE,\ttW,\ttS}$ as an alphabet. Making the transition probabilities $\PP\left( U_{n+1}=\alpha U_n | U_n\right)$ depend only on the length of the largest prefix of the form $\alpha^k$ ($k\geq 1$) of $U_n$ where $\alpha$ is \emph{any} letter amounts to taking a \emph{quadruple comb} as a context tree, as drawn at the right side of Figure~\ref{fig:bcomb}. In the same vein, if one takes $\CA=\set{u,d}$, the \emph{double comb} is the context tree drawn at the left side of Figure~\ref{fig:bcomb}. In the corresponding VLMC, the transitions depend only on the length of the last current run $u^k$ or $d^k$, $k\geq 1$. The double comb and the quadruple comb are used below to define persistent random walks. 
\begin{figure}[h] \begin{center} \begin{minipage}{100pt} \begin{tikzpicture}[scale=0.45] \newcommand{\segGeom}[5]{\draw [#5] (0,0) plot[domain=0:#3]({#1+\x*cos(#4)},{#2+\x*sin(#4)});} \newcommand{\pointCart}[4]{\fill [#4] (0,0) plot [domain=0:360]({#1+#3*cos(\x)},{#2+#3*sin(\x)});} \newcommand{\pointGeom}[6]{\fill [#6] (0,0) plot [domain=0:360]({#1+#3*cos(#4)+#5*cos(\x)},{#2+#3*sin(#4)+#5*sin(\x)});} \newcommand{\gdePenteG}{0.8} \newcommand{\gdePenteD}{-\gdePenteG} \newcommand{\noeud}[2]{-#1*#2,-#1} \newcommand{\absNoeud}[2]{-#1*#2} \newcommand{\titePenteGD}{0.3} \newcommand{\titePenteDG}{-\titePenteGD} \newcommand{\rayon}{0.13} \newcommand{\feuille}[3]{ \draw (\noeud{#2}{#1})--++(-#3,-1); \fill (\absNoeud{#2}{#1}-#3,-#2-1) circle(\rayon); } \draw (0,0)--(\noeud{1}{\gdePenteG}); \draw (0,0)--(\noeud{1}{\gdePenteD}); \draw (\noeud{1}{\gdePenteG})--(\noeud{2}{\gdePenteG}); \feuille{\gdePenteG}{1}{\titePenteDG} \draw (\noeud{1}{\gdePenteD})--(\noeud{2}{\gdePenteD}); \feuille{\gdePenteD}{1}{\titePenteGD} \draw (\noeud{2}{\gdePenteG})--(\noeud{3}{\gdePenteG}); \feuille{\gdePenteG}{2}{\titePenteDG} \draw (\noeud{2}{\gdePenteD})--(\noeud{3}{\gdePenteD}); \feuille{\gdePenteD}{2}{\titePenteGD} \draw (\noeud{3}{\gdePenteG})--(\noeud{3.4}{\gdePenteG}); \draw [dashed] (\noeud{3.4}{\gdePenteG})--(\noeud{4.2}{\gdePenteG}); \feuille{\gdePenteG}{3}{\titePenteDG} \draw (\noeud{3}{\gdePenteD})--(\noeud{3.4}{\gdePenteD}); \draw [dashed] (\noeud{3.4}{\gdePenteD})--(\noeud{4.2}{\gdePenteD}); \feuille{\gdePenteD}{3}{\titePenteGD} \end{tikzpicture} \end{minipage} \hskip 10pt \begin{minipage}{200pt} \begin{tikzpicture}[scale=0.5] \newcommand{\gdePenteN}{1.8} \newcommand{\gdePenteE}{0.6} \newcommand{\gdePenteW}{-\gdePenteE} \newcommand{\gdePenteS}{-\gdePenteN} \newcommand{\noeud}[2]{-#1*#2,-#1} \newcommand{\absNoeud}[2]{-#1*#2} \newcommand{\titePenteNE}{1} \newcommand{\titePenteNW}{0.6} \newcommand{\titePenteNS}{0.2} \newcommand{\titePenteEN}{0.9} \newcommand{\titePenteEW}{0.1} 
\newcommand{\titePenteES}{-0.3} \newcommand{\titePenteWN}{-\titePenteES} \newcommand{\titePenteWE}{-\titePenteEW} \newcommand{\titePenteWS}{-\titePenteEN} \newcommand{\titePenteSN}{-\titePenteNS} \newcommand{\titePenteSE}{-\titePenteNW} \newcommand{\titePenteSW}{-\titePenteNE} \newcommand{\rayon}{0.1} \newcommand{\feuille}[3]{ \draw (\noeud{#2}{#1})--++(-#3,-1); \fill (\absNoeud{#2}{#1}-#3,-#2-1) circle(\rayon); } \draw (0,0)--(\noeud{3.4}{\gdePenteN}); \draw (0,0)--(\noeud{3.4}{\gdePenteE}); \draw (0,0)--(\noeud{3.4}{\gdePenteW}); \draw (0,0)--(\noeud{3.4}{\gdePenteS}); \draw [dashed] (\noeud{3.4}{\gdePenteN})--(\noeud{4.1}{\gdePenteN}); \draw [dashed] (\noeud{3.4}{\gdePenteE})--(\noeud{4.3}{\gdePenteE}); \draw [dashed] (\noeud{3.4}{\gdePenteW})--(\noeud{4.3}{\gdePenteW}); \draw [dashed] (\noeud{3.4}{\gdePenteS})--(\noeud{4.1}{\gdePenteS}); \feuille{\gdePenteN}{1}{\titePenteNE} \feuille{\gdePenteN}{1}{\titePenteNW} \feuille{\gdePenteN}{1}{\titePenteNS} \feuille{\gdePenteE}{1}{\titePenteEN} \feuille{\gdePenteE}{1}{\titePenteEW} \feuille{\gdePenteE}{1}{\titePenteES} \feuille{\gdePenteW}{1}{\titePenteWN} \feuille{\gdePenteW}{1}{\titePenteWE} \feuille{\gdePenteW}{1}{\titePenteWS} \feuille{\gdePenteS}{1}{\titePenteSN} \feuille{\gdePenteS}{1}{\titePenteSE} \feuille{\gdePenteS}{1}{\titePenteSW} \feuille{\gdePenteN}{2}{\titePenteNE} \feuille{\gdePenteN}{2}{\titePenteNW} \feuille{\gdePenteN}{2}{\titePenteNS} \feuille{\gdePenteE}{2}{\titePenteEN} \feuille{\gdePenteE}{2}{\titePenteEW} \feuille{\gdePenteE}{2}{\titePenteES} \feuille{\gdePenteW}{2}{\titePenteWN} \feuille{\gdePenteW}{2}{\titePenteWE} \feuille{\gdePenteW}{2}{\titePenteWS} \feuille{\gdePenteS}{2}{\titePenteSN} \feuille{\gdePenteS}{2}{\titePenteSE} \feuille{\gdePenteS}{2}{\titePenteSW} \feuille{\gdePenteN}{3}{\titePenteNE} \feuille{\gdePenteN}{3}{\titePenteNW} \feuille{\gdePenteN}{3}{\titePenteNS} \feuille{\gdePenteE}{3}{\titePenteEN} \feuille{\gdePenteE}{3}{\titePenteEW} \feuille{\gdePenteE}{3}{\titePenteES} 
\feuille{\gdePenteW}{3}{\titePenteWN} \feuille{\gdePenteW}{3}{\titePenteWE} \feuille{\gdePenteW}{3}{\titePenteWS} \feuille{\gdePenteS}{3}{\titePenteSN} \feuille{\gdePenteS}{3}{\titePenteSE} \feuille{\gdePenteS}{3}{\titePenteSW} \end{tikzpicture} \end{minipage} \end{center} \caption{\label{fig:bcomb} The double comb and the quadruple comb.} \end{figure} \end{example} \begin{example} \label{ex:4VLMC} Take $\CA=\set{0,1}$ (naturally ordered for the drawings). The left comb of right combs, drawn at the left side of Figure~\ref{fig:combsComb}, is the context tree of a VLMC that makes its transition probabilities depend on the largest prefix of~$U_n$ of the form~$0^p1^q$. If one has to take into consideration the largest prefix of the form $0^p1^q$ or $1^p0^q$, one has to use the double comb of opposite combs, as drawn at the right side of Figure~\ref{fig:combsComb}. \begin{figure}[h] \begin{center} \hbox{ \includegraphics[width=120pt]{dessinPgPd-crop-N.pdf} \hskip 10pt \includegraphics[width=200pt]{dessinDoublePgPd-crop-N.pdf} } \end{center} \caption{\label{fig:combsComb} Context trees on $\CA=\set{0,1}$: the left comb of right combs (on the left) and a double comb of opposite combs (on the right).} \end{figure} \end{example} \begin{definition}[Non-nullness] \label{nn} A VLMC is called \emph{non-null} when no transition probability vanishes, \emph{i.e.} when $q_c(\alpha )>0$ for every context $c$ and for every $\alpha\in\CA$. \end{definition} Non-nullness appears below as an irreducibility-like assumption made on the driving VLMC of persistent random walks, and for existence and uniqueness of an invariant probability measure for a general VLMC as well. \section{Definition and behaviour of Persistent Random Walks} \label{sec:PRW} In this section, the so-called \emph{Persistent Random Walks (PRW)} are defined. A PRW is a random walk driven by some VLMC. In dimensions $1$ and $2$, results on transience and recurrence of PRW are given. 
These results are detailed and proven in \cite{CDLO,cenac/chauvin/herrmann/vallois/13} in dimension one and in \cite{cenac:hal-01658494} in dimension two. \subsection{Persistent Random Walks in dimension one} \label{subsec:PRWdim1} In this section, we deal with $1$-dimensional Persistent Random Walks (PRW). Notice that, contrary to the classical random walk, a PRW is generally not Markovian. Let $\CA:=\{d,u\}=\{-1,1\}$ ($d$ for down and $u$ for up) and consider the \emph{double comb} on this alphabet as a context tree, probabilize it and denote by $(U_n)_n$ a realisation of the associated VLMC. The $n\textsuperscript{th}$ increment $X_n$ of the PRW is given as the first letter of~$U_n$: define the persistent random walk $S=(S_n)_{n\geq 0}$ by $S_0=0$ and, for $n\geq 1$, \begin{equation} \label{rw-S} S_n:=\sum_{\ell=1}^n X_{\ell}, \end{equation} so that for any $n\geq 1$, $m\geq 0$, \begin{eqnarray*} \PP\left(S_{m+1}=S_m+1|U_m=d^n u\ldots \right)&=&q_{d^nu}(u) \\ \PP\left(S_{m+1}=S_m-1|U_m=u^n d\ldots \right)&=&q_{u^nd}(d). \end{eqnarray*} Furthermore, for the sake of simplicity and without loss of generality, we condition the walk to start a.s. from $\{X_{-1}=u, X_0=d\}$ -- this amounts to changing the origin of time. In this model, a walker on a line keeps the same direction with a probability depending on the discrete time already spent moving in the current direction. See Figure~\ref{marche}. This model can be seen as a generalisation of the Directionally Reinforced Random Walks (DRRWs) introduced in \cite{Mauldin1996}. Different probabilized context trees lead to different asymptotic behaviours for the resulting PRWs. Moreover, the characterization of the recurrent \emph{versus} transient behaviour is difficult in general. We state here exhaustive recurrence criteria for PRWs defined from a double comb.
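For intuition, the dynamics above can be simulated directly. The sketch below is ours, not from the original paper; it makes the illustrative assumption that the probability of keeping the current direction is constant in the run length, $q_{d^nu}(d)=p_d$ and $q_{u^nd}(u)=p_u$ for every $n$, so that the persistence times are geometric.

```python
import random

def simulate_prw(n_steps, p_d=0.5, p_u=0.5, seed=0):
    """Simulate a 1-dimensional persistent random walk driven by a double comb.

    Illustrative choice (ours): the probability of keeping the current
    direction is constant, q_{d^n u}(d) = p_d and q_{u^n d}(u) = p_u,
    so persistence times are geometric.  The walk is conditioned to
    start from X_0 = d, as in the text.
    """
    rng = random.Random(seed)
    x = -1                          # current increment: d = -1, u = +1
    s, path = 0, [0]                # S_0 = 0
    for _ in range(n_steps):
        keep = p_d if x == -1 else p_u
        if rng.random() >= keep:    # switch direction
            x = -x
        s += x                      # next increment X_{m+1}
        path.append(s)
    return path
```

With constant keep-probability $p$, the mean run length is $1/(1-p)$, which is easy to check empirically on a long trajectory.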
In order to avoid trivial cases, we assume that $S$ cannot be frozen in one of the two directions with a positive probability. Therefore, we make the following assumption. \begin{assumption}[finiteness of the length of runs] \label{ass:a1} For any $\alpha,\beta\in\{u,d\}$, $\alpha\neq \beta$, \begin{equation} \label{a1} \lim _{n\to +\infty}\left(\prod_{k=1}^{n}q_{\alpha^k\beta}(\alpha)\right)=0. \end{equation} \end{assumption} Let $\tau^u_{n}$ and $\tau^d_{n}$ be respectively the length of the $n\textsuperscript{th}$ rise and of the $n\textsuperscript{th}$ descent. \begin{figure}[h] \definecolor{qqqqff}{rgb}{0,0,1} \definecolor{cqcqcq}{rgb}{0.75,0.75,0.75} \begin{center} \begin{tikzpicture}[scale=0.6,line cap=round,line join=round,>=latex,x=1.0cm,y=1.0cm] \begin{scriptsize} \draw [color=cqcqcq,dash pattern=on 3pt off 3pt, xstep=2.0cm,ystep=2.0cm] (-4.87,-3.74) grid (13.61,6.62); \draw[->,color=black] (-4.87,0) -- (13.61,0); \foreach \x in {-4,-2,2,4,6,8,10,12} \draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt); \draw[->,color=black] (0,-3.74) -- (0,6.62); \foreach \y in {-2,2,4,6} \draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt); \clip(-4.87,-3.74) rectangle (13.61,6.62); \draw [dotted] (-4,0)-- (-3,-1); \draw [dotted] (-3,-1)-- (-2,-2); \draw [dotted] (-2,-2)-- (-1,-1); \draw [dotted] (-1,-1)-- (0,0); \draw [dotted] (-4,0)-- (-4.26,0.3); \draw (-3.53,-0.07) node[anchor=north west] {d}; \draw (-2.64,-0.96) node[anchor=north west] {d}; \draw (-1.79,-1.05) node[anchor=north west] {u}; \draw (-0.96,-0.22) node[anchor=north west] {u}; \draw (0,0)-- (1,1); \draw (1,1)-- (2,2); \draw (2,2)-- (3,3); \draw (3,3)-- (4,4); \draw (4,4)-- (5,3); \draw (0.15,0.95) node[anchor=north west] {u}; \draw (1.12,1.94) node[anchor=north west] {...}; \draw (2.11,2.93) node[anchor=north west] {...}; \draw (2.99,3.84) node[anchor=north west] {u}; \draw (4.64,3.84) node[anchor=north west] {d}; \draw (3.68,0) node[anchor=north west] {$B_0$}; \draw [->,line width=1pt] (3,4) -- 
(9,4); \draw [->,line width=1pt] (4,3) -- (4,6); \draw [dash pattern=on 6pt off 6pt] (4,3)-- (4,0); \draw (5,3)-- (6,2); \draw (6,2)-- (7,3); \draw [dash pattern=on 6pt off 6pt] (6,2)-- (6,0); \draw (5.62,0) node[anchor=north west] {$B_1$}; \draw (5.49,2.9) node[anchor=north west] {d}; \draw (6.1,2.85) node[anchor=north west] {u}; \draw (7,3)-- (8,2); \draw (7.49,2.92) node[anchor=north west] {d}; \draw (6.61,0) node[anchor=north west] {$B_2$}; \draw [dash pattern=on 6pt off 6pt] (7,3)-- (7,0); \draw (8,2)-- (9,1); \draw (9,1)-- (10,2); \draw (10,2)-- (11,3); \draw (11,3)-- (12,4); \draw (12,4)-- (12.4,3.6); \draw (8.44,1.95) node[anchor=north west] {d}; \draw (9.15,1.93) node[anchor=north west] {u}; \draw (10.11,2.9) node[anchor=north west] {u}; \draw (11.1,3.85) node[anchor=north west] {u}; \draw (8.63,0) node[anchor=north west] {$B_3$}; \draw (11.63,0) node[anchor=north west] {$B_4$}; \draw [dash pattern=on 6pt off 6pt] (9,1)-- (9,0); \draw [dash pattern=on 6pt off 6pt] (12,4)-- (12,0); \draw [->,line width=1pt] (4,-1) -- (6,-1); \draw [->,line width=1pt] (6,-1) -- (4,-1); \draw [->,line width=1pt] (6,-1) -- (7,-1); \draw [->,line width=1pt] (7,-1) -- (6,-1); \draw [->,line width=1pt] (7,-1) -- (9,-1); \draw [->,line width=1pt] (9,-1) -- (7,-1); \draw [->,line width=1pt] (9,-1) -- (12,-1); \draw [->,line width=1pt] (12,-1) -- (9,-1); \draw (4.51,-0.91) node[anchor=north west] {$\tau_1^d$}; \draw (6.03,-1.02) node[anchor=north west] {$\tau_1^u$}; \draw (7.4,-0.90) node[anchor=north west] {$\tau_2^d$}; \draw (9.92,-0.96) node[anchor=north west] {$\tau_2^u$}; \draw (6.27,3.92) node[anchor=north west] {$Y_1$}; \draw (10.19,3.90) node[anchor=north west] {$Y_2$}; \draw (6.13,-1.91) node[anchor=north west] {$\displaystyle M_n=\sum_{\ell=1}^nY_{\ell}$}; \draw [->,dash pattern=on 1pt off 3pt on 6pt off 4pt] (7,4) -- (7,3); \draw [->,dash pattern=on 1pt off 3pt on 6pt off 4pt] (11,3) -- (11,4); \draw (4.24,5.61) node[anchor=north west] {$S_n$}; \fill [color=qqqqff] 
(-2,-2) circle (2.5pt); \fill [color=qqqqff] (-3,-1) circle (1.5pt); \fill [color=qqqqff] (-4,0) circle (1.5pt); \fill [color=qqqqff] (-1,-1) circle (1.5pt); \fill [color=qqqqff] (0,0) circle (1.5pt); \fill [color=qqqqff] (-4.26,0.3) circle (0.5pt); \fill [color=qqqqff] (1,1) circle (1.5pt); \fill [color=qqqqff] (2,2) circle (1.5pt); \fill [color=qqqqff] (3,3) circle (1.5pt); \fill [color=qqqqff] (5,3) circle (1.5pt); \fill [color=qqqqff] (4,4) circle (2.5pt); \draw [color=qqqqff] (4,0)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \fill [color=qqqqff] (6,2) circle (2.5pt); \fill [color=qqqqff] (7,3) circle (2.5pt); \draw [color=qqqqff] (6,0)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \fill [color=qqqqff] (8,2) circle (1.5pt); \draw [color=qqqqff] (7,0)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \fill [color=qqqqff] (9,1) circle (2.5pt); \fill [color=qqqqff] (12,4) circle (2.5pt); \fill [color=qqqqff] (10,2) circle (1.5pt); \fill [color=qqqqff] (11,3) circle (1.5pt); \draw [color=qqqqff] (9,0)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \draw [color=qqqqff] (12,0)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \draw [color=qqqqff] (7,4)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \draw [color=qqqqff] (11,4)-- ++(-1.5pt,0 pt) -- ++(3.0pt,0 pt) ++(-1.5pt,-1.5pt) -- ++(0 pt,3.0pt); \end{scriptsize} \end{tikzpicture} \end{center} \caption{\label{marche}A one-dimensional PRW} \end{figure} Then by a renewal type property (see \cite[Prop. 2.3]{cenac/chauvin/herrmann/vallois/13}), $(\tau_n^d)_{n\geq 1}$ and $(\tau_n^u)_{n\geq 1}$ are independent sequences of {\it i.i.d.\@} random variables. 
Their distribution tails are straightforwardly given by: for any $\alpha, \beta\in\{u,d\}$, $\alpha\neq \beta$ and $n\geq 1$, \begin{equation} \label{tail-2d} \PP(\tau^{\alpha}_{1} \geq n)=\prod_{k=1}^{n-1}q_{\alpha^k\beta}(\alpha). \end{equation} Note that Assumption~\ref{ass:a1} amounts to supposing that the \emph{persistence times} $\tau_n^d$ and $\tau_n^u$ are almost surely finite. The \emph{jump times} (or breaking times) are: $B_0=0$ and, for $n\geq 1$, \begin{equation} \label{def:Bn} B_{2n} := \sum_{k=1}^n \left(\tau_k^d + \tau_k^u \right) \hbox{ and } B_{2n+1} := B_{2n} + \tau_{n+1}^d . \end{equation} In order to deal with a more tractable random walk built with the possibly unbounded but \emph{i.i.d.\@} increments $Y_n:=\tau_n^u-\tau_n^d$, we introduce the underlying \emph{skeleton} random walk $(M_n)_{n\geq 1}$ which is the original walk observed at the random times of up-to-down turns: \begin{equation} \label{def:skeleton} M_n:=\sum_{k=1}^{n}Y_k = S_{B_{2n}}. \end{equation} Two main quantities play a key role in the asymptotic behaviour, namely the expectations of the lengths of runs: with Formula \eqref{tail-2d}, let \begin{equation} \label{thetaDim1} \Theta_d:=\E[\tau_1^d] = \sum_{n\geq 1} \prod_{k=1}^{n-1}q_{d^ku}(d) \ \mbox{\ and\ }\ \Theta_u:=\E[\tau_1^u] = \sum_{n \geq 1}\prod_{k=1}^{n-1}q_{u^kd}(u). \end{equation} Actually, $\Theta_d$ and $\Theta_u$ already appeared in~\cite[Prop. B1]{cenac/chauvin/herrmann/vallois/13} where it is shown that the driving VLMC of a $1$-dimensional PRW admits a unique invariant probability measure if, and only if $\Theta_d<\infty$ \emph{and} $\Theta_u<\infty$. Note that the expectation of $Y_1$ is well defined in $[-\infty,+\infty]$ whenever at least one of the persistence times $\tau_1^u$ or $\tau_1^d$ is integrable. 
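For explicit transition families, the tail formula \eqref{tail-2d} and the series \eqref{thetaDim1} are easy to evaluate numerically. The sketch below uses two illustrative choices of $q_{d^ku}(d)$ (ours, not from the original text): a constant one, giving geometric persistence times and a finite $\Theta_d$, and the harmonic choice $k/(k+1)$, for which the product telescopes to $\PP(\tau_1^d\geq n)=1/n$ and $\Theta_d=\infty$.

```python
from math import prod, isclose

def tail(n, q):
    """P(tau_1 >= n) = prod_{k=1}^{n-1} q(k): the tail formula for run lengths."""
    return prod(q(k) for k in range(1, n))

def theta(q, n_max):
    """Truncation of Theta = sum_{n >= 1} P(tau_1 >= n)."""
    return sum(tail(n, q) for n in range(1, n_max + 1))

# Two illustrative persistence families (our choices):
geometric = lambda k: 0.6          # constant: P(tau >= n) = 0.6^(n-1), Theta = 1/0.4
harmonic = lambda k: k / (k + 1)   # telescoping: P(tau >= n) = 1/n, Theta = +oo
```

In the geometric case the truncated sum converges quickly to $1/(1-p)$; in the harmonic case the partial sums grow like $\log n$, illustrating a non-integrable persistence time.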
Thus, as soon as $\Theta_{d}<\infty$ \emph{or} $\Theta_{u}<\infty$, let \begin{equation} \label{drift-def1} {\mathbf d}_{M}:=\E[Y_1] = \underbrace{\Theta_{u}-\Theta_{d}}_{\in[-\infty,+\infty]} \end{equation} and \begin{equation} \label{drift-def2} {\mathbf d}_{S}:=\frac{\E[\tau^{u}_{1}]-\E[\tau^{d}_{1}]}{\E[\tau^{u}_{1}]+\E[\tau^{d}_{1}]} = \frac{\Theta_{u}-\Theta_{d}}{\Theta_{u}+\Theta_{d}}\in [-1,1]. \end{equation} An elementary computation shows that $\g E\left(M_n\right) = n{\mathbf d}_M$ and $\g E\left(S_n\right) \sim n{\mathbf d}_S$ when $n$ tends to infinity. Thus, ${\mathbf d}_M$ and ${\mathbf d}_S$ appear as asymptotic drifts when the walks $(M_n)_n$ and $(S_n)_n$ respectively turn out to be transient (see Table~\ref{tableau-rec-trans-2d}). The behaviour of the walk also depends on the quantities $J_{\alpha\mid \beta}$, defined for $\alpha$ and $\beta\in\rond A, \alpha\not=\beta$ by: \[ J_{\alpha\mid \beta}:=\sum_{n=1}^{\infty} \frac{n\PP(\tau_1^{\alpha}=n)}{\sum_{k=1}^{n} \PP(\tau_1^{\beta}\geq k)}. \] A complete and usable characterization of the recurrence and the transience of the PRW in terms of the probabilities to persist in the same direction or to switch is given in Proposition \ref{prop:tableau}. Its proof relies on a criterion of Erickson (see \cite{Erickson}), applied to the skeleton walk $\left( M_n\right)_n$ which is simpler to deal with because its increments are independent. \begin{proposition} \label{prop:tableau} Under the non-nullness assumption and Assumption~\ref{ass:a1}, the random walk $\left( S_n\right)_n$ is recurrent or transient as described in Table \ref{tableau-rec-trans-2d}.
\end{proposition} \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{$\Theta_u < \infty$} & \multicolumn{2}{c|}{$\Theta_u = \infty$} \\ \hline \multirow{4}*{$\Theta_d < \infty$} & & drifting $+\infty$ & \multicolumn{2}{c|}{\multirow{4}*{drifting $+\infty$}} \\ & recurrent & $\mathbf{d}_S > 0$ & \multicolumn{2}{c|}{} \\ \cline{3-3} & $\mathbf{d}_S=0$ & drifting $-\infty$ & \multicolumn{2}{c|}{}\\ & & $\mathbf{d}_S < 0$ & \multicolumn{2}{c|}{}\\ \hline \multirow{4}*{$\Theta_d = \infty$} & \multicolumn{2}{c|}{\multirow{4}*{drifting $-\infty$}} & & drifting $+\infty$ \\ & \multicolumn{2}{c|}{} & recurrent & $\infty = J_{u\mid d} > J_{d\mid u}$ \\ \cline{5-5} & \multicolumn{2}{c|}{} & $J_{u\mid d}=J_{d\mid u}=\infty$ & drifting $-\infty$ \\ & \multicolumn{2}{c|}{} & & $\infty = J_{d\mid u} > J_{ u\mid d}$\\ \hline \end{tabular} \caption{\label{tableau-rec-trans-2d}Recurrence versus Transience (drifting) for $(S_n)_n$ in dimension 1.} \end{center} \end{table} The most fruitful situation emerges when both persistence times $\tau_1^u$ and $\tau_1^d$ have infinite means. In that case, the recurrence properties of $\left( S_n\right)_n$ are related to the behaviour of the skeleton random walk $\left( M_n\right)_n$ defined in \eqref{def:skeleton}, the drift of which, $\mathbf d_{M}$, is not defined. Thus the behaviour of $\left( S_n\right)_n$ depends on the comparison between the distribution tails of $\tau_1^u$ and $\tau_1^d$ defined in \eqref{tail-2d}, expressed by the quantities~$J_{\alpha\mid \beta}$. Notice that the case when both $J_{u\mid d}$ and $J_{d\mid u}$ are finite does not appear in the table since it would imply that $\Theta_u <\infty$ and $\Theta_d <\infty$ (see \cite{Erickson}). In all three other cases, the drift $\mathbf d_{S}$ is well defined and the PRW is recurrent if and only if $\mathbf d_{S}=0$. In that case, $\displaystyle \lim_{n \to \infty} \frac{S_n}{n}=\mathbf d_{S}=0$.
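The case distinction of Table~\ref{tableau-rec-trans-2d} can be encoded as a small decision function; the sketch below (the function name and string labels are ours) takes $\Theta_d$, $\Theta_u$ and, when both are infinite, the Erickson quantities $J_{u\mid d}$ and $J_{d\mid u}$.

```python
from math import inf

def regime(theta_d, theta_u, J_ud=None, J_du=None):
    """Classify a 1-D PRW following the recurrence/transience table.

    J_ud stands for J_{u|d} and J_du for J_{d|u}; they are only needed
    when both expected persistence times are infinite.
    """
    if theta_d < inf and theta_u < inf:
        d_s = (theta_u - theta_d) / (theta_u + theta_d)
        if d_s == 0:
            return "recurrent"
        return "drifting +oo" if d_s > 0 else "drifting -oo"
    if theta_d < inf:            # theta_u = oo: up-runs dominate
        return "drifting +oo"
    if theta_u < inf:            # theta_d = oo: down-runs dominate
        return "drifting -oo"
    # Both infinite: compare the J quantities (they cannot both be
    # finite in this regime).
    if J_ud == inf and J_du == inf:
        return "recurrent"
    return "drifting +oo" if J_ud > J_du else "drifting -oo"
```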
Notice that modifying a single transition probability $q_c$ transforms a recurrent PRW into a transient one, since $\mathbf d_{S}$ becomes non-zero. \subsection{Persistent Random Walks in dimension two} \label{subsec:PRWdim2} Take the alphabet $\mathcal{A}:=\{\ttN,\ttE,\ttW,\ttS\}$. Here, $({\mathtt e},{\mathtt n})$ stands for the canonical basis of~$\mathbb Z^2$, ${\mathtt w}=-{\mathtt e}$ and ${\mathtt s}=-{\mathtt n}$. Hence, the letters $\ttE$, $\ttN$, $\ttW$ and $\ttS$ stand for moves to the east, north, west and south respectively. Having in mind a random walk with increments in~$\rond A$, any word of the form $\alpha \beta$, $\alpha,\beta\in\rond A, \alpha\not=\beta$, is called a \emph{bend}. For the sake of simplicity, we condition the walk to start a.s. with a $\ttN\ttE$ bend: $\set{X_{-1}=\ttN, X_{0}=\ttE}$. \begin{figure}[h] \definecolor{xdxdff}{rgb}{0.49,0.49,1} \definecolor{ududff}{rgb}{0,0,1} \begin{center} \begin{tikzpicture}[scale=0.8,line cap=round,line join=round,>=latex,x=1.0cm,y=1cm] \clip(-4.3,-4.7) rectangle (14.56,6.34); \begin{scriptsize} \draw [->,line width=0.4pt,color=gray] (1.02,-2.26) -- (1,-1); \draw [line width=0.4pt,color=gray] (1,2)-- (1.02,-1); \draw [->,line width=2.5pt] (1,1) -- (1,2); \draw [->,line width=2.5pt] (1,2) -- (2,2); \draw (1.22,1.78) node[anchor=north west] {$J_0=\ttN\ttE$}; \draw [line width=0.4pt,color=gray] (2,2)-- (5,2); \draw [->,line width=2.5pt] (5,2) -- (6,2); \draw [->,line width=2.5pt] (6,2) -- (6,3); \draw (5.56,1.52) node[anchor=north west] {$J_1=\ttE\ttN$}; \draw [line width=0.4pt,color=gray] (6,4)-- (6,3); \draw [->,line width=2.5pt] (6,4) -- (6,5); \draw [->,line width=2.5pt] (6,5) -- (5,5); \draw [line width=0.4pt,color=gray] (5,5)-- (-2,5); \draw [->,line width=2.5pt] (-2,5) -- (-3,5); \draw [->,line width=2.5pt] (-3,5) -- (-3,4); \draw [line width=0.4pt,color=gray] (-3,4)-- (-3,0); \draw [->,line width=2.5pt] (-3,0) -- (-3,-1); \draw [->,line width=2.5pt] (-3,-1) -- (-2,-1); \draw [line width=0.4pt,color=gray]
(-2,-1)-- (3,-1); \draw [->,line width=2.5pt] (3,-1) -- (4,-1); \draw [->,line width=2.5pt] (4,-1) -- (4,0); \draw [line width=0.4pt,color=gray] (4,0)-- (4,3); \draw [->,line width=2.5pt] (4,3) -- (4,4); \draw [->,line width=2.5pt] (4.2,4) -- (4.2,3); \draw [line width=0.4pt,color=gray] (4.2,3)-- (4.2,2); \draw [->,line width=2.5pt] (4.2,2) -- (4.2,1); \draw [->,line width=2.5pt] (4.2,1) -- (3.2,1); \draw [line width=0.4pt,color=gray] (3.2,1)-- (-4.2,1.04); \draw [->,line width=0.4pt,color=gray] (-3,1.03) -- (-4.2,1.04); \draw (5.44,5.94) node[anchor=north west] {$J_2=\ttN\ttW$}; \draw (-2.66,4.72) node[anchor=north west] {$J_3=\ttW\ttS$}; \draw (-2.66,-0.32) node[anchor=north west] {$J_4=\ttS\ttE$}; \draw (2.3,-0.36) node[anchor=north west] {$J_5=\ttE\ttN$}; \draw (2.4,3.98) node[anchor=north west] {$J_6=\ttN\ttS$}; \draw (3.94,0.72) node[anchor=north west] {$J_7=\ttS\ttW$}; \draw [->,dash pattern=on 5pt off 5pt] (1,2.5) -- (6,2.5); \draw (3.34,3) node[anchor=north west] {$B_1$}; \draw [->,dash pattern=on 5pt off 5pt] (6,2.5) -- (1,2.5); \draw [->,dash pattern=on 5pt off 5pt] (6.5,2) -- (6.5,5); \draw (6.68,3.92) node[anchor=north west] {$B_2-B_1$}; \draw [->,dash pattern=on 5pt off 5pt] (6.5,5) -- (6.5,2); \fill [color=ududff] (1,2) circle (2.5pt); \draw[color=ududff] (0.46,1.96) node {$M_0$}; \fill [color=ududff] (6,2) circle (2.5pt); \draw[color=ududff] (6.19,1.66) node {$M_1$}; \fill [color=ududff] (6,5) circle (2.5pt); \draw[color=ududff] (6.32,5.34) node {$M_2$}; \fill [color=ududff] (-3,5) circle (2.5pt); \draw[color=ududff] (-2.74,5.58) node {$M_3$}; \fill [color=ududff] (-3,-1) circle (2.5pt); \draw[color=ududff] (-3.12,-1.36) node {$M_4$}; \fill [color=ududff] (4,-1) circle (2.5pt); \draw[color=ududff] (4.46,-1.2) node {$M_5$}; \fill [color=ududff] (4,4) circle (2.5pt); \draw[color=ududff] (3.76,4.32) node {$M_6$}; \fill [color=ududff] (4.2,1) circle (2.5pt); \draw[color=ududff] (4.72,0.92) node {$M_7$}; \end{scriptsize} \end{tikzpicture} \end{center} 
\vskip -2cm \caption{\label{marched2}A walk in dimension two.} \end{figure} Take a non-null VLMC associated with a quadruple comb on $\rond A$ as drawn in Figure~\ref{fig:bcomb}: the contexts are $\alpha^n\beta$ for $\alpha,\beta\in\rond A, \alpha\not= \beta$, $n\geq 1$ and the attached probability distributions are denoted by $q_{\alpha^n\beta}$. The $2$-dimensional PRW $\left( S_n\right) _n$ is defined, using this VLMC, as in Formula \eqref{rw-S}. Contrary to the $1$-dimensional case, as detailed below, the probability to change direction depends not only on the time spent in the current direction but also on the previous direction. As in dimension one, we want to prevent $S$ from remaining frozen in one of the four directions with positive probability. Therefore, we make the following assumption, the dimension-two analogue of Assumption~\ref{ass:a1}. \begin{assumption}[finiteness of the length of runs] \label{ass:a2} For any $\alpha,\beta\in \{\ttN,\ttE,\ttW,\ttS\}$, $\alpha\neq \beta$, \begin{equation} \label{a2} \lim _{n\to +\infty}\left(\prod_{k=1}^{n}q_{\alpha^k\beta}(\alpha)\right)=0. \end{equation} \end{assumption} Let $(B_n)_{n\geq 0}$ be the \emph{breaking times} defined inductively by \begin{equation}\label{def:jump} B_0=0\quad\mbox{and}\quad B_{n+1}=\inf\left\{k>B_{n} : X_{k}\neq X_{k-1}\right\}. \end{equation} As in dimension $1$, Assumption~\ref{ass:a2} implies that the breaking times $B_n$ are almost surely finite. Define the so-called \emph{internal chain} $\left( J_n\right) _{n\geq 0}$ by $J_0=\ttN\ttE$ and, for all $n\geq 1$, \begin{equation}\label{drivingchain} J_{n}:=X_{B_{n-1}}X_{B_{n}}. \end{equation} Let us illustrate these random variables by a small example, in which $B_1=4$, $B_2=7$, $J_0=X_{-1}X_0$, $J_1=X_{B_0}X_{B_1}=X_0X_4$, $J_2=X_{B_1}X_{B_2}=X_4X_7$.
\begin{tikzpicture} \newcommand{\hs}{0.7} \newcommand{\vs}{-0.5} \draw (-1.2*\hs,0) node{$-1$}; \foreach \j in {0,1,2,3,4,5,6,7} \draw (\j*\hs,0) node{$\j$}; \draw (-1.2*\hs,\vs) node{$\ttN$}; \foreach \j in {0,1,2,3} \draw (\j*\hs,\vs) node{$\ttE$}; \foreach \j in {4,5,6} \draw (\j*\hs,\vs) node{$\ttN$}; \foreach \j in {7} \draw (\j*\hs,\vs) node{$\ttW$}; \foreach \j in {0,1,2,3} \draw (\j*\hs,2*\vs) node{$\ttN\ttE$}; \foreach \j in {4,5,6} \draw (\j*\hs,2*\vs) node{$\ttE\ttN$}; \foreach \j in {7} \draw (\j*\hs,2*\vs) node{$\ttN\ttW$}; \draw (-0.5*\hs,0)--++(0,-2); \draw (3.5*\hs,0)--++(0,-2); \draw (6.5*\hs,0)--++(0,-2); \draw (0.2,3.5*\vs) node{\small$B_0=0$}; \draw (0.3,4.5*\vs) node{\small$J_0=\ttN\ttE$}; \draw (4*\hs+0.2,3.5*\vs) node{\small$B_1=4$}; \draw (4*\hs+0.3,4.5*\vs) node{\small$J_1=\ttE\ttN$}; \draw (7*\hs+0.2,3.5*\vs) node{\small$B_2=7$}; \draw (7*\hs+0.3,4.5*\vs) node{\small$J_2=\ttN\ttW$}; \draw (-2.5*\hs,0) node{$n$:}; \draw (-2.5*\hs,\vs) node{$X_n$:}; \draw (-2.5*\hs,2*\vs) node{$Z_n$:}; \end{tikzpicture} The process $\left( J_n\right) _{n\geq 0}$ is an irreducible Markov chain on the set of bends $\mathcal{S}:=\set{\alpha\beta|\alpha \in \CA, \beta \in \CA, \alpha\neq \beta}$. Its Markov kernel is defined by: for every $\beta,\alpha,\gamma\in\mathcal A$ with $\beta\neq\alpha$ and $\alpha\neq\gamma$, \begin{equation} \label{markovsymb1} P(\beta\alpha;\alpha\gamma):=\sum_{n=1}^{\infty}\left(\prod_{k=1}^{n-1}q_{\alpha^k \beta}(\alpha)\right)q_{\alpha^n \beta}(\gamma), \end{equation} the numbers $P(\alpha\beta,\gamma\delta)$ being $0$ for every couple of bends not of the previous form. Remark that the non-nullness assumption (see Definition~\ref{nn}) implies the irreducibility of $\left( J_n\right)_n$ and its aperiodicity. The state space $\CS$ is finite so that $\left( J_n\right)_n$ is positive recurrent: it admits a unique invariant probability measure $\pi_{\scriptscriptstyle J}$. Denote $T_0=0$ and $T_{n+1}:=B_{n+1}- B_n$ for every $n\geq 0$. 
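The breaking times \eqref{def:jump} and the internal chain \eqref{drivingchain} are easy to recompute mechanically; the following sketch (the helper names are ours) checks them on the small example above, where $X_0,\dots,X_7=\ttE,\ttE,\ttE,\ttE,\ttN,\ttN,\ttN,\ttW$.

```python
def breaking_times(X):
    """Breaking times B_0 = 0 < B_1 < ... of an increment sequence X[0], X[1], ...

    B_{n+1} is the first k > B_n with X[k] != X[k-1], cf. the inductive definition.
    """
    B = [0]
    for k in range(1, len(X)):
        if X[k] != X[k - 1]:
            B.append(k)
    return B

def internal_chain(X, first_bend="NE"):
    """Internal chain: J_0 = NE (the conditioned initial bend),
    J_n = X_{B_{n-1}} X_{B_n} for n >= 1."""
    B = breaking_times(X)
    return [first_bend] + [X[B[n - 1]] + X[B[n]] for n in range(1, len(B))]

# The worked example: X_0..X_7 = E,E,E,E,N,N,N,W, so B_1 = 4 and B_2 = 7.
X = ["E", "E", "E", "E", "N", "N", "N", "W"]
```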
These waiting times (also called \emph{persistence times}) are not independent, contrary to the one-dimensional case. The \emph{skeleton random walk} $(M_{n})_{n\geq 0}$ on $\mathbb Z^2$ -- which is the PRW observed at the breaking times -- is then defined as \begin{equation} M_n:=S_{B_n}=\sum_{i=1}^{n}\left(\sum_{k=B_{i-1}+1}^{B_{i}} X_k\right) = \sum_{i=1}^{n}\left( B_{i} - B_{i-1}\right) X_{B_i} . \end{equation} Notice that $(M_n)_n$ is generally not a classical RW with \emph{i.i.d.} increments. Nevertheless, taking into account the additional information given by the internal Markov chain $(J_n)_n$, the process $\left( J_n,M_n\right)_n$ is a Markov Additive Process (see \cite{cinlar/72}), as will appear in Section~\ref{sec:meetic}. Here, $(J_n)_n$ is positive recurrent but this does not imply the recurrence of $(S_n)_n$ or $(M_n)_n$. Moreover, $(S_n)_n$ and $(M_n)_n$ may have different behaviours. Explicit necessary and sufficient conditions for the recurrence of $(M_n)_n$ in terms of characteristic functions and convergence of suitable series are given in \cite[Theorem 2.1]{cenac:hal-01658494}. The following theorem states a recurrence \emph{versus} transience dichotomy. \begin{theorem} Under the non-nullness assumption, the following dichotomy holds. (i) The series $\sum _n\PP\left( M_n=0\right)$ diverges if, and only if the process $\left( M_n\right) _n$ is recurrent in the following sense: \[ \exists r>0,~~\PP\left(\liminf_{n\to\infty} \|M_n\|<r\right)=1. \] (ii) The series $\sum _n\PP\left( M_n=0\right)$ converges if, and only if the process $\left( M_n\right) _n$ is transient in the following sense: \[ \PP\left(\lim_{n\to\infty} \|M_{n}\|=\infty\right)=1. \] \end{theorem} Does the recurrence (resp. the transience) of $(M_n)_n$ and $(S_n)_n$ occur at the same time? The answer to this twenty-year-old question is no.
\begin{theorem} [Definitive invalidation of the conjecture in \cite{Mauldin1996}] \label{conjecture} There exist recurrent PRWs $(S_n)_n$ whose associated skeleton random walk $(M_n)_n$ is transient. \end{theorem} A natural necessary condition for the random walk $(S_n)_n$ to be recurrent is that the persistence time distributions be horizontally and vertically symmetric. One example is given by the Directionally Reinforced Random Walk (DRRW), originally introduced in \cite{Mauldin1996}; see Figure~\ref{transitions2}. \begin{figure}[h] \vskip -2cm \definecolor{ffqqqq}{rgb}{1,0,0} \definecolor{ududff}{rgb}{0.3,0.3,1} \begin{tikzpicture}[line cap=round,line join=round,>=latex,x=1.0cm,y=1.0cm] \clip(-4.3,-3.28) rectangle (8.68,6.3); \draw (-4.14,2)-- (3,2); \draw [->] (-4,2) -- (-3,2); \draw (3.5,2.64) node[anchor=north west] {$q_{\alpha^n\beta}(\alpha)$}; \draw (3.04,3.6) node[anchor=north west] {$\frac{1}{3}(1-q_{\alpha^n\beta}(\alpha))$}; \draw (3.04,1.3) node[anchor=north west] {$\frac{1}{3}(1-q_{\alpha^n\beta}(\alpha))$}; \draw (0.34,2.64) node[anchor=north west] {$\frac{1}{3}(1-q_{\alpha^n\beta}(\alpha))$}; \draw [->,line width=1.5pt,dash pattern=on 3pt off 3pt,color=ffqqqq] (3,2) -- (3,3); \draw [->,line width=1.5pt,dash pattern=on 3pt off 3pt,color=ffqqqq] (3,2) -- (4,2); \draw [->,line width=1.5pt,dash pattern=on 3pt off 3pt,color=ffqqqq] (3,2) -- (3,1); \draw [->,line width=1.5pt,dash pattern=on 3pt off 3pt,color=ffqqqq] (3,2) -- (2,2); \begin{scriptsize} \fill [color=ududff] (3,2) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \vskip -4cm \caption{\label{transitions2}The original Directionally Reinforced Random Walk (DRRW).} \end{figure} Some particular values of the transition probabilities $q_{\alpha ^n\beta}$ provide counterexamples. It is shown in~\cite{cenac:hal-01658494} that the corresponding distributions of the persistence times must be non-integrable.
In Section~\ref{sec:meetic}, this non-integrability will be related to the non-existence of any invariant probability measure for the driving VLMC. \section{VLMC: existence of stationary probability measures} \label{sec:VLMC} Take a VLMC denoted by $U=\left( U_n\right) _{n\geq 0}$, defined by a pair $\left(\CT,q\right)$ where $\CT$ is a context tree on an alphabet~$\CA$ and $q=\left( q_c\right) _{c\in\CC}$ a family of probability measures on $\CA$, indexed by the contexts of $\CT$. A probability measure $\pi$ on $\CR$ is \emph{stationary} or \emph{invariant} (with respect to $U$) whenever $\pi$ is the distribution of every $U_n$ as soon as it is the distribution of $U_0$. The question of interest here consists in finding conditions on $\left(\CT,q\right)$ for the process to admit at least one -- or a unique one -- stationary probability measure. The following heuristic presentation aims to show how combinatorial objects -- namely the $\alpha$-lis of contexts -- and numbers -- the cascades -- naturally emerge. Assume that $\pi$ is a stationary probability measure on $\CR$. $\bullet$ First step: finite words. Since $\CR$ is endowed with the cylinder $\sigma$-algebra, $\pi$ is determined by its values $\pi\left( w\CR\right)$ on the cylinders $w\CR$, where $w$ runs over all finite words on $\CA$. $\bullet$ Second step: longest internal suffixes of words. Assume that $e$ is a finite non-internal word and take $\alpha\in\CA$. Then $\pref(e)$ is well defined and, because of Formula~\eqref{probaTransition}, since $\pi$ is stationary, \begin{equation} \label{preCascade} \pi\left(\alpha e \CR\right)=q_{\pref(e)}(\alpha )\times\pi\left(e\CR\right). \end{equation} Iterating this formula as far as possible leads to the following definitions. Consider any non-empty finite word $w$. It is uniquely decomposed as $w=p\alpha s=\beta _1\beta_2\beta _3\cdots\beta_\ell\alpha s$, where $\alpha$ and the $\beta$'s are letters and $s$ is the \emph{longest internal suffix} of $w$.
The integer $\ell$ is non-negative and $p=\beta _1\beta_2\cdots\beta_\ell$ is a prefix of $w$ that may be empty -- in which case $\ell =0$. \begin{definition} [lis and $\alpha$-lis] With these notations, the Longest Internal Suffix $s$ is shortened as the \emph{lis} of $w$. The word $\alpha s$ is called the \emph{$\alpha$-lis} of $w$. \end{definition} \begin{definition} [cascade] With the notation above, the \emph{cascade} of $w$ is the product \begin{equation} \label{formCascade} \casc (w) =q_{\pref(\beta_2\cdots\beta_\ell\alpha s)}(\beta _1 ) q_{\pref(\beta_3\cdots\beta_\ell\alpha s)}(\beta _2 ) \cdots q_{\pref(\alpha s)}(\beta_\ell). \end{equation} \end{definition} Note that this definition makes sense because all the words $\beta_k\cdots\beta_\ell\alpha s$, $k\geq 2$, are non-internal. Moreover, if $w=\alpha s$ where $s$ is internal, then $\ell =0$ and $\casc (w)=1$. With these definitions, iterating Formula~\eqref{preCascade} leads to the following equality, named \emph{Cascade Formula}: for every non-empty finite word $w$ having $\alpha s$ as an $\alpha$-lis, \begin{equation} \pi\left(w \CR\right) = \casc (w)\times\pi\left(\alpha s\CR\right). \end{equation} This shows that $\pi$ is determined by its values on words of the form $\alpha s$ where $s$ is internal and $\alpha\in\CA$. $\bullet$ Third step: finite contexts. Assume that $s$ is an internal word and that $\alpha\in\CA$. It is shown in~\cite{cenac/chauvin/paccaut/pouyanne/18} that a stationary probability measure never charges infinite words so that, by disjoint union, \begin{equation} \label{chaisPasCommentLApeller} \pi\left(\alpha s \CR\right)=\sum _{\substack{ {c:}{\rm~finite~context}\\ c=s\cdots}} \pi\left(\alpha c\CR\right)=\sum _{\substack{ {c:}{\rm~finite~context}\\ c=s\cdots}} q_c(\alpha )\pi\left(c\CR\right). \end{equation} Note that the set of indices may be infinite but the family is summable because $\pi$ is a finite measure.
This shows that $\pi$ is entirely determined by its values $\pi\left(c\CR\right)$ on the finite contexts. $\bullet$ Fourth step: $\alpha$-lis of finite contexts. Cascade Formula~\eqref{formCascade} applied to any finite context $c$ (contexts are non-empty words) reads $\pi\left(c\CR\right)=\casc (c)\pi\left(\alpha _cs_c\CR\right)$, where $\alpha _cs_c$ is the $\alpha$-lis of~$c$. Denote by $\CS =\CS\left(\CT\right)$ the set of finite context $\alpha$-lis: \[ \CS=\set{\alpha _cs_c:~c{\rm ~finite~context}}. \] If $s$ is an internal word and if $\alpha\in\CA$, then Formula~\eqref{chaisPasCommentLApeller} leads to \begin{equation} \label{rhoooYenAMarreDeChercherDesNomsDeFormules} \pi\left(\alpha s \CR\right) =\sum _{\substack{ {c:}{\rm~finite~context}\\ c=s\cdots}} \casc\left(\alpha c\right)\pi\left(\alpha _cs_c\CR\right), \end{equation} showing that $\pi$ is determined by its values $\pi\left(\alpha _cs_c\CR\right)$ on $\CS$. $\bullet$ Last step: a (generally infinite) linear system. When $w$ and $v$ are finite words and when $\alpha s\in\CS$, the notation \[ w=v\cdots =\cdots [\alpha s] \] stands for: $w$ has $v$ as a prefix and $\alpha s$ as an $\alpha$-lis. Writing Formula~\eqref{rhoooYenAMarreDeChercherDesNomsDeFormules} for every $\alpha s\in\CS$ and grouping in each of them the terms that arise from contexts having the same $\alpha$-lis leads to the following square system (at most countably many unknowns $\pi\left(\alpha s\CR\right)$ and as many equations): \begin{equation} \label{systQ} \forall \alpha s\in\CS,~ \pi\left(\alpha s\CR\right) =\sum _{\beta t\in\CS}\pi\left(\beta t\CR\right) \left(\sum _{\substack{{c:}{\rm~finite~context}\\c=s\cdots =\cdots [\beta t]}}\casc\left(\alpha c\right)\right).
\end{equation} \begin{definition}[Matrix Q] When $\CT$ is a context tree having $\CS$ as a context $\alpha$-lis set, $Q=Q\left(\CT\right)$ is the $\CS$-indexed square matrix defined by: \begin{equation} \label{defQ} \forall\alpha s,\beta t\in\CS,~ Q_{\beta t,\alpha s} =\sum _{\substack{{c:}{\rm~finite~context}\\c=s\cdots =\cdots [\beta t]}}\casc\left(\alpha c\right) \in [0,+\infty ]. \end{equation} \end{definition} Thus, System~\eqref{systQ} tells us that, when $\pi$ is a stationary measure, the row-vector $\left(\pi\left(\alpha s \CR\right)\right) _{\alpha s\in\CS}$ appears as a left-fixed vector of the matrix $Q$. \begin{definition}[Cascade series] \label{def:cascseries} For every $\alpha s\in\CS$, denote \[ \kappa _{\alpha s} =\sum _{\substack{{c:}{\rm~finite~context}\\c=\cdots [\alpha s]}}\casc (c)\in [0,+\infty ]. \] When this series converges, one says that \emph{the cascade series of $\alpha s$ converges}. Whenever the cascade series of all $\alpha s\in\CS$ converge, one says that \emph{the cascade series (of the VLMC) converge}. \end{definition} Note that the convergence of (all) the cascade series is sufficient to guarantee the finiteness of the entries of $Q$. Actually, for a general VLMC, as made precise in~\cite{cenac/chauvin/paccaut/pouyanne/18}, the convergence of the cascade series appears as a pivotal condition when dealing with the existence and uniqueness of a stationary probability measure. In this paper, we just state a necessary and sufficient condition for a special kind of VLMC: the \emph{stable} ones that have a finite $\CS$. The following proposition is proven in~\cite{cenac/chauvin/paccaut/pouyanne/18}. \begin{proposition}\label{prop:defstable} Let $\rond T$ be a context tree. The following conditions are equivalent. \begin{enumerate} \item[(i)] $\forall \alpha \in \rond A$, $\forall w \in \rond W$, $\alpha w \in \rond T \Longrightarrow w \in \rond T$.
\item[(ii)] If $c$ is a finite context and $\alpha \in \rond A$, then $\alpha c$ is non-internal. \item[(iii)] $\rond T\subseteq\CA\CT=\{\alpha w,~\alpha\in\CA,~w\in\CT\}$. \item[(iv)] For any VLMC $(U_n)_n$ associated with $\rond T$, the process $\left(\pref(U_n)\right)_{n\in\g N}$ is a Markov chain that has the set of contexts as a state space. \end{enumerate} \end{proposition} The context tree is called \emph{stable} whenever one of these conditions is fulfilled. It turns out that the stability of $\rond T$, together with the non-nullness of the VLMC, implies both stochasticity and irreducibility of the matrix $Q$. Consequently, in the simple case where $Q$ is a finite-dimensional matrix, there exists (thanks to stochasticity) a unique (thanks to irreducibility) left-fixed vector for $Q$. As a consequence of a much more general result proven in~\cite{cenac/chauvin/paccaut/pouyanne/18}, this implies the existence and uniqueness of a stationary probability measure for the VLMC, as stated below. \begin{theorem} \label{th:stablefini} Let $({\CT},q)$ be a non-null stable probabilized context tree. If $\#{\CS}<\infty$, then the following are equivalent. \begin{enumerate} \item The VLMC associated to $({\CT},q)$ has a unique stationary probability measure. \item The cascade series converge (see Definition \ref{def:cascseries}). \end{enumerate} \end{theorem} Notice that in the non-stable case, the matrix $Q$ is generally neither stochastic nor even substochastic. Notice also that, even in the stable case, when $\#\rond S=\infty$, the matrix $Q$ may be stochastic, irreducible and positive recurrent while the VLMC does not admit any stationary probability measure. One can find such an example in~\cite{cenac/chauvin/paccaut/pouyanne/18}, built with a left comb of left combs -- see Example~\ref{ex:4VLMC}.
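In the setting of Theorem~\ref{th:stablefini}, the stationary values $\pi\left(\alpha s\CR\right)$ are obtained as the unique left-fixed probability vector of the finite stochastic matrix $Q$. As a purely illustrative sketch (the $2\times 2$ matrix below is a hypothetical toy example, not the $Q$ of a particular context tree), such a vector can be computed by power iteration on the left:

```python
import numpy as np

# Hypothetical 2x2 stochastic, irreducible matrix standing in for a finite Q.
Q = np.array([[0.3, 0.7],
              [0.6, 0.4]])

# Left power iteration: v <- vQ, renormalized to stay a probability vector.
v = np.array([0.5, 0.5])
for _ in range(200):
    v = v @ Q
    v /= v.sum()

# v is now (numerically) the unique left-fixed probability vector of Q,
# i.e. the analogue of (pi(alpha s R))_{alpha s in S} for this toy matrix.
print(v)
```

For this toy matrix the exact left-fixed vector is $(6/13,7/13)$, which the iteration recovers to machine precision.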
\section{Where VLMC and PRW meet} \label{sec:meetic} On the one hand, a VLMC is defined by its context tree and its transition probability distributions $q_c$ -- in particular the double and the quadruple combs, which are stable trees with finitely many context $\alpha$-lis. Necessary and sufficient conditions for the existence and uniqueness of stationary probability measures are given in terms of cascade series. On the other hand, for PRW (defined from VLMC), recurrence properties are written in terms of persistence times. Our aim is to build a bridge between these two families of objects and properties. The meeting point turns out to be the semi-Markov processes of $\alpha$-lis and bends. \subsection{Semi-Markov chains and Markov Additive Processes} \label{subsec:defSM} Semi-Markov chains are defined following \cite{barbu/limnios/08} thanks to so-called Markov renewal chains. \begin{definition}[Markov renewal chain] \label{def:MRC} A Markov chain $(J_n,T_n)_{n\geq0}$ with state space $\rond E\times \g N$ is called a (homogeneous) \emph{Markov renewal chain} (shortly MRC) whenever the transition probabilities satisfy: $\forall n\in\g N$, $\forall a,b\in\rond E$, $\forall j,k\in\g N$, \[ \begin{array}{rl} \PP\left( J_{n+1}=b, T_{n+1} = k\big| J_n = a, T_n =j \right) &= \PP\left( J_{n+1}=b, T_{n+1} = k\big| J_n = a \right) \\ [5pt] &=: p_{a,b}(k) \end{array} \] and $\forall a,b\in\rond E$, $p_{a,b}(0) = 0$. For such a chain, the family $p=\left(p_{a,b}(k)\right)_{a,b\in\rond E, k\geq 1}$ is called its \emph{semi-Markov kernel}. \end{definition} \begin{definition}[Semi-Markov chain] \label{def:semiMarkov} Let $(J_n,T_n)_{n\geq0}$ be a Markov renewal chain with state space $\rond E\times \g N$. Assume that $T_0=0$. For any $n\in \g N$, let $B_n$ be defined by \[ B_n = \sum_{i=0}^n T_i.
\] The \emph{semi-Markov chain} associated with $(J_n,T_n)_{n\geq0}$ is the $\rond E$-valued process $(Z_j)_{j\geq0}$ defined by \[ \forall j \hbox{ such that } B_n\leq j < B_{n+1}, \hskip 5mm Z_j = J_n. \] \end{definition} Note that the sequence $(B_n)_{n\geq 0}$ is almost surely increasing because of the assumption $p_{a,b}(0) = 0$ (instantaneous transitions are not allowed), which guarantees that $T_n\geq 1$ almost surely, for any $n\geq 1$. The $B_n$ are \emph{jump times}, the $T_n$ are \emph{sojourn times} in a given state, and $Z_j$ remains in the same state between two successive jump times. The process $\left( J_n\right) _n$ is called the \emph{internal (underlying) chain} of the semi-Markov chain $\left( Z_n\right) _n$. The previous definitions make transitions to the same state between time $n$ and time $n+1$ possible. Nevertheless, one can reduce to the case where $p_{a,a}(k)=0$ for all $a\in\rond E, k\in\g N$ (see the details in \cite{cenac/chauvin/paccaut/pouyanne/18}). A closely related notion, \emph{Markov Additive Processes}, can be found in \cite{cinlar/72}. \begin{definition}[Markov Additive Process] \label{def:MAP} A Markov chain $(J_n,B_n)_{n\geq0}$ with state space $\rond E\times \g N$ is called a \emph{Markov Additive Process} (shortly MAP) whenever $\left( J_n,B_{n}-B_{n-1}\right) _n$ is a Markov renewal chain. \end{definition} \subsection{Persistent Random Walks induce semi-Markov chains} \label{subsec:PRWandSM} Let us start with the $1$-dimensional PRW, as defined in Section \ref{subsec:PRWdim1}. In this case, at each time $j\geq 0$, the increment $X_j$ of the walk $S$ takes $d$ or $u$ as a value (see Figure \ref{marche}). Let us see that $(X_j)_{j\geq 0}$ is a semi-Markov chain, starting from $X_0 = d$. Remember that $B_n$ denotes the $n$-th jump time -- see Equation \eqref{def:Bn}. Then define $(J_n)_n$ by \begin{equation} \label{def:JnDim1} J_n:=X_{B_n}.
\end{equation} Moreover, let $T_n$ be the $n$-th waiting time, namely $T_0=0$ and, for $n\geq 1$, \[ T_n=B_n-B_{n-1}. \] These waiting times are related to the persistence times $\tau$ by the following formulae: for all $k\geq 1$, \begin{equation} \label{def:TnDim1} T_{2k}:=\tau_k^u \hbox{ \ \ and \ \ } T_{2k-1}:=\tau_k^d. \end{equation} With these notations, $(J_n,T_n)_{n\geq 0}$ is a Markov renewal chain and its semi-Markov kernel writes: $\forall \alpha,\beta\in\{ u,d\},\alpha\not= \beta,\forall k\geq 1$, \begin{equation} \label{SMkernelDim1} p_{\alpha,\beta}(k) = \left(\prod_{j=1}^{k-1}q_{\alpha^j \beta}(\alpha)\right)q_{\alpha^k \beta}(\beta), \end{equation} as can be straightforwardly checked. Moreover, Assumption~\ref{ass:a1} guarantees that the $T_n$ are a.s. finite. Besides, Formulae~\eqref{thetaDim1} write \[ \g E\left( T_{2k}\right)=\Theta _u {\rm ~and~~} \g E\left( T_{2k+1}\right)=\Theta _d. \] The situation in dimension $1$ is summarized by the following proposition. \begin{proposition} \label{prop:semiMarkovPRW1} For a PRW in dimension 1, defined by a VLMC associated with a double comb, the sequence $(X_j)_j$ of the increments is an $\rond A$-valued semi-Markov chain with Markov renewal chain $(J_n,T_n)_n$ as defined in \eqref{def:JnDim1} and \eqref{def:TnDim1}, and its semi-Markov kernel is given by Equation \eqref{SMkernelDim1}. \end{proposition} Let us now deal with the $2$-dimensional PRW, defined in Section~\ref{subsec:PRWdim2}. At each time $j\geq 0$, the increment $X_j$ of the walk $S$ takes $\ttN, \ttE, \ttW$ or $\ttS$ as a value. But, as already noticed, changing direction depends on the time spent in the current direction but also, contrary to the $1$-dimensional PRWs, on the previous direction. In other words, the bends play the main role.
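The kernel~\eqref{SMkernelDim1} can be sanity-checked numerically. In the sketch below we make the simplifying, hypothetical assumption that $q_{\alpha^j\beta}(\alpha)=p$ does not depend on $j$ (a constant-persistence choice, not imposed by the model); each kernel row then reduces to a geometric distribution on $\{1,2,\dots\}$ and sums to $1$ over $k$:

```python
# Semi-Markov kernel of the 1-d PRW under the hypothetical assumption
# q_{alpha^j beta}(alpha) = p for every j (constant persistence probability).
def kernel(p, k):
    # p_{alpha,beta}(k) = (prod_{j=1}^{k-1} p) * (1 - p) = p**(k-1) * (1 - p)
    return p ** (k - 1) * (1 - p)

p = 0.8
total = sum(kernel(p, k) for k in range(1, 500))
print(total)  # close to 1: the sojourn time in a direction is geometric
```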
This gives rise to the process $\left( Z_j\right) _j$, valued in the set of bends $\{\alpha\beta : \alpha, \beta\in\rond A, \alpha\not=\beta \}$, defined in the following manner: $Z_0 = X_{-1}X_0=\ttN\ttE$ and, for $j\geq 1$, $Z_j = \alpha\beta$ if and only if $X_j = \beta$ and the first letter distinct from $\beta$ in the sequence $X_{j-1},X_{j-2},X_{j-3},\cdots$ is $\alpha$. Let us see that $(Z_j)_{j\geq 0}$ is a semi-Markov chain. We use here the notations $(J_n)_n$, $(B_n)_n$ and $(T_n)_n$ of Section~\ref{subsec:PRWdim2}. Notice that, contrary to the one-dimensional case, the waiting times $T_n$ are not independent. Nevertheless, $(J_n,T_n)_{n\geq 0}$ is a Markov renewal chain with semi-Markov kernel \begin{equation} \label{SMkernelDim2} p_{\beta\alpha,\alpha\gamma}(k) := \left(\prod_{j=1}^{k-1}q_{\alpha^j \beta}(\alpha)\right)q_{\alpha^k \beta}(\gamma), \end{equation} as can be straightforwardly checked. Summarizing, the following proposition holds. \begin{proposition} \label{prop:semiMarkovPRW2} For a PRW in dimension 2, defined by a VLMC associated with a quadruple comb, the sequence $(Z_j)_j$ of the bends is a semi-Markov chain with Markov renewal chain $(J_n,T_n)_n$ as defined in Section~\ref{subsec:PRWdim2}. Its semi-Markov kernel is given by Equation \eqref{SMkernelDim2}. In addition, $(J_n,B_n)_n$ is a Markov Additive Process. \end{proposition} \subsection{Semi-Markov chain of the $\alpha$-lis in a stable VLMC} \label{subsec:compareSM} In this section, let us consider a more general case than a double comb or a quadruple comb, namely a stable VLMC. In this case, there is always a semi-Markov chain induced by the process $(U_n)_n$, as described below. Let $(U_n)_{n\geq 0}$ be a stable non-null VLMC such that the series of cascades converge (see Definition \ref{def:cascseries}). Recall that $\rond S$ denotes the set of context $\alpha$-lis of the VLMC.
Let $(C_n)_{n\geq 0}$ be the sequence of contexts and for $n\geq 0$, let ${Z}_n$ be the $\alpha$-lis of $C_n$: \[ C_n = \pref (U_n) \hskip 5mm \hbox{ and } \hskip 5mm {Z}_n = \alpha_{C_n}s_{C_n}. \] \begin{proposition} \label{pro:compareSM} Let $({B}_n)_{n\geq 0}$ be the increasing sequence of times defined by ${B}_0 = 0$ and for any $n\geq 1$, \[ {B}_n = \inf \set{ k> {B}_{n-1}, |C_k| \leq |C_{k-1}|}= \inf \set{ k> {B}_{n-1}, C_k \in\rond S} \] and let ${T}_n = {B}_n - {B}_{n-1}$ for $n\geq 1$ and ${T}_0 = 0$. For any $n\geq 0$, let ${J}_n = {Z}_{{B}_n}$. Then \begin{itemize} \item[(i)] ${B}_n$ and $T_n$ are almost surely finite and for $\alpha s\in\rond S$, $\g E\left( T_n \big| J_n = \alpha s\right) = \kappa_{\alpha s}$. \item[(ii)] $({Z}_n)_{n\geq 0}$ is an $\rond S$-valued semi-Markov chain associated with the Markov renewal chain $({J}_n, {T}_n)_{n\geq 0}$. \item[(iii)] The associated semi-Markov kernel writes: $\forall \alpha s, \beta t \in\rond S$, $\forall k\geq 1$, \[ p_{\alpha s, \beta t}(k) = \sum_{\substack{c\in\rond C,~c=t\cdots\\c= \cdots [\alpha s]\\|c| = |\alpha s| + k-1}} \casc\left(\beta c\right) . \] \end{itemize} \end{proposition} The proof is detailed in~\cite{cenac/chauvin/paccaut/pouyanne/18}. It relies on the way the VLMC grows between two jump times: at the beginning, letters are added to the current context $C_n$, the $\alpha$-lis does not change and the length of the current context increases by one at each step. At a certain (a.s. finite) time, adding a letter to the current context no longer yields a context but an external node. At this moment, it happens (this is not trivial and only holds for a stable context tree) that (i) the $\alpha$-lis of the current context is renewed; (ii) the length of the current context does not grow; (iii) the current context begins with a lis. These mechanisms explain the expressions of $B_n$ and the formula giving the semi-Markov kernel.
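This jump-time mechanism can be observed on the double comb. The sketch below (an illustration with a hypothetical constant persistence probability $p$, not a parametrization taken from the paper) simulates the increments of a $1$-dimensional PRW, recomputes the current context $\alpha^r\beta$ from the run of identical last letters, and checks that the jump times $B_n$ are exactly the times when the context falls back into $\rond S=\{ud,du\}$, with mean sojourn time $1/(1-p)$:

```python
import random

random.seed(0)
p = 0.75  # hypothetical constant persistence probability q_{alpha^j beta}(alpha) = p

# Simulate the increments of a 1-d PRW driven by a double comb.
X = ['d']
for _ in range(200_000):
    X.append(X[-1] if random.random() < p else ('u' if X[-1] == 'd' else 'd'))

# Context at time k for the double comb: alpha^r beta, where r is the length
# of the final run of identical letters; its length is r + 1.
lengths, jumps = [], []
run = 1
for k in range(1, len(X)):
    run = run + 1 if X[k] == X[k - 1] else 1
    lengths.append(run + 1)
    if run == 1:
        jumps.append(k)  # the context is alpha beta, an element of S

# Sojourn times between successive jumps: geometric, mean 1/(1-p) = 4 here.
T = [b - a for a, b in zip(jumps, jumps[1:])]
print(sum(T) / len(T))
```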
\begin{remark} \label{rem:semiMarkov} In the very particular case of the double or quadruple comb, the semi-Markov chain $\left( Z_n\right) _n$ contains as much information as the chain~$\left( U_n\right) _n$. But in general, the semi-Markov chain $\left( Z_n\right) _n$ contains less information than the chain $\left( U_n\right) _n$. To illustrate this, here is an example with a finite context tree. \vskip 5pt \begin{minipage}{0.3\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzset{every leaf node/.style={draw,circle,fill},every internal node/.style={draw,circle,scale=0.01}} \Tree [.{} [.{} [.{} {} [.{} {} {} ] ] [.{} {} [.{} {} {} ] ] ] [.{} {} [.{} {} {} ] ] ] \end{tikzpicture} \end{minipage} \begin{minipage}{0.6\textwidth} \centering \begin{tabular}{r|l} $\alpha$-lis $\alpha s$&contexts having $\alpha s$ as an $\alpha$-lis \\ \hline 10&10,010,110,0010,0110\\ 000&000\\ 111&111,0111\\ 0011&0011 \end{tabular} \end{minipage} \vskip 5pt In this example, 0010 and 0110 are two contexts of the same length, with the same $\alpha$-lis 10 and beginning with the same lis 0. Hence if we know that ${J}_n=10$, ${B}_{n+1}-{B}_{n}=3$ and ${J}_{n+1}=10$, then ${Z}_j$ is uniquely determined between the two successive jump times, whereas there are two possibilities to reconstruct the VLMC $(U_n)_n$. With the notations of Proposition \ref{pro:compareSM}, there are two cascade terms in $p_{10,10}(3) $: \begin{minipage}{0.68\textwidth} \begin{align*} p_{10,10}(3) &= \PP\left(C_{{B}_n+1} = 010, C_{{B}_n+2} = 0010,C_{{B}_n+3} = 10010 | C_{{B}_n} =10\right)\\ & \ \ + \PP\left(C_{{B}_n+1} = 110, C_{{B}_n+2} = 0110,C_{{B}_n+3} = 10110 | C_{{B}_n} =10\right)\\ &= q_{10}(0) q_{010}(0) q_{0010}(1) + q_{10}(1) q_{110}(0) q_{0110}(1)\\ &= \casc (10010) + \casc (10110).
\end{align*} \end{minipage} \end{remark} \subsection{The meeting point} \label{subsec:meetic} Summing up, the announced meeting point can be made precise with the following (commutative) diagram, together with the explanations below. \begin{equation} \begin{tikzpicture} \label{diagrammeComm} \newcommand{\largeur}{4.6} \newcommand{\hauteur}{1.5} \newcommand{\lgFlecheHoriz}{1} \newcommand{\lgFlecheVert}{0.5} \draw (0,\hauteur) node{MRC $\left( {J_n^V},T_n\right)_n$}; \draw (0,2*\hauteur) node{VLMC $\left( U_n\right)_n$}; \draw (\largeur,\hauteur) node{MAP $\left( {J_n^W},M_n\right)_n$}; \draw (\largeur,2*\hauteur) node{PRW $\left( S_n\right)_n$}; \draw (\largeur,0) node{Semi-Markov $\left( Z_n^W\right)_n$}; \draw (0,0) node{Semi-Markov $\left( Z_n^V\right)_n$}; \draw (0.5*\largeur,2*\hauteur)--++(-0.5*\lgFlecheHoriz,0); \draw [->,>=latex] (0.5*\largeur,2*\hauteur)--++(0.5*\lgFlecheHoriz,0); \draw (0.5*\largeur,2*\hauteur+0.3) node{$D$}; \draw (0.5*\largeur,\hauteur)--++(0.5*\lgFlecheHoriz,0); \draw [->,>=latex] (0.5*\largeur,\hauteur)--++(-0.5*\lgFlecheHoriz,0); \draw (0.5*\largeur,\hauteur+0.3) node{$N$}; \draw [->,>=latex] (0.5*\largeur,0)--++(-0.5*\lgFlecheHoriz,0); \draw [->,>=latex] (0.5*\largeur,0)--++(0.5*\lgFlecheHoriz,0); \draw (0.5*\largeur,0+0.3) node{$R$}; \draw (0,1.5*\hauteur)--++(0,0.5*\lgFlecheVert); \draw [->,>=latex] (0,1.5*\hauteur)--++(0,-0.5*\lgFlecheVert); \draw (0.3,1.5*\hauteur) node{$L$}; \draw (0,0.5*\hauteur)--++(0,0.5*\lgFlecheVert); \draw [->,>=latex] (0,0.5*\hauteur)--++(0,-0.5*\lgFlecheVert); \draw (0.4,0.5*\hauteur) node{$S_V$}; \draw (\largeur,1.5*\hauteur)--++(0,0.5*\lgFlecheVert); \draw [->,>=latex] (\largeur,1.5*\hauteur)--++(0,-0.5*\lgFlecheVert); \draw (\largeur+0.3,1.5*\hauteur) node{$B$}; \draw (\largeur,0.5*\hauteur)--++(0,0.5*\lgFlecheVert); \draw [->,>=latex] (\largeur,0.5*\hauteur)--++(0,-0.5*\lgFlecheVert); \draw (\largeur+0.4,0.5*\hauteur) node{$S_W$}; \end{tikzpicture} \end{equation} The mapping $D$ consists in defining
the PRW from the VLMC: the random increments of the PRW are the initial letters of a VLMC. With the notations above, $S_n=\sum _{0\leq k\leq n}X_k$ where $X_k$ is the initial letter of $U_k$. The mapping $L$ associates with a VLMC the process of its successive different $\alpha$-lis, which turns out to be a MRC when considered together with its jump times $T_n$ -- see Section~\ref{subsec:compareSM}. Here, $J_n^V$ is the $n$-th distinct $\alpha$-lis of the successive right-infinite words $U_0,U_1,U_2,\cdots$ and $T_n$ is the length of the $n$-th run of identical letters in the sequence $X_0,X_1,X_2,\dots$ The superscript $V$ refers to the VLMC. The mapping $B$ associates with a PRW $\left( S_n\right) _n$ the process of its successive different bends (changes of direction). With our notations, $J_n^W$ is the $n$-th distinct bend and $M_n$ is the value of $S$ at the precise moment when the $n$-th bend $J_n^W$ occurs -- see Section~\ref{subsec:PRWandSM}. The superscript $W$ refers to the PRW. The mapping $S_{V}$ simply consists in defining a semi-Markov process from a MRC, as stated in Section~\ref{subsec:defSM}. The mapping $S_W$ is defined in the same manner: it maps a MAP $\left( J_n^W,M_n\right) _n$ to the semi-Markov chain of the MRC $\left( J_n^W,M_n-M_{n-1}\right) _n$, as made precise in Definition~\ref{def:MAP}. The mapping $N$ acts on the first coordinate by reversing words: $J_n^V=\overline{J_n^W}$. The notation $\overline w$ stands for the reversed word of $w$: $\overline{ab}=ba$. For the second coordinate, note first that $M_n-M_{n-1}$ is always of the form $k\alpha$ where $k$ is a positive integer and $\alpha$ an increment vector. The integer $T_n$ is this $k$. Finally, the mapping $R$ is simply the reversing of words: $Z_n^V=\overline{Z_n^W}$. In fact, in these particular situations (double and quadruple combs), the composition $S_V\circ L$ is a bijection -- see Remark~\ref{rem:semiMarkov}.
Therefore, all these mappings are also one-to-one, showing that all these processes are essentially equivalent. Now that our different processes are related, let us translate the parameters, properties and assumptions that come from the VLMC world in terms of PRW, and vice-versa. \noindent {\bf Dimension 1} The PRW in dimension $1$ is driven by a VLMC based on the so-called double comb, as defined in Example~\ref{ex:comb}. The contexts of this tree are the $u^kd$, which have $ud$ as an $\alpha$-lis, and the $d^ku$, which have $du$ as an $\alpha$-lis ($k\geq 1$ for both families of contexts). The cascades of the contexts write \[ \casc\left( u^kd\right)=\prod _{j=1}^{k-1}q_{u^jd}(u) {\rm ~~and~~} \casc\left( d^ku\right)=\prod _{j=1}^{k-1}q_{d^ju}(d) \] and there are two cascade series \[ \kappa _{ud}=\sum _{k\geq 1}\casc\left( u^kd\right) {\rm ~~and~~} \kappa _{du}=\sum _{k\geq 1}\casc\left( d^ku\right). \] Theorem~\ref{th:stablefini} guarantees that, under the non-nullness assumption, this VLMC admits an invariant probability measure if, and only if $\kappa _{ud}<\infty$ and $\kappa _{du}<\infty$. Since the double comb is a very simple context tree, one can also make a direct computation that leads to the following result: {\it a non-null double-comb VLMC admits a $\sigma$-finite stationary measure if, and only if $\casc\left( u^kd\right)\to 0$ and $\casc\left( d^ku\right)\to 0$ when $k$ tends to infinity}. It turns out that, on the side of the $1$-dimensional PRW, Assumption~\ref{ass:a1} as well as the expectations of the persistence times $\tau _1^u$ and $\tau _1^d$ are functions of these cascades, so that one can relate the above properties of the VLMC to the results of Section~\ref{subsec:PRWdim1} on $1$-dimensional PRW. The expectations of the waiting times are exactly the sums of cascades: $\kappa _{ud}=\Theta _u$ and $\kappa _{du}=\Theta _d$.
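The identity $\kappa_{ud}=\Theta_u$ can be checked numerically on any choice of the transition probabilities. In the sketch below, the persistence probabilities $q_{u^jd}(u)=1/(j+1)$ are a hypothetical illustrative choice (then $\casc(u^kd)=1/k!$ and $\kappa_{ud}=e-1$); the expectation of the persistence time, computed from its distribution, matches the sum of the cascade series:

```python
import math
import operator
from itertools import accumulate

# Hypothetical persistence probabilities q_{u^j d}(u) = 1/(j+1), j >= 1.
K = 60
pj = [1.0 / (j + 1) for j in range(1, K)]

# casc(u^k d) = prod_{j=1}^{k-1} p_j (empty product = 1 for k = 1); here 1/k!.
casc = [1.0] + list(accumulate(pj, operator.mul))

kappa_ud = sum(casc)  # sum of the cascade series, here e - 1 (up to truncation)

# Theta_u = E[tau_1^u], with P(tau = k) = casc(u^k d) * (1 - p_k).
theta_u = sum(
    k * casc[k - 1] * (1.0 - (pj[k - 1] if k - 1 < len(pj) else 0.0))
    for k in range(1, K + 1)
)

print(kappa_ud, theta_u, math.e - 1)
```

The equality $\Theta_u=\kappa_{ud}$ is an instance of $\g E(\tau)=\sum_{k\geq 1}\PP(\tau\geq k)$, since $\PP(\tau\geq k)=\casc\left( u^kd\right)$.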
Finally, one can assert: \newcommand{\blocAss}{\hbox{Assumption \ref{ass:a1}}} \newcommand{\blocTau}{\left(\begin{array}{c}\tau _1^u{\rm ~and~}\tau _1^d\\{\rm are~a.s.~finite}\end{array}\right)} \newcommand{\blocCasc}{\left(\begin{array}{c}\casc\left( u^kd\right)\underset{\scriptscriptstyle k\to\infty}{\longrightarrow}0\\{\rm ~~and~~}\\[3pt]\casc\left( d^ku\right)\underset{\scriptscriptstyle k\to\infty}{\longrightarrow}0\end{array}\right)} \newcommand{\blocMesInv}{\left(\begin{array}{c}{\rm ~The~VLMC~admits}\\{\rm a~}\sigma\hbox{-}{\rm finite}\\{\rm invariant~measure}\end{array}\right)} \[ \begin{array}{ccc} \blocCasc &\Longleftrightarrow &\blocAss\\[15pt] \Updownarrow&&\Updownarrow\\[15pt] \blocMesInv &\Longleftrightarrow &\blocTau \end{array} \] and \newcommand{\blocTauInt}{\left(\begin{array}{c}\tau _1^u{\rm ~and~}\tau _1^d\\{\rm are~integrable}\end{array}\right)} \newcommand{\blocSeriesCasc}{\left(\begin{array}{c}\displaystyle\sum _{k\geq 1}\casc\left( u^kd\right)<\infty\\{\rm ~~and~~}\\[3pt]\displaystyle\sum _{k\geq 1}\casc\left( d^ku\right)<\infty\end{array}\right)} \newcommand{\blocProbaInv}{\left(\begin{array}{c}{\rm ~The~VLMC~admits}\\{\rm a~unique~invariant}\\{\rm probability~measure}\end{array}\right)} \[ \begin{array}{ccc} \blocSeriesCasc &\Longleftrightarrow &\blocTauInt\\[15pt] \Updownarrow&&\\[15pt] \blocProbaInv&& \end{array} \] The link between recurrence or transience of the PRW and the behaviour of the VLMC is only partial. For instance, the PRW may be recurrent while there is no invariant probability measure for the VLMC. The PRW may even be transient while the VLMC admits an invariant probability measure -- see Table \ref{tableau-rec-trans-2d}. \noindent {\bf Dimension 2} The PRW in dimension $2$ is driven by a VLMC based on the so-called quadruple comb, as defined in Example~\ref{ex:comb}. Here, the contexts are the $\alpha^k\beta$, where $\alpha,\beta\in\rond A=\set{\ttN,\ttE,\ttW,\ttS}$, $\alpha\neq\beta$, $k\geq 1$.
The $\alpha$-lis of the context $\alpha^k\beta$ is $\alpha\beta$, and its cascade writes \[ \casc\left( \alpha^k\beta\right)=\prod_{i=1}^{k-1}q_{\alpha^i\beta}(\alpha). \] Therefore, there are twelve cascade series, namely \begin{equation} \label{cascDim2} \kappa_{\alpha \beta}=\sum_{k=1}^{\infty} \casc\left( \alpha^k\beta\right), ~\alpha,\beta\in\rond A,~\alpha\neq\beta. \end{equation} As in dimension $1$, since the quadruple comb is a stable context tree having a finite set of context $\alpha$-lis, the non-null VLMC that drives the $2$-dimensional PRW admits a unique stationary probability measure if, and only if the twelve cascade series~\eqref{cascDim2} converge. This is a consequence of Theorem~\ref{th:stablefini} and, here again, due to the simplicity of the quadruple comb, one can directly check that {\it a non-null quadruple-comb VLMC admits a $\sigma$-finite stationary measure if, and only if $\casc\left( \alpha^k\beta\right)\to 0$ when $k$ tends to infinity, for every $\alpha,\beta\in\rond A$, $\alpha\neq\beta$}. The transition matrix of the Markov process $\left( J_n\right) _n$ of the PRW bends, denoted by~$P$ in Formula~\eqref{markovsymb1}, also writes \[ P(\beta\alpha ,\alpha\gamma) =\sum _{n\geq 1}\casc\left( \gamma\alpha^n\beta\right) \] -- all other entries vanish. Relating this expression to the definition~\eqref{defQ} of the $Q$-matrix of the VLMC leads to the following: \begin{equation} \label{PQ} P(\beta\alpha ,\alpha\gamma)=Q_{\alpha\beta,\gamma\alpha} \end{equation} so that, up to the re-ordering that consists in reversing the indices $\alpha\beta\leadsto\beta\alpha$, the stochastic matrices $P$ and $Q$ are the same. Note that, since the quadruple comb is stable, the process of the $\alpha$-lis of the VLMC is Markovian and $Q$ is its transition matrix.
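As a quick concreteness check on the matrix $P$ (a sketch under the hypothetical assumption of a constant persistence probability $q_{\alpha^k\beta}(\alpha)=p$, the remaining mass $1-p$ being split uniformly over the three exit directions; this parametrization is illustrative, not taken from the paper), one can build the $12\times 12$ bend matrix and verify its stochasticity:

```python
from itertools import permutations

A = ['N', 'E', 'W', 'S']
bends = [a + b for a, b in permutations(A, 2)]  # 12 bends beta alpha, beta != alpha

p = 0.6   # hypothetical constant persistence probability
K = 200   # truncation of the series sum_{n>=1} casc(gamma alpha^n beta)

def P(bend_from, bend_to):
    # P(beta alpha, alpha gamma): nonzero only if the incoming direction of
    # bend_to equals the outgoing direction alpha of bend_from.
    alpha = bend_from[1]
    if bend_to[0] != alpha:
        return 0.0
    # sum over n of p**(n-1) * (1-p)/3, the switch being uniform over gamma != alpha
    return sum(p ** (n - 1) * (1 - p) / 3 for n in range(1, K))

row_sums = {b: sum(P(b, c) for c in bends) for b in bends}
print(min(row_sums.values()), max(row_sums.values()))  # both close to 1
```

Each bend $\beta\alpha$ leads to the three bends $\alpha\gamma$, $\gamma\neq\alpha$, with probability $1/3$ each under this symmetric choice, so every row sums to $1$, as stochasticity requires.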
Referring to the commutative diagram~\eqref{diagrammeComm}, Formula~\eqref{PQ} amounts to saying that the Markov chains $\left( J_n^V\right) _n$ and $\left(\overline{J_n^W}\right) _n$ are identical. In terms of persistence times of the PRW vs stationary measures for the VLMC, the properties stated in Section~\ref{subsec:PRWdim2} show that the following equivalences hold. \newcommand{\blocAssDeux}{\hbox{Assumption \ref{ass:a2}}} \newcommand{\blocTn}{\left(\begin{array}{c}\forall n,~T_n{\rm ~is~a.s.~finite}\end{array}\right)} \newcommand{\blocCascDeux}{\left(\begin{array}{c}{\rm for~all~}\alpha,\beta\in\rond A,\alpha\neq\beta ,\\[8pt]\casc\left( \alpha^k\beta\right)\underset{\scriptscriptstyle k\to\infty}{\longrightarrow}0\end{array}\right)} \newcommand{\blocMesInvDeux}{\left(\begin{array}{c}{\rm ~The~VLMC~admits}\\{\rm a~}\sigma\hbox{-}{\rm finite}\\{\rm invariant~measure}\end{array}\right)} \[ \begin{array}{ccc} \blocCascDeux &\Longleftrightarrow &\blocAssDeux\\[15pt] \Updownarrow&&\Updownarrow\\[15pt] \blocMesInvDeux &\Longleftrightarrow &\blocTn \end{array} \] and \newcommand{\blocTnInt}{\left(\forall n,~T_n{\rm ~is~integrable}\right)} \newcommand{\blocSeriesCascDeux}{\left(\begin{array}{c}{\rm for~all~}\alpha,\beta\in\rond A,\alpha\neq\beta ,\\[8pt]\displaystyle\sum _{k\geq 1}\casc\left( \alpha^k\beta\right)<\infty\end{array}\right)} \newcommand{\blocProbaInvDeux}{\left(\begin{array}{c}{\rm ~The~VLMC~admits}\\{\rm a~unique~invariant}\\{\rm probability~measure}\end{array}\right)} \[ \begin{array}{ccc} \blocSeriesCascDeux &\Longleftrightarrow &\blocTnInt\\[15pt] \Updownarrow&&\\[15pt] \blocProbaInvDeux&& \end{array} \] The counterexample cited in Theorem~\ref{conjecture} is illuminated by these equivalences: an example of a recurrent $2$-dimensional PRW having a transient skeleton $\left( M_n\right) _n$ cannot be found without assuming that the $T_n$ are a.s. finite but non-integrable, as shown in~\cite{cenac:hal-01658494}.
Reading the above equivalences shows that such a PRW must be driven by a VLMC whose cascade series diverge while their general terms tend to zero at infinity. \bibliographystyle{agsm} \bibliography{paccaut} \vfill\pagebreak \ \thispagestyle{empty} \end{document}
TITLE: Prove that $T$ is a tree of order $n$ if and only if $T$ is a connected graph with size $n-1$. QUESTION [1 upvotes]: Prove that $T$ is a tree of order $n$ if and only if $T$ is a connected graph with size $n-1$. Here is my answer. Proof: ($\Rightarrow$) Let $T$ be a tree of order $n$ and let the size of $T$ be $m$. By definition, $T$ is connected. Also $n = m + 1\implies m = n - 1$, so the tree has size $n - 1$. ($\Leftarrow$) Let $T$ be a connected graph with size $n-1$. It suffices to show that $T$ is acyclic, i.e., has no cycles. If $T$ contains a cycle $c$, and $e$ is an edge of $c$, then $T-e$ is a connected graph of order $n$ having size $n-2$, which is impossible. Therefore $T$ is acyclic, and thus a tree. I am not sure my answer is good, so I am just wondering if I can get some feedback on whether it is correct. REPLY [1 votes]: It seems to me like you are assuming the result you want to prove in both directions. For $\Rightarrow$, how do you know $n=m+1$? For $\Leftarrow$, why is it impossible for a connected graph of order $n$ to have $n-2$ edges?
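As a sanity check on the statement itself (not a proof), the equivalence can be verified exhaustively for small graphs. A minimal Python sketch, assuming nothing beyond the standard library (the helper names `is_connected` and `is_acyclic` are my own):

```python
from itertools import combinations

def is_connected(n, edges):
    """Union-find connectivity check on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def is_acyclic(n, edges):
    """Union-find cycle detection: an edge whose endpoints already lie
    in the same component closes a cycle."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# For every graph on at most 5 vertices, "tree" (connected and acyclic)
# coincides with "connected with exactly n - 1 edges".
for n in range(1, 6):
    possible = list(combinations(range(n), 2))
    for k in range(len(possible) + 1):
        for edges in combinations(possible, k):
            tree = is_connected(n, edges) and is_acyclic(n, edges)
            assert tree == (is_connected(n, edges) and len(edges) == n - 1)
print("equivalence verified for all graphs on up to 5 vertices")
```

Of course, such a check only builds confidence in the claim; the actual proof still needs the induction the reply is hinting at.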
TITLE: Why is $\mathbb{R^2} \setminus \{(0, 0)\}$ connected but not simply connected? QUESTION [1 upvotes]: Why is $\mathbb{R^2} \setminus \{(0, 0)\}$ connected but not simply connected? Simply connected means path connected; intuitively it seems that for any point in $\mathbb{R^2} \setminus \{(0, 0)\}$ we can have an unbroken path to any other point in this set. So what am I misunderstanding? REPLY [7 votes]: Simply connected does not mean path-connected. Simply connected means that if a path starts at a point $p$ and returns to $p$, then that loop can be contracted to $p$. If a path starts at $1$ and winds once clockwise around $0$ and returns to $1$, then that path cannot be contracted to $1$ within the space $\mathbb R^2\setminus \{(0,0)\}$. It gets "caught" on the point $(0,0)$. REPLY [2 votes]: Path connectedness is a more basic concept. Simple connectedness means any two paths between two points can be 'deformed continuously' into each other. Take the unit circle centred at the origin. The upper and lower semicircular arcs can be thought of as two paths from $(-1,0)$ to $(1,0)$. When we try to push one of these arcs towards the other, there is a hurdle at $(0,0)$: some intermediate path has to use the origin, which unfortunately has been removed.
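The obstruction described in the first reply can even be computed: the winding number of a loop about the origin is a homotopy invariant in $\mathbb{R}^2 \setminus \{(0,0)\}$, and a loop with nonzero winding number cannot be contracted. A small numerical sketch in Python (the function name `winding_number` is my own):

```python
import math

def winding_number(path):
    """Winding number about the origin of a closed polygonal path,
    given as a list of (x, y) points with path[0] == path[-1]."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d = math.atan2(y1, x1) - math.atan2(y0, x0)
        # unwrap the per-segment angle increment into (-pi, pi]
        if d <= -math.pi:
            d += 2 * math.pi
        elif d > math.pi:
            d -= 2 * math.pi
        total += d
    return total / (2 * math.pi)

N = 100
# A loop once around the origin: winding number 1, so it cannot be
# contracted to a point inside the punctured plane.
circle = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
          for k in range(N + 1)]
print(round(winding_number(circle)))  # 1

# A loop that misses the origin entirely: winding number 0, contractible.
small = [(2 + math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
         for k in range(N + 1)]
print(round(winding_number(small)))  # 0
```

Any continuous deformation of the first loop within the punctured plane preserves that value of 1, which is exactly why it can never be shrunk to a point.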
Here's who's on the list: - Sen. John McCain, R-Ariz.: McCain starts the race as the front-runner in the Grand Canyon State, which voted for GOP nominee Mitt Romney with 54 percent in 2012. But Democrats, including EMILY's List, say they're excited about their candidate, Rep. Ann Kirkpatrick, who they say will be McCain's strongest opponent yet. Rothenberg & Gonzales Political Report/Roll Call rating: Leans Republican - Sen. Mark S. Kirk, R-Ill.: Kirk is arguably the most vulnerable Senator up for re-election in 2016. He will face voters in a state that voted for President Barack Obama by double-digit margins in 2008 and 2012. And Democrats say his likely opponent, Democratic Rep. Tammy Duckworth, makes for a good matchup. Rating: Tilts Democratic - Sen. Kelly Ayotte, R-N.H.: Democrats got their preferred candidate here when Gov. Maggie Hassan announced she would run for Senate in the Granite State, rather than seek a third term as governor. Polling shows a Hassan-Ayotte contest in a dead heat. Rating: Tilts Republican - Sen. Richard M. Burr, R-N.C.: Democrats finally got a candidate in former state Rep. Deborah Ross after a number of top-tier recruits passed on the race this year. EMILY's List has yet to officially endorse Ross' candidacy. But access to the group's national fundraising base could help Ross build up a war chest for the contest. Rating: Leans Republican The four members join Sen. Patrick J. Toomey and a handful of House members on the "On Notice" list — which EMILY's List reserves for those it says have voted for anti-abortion legislation. “Senators Kelly Ayotte, Richard Burr, Mark Kirk, and John McCain all share an extreme ideological agenda that hurts women and families across the country,” Stephanie Schriock, president of EMILY’s List, said in a news release. 
“These four vulnerable senators are joining their colleague Pat Toomey ‘On Notice’ because they all have long records of voting for anti-woman legislation and standing in the way of policies that give working families a fair shot."
The Complete Musical Aptitude Test Library: The Complete Online Musical Aptitude Test and Coaching Course. Understanding music by ear is a vital skill. People who can hear what's going on inside of music often progress more quickly and effortlessly than others. With schools now testing musical aptitude for admissions, it's vital that you or your student learn to master this powerful but practical musical test. This highly extensive musical aptitude test library contains all the tests you need to practice in order to have the best chance of passing an MAT test. It also coaches you directly at the piano on how to practice, or help a student practice, deeper musical awareness. Part 1 is a highly extensive test covering all four areas of Pitch, Melody, Texture/Harmony and Rhythm. This section contains many more exercises and tests than are usual in MAT tests - to cover all bases and give you the best overview of what to expect. Part 2 explains in great detail over 20 different exercises and practice routines that you can do yourself, or with your child or student, to regularly and systematically enhance aural awareness, concentration, memory and perception of musical content.
With an unprecedentedly high attendance of over half a million visitors, the 17th Sydney Biennale has also been the largest in scale since the biennale was first held in 1973. From 12 May to 1 August, 444 works by 167 artists from 36 countries sprawled out over seven exhibition venues, including the Museum of Contemporary Art, Cockatoo Island, Pier 2/3, Artspace, the Sydney Opera House, the Royal Botanic Gardens and the entrance court of the Art Gallery of New South Wales. What follows is an Art Radar summary of this year’s artists and events and a collection of comments and critiques made by various arts writers and bloggers. From European Enlightenment to globalisation Titled The Beauty of Distance: Songs of Survival in a Precarious Age, this year’s biennale celebrated the end of European Enlightenment in art and welcomed a new era of shifted balance of power. David Elliott, artistic director of the biennale, spoke to The New Zealand Herald about the breaking down of previous political and geopolitical structures and the changing dispersion of power and knowledge in the present world. In an effort to explore this new world – a world in which Western superiority is being replaced by equality among different cultures – the biennale selected and presented works from diverse cultures, predominantly Australia, New Zealand, Canada, Scandinavia, Britain and China, works created mostly by artists who are new to international exhibitions. Diverse art styles, heavy demand for new technologies As the subtitle Songs of Survival in a Precarious Age suggests, the biennale presented arts with themes that are closely connected to contemporary realities. 
A recent article posted on c-artsmag mentioned how the biennale pointed to a world which “is fragmented and fractured, hobbled by inequalities and necessitating historical reassessment.” Common themes of the exhibited works include poverty, famine, inequality, environmental despoliation and globalisation. The biennale presented works of a variety of styles, as Adam Fulton describes in the Sydney Morning Herald. There was a heavy demand for new technologies to support the audio and visual effects in many biennale works. A review by Colin Ho on ZDNet reports that over 70% of the budgeted expenses for artworks and installations of the biennale were spent on audio-visual and IT infrastructure. Australian arts were promoted The biennale represented the largest number of Australian artists in its history. 65 Australian artists exhibited their works, and most of the 68 artists who premiered new works were also Australians. An example is Peter Hennessey’s sculptural work, My Hubble (the universe turned in on itself) (2010), with which visitors can play to modify and create their own universes that they can then view in the eyepiece located high in the air. Peter Hennessey's 'My Hubble', which allows viewers to create and view their own universes, was part of this year's Sydney Biennale. Another example is Brook Andrew’s Jumping Castle War Memorial (2010). The seven-metre-wide bouncy castle is not designed for children, but for adults over sixteen only. The plastic-enclosed turrets contain skulls which represent the victims of genocide worldwide. The plastic-enclosed turrets of Brook Andrew’s 'Jumping Castle War Memorial' contain skulls which represent the victims of genocide worldwide. The interactive installation is part of this year's Sydney Biennale. Major Asian artworks at the biennale Among all the exhibited works, one of the most visited, media-covered and praised artworks was Chinese artist Cai Guo-Qiang’s Inopportune: Stage One, his largest installation to date. 
Cai Guo-Qiang’s 'Inopportune: Stage One' (2004) is a colossal installation made with nine cars and sequenced multichannel light tubes which create an impression of a series of cars exploding and rotating through space. The biennale exhibited other Asian premiere works, including Japanese artist Hiroshi Sugimoto’s Faraday Cage, Chinese artists Sun Yuan and Peng Yu’s Hong Kong Intervention and Chinese artist Jennifer Wen Ma‘s New Adventures of Havoc in Heaven III. Japanese artist Hiroshi Sugimoto’s premiere work 'Faraday Cage' is an installation created with light boxes from his previous “lightning fields”, which experiment with photographically imaging electricity on large-format film. Chinese artists Sun Yuan and Peng Yu’s premiere work 'Hong Kong Intervention' (2009) reflects on the socio-economic inequity between the now mobile and globalised Filipino domestic maid workforce in Hong Kong and their employers. Chinese artist Jennifer Wen Ma's premiere work 'New Adventures of Havoc in Heaven III' is a video installation in which a smoke projection beams an animated image of the Monkey King from Chinese mythology. Go here to view videos highlighting some of the major works in the 17th Sydney Biennale, including Jennifer Wen Ma’s New Adventures of Havoc in Heaven III, Peter Hennessey’s My Hubble and Brook Andrew’s Jumping Castle War Memorial. Mixed response from professionals and blog critics While the consensus among critics and bloggers is that the Sydney Biennale this year was better than those in previous years, there are mixed comments about the biennale. John McDonald sums up the biennale as a circus which relies too much on the natural ambience of Cockatoo Island. As he wrote in the Brisbane Times, “This Biennale is as much a circus as ever, with some impressive works and a huge amount of filler. 
It is a better, more consistent show than the previous Biennale, although it still contains many exhausting hours of video and leans heavily on the extraordinary ambience of Cockatoo Island.” He also questions whether the diverse selection of works is based on a central theme or just David Elliott’s taste. “The sheer diversity of this collection makes a mockery of the conceptual framework outlined by the director. He might just as easily have said: ‘These are works that I like, made by some friends of mine.’ Instead, we are subjected to the usual preposterous claims that this art will leave us gasping for breath and spiritually transfigured. If it doesn’t, the problem lies with us, not the show.” A blogger, writing on Art Kritique, shares a similar view with John McDonald and describes the biennale as confusing, banal and tricksy. “The Biennale of Sydney is confusing. A friend of mine recently described it as a ‘car crash mishmash’ and she was right, sometimes the unexpected juxtapositions make for magical surprises, more often they leave you with a headache … The inherent ghostly palimpsest of the island’s history, the shapes and textures of architecture and machinery speak so eloquently themselves that much of the work feels banal and tricksy.” But some appreciated the biennale as being thought provoking and the works as being engaging and of high standard. “Remarkably coherent and thoughtful, Elliott’s biennale mostly avoids the pitfalls of political correctness by including art that is thought-provoking, engaging and, in some instances, even beautiful.” Christina Ruiz, writing in the Art Newspaper. “The Sydney Biennale … is usually more Banale than Biennale but not this year. The Beauty of Distance: Songs of Survival in a Precarious Age, curated by David Elliott, is at turns poetic, ironic, and provocative. 
With tonnes of interesting artists doing amazing and often very humorous things, from Cai Guo Qiang and Shen Shaomin from China, to Folkert de Jong from the Netherlands, Paul McCarthy from the US, and Kader Attia from France. Roxy Paine’s ‘Neuron’ installation outside the Museum of Contemporary Art is particularly arresting, its stainless steel nerve cell of tree roots exploding in front of the MCA’s rather authoritarian 1930s facade. In my view, it is the best Biennale since the ‘The Readymade boomerang’ curated by René Block in 1990.” Chris Moore, writing in Saatchi Online TV and Magazine. “The Biennale has a delightfully freewheeling and inclusive spirit, but it is the high standard of the art work, carefully selected and displayed, that makes the big exhibition so enjoyable at all its venues, not just Cockatoo Island … It helps that there is very little art of the ‘my three year old could have drawn that’ school. The easy pose of ironic detachment which sometimes puts people off contemporary art is almost completely absent, or is at least leavened by a political and conceptual eagerness which eloquently expresses the Biennale’s seemingly unwieldy theme, “The Beauty of Distance: Songs of Survival in a Precarious Age.” Alan Miller, writing in the Berkshire Review for the arts. CBKM/KN
The automotive industry comprises all the firms that design, manufacture, and market motor vehicles. India currently ranks 5th in the world in terms of the size of its automobile industry. In terms of aggregate production, the industry contributes 7.5-10% of the nation's Gross Domestic Product (GDP) and holds a 4-5% share in the country's exports as well. The market is forecasted to grow by 44% within the period 2020-2027 and is likely to hit almost 7 million units in annual sales by 2027. The industry, moreover, might create five crore direct and indirect jobs in this decade. In April-March 2020, overall automobile exports registered a growth of 2.95% in India. Passenger vehicle exports increased by approximately 0.2% and two-wheeler exports reflected a growth of 7.30% in April-March 2020. The entire sector attracted $24.5 billion in Foreign Direct Investment (FDI) during April 2000 – June 2020, accounting for 5.1% of the country's total FDI inflows. The rise in demand for cars and other vehicles, supplemented by the increase in annual incomes, is the chief growth driver of the automobile industry in India. The launch of effective initiatives like tailor-made finance schemes, as well as schemes that allow easy repayment of loans, has also assisted the growth of the automobile sector. B2B E-commerce and the Automotive Industry With the industry marking a consistent rise in demand, B2B firms are increasing their contribution to automobile supply, making it one of the most prominent domains of B2B marketing. Modern technology enables B2B automotive brands to restructure their sales processes and automate much manual work; transportation costs, which are otherwise increasingly high, are reduced and profit margins are increased. 
Through technological advancement, firms are able to create self-service portals that operate 24/7, promoting and mediating genuine deals. One such thriving B2B platform is Supplier4buyer, which specializes in dealing in auto parts, connecting manufacturers of automobile parts directly to buyers at a profit to both parties. Contribution of Supplier4buyer (India's leading B2B platform) to the Automotive Industry Founded in 2020 as a 'StartupIndia' registered entity, Supplier4buyer is an exceptional cloud-based unified Indian B2B e-commerce platform, linking millions of verified buyers and suppliers in the national as well as the international market. Headquartered in Noida, Uttar Pradesh, it aims to improve the supply chain of the modern industrial era by adding value to the B2B industry. Furthermore, it is the first B2B portal in the market to set out to become an e-marketplace for SMEs. Providing 100% qualified leads to its clients, it has earned the title of India's number 1 B2B portal. It provides a diverse range of supplies across 27 growing industries, one of which is the automotive industry. Supplier4buyer offers an extensive range of the finest quality automobile spare parts that facilitate the production of finished goods in the automobile industry - for instance, automotive parts and spares like gaskets and lubricants that are sourced exclusively from listed auto parts manufacturers and dealers. 
Below is a complete list of the goods that are provided through the B2B portal of Supplier4buyer: Automobile Fuel Oil and Lubricants, Automobile Oil and Lubricants, Automobile Other Tools, Automobile Services, Automotive Bearing and Support Spring, Automotive Cleaning and Detailing, Automotive Drivetrain and Component Spares, Automotive Electronics, Automotive Exterior, Automotive Filters Seal Ring Sleeve and Gaskets, Automotive Interior, Automotive Powertrain and Component Spares, Automotive Radiator Support Coupling Valve, Automotive Tools and Dies, Battery, Battery Accessories, Commercial Construction Agricultural Automobile, Electricity-driven Automobile and Parts, Hydraulic Jacks and Lifting Equipment, Lamps, Lighting, Loan, Road Transportation, and Services and Specification Based Automobile and Two and Three Wheeler Part Spare and Accessories. Summary: Are you on the lookout for the best B2B portal dealing in auto parts? Here is Supplier4buyer, India's leading online platform providing 100% genuine deals.
Torch On Felts: SBS Felts and APP Felts

Although SBS's melting point is only slightly lower than that of APP, it doesn't have the same flow characteristics as APP-modified bitumen. Warm SBS gets very sticky. With SBS-modified bitumen roofing materials, determining when the sheet is properly heated is simplified: when the plastic burn-off sheet is melted away and the SBS-modified membrane has a gooey consistency, the sheet is ready to be rolled into place. Proper torching is easy to check by back-rolling and looking for full adhesion to the substrate.

COMPARISON                      SBS                 APP
Bitumen Modifier Type           Synthetic Rubber    Plastic
Low Temperature Flexibility     -22°F to -5°F       14°F to 32°F
Softening Point                 230°F to 270°F      245°F to 300°F
Elongation with Full Recovery   >1000%              5-10%
Serviceability Range            -20°F to 270°F      15°F to 300°F
TITLE: Multiple solutions for a monic degree-5 polynomial in $\mathbb{Z}_{5}[x]$ for which all elements of $\mathbb{Z}_{5}$ are roots QUESTION [1 upvotes]: A monic degree-5 polynomial in $\mathbb{Z}_{5}[x]$ for which all elements of $\mathbb{Z}_{5}$ are roots I found is $(x^5 - x)$, since $f(1) \equiv 0 \pmod 5$, $f(2) \equiv 0 \pmod 5$, $f(3) \equiv 0 \pmod 5$, $f(4) \equiv 0 \pmod 5$, $f(5) \equiv 0 \pmod 5$. I am wondering, however, if there could be more than one answer. I noticed that $(x^5 - x)$ is a difference and can be factored, but I don't know if that is a good conjecture. REPLY [3 votes]: Hint: if there were two, then their difference is a nonzero polynomial of degree $< 5$ with $5$ roots, contra: a polynomial over a field (or domain) has no more roots than its degree. Remark $ $ More generally, by the above, if $f(x) = 0\,$ for all $x$ then $\deg f \ge 5.\,$ Let $\, g = x^5-x.\,$ Note $h = f\bmod g = f - q\,g\,$ also has $\,h(x) = 0$ for all $x$ and $\,\deg h < 5\,$ so $\, h = 0,\, $ so $\,f = q\,g.\,$ Thus any other polynomial having the same set of roots is a multiple of the minimal degree polynomial $g$. Remark $ $ This is a special case of the fact that ideals in Euclidean domains are principal - generated by any element of minimal Euclidean value (= degree here).
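The hint can be confirmed by brute force: over $\mathbb{Z}_5$ there is exactly one monic degree-5 polynomial vanishing at every element, namely $x^5 - x$ (Fermat's little theorem gives the vanishing). A short Python check, assuming only the standard library (the helper name `evaluate` is my own):

```python
from itertools import product

# Fermat's little theorem: x^5 ≡ x (mod 5), so x^5 - x vanishes on all of Z_5.
assert all(pow(x, 5, 5) == x % 5 for x in range(5))

def evaluate(coeffs, x, p=5):
    """Value at x of the monic polynomial x^5 + c4*x^4 + ... + c1*x + c0 mod p,
    with coeffs = (c0, c1, c2, c3, c4)."""
    val = pow(x, 5, p)
    for i, c in enumerate(coeffs):
        val = (val + c * pow(x, i, p)) % p
    return val

# Enumerate all 5^5 monic degree-5 polynomials over Z_5 and keep those
# vanishing at every element of Z_5.
solutions = [c for c in product(range(5), repeat=5)
             if all(evaluate(c, x) == 0 for x in range(5))]
print(solutions)  # [(0, 4, 0, 0, 0)], i.e. x^5 + 4x = x^5 - x in Z_5[x]
```

The single surviving coefficient tuple matches the uniqueness argument in the reply: any two such polynomials would differ by a degree-$<5$ polynomial with five roots.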
Traditionally, the Intermediate Gauge has been identified as the Hadrian’s Wall curtain wall constructed between Milecastle 54 and Bowness fort to replace the Turf Wall, after the abandonment of the Antonine Wall. It was built on a foundation of flagstones and was around 2.45m wide. Further reading: Breeze 2006; Symonds and Mason 2009
TITLE: Efficiently solving $9\,\_\,8\,\_\,7\,\_\,6\,\_\,5\,\_\,4=2020$ (filling blanks with $+$, $-$, $\times$, $\div$, and using parentheses) QUESTION [4 upvotes]: Our physics teacher gives us problems every week to solve for fun. This week, we got this: $$9\,\_\,8\,\_\,7\,\_\,6\,\_\,5\,\_\,4=2020$$ Fill in the blanks with the operations $+$, $-$, $\times$, $\div$. Parentheses can be used as you wish. Despite my attempts at finding an elegant way to solve this problem, I eventually gave up and tried brute-forcing an answer along with a friend, attempting to try out all of the $4^5 \times 5!$ (I think) possibilities, along with some very basic high-level strategies for filtering. A classmate, who used something like a brute-force approach and happened to be lucky enough to stumble upon the right answer early, got this: $$(9 \times 8 \times 7 + 6 - 5) \times 4 = 2020$$ While we have an answer, I would like to know if there's a better approach - one that's more elegant and/or efficient. Any insights would be much appreciated. Thanks! REPLY [3 votes]: I guess that there are a lot of approaches to this problem, but I would consider the following: Obviously 2020 is too large to be the result of some calculation which does not use a multiplication. So we need $\times$ at least once. That is why one should take a look at the prime factorization of 2020, which leads to $$2020 = 2 \times 2 \times 5 \times 101.$$ This is why I would decide to choose a $\times$ before the 4. Now we have 505 left, which is quite large. So I started to multiply the numbers from left to right, which leads to $$9 \times 8 \times 7 = 504.$$ This is luckily quite close to the 505 we are looking for, which can then be obtained by adding 6 and subtracting 5. This finally leads us to $$(9 \times 8 \times 7 + 6 - 5) \times 4 = 2020.$$ However, there might be even more efficient ways than that, in particular concerning the last steps, when it comes to figuring out how to obtain 505.
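For what it's worth, the full search space here is tiny by computer standards, so the "inelegant" route is instant: enumerate every split point and every operation with exact rational arithmetic. A Python sketch (the function name `expressions` is my own; it maps each reachable value to one expression string, keeping the digits in the given order):

```python
from fractions import Fraction

def expressions(nums):
    """Map each value reachable from nums (kept in order) to one expression,
    using +, -, *, / with arbitrary parenthesization."""
    if len(nums) == 1:
        return {nums[0]: str(nums[0])}
    out = {}
    for i in range(1, len(nums)):
        left = expressions(nums[:i])
        right = expressions(nums[i:])
        for a, ea in left.items():
            for b, eb in right.items():
                out.setdefault(a + b, f"({ea} + {eb})")
                out.setdefault(a - b, f"({ea} - {eb})")
                out.setdefault(a * b, f"({ea} * {eb})")
                if b != 0:  # Fraction keeps division exact
                    out.setdefault(a / b, f"({ea} / {eb})")
    return out

table = expressions([Fraction(n) for n in (9, 8, 7, 6, 5, 4)])
print(table[Fraction(2020)])  # prints one valid filling of the blanks
```

Using `Fraction` rather than floats matters: intermediate divisions like $7 \div 6$ stay exact, so no near-miss value is accidentally counted as 2020.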
\begin{document} \maketitle \begin{abstract} One of the reasons that many neural networks are capable of replicating complicated tasks or functions is their universality property. The past few decades have seen many attempts at providing constructive proofs for single networks or classes of neural networks. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activations, including most of the existing activations and beyond. At the heart of the framework is the concept of neural network approximate identity. It turns out that most existing activations are neural network approximate identities, and thus universal in the space of continuous functions on compacta. The framework induces several advantages. First, it is constructive, with elementary means from functional analysis, probability theory, and numerical analysis. Second, it is the first unified attempt that is valid for most existing activations. Third, as a by-product, the framework provides the first universality proof for some of the existing activation functions, including Mish, SiLU, ELU, GELU, etc. Fourth, it discovers new activations with guaranteed universality property. Indeed, any activation\textemdash whose $\k$th derivative, with $\k$ being an integer, is integrable and essentially bounded\textemdash is universal. Fifth, for a given activation and error tolerance, the framework provides precisely the architecture of the corresponding one-hidden-layer neural network with a predetermined number of neurons, and the values of the weights/biases. \end{abstract} \section{Introduction} The human brain consists of networks of billions of neurons, each of which, roughly speaking, receives information\textemdash electrical pulses\textemdash from other neurons via dendrites, processes the information using the soma, is activated by a difference of electrical potential, and passes the output along its axon to other neurons through synapses. 
Attempts to understand the extraordinary ability of the brain in categorizing, classifying, regressing, and processing information have inspired numerous scientists to develop computational models to mimic brain functionalities. Most well-known is perhaps the McCulloch-Pitts model \cite{McCulloch90}, also called the perceptron by Rosenblatt, who extended the McCulloch-Pitts model to networks of artificial neurons capable of learning from data \cite{Rosenblatt58}. {\em The question is whether such a network could mimic some of the brain's capabilities}, such as learning to classify. The answer lies in the fact that perceptron networks can represent Boolean logic functions exactly. From a mathematical point of view, perceptron networks with Heaviside activation functions compute step functions, linear combinations of which form the space of simple functions, which in turn is dense in the space of measurable functions \cite{Durrett2010,Royden18}. That is, linear combinations of perceptrons can approximate a measurable function to any desired accuracy \cite{Lippmann87, Cotter90}: {\em the first universal approximation result for neural networks.} Universal approximation capability partially ``explains'' why human brains can be trained to learn virtually any task. For training one-hidden-layer networks, the Widrow-Hoff rule \cite{widrow60} can be used for supervised learning. For many-layer ones, the most popular approach is back-propagation \cite{Rumelhart87}, which requires the derivative of the activation functions. The Heaviside function is not differentiable in the classical sense. A popular smooth approximation is the standard logistic (also called sigmoidal) activation function, which is infinitely differentiable. {\em The question is now: are neural networks of sigmoidal functions universal?} The answer to this question was first addressed by Cybenko \cite{Cybenko1989} and Funahashi \cite{FUNAHASHI1989}. 
The former, though non-constructive (an almost constructive proof was then provided in \cite{Chen92} and revisited in \cite{Costarelli14}), elegantly used the Hahn-Banach theorem to show that sigmoidal neural networks (NNs) with one hidden layer are dense in the space of continuous functions on compacta. The latter used the Fourier transform and an integral formulation for integrable functions with bounded variation \cite{Irie88} to show the same result for NNs with continuous, bounded, monotone increasing sigmoidal activation functions. Recognizing the NN output as an approximate back-projection operator, \cite{Carroll89} employed the inverse Radon transform to show the universal approximation property in the space of square integrable functions. Making use of the classical Stone-Weierstrass approximation theorem, \cite{HORNIK89} successfully proved that NNs with non-decreasing sigmoidal functions are dense in the space of continuous functions over compacta and dense in the space of measurable functions. The work in \cite{KURKOVA92}, based on a Kolmogorov theorem, showed that any continuous function on hypercubes can be approximated well with a two-hidden-layer NN of sigmoidal functions. Though sigmoidal functions are natural continuous approximations to the Heaviside function, and hence mimic the activation mechanism of neurons, they are not the only ones having the universal approximation property. Indeed, \cite{MHASKAR92} showed, using distribution theory, that NNs with $k$th degree sigmoidal functions are dense in the space of continuous functions over compacta. However, this approach is not realizable by typical NNs, as the universality is for sigmoidal functions augmented by polynomials. Meanwhile, \cite{Gallant88} designed a cosine squasher activation function so that the output of a one-hidden-layer neural network is a truncated Fourier series and thus can approximate square integrable functions to any desired accuracy. 
Following and extending \cite{Cybenko1989}, \cite{HORNIK91} showed that NNs are dense in $\L^\p$ with bounded non-constant activation functions and in the space of continuous functions over compacta with continuous bounded nonconstant activation functions. Using the Stone-Weierstrass theorem, \cite{Cotter90} showed that any network with activations (e.g. exponential, cosine squasher, modified sigma-pi, modified logistic, and step functions) that can transform products of functions into sums of functions is universal in the space of bounded measurable functions over compacta. The work in \cite{Stinchcombe90} provided universal approximation for bounded weights (and biases) with piecewise polynomial and superanalytic activation functions (e.g. sine, cosine, logistic functions and piecewise continuous functions that are superanalytic at some point with positive radius of convergence). One-hidden-layer NNs are in fact dense in the space of continuous functions and in $\L^\p$ if the activation function is not a polynomial almost everywhere \cite{LESHNO93, pinkus1999}. Using Taylor expansion and the Vandermonde determinant, \cite{Attali97} provided an elementary proof of universal approximation for NNs with $C^\infty$ activation functions. Recently, \cite{YAROTSKY17} provided a universal approximation theorem for multilayer NNs with ReLU activation functions using partition of unity and Taylor expansion for functions in Sobolev spaces. Universality of multilayer ReLU NNs can also be seen using approximate identity and the rectangular quadrature rule \cite{Moon21}. Restricting to Barron spaces \cite{Barron93,klusowski18}, the universality of ReLU NNs is a straightforward application of the law of large numbers \cite{Weinan19}. The universality of ReLU NNs can also be obtained by emulating finite element approximations \cite{he20,Opschoor20}. 
Unlike others, \cite{CARDALIAGUET1992} introduced $\B$ell-shape functions (as derivatives of squash-type activation functions, for example) as a means to explicitly construct one-hidden-layer NNs that approximate continuous functions over compacta in multiple dimensions. More detailed analysis was then carried out in \cite{CHEN2009, CHEN2015}. The idea was revisited and extended in \cite{Anastassiou01,Anastassiou00,ANASTASSIOU2011,ANASTASSIOU2011b,ANASTASSIOU2011tanhnD} to establish universal approximation in the uniform norm for tensor product sigmoidal and hyperbolic tangent activations. A similar approach was also taken in \cite{COSTARELLI2015} using cardinal B-splines and in \cite{Costarelli2015sigmoid} using the distribution function as a sigmoid, but for a family of sigmoids with certain decaying-tail conditions. While it is sufficient for most universal approximation results that each weight (and bias) vary over the whole real line $\R$, this is not necessary. In fact, universal approximation results for continuous functions on compacta can also be obtained using a finite set of weights \cite{CHUI1992,ITO91,Ismailov12}. It is quite striking that one hidden layer with only one neuron is enough for universality: \cite{Guliyev16} constructed a smooth, sigmoidal, almost monotone activation function so that a one-hidden-layer network with one neuron can approximate any continuous function over any compact subset of $\R$ to any desired accuracy. Universal approximation theorems with convergence rates for sigmoidal and other NNs have also been established. Modifying the proofs in \cite{Chen92,MHASKAR92}, \cite{Debao93} showed that, in one dimension, the error incurred by sigmoidal NNs with $\N$ hidden units scales as $\mc{O}\LRp{\N^{-1}}$ in the uniform norm over compacta. The result was then extended to $\R$ in \cite{HONG02} for bounded continuous functions.
Similar results were obtained for functions of bounded variation in \cite{GAO93} and of bounded $\phi$-variation in \cite{Lewicki03}. For multiple dimensions, \cite{Barron93} provided universal approximation for sigmoidal NNs in the space of functions with bounded first moment of the magnitude distribution of their Fourier transform, with rate $\mc{O}\LRp{\N^{-1/2}}$ independent of the dimension. This result was generalized to Lipschitz functions (with additional assumptions) in \cite{LEWICKI04}. Using the explicit NN construction in \cite{CARDALIAGUET1992}, the works \cite{Anastassiou01,Anastassiou00,ANASTASSIOU2011,ANASTASSIOU2011b,ANASTASSIOU2011tanhnD,CHEN2009, CHEN2015,Costarelli2015sigmoid} provided a convergence rate of $\mc{O}\LRp{\N^{-\alpha}}$, $0 < \alpha < 1$, for H\"older continuous functions with exponent $\alpha$. Recently, \cite{SIEGEL2020313} revisited universal approximation theory with rate $\mc{O}\LRp{\N^{-1/2}}$ for smooth activation functions with a polynomial decay condition on all derivatives. This improves and extends the previous similar work for sigmoidal functions by \cite{Barron93} and for exponentially decaying activation functions by \cite{HornikEtAl94}. The setting in \cite{SIEGEL2020313} is valid for sigmoidal, arctan, hyperbolic tangent, softplus, ReLU, leaky ReLU, and $k$th power of ReLU activations, as their central differences satisfy the polynomial decay condition. However, the result is only valid for smooth functions in high-order Barron spaces. For activation functions without decay, but essentially bounded and with bounded Fourier transform on some interval (or with bounded variation), universal approximation results with rate $\mc{O}\LRp{\N^{-1/4}}$ can be obtained for the first-order Barron space. The rate can be further improved using stratified sampling \cite{MAKOVOZ96,klusowski18}. Convergence rates for ReLU NNs in Sobolev spaces have recently been established using finite elements \cite{he20,Opschoor20}.
The contribution of this paper is a constructive and unified framework for a large class of activations, including most existing ones. The main features of our approach are the following. First, the approach is constructive, using only elementary means from functional analysis, probability theory, and numerical analysis. Second, it is the first attempt to unify the universality proofs for most existing activations. Third, as a by-product, the framework provides the first universality proof for some existing activation functions, including Mish, SiLU, ELU, and GELU. Fourth, it discovers new activations with a guaranteed universality property. Indeed, any activation\textemdash whose $\k$th derivative, with $\k$ being an integer, is integrable and essentially bounded\textemdash is universal. In that case, the activation function and all of its $j$th derivatives, $j = 1,\hdots,\k$, are not only valid activations but also universal. For example, any differentiable sigmoidal function and its derivative are universal. Fifth, for a given activation and error tolerance, the framework specifies precisely the architecture of the corresponding one-hidden-layer neural network, with a predetermined number of neurons and the values of the weights and biases. The paper is organized as follows. Section \secref{notations} introduces conventions and notations used in the paper. Elementary facts about convolution and approximate identities are presented in section \secref{cAI}. Section \secref{quadrature} recalls quadrature rules in terms of Riemann sums for continuous functions on compacta and their error analysis using moduli of continuity. This is followed by a unified abstract framework for universality in section \secref{aAI}. The key to achieving this is the concept of neural network approximate identity (nAI), which immediately provides an abstract universality result in Lemma \lemref{AI}. This abstract framework reduces a universality proof to an nAI proof.
Section \secref{manyActivations} shows that most existing activations are nAI, including the family of rectified polynomial units (RePU), of which the parametric and leaky ReLUs are members; a family of generalized sigmoidal functions, of which the standard sigmoid, hyperbolic tangent, and softplus are members; the exponential linear unit (ELU); the Gaussian error linear unit (GELU); the sigmoid linear unit (SiLU); and the Mish. It is the nAI proof of the Mish activation that guides us to devise a general framework for a large class of nAI functions (including all activations in this paper and beyond) in section \secref{generalAI}. Section \secref{conclusion} concludes the paper. \section{Notations} \seclab{notations} This section describes the notations used in the paper. \begin{itemize} \item We reserve lower-case Roman letters for scalars or scalar-valued functions. Boldface lower-case Roman letters are reserved for vectors, with components denoted by subscripts. \item $\R$: the set of real numbers. \item For $\x \in \Rn$, where $\n \in \mathbb{N}$ is the ambient dimension, $\snor{\x}_p := \LRp{\sum_{i=1}^\n\snor{\x_i}^\p}^{1/\p}$ denotes the standard $\ell^p$ norm in $\Rn$. \item $*$: convolution operator. \item $\f\LRp{\x} := \f\LRp{\x_1,\hdots,\x_n}$. \item $\nor{\f}_\p := \LRp{\intR \snor{f\LRp{\x}}^\p\,d\mu\LRp{\x}}^{1/\p}$ for $1\le \p <\infty$, where $\mu$ is the Lebesgue measure in $\R^n$. For $\p = \infty$, $\nor{\f}_\infty := \text{ess}\sup_{\Rn}\snor{f} := \inf\LRc{M > 0: \mu\LRc{\x:\snor{f(\x)} > M} = 0}$. For simplicity, we use $d\x$ in place of $d\mu\LRp{\x}$. Note that we also use $\nor{f}_\infty$ to denote the uniform norm of a continuous function $f$. \item $\L^\p:=\L^\p\LRp{\Rn} := \LRc{f: \nor{f}_p < \infty}$ for $1\le \p \le \infty$. \item $\p$ and $\q$ are called conjugate if $\frac{1}{p} + \frac{1}{q} = 1$. \item $\C_c\LRp{\K}$: space of continuous functions with compact support in $\K \subseteq\Rn$.
\item $\C_0\LRp{\Rn}$: space of continuous functions vanishing at infinity. \item $\C_b\LRp{\K}$: space of bounded continuous functions on $\K \subseteq\Rn$. \item $\supp\LRp{\f}$: the support of $\f$. \end{itemize} \section{Convolution and Approximate Identity} \seclab{cAI} The mathematical foundation for our framework is the approximate identity, which relies on convolution. Convolution has been used by many authors, including \cite{pinkus1999,COSTARELLI2015,LESHNO93}, to assist in proving universal approximation theorems. An approximate identity generated by the ReLU function was used in \cite{Moon21} to show ReLU universality in $\L^\p$. In this section we review important facts about convolution and approximate identities. Let $\tau_\y\f\LRp{\x} := \f\LRp{\x-\y}$ be the translation operator. The following are standard facts about $\tau$. \begin{proposition} There hold: \begin{itemize} \item $\f$ is uniformly continuous iff $\lim_{\y\to 0}\nor{\tau_\y\f - f}_\infty = 0$. \item $\nor{\tau_\y\f}_\p = \nor{\f}_\p$ for $1\le p \le \infty$. \item For $1\le p < \infty$, $\tau$ is continuous in the $\L^\p$ norm, i.e., $\lim_{\y \to 0}\nor{\tau_\y\f -\f}_\p = 0$ for $\f \in \L^\p\LRp{\Rn}$. \end{itemize} \propolab{translationOp} \end{proposition} Let $\f,\g: \Rn \to \R$ be two measurable functions on $\Rn$. The convolution of $\f$ and $\g$ is defined as \begin{equation*} \f*\g := \intR \f\LRp{\x - \y}\g\LRp{\y}\,d\y, \end{equation*} whenever the integral exists. The following are elementary facts about convolution. \begin{proposition} Assume that all the integrals below exist. There hold: \begin{itemize} \item $\f * \g = \g * \f$. \item $\tau_\y\LRp{\f *\g} = \LRp{\tau_\y\f} * \g = \f * \LRp{\tau_\y \g}$ for any $\y \in \Rn$.
\item $\supp\LRp{\f *\g} \subset \overline{\supp \LRp{\f} + \supp\LRp{\g}}$. \end{itemize} \propolab{convolProp} \end{proposition} We are interested in the conditions under which $\f\LRp{\x-\y} \g\LRp{\y}$ (or $\f\LRp{\y} \g\LRp{\x-\y}$) is integrable for almost every (a.e.) $\x$, as our approach requires a discretization of the convolution integral. Below are a few relevant cases. \begin{lemma} The following hold: \begin{itemize} \item If $\f,\g \in \L^1\LRp{\Rn}$ then $\f * \g \in \L^1\LRp{\Rn}$. \item If $\f,\g \in \C_c\LRp{\Rn}$ then $\f * \g \in \C_c\LRp{\Rn}$. \item If $\f \in \L^1\LRp{\Rn},\g \in \C_0\LRp{\Rn}$ then $\f * \g \in \C_0\LRp{\Rn}$. \item If $\f \in \L^\p\LRp{\Rn},\g \in \L^\q\LRp{\Rn}$, where $\p$ and $\q$ are conjugate, and $1 < \p < \infty$, then $\f * \g \in \C_0\LRp{\Rn}$. \item If $\f \in \L^1\LRp{\Rn},\g \in \L^\infty\LRp{\Rn}$, then $\f * \g \in \C_b\LRp{\Rn}$ and $\f * \g$ is uniformly continuous. \end{itemize} \end{lemma} \begin{definition} A family of functions $\Beps \in \L^1\LRp{\Rn}$, where $\theta > 0$, is called an approximate identity if \begin{enumerate} \item The family is bounded in the $\L^1$ norm, i.e., $\nor{\Beps}_1 \le C$ for some $C > 0$, \item $\intR\Beps\LRp{\x}\,d\x =1$ for all $\theta > 0$, and \item $\int_{\nor{\x} > \delta} \snor{\Beps\LRp{\x}}\,d\x \longrightarrow 0$ as $\theta \longrightarrow 0$, for any $\delta > 0$. \end{enumerate} \defilab{AI} \end{definition} An important class of approximate identities is obtained by rescaling $\L^1$ functions. \begin{lemma} Suppose $\g \in \L^1\LRp{\Rn}$ and $\intR\g\LRp{\x}\,d\x = 1$, and define $\Beps\LRp{\x} := \frac{1}{\theta^\n}\g\LRp{\frac{\x}{\theta}}$. Then $\Beps\LRp{\x}$ is an approximate identity. \lemlab{AIscaling} \end{lemma} We collect some important results about approximate identities for continuous functions. \begin{lemma} Suppose $\Beps$ is an approximate identity.
\begin{itemize} \item $\forall \f \in \C_0\LRp{\Rn}$, $\lim_{\theta \to 0}\nor{\f * \Beps - \f}_\infty = 0$. \item If $\f \in \C_b\LRp{\Rn}$ and $\K$ is a compact subset of $\Rn$, then $\lim_{\theta \to 0}\nor{\Ind_{\K}\LRp{\f * \Beps - \f}}_\infty = 0$, where $\Ind_\K$ is the indicator function of $\K$. \item If $\f \in \C_b\LRp{\Rn}$ is H\"older continuous with exponent $0 < \alpha \le 1$, then $\lim_{\theta \to 0}\nor{\f * \Beps - \f}_\infty = 0$. \end{itemize} \lemlab{AI} \end{lemma} \begin{proof} We present the proof of the first assertion here, as we will use a similar approach to estimate the approximate identity error later. Since $f \in C_0\LRp{\Rn}$, it resides in $\L^\infty\LRp{\Rn}$ and is uniformly continuous. By Proposition \proporef{translationOp}, for any $\varepsilon >0$, there exists $\delta > 0$ such that $\nor{\tau_\y\f - f}_\infty \le \varepsilon$ for $\nor{\y} \le \delta$. We have \begin{align*} \nor{\f * \Beps - \f}_\infty & \le \intR\nor{\tau_\y\f - \f}_\infty\snor{\Beps\LRp{\y}}\,d\y && \\ &\le \varepsilon\int_{\nor{\y} \le \delta} \snor{\Beps\LRp{\y}}\,d\y + 2\nor{\f}_\infty\int_{\nor{\y} > \delta} \snor{\Beps\LRp{\y}}\,d\y &&, \end{align*} where we have used the second property in Proposition \proporef{translationOp}. The assertion now follows: the first term is at most $C\varepsilon$ with $\varepsilon$ arbitrarily small, and the second term vanishes as $\theta \to 0$ by the third condition in the definition of an approximate identity. \end{proof} \section{Quadrature rules for continuous functions on bounded domain} \seclab{quadrature} Recall that a bounded function on a compact set is Riemann integrable if and only if it is continuous almost everywhere. In that case, Riemann sums converge to the Riemann integral as the corresponding partition size approaches 0. It is well-known (see, e.g. \cite{Baker68,DavisRabinowitz84}, and references therein) that most common numerical quadrature rules\textemdash including trapezoidal, some Newton-Cotes formulas, and Gauss-Legendre quadrature formulas\textemdash are Riemann sums.
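The Riemann-sum viewpoint is easy to test numerically. The following throwaway sketch (our own illustration, not part of the analysis; the integrand and all constants are arbitrary choices) checks that the midpoint-rule error on $[-1,1]$ for a Lipschitz function stays below a crude modulus-of-continuity bound:

```python
import numpy as np

# Midpoint rule on [-1, 1] viewed as a Riemann sum (illustrative sketch only).
# The test function |sin(3x)| is Lipschitz with constant 3, so its modulus
# of continuity satisfies omega(f, t) <= 3 t.
f = lambda x: np.abs(np.sin(3.0 * x))

def riemann_sum(f, m):
    z = np.linspace(-1.0, 1.0, m + 1)        # uniform partition, norm 2/m
    xi = 0.5 * (z[:-1] + z[1:])              # midpoint "quadrature" points
    return float(np.sum(f(xi) * np.diff(z)))

reference = riemann_sum(f, 400000)           # fine-grid reference value
errs, bounds = [], []
for m in (10, 100, 1000):
    errs.append(abs(riemann_sum(f, m) - reference))
    bounds.append(2.0 * 3.0 * (2.0 / m))     # crude bound: |[-1,1]| * omega(f, 2/m)
    print(f"m = {m:5d}: error {errs[-1]:.2e}, bound {bounds[-1]:.2e}")
```

The observed errors decay much faster than the worst-case modulus-of-continuity bound, which is all the analysis below requires.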
In this section, we use the Riemann sum interpretation of quadrature rules to approximate integrals of bounded functions. The goal is to characterize the quadrature error in terms of the modulus of continuity. We first discuss the quadrature error in one dimension and then extend the result to $\n$ dimensions using a tensor-product argument. We assume the domain of interest is $\LRs{-1,1}^\n$. Let $\f \in {\C}\LRs{-1,1}^\n$, let $\P^\m := \LRc{\z^1,\hdots,\z^{\m+1}}$ be a partition of $\LRs{-1,1}$, and let $\Q^\m := \LRc{\xi^1,\hdots,\xi^\m}$ be the collection of all ``quadrature points'' such that $-1\le \z^j\le \xi^j\le \z^{j+1}\le 1$ for $j=1,\hdots,\m$. We assume that \[ \sum_{j=1}^\m\g\LRp{\xi^j}\LRp{\z^{j+1}-\z^j} \] is a valid Riemann sum for any $\g \in \C\LRs{-1,1}$ (e.g. the trapezoidal rule), and thus converges to $\int_{-1}^1 \g\LRp{\z}\,d\z$ as $\m$ approaches $\infty$. We define a quadrature rule for $\LRs{-1,1}^\n$ as the tensor product of the aforementioned one-dimensional quadrature rule, and thus \begin{equation} \S\LRp{\m,\f} := \sum_{j^1=1,\hdots,j^n=1}^\m\f\LRp{\xi^{j^1},\hdots,\xi^{j^n}}\Pi_{i=1}^\n\LRp{\z^{j^i+1}-\z^{j^i}} \eqnlab{RiemannSum} \end{equation} is a valid Riemann sum for $\int_{\LRs{-1,1}^\n}\f\LRp{\x}\,d\x$. Let us denote by $\omega\LRp{\f,\h}$ the modulus of continuity of $\f$. Recall that \begin{equation} \omega\LRp{\f,\h} := \sup_{\snor{\x-\y}_1\le \h} \snor{\f\LRp{\x} - \f\LRp{\y}} = \sup_{\snor{\z}_1\le h}\nor{\tau_\z\f - \f}_\infty, \eqnlab{modCon} \end{equation} that $\omega\LRp{\f,0} = 0$, and that $\omega\LRp{\f,\h}$ is continuous with respect to $\h$ at $0$ (due to the uniform continuity of $\f$ on $\LRs{-1,1}^\n$). The following error bound for the tensor product quadrature rule is an extension of the standard one-dimensional case \cite{Baker68}. \begin{lemma} Let $\f \in \C\LRs{-1,1}^\n$.
Then\footnote{The result can be marginally improved using the common refinement of $\P^\m$ and $\Q^\m$, but that is not important for our purpose.} \[ \snor{\S\LRp{\m,\f} - \int_{\LRs{-1,1}^\n} \f\LRp{\x}\,d\x} \le 2^\n\omega\LRp{\f,\snor{\P^\m}}, \] where $\snor{\P^\m}$ is the norm of the partition $\P^\m$. \lemlab{QuadratureError} \end{lemma} \begin{proof} Using the definition of the Riemann sum \eqnref{RiemannSum}, we have \begin{align*} &\snor{\S\LRp{\m,\f} - \int_{\LRs{-1,1}^\n} \f\LRp{\x}\,d\x} = \\ &\snor{\sum_{j^1=1}^\m\hdots\sum_{j^n=1}^\m\int_{\z^{j^1}}^{\z^{j^1+1}}\hdots\int_{\z^{j^n}}^{\z^{j^n+1}}\LRs{\f\LRp{\xi^{j^1},\hdots,\xi^{j^n}} - \f\LRp{\x}}\,d\x_1\hdots d\x_\n} \\ &\le 2^\n\omega\LRp{\f,\snor{\P^\m}}, \end{align*} where the last inequality holds because on each cell the integrand is bounded by the modulus of continuity and the cell volumes sum to $2^\n$. \end{proof} \section{Error Estimation for Approximate Identity with quadrature} \seclab{errorAI} We are interested in approximating functions in $\C\LRs{-1,1}^\n$ and $\L^\p\LRp{-1,1}^\n$, $1\le \p <\infty$, using neural networks. Since $\C_c\LRs{-1,1}^\n$ is dense in $\C\LRs{-1,1}^\n$ and $\L^\p\LRp{-1,1}^\n$, it is sufficient to consider $\C_c\LRs{-1,1}^\n$. At the heart of our framework is to first approximate any function $\f$ in $\C_c\LRs{-1,1}^\n$ by convolution with an approximate identity $\Beps$, and then to integrate the convolution $\f * \Beps$ numerically with a quadrature rule. This extends a similar approach in \cite{Moon21} for ReLU activation functions in several directions: 1) we rigorously account for the approximate identity and quadrature errors, 2) our unified framework holds for most activation functions (see Section \secref{manyActivations}) and identifies the conditions under which an activation is admissible (see Section \secref{generalAI}), and 3) we provide the rate of convergence (see Section ). As will be shown, this procedure is the key to our unification of neural network universality. Let $\f \in \C_c\LRs{-1,1}^\n$ and let $\Beps$ be an approximate identity.
From Lemma \lemref{AI} we know that $\f * \Beps$ converges to $\f$ uniformly as $\theta \to 0$. We, however, are interested in estimating the error $\nor{\f * \Beps - \f}_\infty$ for a given $\theta$. From the proof of Lemma \lemref{AI}, for any $\delta > 0$, we have \begin{align*} \nor{\f * \Beps - \f}_\infty & \le \intR\nor{\tau_\y\f - \f}_\infty\snor{\Beps\LRp{\y}}\,d\y && \\ &\le \omega\LRp{\f,\delta}\int_{\nor{\y} \le \delta} \snor{\Beps\LRp{\y}}\,d\y + 2\nor{\f}_\infty\int_{\nor{\y} > \delta} \snor{\Beps\LRp{\y}}\,d\y && \\ & \le \omega\LRp{\f,\delta} + 2\nor{\f}_\infty \T\LRp{\Beps,\delta}, && \end{align*} where $\T\LRp{\Beps,\delta} := \int_{\nor{\y} > \delta} \snor{\Beps\LRp{\y}}\,d\y$ denotes the tail mass of $\Beps$, and the last inequality uses $\nor{\Beps}_1 \le 1$, which holds for the approximate identities constructed in this paper (rescaled nonnegative $\L^1$ functions with unit integral). Next, for any $\x \in \LRs{-1,1}^\n$, we approximate \[ \f * \Beps\LRp{\x} = \int_{\LRs{-1,1}^\n}\Beps\LRp{\x - \y}\f\LRp{\y}\,d\y \] with the Riemann sum \eqnref{RiemannSum} \[ \S\LRp{\m,\f * \Beps}\LRp{\x} = \sum_{j^1=1,\hdots,j^n=1}^\m\f\LRp{\xi^{j^1},\hdots,\xi^{j^n}} \Beps\LRp{\x - \LRp{\xi^{j^1},\hdots,\xi^{j^n}}}\Pi_{i=1}^\n\LRp{\z^{j^i+1}-\z^{j^i}}. \] The following result follows readily from the triangle inequality, the continuity of $\omega\LRp{\f,\h}$ at $\h = 0$, the third condition in the definition of $\Beps$, and Lemma \lemref{QuadratureError}. \begin{lemma} Let $\f \in \C_c\LRs{-1,1}^\n$, let $\Beps$ be an approximate identity, and let $\S\LRp{\m,\f * \Beps}\LRp{\x}$ be the quadrature rule \eqnref{RiemannSum} for $\f * \Beps$ with the partition $\P^\m$ and quadrature point set $\Q^\m$. Then, for any $\delta > 0$, there holds \begin{equation} \f\LRp{\x} = \S\LRp{\m,\f * \Beps}\LRp{\x} + \mc{O}\LRp{\omega\LRp{\f,\delta} + \omega\LRp{\f,\snor{\P^\m}} + \T\LRp{\Beps,\delta} }.
\eqnlab{error} \end{equation} In particular, for any $\varepsilon > 0$, there exist a sufficiently small $\snor{\P^\m}$ (and hence a sufficiently large $\m$) and sufficiently small $\theta$ and $\delta$ such that \[ \nor{\f\LRp{\x} - \S\LRp{\m,\f * \Beps}\LRp{\x}}_\infty \le \varepsilon. \] \lemlab{abstractUniversal} \end{lemma} \begin{remark} Aiming at using only elementary means, we use Riemann sums to approximate the integral. Clearly, this approach does not give the best possible convergence rates. To improve the rates, one can resort to Monte Carlo sampling, which not only reduces the number of ``quadrature'' points but also makes the total number of points independent of the ambient dimension $\n$. Furthermore, this also extends the results to the whole of $\Rn$. \end{remark} \section{An abstract unified framework for universality} \seclab{aAI} \begin{definition}[Network Approximate Identity (nAI) function] A function $\sigma\LRp{\x}$ is called a network approximate identity (nAI) function if there exist $1\le \k \in \mathbb{N}$, $\LRc{\alpha_i}_{i=1}^\k \subset \R$, $\LRc{\w_i}_{i=1}^\k \subset \R$, and $\LRc{\b_i}_{i=1}^\k \subset \R$ such that \begin{equation} \g\LRp{\x} := \sum_{i=1}^\k\alpha_i\sigma\LRp{\w_i\x + \b_i} \in \L^1\LRp{\Rn}. \eqnlab{nAI} \end{equation} \defilab{nAI} \end{definition} \begin{lemma} If $\sigma\LRp{\x}$ is an nAI, then $\sigma\LRp{\x}$ is universal. \lemlab{nAI} \end{lemma} \begin{proof} By the nAI definition \defiref{nAI}, there exist $\k\in\mathbb{N}$ and $\alpha_i,\w_i,\b_i \in \R$, $i=1,\hdots,\k$, such that \eqnref{nAI} holds. After rescaling $\g\LRp{\x}$ by $\nor{\g\LRp{\x}}_{\L^1\LRp{\Rn}}$, we infer from Lemma \lemref{AIscaling} that $\Beps\LRp{\x} := \frac{1}{\theta^\n}\g\LRp{\frac{\x}{\theta}}$ is an approximate identity.
Lemma \lemref{abstractUniversal} then asserts that for any desired accuracy $\varepsilon > 0$, there exist a partition $\P^\m$ with sufficiently small norm $\snor{\P^\m}$, and sufficiently small $\theta$ and $\delta$, such that \begin{multline} \S\LRp{\m,\f * \Beps}\LRp{\x} = \frac{1}{\theta^\n}\sum_{j^1=1,\hdots,j^n=1}^\m\Pi_{i=1}^\n\LRp{\z^{j^i+1}-\z^{j^i}}\f\LRp{\xi^{j^1},\hdots,\xi^{j^n}} \times \\ \sum_{\ell=1}^\k\alpha_\ell\sigma\LRs{\frac{\w_\ell}{\theta}\LRp{\x-\LRp{\xi^{j^1},\hdots,\xi^{j^n}}} + \b_\ell} \eqnlab{NNf} \end{multline} is within $\varepsilon$ of $\f\LRp{\x}$ in the uniform norm, and this concludes the proof. \end{proof} We have constructed a one-hidden-layer neural network $\Tilde{\f}\LRp{\x}$ with an arbitrary nAI activation $\sigma$ in \eqnref{NNf} that approximates well any continuous function $\f\LRp{\x}$ with compact support in $\LRs{-1,1}^\n$. A few observations are in order: i) if we know the modulus of continuity of $\f\LRp{\x}$ and the tail behavior of $\Beps$ (from the properties of $\sigma$), we can precisely determine the total number of quadrature points $\m$, the scaling $\theta$, and the cut-off radius $\delta$ in terms of $\varepsilon$ (see Lemma \lemref{abstractUniversal}); that is, the topology of the network is completely determined; ii) the weights and biases of the network are also readily available from the nAI property of $\sigma$, the quadrature points, and $\theta$; iii) the coefficients of the output layer are also pre-determined by the nAI property (i.e. $\alpha_\ell$) and the values of the unknown function $\f$ at the quadrature points; and iv) {\bf any nAI activation function is universal}. Clearly the Gaussian activation function $e^{-\xn^2}$ is an nAI with $\k = 1$, $\alpha_1 = 1$, $\w_1 = 1$ and $\b_1 = 0$. The interesting fact is that, as we shall prove in section \secref{manyActivations}, most existing activation functions, though not integrable by themselves, are nAI.
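To make the construction concrete, the following minimal sketch (illustrative code only; the target function, the values of $\m$ and $\theta$, the tolerances, and all variable names are our own choices, not part of the analysis) assembles the one-hidden-layer network \eqnref{NNf} for $\n = 1$ with the Gaussian activation and midpoint quadrature:

```python
import numpy as np

# Sketch of the network S(m, f * B_theta) for n = 1 with the Gaussian
# activation exp(-x^2): an nAI with k = 1, alpha_1 = 1, w_1 = 1, b_1 = 0.
# Everything below (target f, m, theta) is an illustrative choice.

def sigma(x):
    return np.exp(-x ** 2)

C = 1.0 / np.sqrt(np.pi)  # rescales the Gaussian to unit integral

def network(x, f, m, theta):
    """One-hidden-layer network with midpoint quadrature on [-1, 1].

    Hidden weights 1/theta, biases -xi_j/theta, and output coefficients
    f(xi_j) * (z_{j+1} - z_j) / theta are all predetermined: no training.
    """
    z = np.linspace(-1.0, 1.0, m + 1)          # partition P^m
    xi = 0.5 * (z[:-1] + z[1:])                # quadrature points
    w = np.diff(z)                             # cell widths z_{j+1} - z_j
    hidden = sigma((x[:, None] - xi[None, :]) / theta)
    return (f(xi) * w / theta * C * hidden).sum(axis=1)

# Continuous target with compact support inside [-1, 1]
f = lambda x: np.cos(np.pi * x) ** 2 * (np.abs(x) <= 0.5)

xs = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(network(xs, f, m=2000, theta=0.05) - f(xs)))
err_coarse = np.max(np.abs(network(xs, f, m=200, theta=0.3) - f(xs)))
print(f"uniform error: {err:.3f} (coarser theta: {err_coarse:.3f})")
```

Shrinking $\theta$ (while keeping $\snor{\P^\m}$ small relative to $\theta$) visibly reduces the uniform error, in line with Lemma \lemref{abstractUniversal}.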
\section{Many existing activation functions are nAI} \seclab{manyActivations} In this section we exploit the properties of each activation (and its family, if any) under consideration to show that it is an nAI for $\n=1$. That is, these activations generate one-dimensional approximate identities, which in turn shows that they are universal by Lemma \lemref{nAI}. We then construct an $\n$-dimensional approximate identity using a special composition of one-dimensional approximate identities, and this immediately asserts that these activations are universal for any ambient dimension $\n$. We start with the family of rectified polynomial units (RePU), which has the most structure (and for which we also discuss the parametric and leaky ReLUs); then the family of sigmoidal functions; then the exponential linear unit (ELU); then the Gaussian error linear unit (GELU); then the sigmoid linear unit (SiLU); and we conclude with the Mish activation, which has the fewest properties. It is the nAI proof of the Mish activation that guides us to devise a general framework for a large class of nAI functions (including all activations in this section) in section \secref{generalAI}. \subsection{Rectified Polynomial Units (RePU) are nAI} \seclab{relu} Following \cite{RePU19, MHASKAR92} we define the rectified polynomial unit (RePU) as \[ \RePU\LRp{\q; x} = \begin{cases} x^\q & \text{if } x\ge 0, \\ 0 & \text{otherwise}, \end{cases} \quad \q \in \mbb{N}, \] which reduces to the standard rectified linear unit (ReLU) \cite{ReLU10} when $\q = 1$, the Heaviside unit when $\q = 0$, and the rectified cubic unit \cite{deepRitz18} when $\q = 3$. The goal of this section is to construct an integrable linear combination of \RePU\ activation functions for a given $\q$. We begin by constructing compactly supported $\mc{B}$ell-shape functions\footnote{The term $\mc{B}$ell-shape function seems to have been coined in \cite{CARDALIAGUET1992}.} from $\RePU$ in one dimension.
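Before the formal construction, here is a quick numerical preview (our own illustrative sketch; the helper names are hypothetical) of the phenomenon exploited below: a $(\q+1)$th-order central difference of $\RePU\LRp{\q;\cdot}$ with unit stepsize is a nonnegative, compactly supported bump, i.e., an integrable linear combination of $\RePU$ evaluations:

```python
import numpy as np
from math import comb, factorial

def repu(q, x):
    """RePU(q; x): x^q for x >= 0 and 0 otherwise (q = 0 gives Heaviside)."""
    return np.where(x >= 0.0, np.maximum(x, 0.0) ** q, 0.0)

def bump(q, x, h=1.0):
    """(q+1)-th order central difference of RePU(q; .), scaled by 1/q!."""
    r = q + 1
    s = sum((-1) ** i * comb(r, i) * repu(q, x + (r / 2.0 - i) * h)
            for i in range(r + 1))
    return s / factorial(q)

xs = np.linspace(-4.0, 4.0, 8001)
dx = xs[1] - xs[0]
for q in (1, 2, 3):
    b = bump(q, xs)
    # nonnegative, supported in [-(q+1)/2, (q+1)/2], unit integral for h = 1
    print(q, float(b.min()), float((b * dx).sum()))
```

For $\q = 1$ (ReLU) the bump is the unit hat function on $[-1,1]$.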
Recall the binomial coefficient notation \[ \begin{pmatrix} \r \\ k \end{pmatrix} = \frac{\r!}{\LRp{\r-k}!k!} \] and the central finite differencing operator (see Remark \remaref{FD} for forward and backward finite differences) with stepsize $\h$ \[ \deltah\LRs{\f}\LRp{\xn} := \f\LRp{\xn + \frac{\h}{2}} - \f\LRp{\xn-\frac{\h}{2}}. \] \begin{lemma}[$\B$-function for \RePU\, in one dimension] Let $\r := \q+1$. For any $\h > 0$, define \[ {\B}\LRp{\xn,\h} := \frac{1}{\q!}\deltah^\r\LRs{\RePU}\LRp{\xn} = \frac{1}{\q !}\sum_{i=0}^{\r}\LRp{-1}^{i}\begin{pmatrix} \r \\ i \end{pmatrix} \RePU\LRp{\q;\xn + \LRp{\frac{\r}{2}- i}\h}. \] Then \begin{enumerate}[i)] \item $\B\LRp{\xn,\h}$ is piecewise polynomial of order at most $\q$ on each interval \\ $\LRs{k\h-\frac{\r\h}{2},(k+1)\h-\frac{\r\h}{2}}$ for $k = 0,\hdots,\r-1$. \item $\B\LRp{\xn,\h}$ is $\LRp{\q-1}$-time differentiable for $\q \ge 2$, continuous for $\q = 1$, and discontinuous for $\q = 0$. Furthermore, $\supp\LRp{\B\LRp{\xn,\h}}$ is a subset of $\LRs{-\frac{\r\h}{2},\frac{\r\h}{2}}$. \item $\B\LRp{\xn, \h}$ is even, non-negative, and unimodal. $\B\LRp{0, \h} = \max_{\xn\in \R}\B\LRp{\xn, \h}$ and $\int_{\R}\B\LRp{\xn, 1}\,d\xn = 1$. \end{enumerate} \lemlab{RePU} \end{lemma} \begin{proof} We start by defining $\X_\r := \sum_{i=1}^\r\U_i$, where the $\U_i$ are independent identically distributed uniform random variables on $\LRs{-\frac{1}{2},\frac{1}{2}}$. Following \cite{feller1,Irwin27,Hall27}, $\X_\r$ follows the Irwin-Hall distribution with the probability density function \[ \f_\r\LRp{\xn} := \frac{1}{\q!}\sum_{i=0}^{\floor{\xn + \frac{\r}{2}}}(-1)^i \begin{pmatrix} \r \\ i \end{pmatrix} \LRp{\xn + \frac{\r}{2}- i}^{\q}, \] where $\floor{\cdot}$ denotes the largest integer smaller than or equal to its argument.
Using the definition of \RePU, it is easy to see that $\f_\r\LRp{\xn}$ can also be written in terms of \RePU\, as follows \[ \f_\r\LRp{\xn} := \frac{1}{\q!}\sum_{i=0}^{\r}(-1)^i \begin{pmatrix} \r \\ i \end{pmatrix} \RePU\LRp{\q; \xn + \frac{\r}{2}- i}, \] which in turn implies \begin{equation} \B\LRp{\xn,\h} = \h^\q\f_\r\LRp{\frac{\xn}{\h}}. \eqnlab{Bandf} \end{equation} In other words, $\B\LRp{\xn,\h}$ is a dilated version of $\f_\r\LRp{\xn}$. Thus, all the properties of $\f_\r\LRp{\xn}$ hold for $\B\LRp{\xn,\h}$. In particular, all the assertions of Lemma \lemref{RePU} hold. Note that the compact support can alternatively be shown using the properties of central finite differencing. Indeed, it is easy to see that for $\xn \ge \frac{\r\h}{2}$ we have \[ \deltah^\r\LRs{\RePU}\LRp{\xn} = \deltah^\r\LRs{\xn^\q} = 0. \] \end{proof} Though there are many approaches to construct $\B$-functions in $\n$ dimensions from $\B$-functions in one dimension (see, e.g. \cite{CARDALIAGUET1992, Anastassiou01,Anastassiou00,ANASTASSIOU2011,ANASTASSIOU2011b,ANASTASSIOU2011tanhnD, COSTARELLI2015, Costarelli2015sigmoid}) that are realizable by neural networks, inspired by \cite{Moon21} we construct the $\B$-function in $\n$ dimensions by an $\n$-fold composition of the one-dimensional $\B$-function in Lemma \lemref{RePU}. \begin{theorem}[$\B$-function for \RePU\, in $\n$ dimensions] Let $\r := \q+1$, and $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] The following hold: \begin{enumerate}[i)] \item $\Bc\LRp{\x}$ is non-negative with compact support. In particular $\supp\LRp{\Bc\LRp{\x}} \subseteq X_{i=1}^\n \LRs{-b_i,b_i}$ where $b_i = \underbrace{\B\LRp{0,\hdots,\B\LRp{0,1}}}_{(i-1)-\text{time composition}}$, for $2\le i\le \n$, and $b_1 = \frac{\r}{2} $. \item $\Bc\LRp{\x}$ is even with respect to each component $\x_i$, $i=1,\hdots,\n$, and unimodal with $\Bc\LRp{0} = \max_{\x\in\Rn}\Bc\LRp{\x}$.
\item $\Bc\LRp{\x}$ is piecewise polynomial of order at most $\LRp{\n-i}\q$ in $\x_i$, $i =1,\hdots,\n$. Furthermore, $\Bc\LRp{\x}$ is $(\q-1)$-time differentiable in each $\x_i$, $i=1,\hdots,\n$. \item $\intR \Bc\LRp{\x}\,d\x \le 1$ and $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \end{enumerate} \theolab{RePUnD} \end{theorem} \begin{proof} The first three assertions are direct consequences of Lemma \lemref{RePU}. For the fourth assertion, since $\Bc\LRp{\x} \ge 0$ it is sufficient to show $\intR \Bc\LRp{\x}\,d\x \le 1$, and we do so in three steps. Let $\r = \q+1$ and define $\x = \LRp{\x_{\n+1},\x_\n,\hdots,\x_1} = \LRp{\x_{\n+1},\y}$. We first show by induction that $\B\LRp{\xn,1} \le 1$ for $\q \in \mathbb{N}$ and $\xn \in \R$. The claim is clearly true for $\q \in \LRc{0,1}$. Suppose the claim holds for $\q$; then \eqnref{Bandf} implies \[ \f_{\r}\LRp{\xn} \le 1, \quad \forall \xn \in \R. \] For $\q+1$ we have \[ \B\LRp{0,1} = \f_{\r+1}\LRp{0} = \f_{\r} * \f_1\LRp{0} = \int_\R \f_{\r}\LRp{-\yn}\f_1\LRp{\yn}\,d\yn \le \int_\R\f_1\LRp{\yn}\,d\yn = 1. \] By the second assertion, we conclude that $\B\LRp{\xn,1} \le 1$ for any $\q \in \mathbb{N}$ and $\xn \in \R$. In the second step, we show by induction on $\n$ that $\Bc\LRp{\x} \le 1$ for any $\x \in \Rn$ and any $\q \in \mathbb{N}$. The result holds for $\n=1$ due to the first step. Suppose the claim is true for $\n$. For $\n+1$, we have \[ \Bc\LRp{\x} = \B\LRp{\x_{\n+1},\Bc\LRp{\y}} = \LRs{\Bc\LRp{\y}}^\q \f_\r \LRp{\frac{\x_{\n+1}}{\Bc\LRp{\y}}} \le 1, \] where we have used \eqnref{Bandf} in the last equality, and the first step together with the induction hypothesis in the inequality. In the last step, we show $\intR \Bc\LRp{\x}\,d\x \le 1$ by induction on $\n$. For $\n = 1$, $\int_\R \Bc\LRp{\xn}\,d\xn = 1$ is clear since $\Bc$ is then the Irwin-Hall probability density function. Suppose the result is true for $\n$.
For $\n+1$, applying the Fubini theorem gives \begin{multline*} \int_{\R^{\n+1}} \Bc\LRp{\x}\,d\x = \intR\LRp{\int_{\R} \B\LRp{\x_{\n+1},\Bc\LRp{\y}}\,d\x_{\n+1}}\,d\y \\= \intR\LRs{\Bc\LRp{\y}}^\q\LRp{\int_{\R} \f_{\r}\LRp{\frac{\x_{\n+1}}{\Bc\LRp{\y}}}\,d\x_{\n+1}}\,d\y = \intR\LRs{\Bc\LRp{\y}}^\r\LRp{\int_{\R} \f_{\r}\LRp{t}\,dt}\,d\y \\ = \intR\LRs{\Bc\LRp{\y}}^\r\,d\y \le \intR\Bc\LRp{\y}\,d\y \le 1, \end{multline*} where we have used \eqnref{Bandf} in the second equality, the result of the second step in the second-to-last inequality, and the induction hypothesis in the last inequality. \end{proof} \begin{remark} Note that Lemma \lemref{RePU} and Theorem \theoref{RePUnD} also hold for the parametric ReLU \cite{he2015delving} and the leaky ReLU \cite{maas2013a} (a special case of the parametric ReLU) with $\r = 2$. In other words, the parametric ReLU is an nAI, and thus universal. \end{remark} \subsection{Sigmoidal and related activation functions are nAI} \seclab{sigmoid} \subsubsection{Sigmoidal, hyperbolic tangent, and softplus activation functions} Recall that the sigmoid, hyperbolic tangent, and softplus functions \cite{glorot2011a} are, respectively, given by \[ \sigma_s\LRp{\xn} := \frac{1}{1+e^{-\xn}}, \quad \sigma_t\LRp{\xn} := \frac{e^\xn-e^{-\xn}}{e^\xn+e^{-\xn}}, \quad \text{ and } \sigma_p\LRp{\xn} := \ln\LRp{1 + e^\xn}. \] It is the relationship between the sigmoid and the hyperbolic tangent \[ \sigma_s\LRp{\xn} = \half + \half\sigma_t\LRp{\frac{\xn}{2}} \] that allows us to construct their $\B$-functions in a similar manner.
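This identity is elementary to verify; as a throwaway numerical check (illustrative only, not part of the analysis):

```python
import numpy as np

# Check sigma_s(x) = 1/2 + (1/2) * sigma_t(x/2) to machine precision.
sigma_s = lambda x: 1.0 / (1.0 + np.exp(-x))
sigma_t = lambda x: np.tanh(x)

x = np.linspace(-10.0, 10.0, 2001)
gap = float(np.max(np.abs(sigma_s(x) - (0.5 + 0.5 * sigma_t(x / 2.0)))))
print(f"max deviation: {gap:.2e}")
```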
In particular, based on the bell-shape geometry of the sigmoid and hyperbolic tangent functions, we apply central finite differencing with $\h > 0$ to obtain the corresponding $\B$-functions \begin{align*} \B_s\LRp{\xn,\h} &:= \frac{1}{2}\deltah\LRs{\sigma_s}\LRp{\xn} = \frac{e^\h-e^{-\h}}{2}\frac{e^{-\xn}}{\LRp{e^\h+e^{-\xn}}\LRp{e^{-\h}+e^{-\xn}}}, \\ \B_t\LRp{\xn,\h} &:= \frac{1}{4}\deltah\LRs{\sigma_t}\LRp{\xn} = \frac{e^{2\h}-e^{-2\h}}{4}\frac{e^{-2\xn}}{\LRp{e^{2\h}+e^{-2\xn}}\LRp{e^{-2\h}+e^{-2\xn}}}. \end{align*} Since $\sigma_s\LRp{\xn}$ is the derivative of $\sigma_p\LRp{\xn}$, we apply the central difference twice to obtain a $\B$-function for $\sigma_p\LRp{\xn}$ \[ \B_p\LRp{\xn,\h} = \deltah^2\LRs{\sigma_p}\LRp{\xn} = \ln\LRp{e^\h+e^{\xn}} - 2\ln\LRp{1+e^{\xn}} + \ln\LRp{e^{-\h}+e^{\xn}}. \] The following results are valid for the sigmoid, hyperbolic tangent, and softplus functions. Specifically, Lemma \lemref{sigmoid1D} and Theorem \theoref{sigmoidnD} hold for $\B\LRp{\xn,\h}$ being either $\B_s\LRp{\xn,\h}$, $\B_t\LRp{\xn,\h}$, or $\B_p\LRp{\xn,\h}$. \begin{lemma}[$\B$-function for sigmoid, hyperbolic tangent, and softplus in one dimension] The following hold true for any $0 < \h < \infty$: \begin{enumerate} \item $\B\LRp{\xn,\h} \ge 0$ for all $\xn \in \R$, $\lim_{\snor{\xn} \to \infty}\B\LRp{\xn,\h} = 0$, and $\B\LRp{\xn,\h}$ is even. \item $\B\LRp{\xn,\h}$ is unimodal and $\B\LRp{0,\h} = \max_{\xn \in \R}\B\LRp{\xn,\h}$. Furthermore, $\int_\R\B\LRp{\xn,\h}\,d\xn < \infty$ and $\B\LRp{\xn,\h} \in \L^1\LRp{\R}$. \end{enumerate} \lemlab{sigmoid1D} \end{lemma} \begin{proof} The proof for $\B_s\LRp{\xn,\h}$ is a simple extension of those in \cite{costarelli13,CHEN2009}, and the proof for $\B_t\LRp{\xn,\h}$ follows similarly. Note that $\int_\R\B\LRp{\xn,\h}\,d\xn = \h$ for the sigmoid and the hyperbolic tangent.
For $\B_p\LRp{\xn,\h}$, due to the convexity of $\sigma_p\LRp{\xn}$ we have \[ \ln\LRp{1+e^{\xn}} \le \frac{\ln\LRp{e^\h+e^{\xn}} + \ln\LRp{e^{-\h}+e^{\xn}}}{2}, \quad \forall \xn \in \R, \] which is equivalent to $\B_p\LRp{\xn,\h}\ge 0$ for $\xn \in \R$. The facts that $\B_p\LRp{\xn,\h}$ is even and that $\lim_{\snor{\xn} \to \infty}\B_p\LRp{\xn,\h} = 0$ are obvious by inspection. Since the derivative of $\B_p\LRp{\xn,\h}$ is negative for $\xn \in (0,\infty)$ and $\B_p\LRp{\xn,\h}$ is even, $\B_p\LRp{\xn,\h}$ is unimodal. It follows that $\B_p\LRp{0,\h} = \max_{\xn \in \R}\B_p\LRp{\xn,\h}$. Next, integration by parts gives \begin{multline} \int_{-\infty}^\infty \B_p\LRp{\xn,\h}\,d\xn = 2\int_{0}^\infty\frac{\LRp{e^\h-1}^2\LRp{1-e^{-\xn}}}{\LRp{e^\h+e^{-\xn}}\LRp{e^{-\h}+e^{-\xn}}\LRp{1+e^{-\xn}}}e^{-\xn}\xn\,d\xn \\ \le \LRp{\frac{1 - e^{-\h}}{1+e^{-\h}}}^2\int_{0}^\infty e^{-\xn}\xn\,d\xn = \LRp{\frac{1 - e^{-\h}}{1+e^{-\h}}}^2 \le \min\LRc{1,\h^2}. \eqnlab{softplus} \end{multline} Thus all the assertions for $\B_p\LRp{\xn,\h}$ hold. \end{proof} Similar to Lemma \lemref{RePUnD}, we construct a $\B$-function in $\n$ dimensions by an $\n$-fold composition of the one-dimensional $\B$-function in Lemma \lemref{sigmoid1D}. \begin{theorem}[$\B$-function for sigmoid, hyperbolic tangent, and softplus in $\n$ dimensions] Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\Bc\LRp{\x}$ is even with respect to each component $\x_i$, $i=1,\hdots,\n$, and unimodal with $\Bc\LRp{0} = \max_{\x\in\Rn}\Bc\LRp{\x}$. Furthermore, $\intR \Bc\LRp{\x}\,d\x \le 1$ and $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \theolab{sigmoidnD} \end{theorem} \begin{proof} We only need to show $\intR \Bc\LRp{\x}\,d\x \le 1$ as the proof of the other assertions is similar to that of Lemma \lemref{RePUnD}, and thus is omitted.
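The nonnegativity of $\B_p$ and the bound $\int_\R \B_p\LRp{\xn,\h}\,d\xn \le \min\LRc{1,\h^2}$ can be probed numerically for a representative $\h \le 1$ (our own sketch, not from the paper):

```python
import math

def B_p(x, h):
    # second central difference of softplus:
    # ln(e^h + e^x) - 2 ln(1 + e^x) + ln(e^{-h} + e^x)
    return (math.log(math.exp(h) + math.exp(x))
            - 2.0 * math.log(1.0 + math.exp(x))
            + math.log(math.exp(-h) + math.exp(x)))

def integrate(f, lo=-40.0, hi=40.0, n=160000):
    # midpoint rule; both tails of B_p decay exponentially
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx

h = 0.5
val = integrate(lambda x: B_p(x, h))
# allow tiny negative values from floating-point cancellation in the far tails
nonneg = all(B_p(0.1 * i, h) >= -1e-9 for i in range(-400, 401))
```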
For the sigmoid and hyperbolic tangent the result is clear, as \begin{multline*} \int_{\R^{\n}} \Bc\LRp{\x}\,d\x = \int_{\R^{\n}} \B\LRp{\x_{\n},\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}\,d\x\\ = \int_{\R^{\n-1}} \B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}\,d\x_{\n-1}\hdots d\x_1 = \hdots = \int_{\R} \B\LRp{\x_1,1}d\x_1 = 1. \end{multline*} For the softplus function, by inspection, $\B\LRp{\z,\h} \le \B\LRp{0,\h} \le 1$ for all $\z \in \R$ and $0 < \h\le 1$. Lemma \lemref{sigmoid1D} gives $\intR \Bc\LRp{\x}\,d\x \le 1$ for $\n = 1$. Define $\x = \LRp{\x_{\n+1},\x_\n,\hdots,\x_1} = \LRp{\x_{\n+1},\y}$ and suppose the claim holds for $\n$, i.e., $\intR \Bc\LRp{\y}\,d\y \le 1$. Now applying \eqnref{softplus} we have \begin{multline*} \int_{\R^{\n+1}} \Bc\LRp{\x}\,d\x = \intR\LRp{\int_{\R} \B\LRp{\x_{\n+1},\Bc\LRp{\y}}\,d\x_{\n+1}}\,d\y \le \intR\Bc\LRp{\y}\,d\y \le 1, \end{multline*} which ends the proof by induction. \end{proof} \subsubsection{Arctangent function} We next discuss the arctangent activation function \[ \sigma_a\LRp{\xn} := \arctan\LRp{\xn}, \] whose shape is similar to that of the sigmoid. Since its derivative has a bell-shaped geometry, we define the $\B$-function for the arctangent function by approximating its derivative with central finite differencing, i.e., \[ \B_a\LRp{\xn,\h} = \frac{1}{2\pi}\deltah\LRs{\sigma_a}\LRp{\xn} = \frac{1}{2\pi} \arctan\LRp{\frac{2\h}{1+\xn^2 -\h^2}}, \] for $0 < \h \le 1$. Simple algebraic manipulations then show that Lemma \lemref{sigmoid1D} holds for $\B_a\LRp{\xn,\h}$ with $\int_\R \B_a\LRp{\xn,\h}\,d\xn = \h$. Consequently, Theorem \theoref{sigmoidnD} also holds for the $\n$-dimensional arctangent $\B$-function \[ \Bc\LRp{\x} = \B_a\LRp{\x_n,\B_a\LRp{\x_{\n-1},\hdots,\B_a\LRp{\x_1,1}}}, \] as we have shown for the sigmoidal function. \subsubsection{Generalized sigmoidal functions} We extend the class of sigmoidal functions in \cite{costarelli13} to generalized sigmoidal (or ``squash'') functions.
\begin{definition}[Generalized sigmoidal functions] We say $\sigma\LRp{\xn}$ is a generalized sigmoidal function if it satisfies the following conditions: \begin{enumerate}[i)] \item $\sigma\LRp{\xn}$ is bounded, i.e., there exists a constant $\sigma_{\text{max}}$ such that $\snor{\sigma\LRp{\xn}} \le \sigma_{\text{max}}$ for all $\xn \in \R$. \item $\lim_{\xn \to \infty}\sigma\LRp{\xn} = L$ and $\lim_{\xn \to -\infty}\sigma\LRp{\xn} = \ell$. \item There exist $\xn^- < 0$ and $\alpha > 0$ such that ${\sigma\LRp{\xn} - \ell} = \mc{O}\LRp{\snor{\xn}^{-1-\alpha}}$ for $\xn < \xn^-$. \item There exist $\xn^+ > 0$ and $\alpha > 0$ such that ${L - \sigma\LRp{\xn} } = \mc{O}\LRp{\snor{\xn}^{-1-\alpha}}$ for $\xn > \xn^+$. \end{enumerate} \defilab{sigmoid} \end{definition} Clearly the standard sigmoidal, hyperbolic tangent, and arctangent activations satisfy all conditions of Definition \defiref{sigmoid}, and thus are members of the class of generalized sigmoidal functions. \begin{lemma}[$\B$-function for generalized sigmoids in one dimension] Let $\sigma\LRp{\xn}$ be a generalized sigmoidal function, and $\B\LRp{\xn,\h}$ be an associated $\B$-function defined as \[ \B\LRp{\xn,\h} := \frac{1}{2\LRp{L -\ell}}\LRs{\sigma\LRp{\xn + \h} - \sigma\LRp{\xn -\h}}. \] Then the following hold for any $\h \in \R$: \newcommand{\Z}{\mathbb{Z}} \begin{enumerate} \item There exists a constant $C$ such that $\snor{\B\LRp{\xn,\h}} \le C \LRp{\xn + \snor{\h}}^{-1-\alpha}$ for $\xn \ge \xn^+ + \snor{\h}$, and $\snor{\B\LRp{\xn,\h}} \le C \LRp{-\xn + \snor{\h}}^{-1-\alpha}$ for $\xn \le \xn^- - \snor{\h}$. \item For $\xn \in \LRs{m, M}$, $m, M \in \R$, $\sum_{k =-\infty}^\infty\B\LRp{\xn + k\h,\h}$ converges uniformly to $\text{sign}\LRp{\h}$. Furthermore, $\int_{\R} \B\LRp{\xn,\h}\,d\xn = \h$, and $\B\LRp{\xn,\h} \in \L^1\LRp{\R}$. \end{enumerate} \lemlab{generalizedSigmoid1D} \end{lemma} \begin{proof} The first assertion is clear by Definition \defiref{sigmoid}.
For the second assertion, with a telescoping trick similar to \cite{costarelli13} we have \begin{multline*} s_N\LRp{\xn} = \sum_{k = -N}^N \B\LRp{\xn+k\h,\h} = \frac{1}{2\LRp{L - \ell}} \left[\sigma\LRp{\xn + \LRp{N+1}\h} + \sigma\LRp{\xn + N\h}\right. \\ - \left.\sigma\LRp{\xn -N\h} - \sigma\LRp{\xn - \LRp{N+1}\h}\right] \xrightarrow[]{N \to \infty} \text{sign}\LRp{\h}. \end{multline*} To show the convergence is uniform, we consider only $\h > 0$ as the proof for $\h < 0$ is similar and for $\h = 0$ is obvious. We first consider the right tail. For sufficiently large $N$, there exists a constant $C > 0$ such that \begin{multline*} \sum_{k = N}^\infty \B\LRp{\xn+k\h,\h} \le C\sum_{k = N}^\infty \LRp{\xn+k\h}^{-1-\alpha} = C\sum_{k = N}^\infty \int_{k-1}^k\LRp{\xn+k\h}^{-1-\alpha}\,dy \\ \le C \int_{N-1}^\infty \LRp{\xn+y\h}^{-1-\alpha}\,dy = \frac{C}{\h\alpha}\LRs{\xn + \LRp{N-1}\h}^{-\alpha} \\ \le \frac{C}{\h\alpha}\LRs{m + \LRp{N-1}\h}^{-\alpha} \xrightarrow[]{N \to \infty} 0 \text{ independent of } \xn. \end{multline*} Similarly, the left tail converges to 0 uniformly as \[ \sum_{k = N}^\infty \B\LRp{\xn-k\h,\h} \le \frac{C}{\h\alpha}\LRs{\LRp{N-1}\h - M}^{-\alpha} \xrightarrow[]{N \to \infty} 0 \text{ independent of } \xn. \] As a consequence, we have \begin{multline*} \int_{\R} \B\LRp{\xn,\h}\,d\xn = \sum_{k=-\infty}^\infty \int_{k\snor{\h}}^{\LRp{k+1}\snor{\h}}\B\LRp{\xn,\h}\,d\xn = \sum_{k=-\infty}^\infty \int_{0}^{\snor{\h}}\B\LRp{y + k\snor{\h},\h}\,dy \\ = \int_{0}^{\snor{\h}} \sum_{k=-\infty}^\infty \B\LRp{y + k\snor{\h},\h}\,dy = \h, \end{multline*} where we have used the uniform convergence in the third equality.
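The telescoping identity can be observed numerically for the standard sigmoid (for which $L = 1$, $\ell = 0$); the following is our illustrative sketch, not part of the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def B(x, h):
    # generalized-sigmoid B-function with sigma = sigmoid, L = 1, ell = 0
    return 0.5 * (sigmoid(x + h) - sigmoid(x - h))

# partial telescoping sum s_N(x); for h > 0 it should approach sign(h) = 1
h, x, N = 0.3, 0.2, 200
s_N = sum(B(x + k * h, h) for k in range(-N, N + 1))
```

Because the sum telescopes, the error decays like the sigmoid's tails, i.e. exponentially in $N\h$.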
Using the first assertion we have \begin{multline*} \int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn = \int_{\xn^- - \snor{\h}}^{\xn^+ + \snor{\h}} \snor{\B\LRp{\xn,\h}}\,d\xn + \int_{\xn^+ + \snor{\h}}^\infty \snor{\B\LRp{\xn,\h}}\,d\xn + \int_{-\infty}^{\xn^- - \snor{\h}} \snor{\B\LRp{\xn,\h}}\,d\xn \\ \le \frac{\sigma_{\text{max}}}{L-\ell}\LRp{\xn^+ - \xn^- + 2\snor{\h}} + C\int_{\xn^+ + \snor{\h}}^\infty \LRp{\xn + \snor{\h}}^{-1-\alpha}\,d\xn + C\int_{-\infty}^{\xn^- - \snor{\h}} \LRp{-\xn + \snor{\h}}^{-1-\alpha} \,d\xn \\ = \frac{\sigma_{\text{max}}}{L-\ell}\LRp{\xn^+ - \xn^- + 2\snor{\h}} + \frac{C}{\alpha}\LRs{\LRp{\xn^+ + 2\snor{\h}}^{-\alpha}+\LRp{-\xn^- + 2\snor{\h}}^{-\alpha}} <\infty. \end{multline*} \end{proof} {\em Thus any generalized sigmoidal function is an nAI in one dimension. } Note that the setting in \cite{costarelli13}\textemdash in which $\sigma\LRp{\xn}$ is non-decreasing, $L =1$, and $\ell = 0$\textemdash is a special case of our general setting. In this less general setting, it is clear that $\B\LRp{\xn,\h} \ge 0$ and thus $\int_{\R} \B\LRp{\xn,\h}\,d\xn = \h$ is sufficient to conclude $\B\LRp{\xn,\h} \in \L^1\LRp{\R}$ for any $\h > 0$. This is the point that we will explore in the next theorem, as it is not clear, at the moment of writing this paper, how to show that the $\B$-functions for generalized sigmoids reside in $\L^1\LRp{\Rn}$. \begin{theorem}[$\B$-function for generalized sigmoids in $\n$ dimensions] Suppose that $\sigma\LRp{\xn}$ is a non-decreasing generalized sigmoidal function. Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\intR \Bc\LRp{\x}\,d\x = 1$ and $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \theolab{generalizedSigmoidnD} \end{theorem} \begin{proof} The proof is the same as the proof of Theorem \theoref{sigmoidnD} for the standard sigmoidal unit, and thus is omitted.
\end{proof} \subsection{The Exponential Linear Unit (ELU) is an nAI} Following \cite{clevert2016fast,klambauer2017selfnormalizing} we define the Exponential Linear Unit (ELU) as \[ \sigma\LRp{\xn} = \begin{cases} \alpha \LRp{e^{\xn} - 1} & \xn \le 0 \\ \xn & \xn > 0, \end{cases} \] for some $\alpha \in \R$. The goal of this section is to show that ELU is an nAI. Since the unbounded part of ELU is linear, Section \secref{relu} suggests defining, for $\h \in \R$, \[ \B\LRp{\xn,\h} := \frac{\deltah^2\LRs{\sigma}\LRp{\xn}}{\gamma} = \frac{1}{\gamma} \begin{cases} \alpha e^{\xn - \snor{\h}}\LRp{e^{\snor{\h}}-1}^2 & \xn \le -\snor{\h}, \\ \xn + \snor{\h} + \alpha - \alpha e^{\xn}\LRp{2 - e^{-\snor{\h}}} & -\snor{\h} < \xn \le 0, \\ \snor{\h} - \xn -\alpha + \alpha e^{\xn -\snor{\h}} & 0 < \xn \le \snor{\h}, \\ 0 & \xn > \snor{\h}, \end{cases} \] where $\gamma = \max\LRc{1+4\snor{\alpha},2 + \snor{\alpha}}$. \begin{lemma}[$\B$-function for ELU in one dimension] Let $\alpha, \h \in \R$, then \[ \int_{\R} \B\LRp{\xn,\h}\,d\xn = \frac{\h^2}{\gamma}, \text{ and } \B\LRp{\xn,\h} \in \L^1\LRp{\R}. \] Furthermore, $\B\LRp{\xn,\h} \le 1$ for $\snor{\h} \le 1$. \lemlab{ELU1D} \end{lemma} \begin{proof} The expression of $\B\LRp{\xn,\h}$ and direct integration give $\int_{\R} \B\LRp{\xn,\h}\,d\xn = \frac{\h^2}{\gamma}$. Similarly, simple algebraic manipulations yield \[ \gamma\int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn \le \h^2 + 2\snor{\alpha}\snor{\h} + 2\snor{\alpha}\LRp{e^{-\snor{\h}}-1}^2 < \infty, \] and thus $\B\LRp{\xn,\h} \in \L^1\LRp{\R}$. The fact that $\B\LRp{\xn,\h} \le 1$ for $\snor{\h} \le 1$ holds is straightforward by inspecting the extrema of $\B\LRp{\xn,\h}$. \end{proof} \begin{theorem}[$\B$-function for ELU in $\n$ dimensions] Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\intR \snor{\Bc\LRp{\x}}\,d\x \le 1$ and thus $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$.
\theolab{ELUnD} \end{theorem} \begin{proof} From the proof of Lemma \lemref{ELU1D} we infer that $\int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn \le \snor{\h}$ for all $\snor{\h} \le 1$: in particular, $\int_{\R} \snor{\B\LRp{\x_1,1}}\,d\xn \le 1$. Define $\x = \LRp{\x_{\n+1},\x_\n,\hdots,\x_1} = \LRp{\x_{\n+1},\y}$ and suppose the claim holds for $\n$, i.e., $\intR \snor{\Bc\LRp{\y}}\,d\y \le 1$. We have \[ \int_{\R^{\n+1}} \snor{\Bc\LRp{\x}}\,d\x = \intR\LRp{\int_{\R} \snor{\B\LRp{\x_{\n+1},\Bc\LRp{\y}}}\,d\x_{\n+1}}\,d\y \le \intR\snor{\Bc\LRp{\y}}\,d\y \le 1, \] which, by induction, concludes the proof. \end{proof} \subsection{The Gaussian Error Linear Unit (GELU) is an nAI} GELU, introduced in \cite{hendrycks2020gaussian}, is defined as \[ \sigma\LRp{\xn} := \xn \Phi\LRp{\xn}, \] where $\Phi\LRp{\xn}$ is the cumulative distribution function of the standard normal distribution. Since the unbounded part of GELU is essentially linear, Section \secref{relu} suggests defining, for $\h \in \R$, \[ \B\LRp{\xn,\h}:= \deltah^2\LRs{\sigma}\LRp{\xn} = \LRp{\xn + \h}\Phi\LRp{\xn+\h} -2\xn\Phi\LRp{\xn} + \LRp{\xn - \h}\Phi\LRp{\xn-\h}. \] \begin{lemma}[$\B$-function for GELU in one dimension] Let $\h \in \R$, then the following hold. \begin{enumerate}[i)] \item $\B\LRp{\xn,\h}$ is an even function with respect to both $\xn$ and $\h$. \item $\B\LRp{\xn,\h}$ has two symmetric roots $\pm\xn^*$ and $\h < \xn^* < \max\LRc{2,2\h}$. Furthermore, $\snor{\B\LRp{\xn,\h}} \le \frac{1}{\sqrt{2\pi}} \h^2$. \item \[ \int_{\R} \B\LRp{\xn,\h}\,d\xn = \h^2, \text{ and } \int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn \le \frac{37}{10} \h^2. \] \end{enumerate} \lemlab{GELU1D} \end{lemma} \begin{proof} The first assertion is straightforward. For the second assertion, it is sufficient to consider $\xn \ge 0$ and $\h > 0$. Any root of $\B\LRp{\xn,\h}$ satisfies the following equation \[ f\LRp{\xn} := \frac{\Phi\LRp{\xn+h} - \Phi\LRp{\xn}}{\Phi\LRp{\xn} - \Phi\LRp{\xn-h}} = \frac{\xn - \h}{\xn+\h} =: g\LRp{\xn}.
\] Since, given $\h$, $f\LRp{\xn}$ is a positive decreasing function and $g\LRp{\xn}$ is an increasing function (starting from $-1$), they can have at most one intersection. Now for sufficiently large $\xn$, using the mean value theorem it can be shown that \[ f\LRp{\xn} \approx e^{-\xn\h} \xrightarrow[]{\xn \to \infty} 0, \text{ and } g\LRp{\xn} \xrightarrow[]{\xn \to \infty} 1, \] from which we deduce that there is only one intersection, and hence only one positive root $\xn^*$ for $\B\LRp{\xn,\h}$. Next, we notice that $g\LRp{\xn} \le 0 < f\LRp{\xn}$ for $0\le \xn \le \h$. For $\h \ge 1$ it is easy to see that $f\LRp{2\h} < g\LRp{2\h} = 1/3$. Thus, $\h < \xn^* < 2\h$ for $\h\ge 1$. For $0 < \h < 1$, by inspection we have $f\LRp{2} < g\LRp{2}$, and hence $ \h < \xn^* < 2$. In conclusion, $\h < \xn^* < \max\LRc{2,2\h}$. The preceding argument also shows that $\B\LRp{\xn,\h} \ge 0$ for $0 \le \xn \le \xn^*$, $\B\LRp{\xn,\h} < 0$ for $\xn > \xn^*$, and $\B\LRp{\xn,\h} \xrightarrow[]{\xn \to \infty} 0^-$. Using the Taylor formula we have \[ \snor{\B\LRp{\xn,\h}} \le \frac{1}{\sqrt{2\pi}}\h^2. \] For the third assertion, it is easy to verify the following indefinite integral (ignoring the integration constant as it will be canceled out) \[ \int \xn \Phi\LRp{\xn}\,d\xn = \half\LRp{\xn^2-1}\Phi\LRp{\xn} + \frac{\xn e^{-\frac{\xn^2}{2}}}{2\sqrt{2\pi}}, \] which, together with simple calculations, yields \[ \int_{\R} \B\LRp{\xn,\h}\,d\xn = \h^2. \] From the proof of the second assertion and the Taylor formula we can show \begin{multline*} \int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn = 2\int_{0}^{\xn^*} \B\LRp{\xn,\h}\,d\xn - 2\int_{\xn^*}^{\infty} \B\LRp{\xn,\h}\,d\xn \\= -3\h^2 + 2\deltah^2\LRs{\LRp{\LRp{\xn^*}^2-1}\Phi(\xn^*)} + \deltah^2\LRs{\frac{\xn^*e^{-\frac{\LRp{\xn^*}^2}{2}}}{\sqrt{2\pi}}} \le \frac{37}{10}\h^2. \end{multline*} Thus, $ \B\LRp{\xn,\h} \in \L^1\LRp{\R}$.
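The identity $\int_{\R} \B\LRp{\xn,\h}\,d\xn = \h^2$ for GELU can be confirmed by quadrature (our own sketch, not part of the paper; `Phi` is built from the standard-library error function):

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu(x):
    return x * Phi(x)

def B(x, h):
    # B(x, h) = delta_h^2[sigma](x) for GELU
    return gelu(x + h) - 2.0 * gelu(x) + gelu(x - h)

def integrate(f, lo=-30.0, hi=30.0, n=120000):
    # midpoint rule; the second difference of GELU decays like a Gaussian
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx

h = 0.8
val = integrate(lambda x: B(x, h))   # expected close to h**2
```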
\end{proof} \begin{theorem}[$\B$-function for GELU in $\n$ dimensions] Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \theolab{GELUnD} \end{theorem} \begin{proof} From the proof of Lemma \lemref{GELU1D} we have $\int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn \le C\h^2$ and $\snor{\B\LRp{\xn,\h}} \le M\h^2$ for all ${\h} \in \R$, where $M = \frac{2}{\sqrt{2\pi}}$ and $C = \frac{37}{10}$: in particular, $\int_{\R} \snor{\B\LRp{\x_1,1}}\,d\xn \le C$. It is easy to see by induction that \[ \snor{\Bc\LRp{\x}} \le M^{2^\n-1}. \] We claim that \[ \int_{\R^{\n}} \snor{\Bc\LRp{\x}}\,d\x \le C^\n M^{2^\n- \n-1}, \] and thus $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. We prove the claim by induction. Clearly it holds for $\n = 1$. Suppose it is valid for $\n$. Define $\x = \LRp{\x_{\n+1},\x_\n,\hdots,\x_1} = \LRp{\x_{\n+1},\y}$, we have \begin{multline*} \int_{\R^{\n+1}} \snor{\Bc\LRp{\x}}\,d\x = \intR\LRp{\int_{\R} \snor{\B\LRp{\x_{\n+1},\Bc\LRp{\y}}}\,d\x_{\n+1}}\,d\y \le C\intR\snor{\Bc\LRp{\y}}^2\,d\y \\ \le C M^{2^\n-1}\intR\snor{\Bc\LRp{\y}}d\y \le C^{\n+1}M^{2^{\n+1}-\n-2}, \end{multline*} where we have used the induction hypothesis in the last inequality, and this ends the proof. \end{proof} \subsection{The Sigmoid Linear Unit (SiLU) is an nAI} SiLU has other names such as sigmoid shrinkage or swish \cite{hendrycks2020gaussian,ELFWING20183,ramachandran2017searching,Atto08} and is defined as \[ \sigma\LRp{\xn} := \frac{\xn}{1+e^{-\xn}}. \] By inspection, the second derivative of $\sigma\LRp{\xn}$ is bounded and its graph is quite close to a bell shape. This suggests defining \[ \B\LRp{\xn,\h}:= \deltah^2\LRs{\sigma}\LRp{\xn}. \] The proofs of the following results are similar to those of Lemma \lemref{GELU1D} and Theorem \theoref{GELUnD}. \begin{theorem}[$\B$-function for SiLU in $\n$ dimensions] Let $\h \in \R$, then the following hold.
\begin{enumerate}[i)] \item $\B\LRp{\xn,\h}$ is an even function with respect to both $\xn \in \R$ and $\h \in \R$. \item $\B\LRp{\xn,\h}$ has two symmetric roots $\pm\xn^*$ and $\h < \xn^* < \max\LRc{3,2\h}$. Furthermore, $\snor{\B\LRp{\xn,\h}} \le \half \h^2$. \item $ \int_{\R} \B\LRp{\xn,\h}\,d\xn = \h^2, \text{ and } \int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn \le \frac{26}{5} \h^2. $ \item Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \end{enumerate} \theolab{SiLU} \end{theorem} \subsection{The Mish unit is an nAI} \seclab{Mish} The Mish unit, introduced in \cite{misra2020mish}, is defined as \[ \sigma\LRp{\xn} := \xn\tanh\LRp{\ln\LRp{1+e^\xn}}. \] Due to its similarity with SiLU, we define \[ \B\LRp{\xn,\h}:= \deltah^2\LRs{\sigma}\LRp{\xn}. \] Unlike the preceding activation functions and their corresponding $\B$-functions, Mish is not straightforward to manipulate analytically. This motivates us to devise a new approach to show that Mish is an nAI. As will be shown in Section \secref{generalAI}, this approach allows us to unify the nAI property for all activation functions. We begin with the following result on the second derivative of Mish. \begin{lemma} The second derivative of Mish, $\sigma^{(2)}\LRp{\xn}$, as a function on $\R$ is continuous, bounded, and integrable. That is, $\snor{\sigma^{(2)}\LRp{\xn}} \le M$ for some positive constant $M$ and $\sigma^{(2)}\LRp{\xn} \in \L^1\LRp{\R}$. \lemlab{Mish2ndDiff} \end{lemma} \begin{proof} It is easy to see that for $\xn \ge 2$ \[ \snor{\sigma^{(2)}\LRp{\xn}} \le 12e^{-3\xn}\LRp{\xn-2} +8e^{-2\xn}\LRp{\xn-1} + 8e^{-5\xn}\LRp{\xn+2} + 8e^{-4\xn}\LRp{\xn+4}, \] and \[ \snor{\sigma^{(2)}\LRp{\xn}} \le 12e^{3\xn}\LRp{2-\xn} +8e^{4\xn}\LRp{1-\xn} - 8e^{\xn}\LRp{\xn+2} - 8e^{2\xn}\LRp{\xn+4}, \] for $\xn \le -4$.
That is, both the right and the left tails of $\sigma^{(2)}\LRp{\xn}$ decay exponentially, and this concludes the proof. \end{proof} \begin{theorem} Let $\h \in \R$, then the following hold. \begin{enumerate}[i)] \item There exists some positive constant $M< \infty$ such that $\snor{\B\LRp{\xn,\h}} \le M \h^2$. \item $ \int_{\R} \B\LRp{\xn,\h}\,d\xn \le \nor{\sigma^{(2)}}_{\L^1\LRp{\R}} \h^2, \text{ and } \int_{\R} \snor{\B\LRp{\xn,\h}}\,d\xn \le \nor{\sigma^{(2)}}_{\L^1\LRp{\R}} \h^2. $ \item Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \end{enumerate} \theolab{Mish} \end{theorem} \begin{proof} For the first assertion, we note that $\sigma^{(2)}\LRp{\xn}$ is continuous and thus the following Taylor theorem with integral remainder for $\sigma\LRp{\xn}$ holds \[ \sigma\LRp{\xn+\h} = \sigma\LRp{\xn} + \sigma'\LRp{\xn}\h +\h^2\int_{0}^1\sigma^{(2)}\LRp{\xn + s\h}\LRp{1-s}\,ds. \] As a result, \begin{multline*} \snor{\B\LRp{\xn,\h}} \le \h^2\LRs{\int_{0}^1\snor{\sigma^{(2)}\LRp{\xn + s\h}}\LRp{1-s}\,ds + \int_{0}^1\snor{\sigma^{(2)}\LRp{\xn - s\h}}\LRp{1-s}\,ds} \le M\h^2, \end{multline*} where we have used the boundedness of $\sigma^{(2)}\LRp{\xn}$ from Lemma \lemref{Mish2ndDiff} in the last inequality. For the second assertion, we have \begin{multline*} \intR\B\LRp{\xn,\h}\,d\xn = \h^2 \int_\R\int_{0}^1{\sigma^{(2)}\LRp{\xn +s\h}}\LRp{1-s}\,dsd\xn \\+ \h^2\int_\R\int_{0}^1{\sigma^{(2)}\LRp{\xn - s\h}}\LRp{1-s}\,dsd\xn, \end{multline*} whose right hand side is well-defined owing to $\sigma^{(2)}\LRp{\xn} \in \L^1\LRp{\R}$ (see Lemma \lemref{Mish2ndDiff}) and the Fubini theorem. In particular, \[ \intR\B\LRp{\xn,\h}\,d\xn \le \nor{\sigma^{(2)}}_{\L^1\LRp{\R}}\h^2. \] The proof for $\intR\snor{\B\LRp{\xn,\h}}\,d\xn \le \nor{\sigma^{(2)}}_{\L^1\LRp{\R}}\h^2$ follows similarly. 
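The bounds above can be probed numerically (our own sketch, not part of the paper): we estimate $\sup\snor{\sigma^{(2)}}$ for Mish by a very fine second difference, check that $\snor{\B\LRp{\xn,\h}}$ stays below that estimate times $\h^2$, and confirm that $\int_\R \B\LRp{\xn,\h}\,d\xn$ is of order $\h^2$, consistent with the second assertion:

```python
import math

def mish(x):
    # mish: x * tanh(softplus(x)); log1p keeps softplus accurate for moderate x
    return x * math.tanh(math.log1p(math.exp(x)))

def B(x, h):
    # second central difference delta_h^2[sigma](x)
    return mish(x + h) - 2.0 * mish(x) + mish(x - h)

def integrate(f, lo=-40.0, hi=40.0, n=160000):
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx

h = 0.5
val = integrate(lambda x: B(x, h))
# crude estimate of sup |sigma''| from a very fine second difference on [-10, 10]
h0 = 1e-4
grid = [0.01 * i for i in range(-1000, 1001)]
M_est = max(abs(B(x, h0)) / h0 ** 2 for x in grid)
sup_B = max(abs(B(x, h)) for x in grid)
```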
The proof of the last assertion is the same as the proof of Theorem \theoref{GELUnD} and hence is omitted. \end{proof} \section{General framework for nAI} \seclab{generalAI} In this section, inspired by the nAI proof for the Mish activation function in Section \secref{Mish}, we develop a general framework for nAI that requires only two conditions on the $\k$th derivative of any activation $\sigma\LRp{\xn}$. The beauty of the general framework is that it provides a single nAI proof that is valid for a large class of functions, including all activation functions that we have considered. The trade-off is that less can be said about the $\B$-function. To begin, suppose that there exists $\k \in \mathbb{N}$ such that \begin{enumerate}[label = \textbf{C\arabic*}] \item \label{enum:AC} \underline{Integrability}: $\sigma^{(\k)}\LRp{\xn}$ is integrable, i.e., $\sigma^{(\k)}\LRp{\xn} \in \L^1\LRp{\R}$. \item \label{enum:UB} \underline{Essential boundedness}: there exists $M < \infty$ such that $\nor{\sigma^{(\k)}}_\infty \le M$. \end{enumerate} Note that if the two conditions \ref{enum:AC} and \ref{enum:UB} hold for $\k = 0$ (e.g. Gaussian activation functions) then the activation is obviously an nAI, and thus we only consider $\k \in \mathbb{N}$. In this case, the $\B$-function for $\sigma\LRp{\xn}$ can be defined via the $\k$th-order central finite difference: \begin{equation} \B\LRp{\xn,\h} := \deltah^\k\LRs{\sigma}\LRp{\xn} = \sum_{i=0}^\k\LRp{-1}^i \begin{pmatrix} \k \\ i \end{pmatrix} \sigma\LRp{\xn+\LRp{\frac{\k}{2}-i}\h}. \eqnlab{kCD} \end{equation} \begin{lemma} For any $\h \in \R$, there holds: \[ \B\LRp{\xn,\h} = \frac{\h^\k}{\LRp{\k-1}!} \sum_{i=0}^\k\LRp{-1}^i \begin{pmatrix} \k \\ i \end{pmatrix} \LRp{\frac{\k}{2}-i}^\k\int_0^1\sigma^{(\k)}\LRp{\xn + s\LRp{\frac{\k}{2}-i}\h}\LRp{1-s}^{\k-1}\,ds.
\] \lemlab{BellGeneral} \end{lemma} \begin{proof} Applying the Taylor theorem gives \begin{multline*} \sigma\LRp{\xn + \LRp{\frac{\k}{2}-i}\h} = \sum_{j=0}^{\k-1}\sigma^{(j)}\LRp{\xn}\LRp{\frac{\k}{2}-i}^j\frac{\h^j}{j!} \\+ \frac{\h^\k}{\LRp{\k-1}!}\LRp{\frac{\k}{2}-i}^\k\int_0^1\sigma^{(\k)}\LRp{\xn + s\LRp{\frac{\k}{2}-i}\h}\LRp{1-s}^{\k-1}\,ds. \end{multline*} The proof is concluded if we can show that \[ \sum_{i=0}^\k\LRp{-1}^i \begin{pmatrix} \k \\ i \end{pmatrix} \sum_{j=0}^{\k-1}\sigma^{(j)}\LRp{\xn}\LRp{\frac{\k}{2}-i}^j\frac{\h^j}{j!} = \sum_{j=0}^{\k-1}\sigma^{(j)}\LRp{\xn}\frac{\h^j}{j!} \sum_{i=0}^\k\LRp{-1}^i \begin{pmatrix} \k \\ i \end{pmatrix} \LRp{\frac{\k}{2}-i}^j= 0, \] but this is clear by the alternating sum identity \[ \sum_{i=0}^\k\LRp{-1}^i \begin{pmatrix} \k \\ i \end{pmatrix} i^j = 0, \quad \text{ for } j=0,\hdots,\k-1. \] \end{proof} \begin{theorem} Let $\h \in \R$. \begin{enumerate} \item There exists $N < \infty$ such that $\snor{\B\LRp{\xn,\h}} \le N\snor{\h}^\k$. \item There exists $C < \infty$ such that $\intR\snor{\B\LRp{\xn,\h}}\,d\xn \le C\snor{\h}^\k$ \item Let $\x =\LRp{\x_1,\hdots,\x_n} \in \Rn$. Define \[ \Bc\LRp{\x} = \B\LRp{\x_n,\B\LRp{\x_{\n-1},\hdots,\B\LRp{\x_1,1}}}. \] Then $\Bc\LRp{\x} \in \L^1\LRp{\Rn}$. \end{enumerate} \theolab{generalLCAI} \end{theorem} \begin{proof} The first assertion is straightforward by invoking assumption \ref{enum:UB}, Lemma \lemref{BellGeneral}, and defining \[ N = \frac{M}{\k!}\sum_{i=0}^\k \begin{pmatrix} \k \\ i \end{pmatrix} \snor{\frac{\k}{2}-i}^\k. 
\] For the second assertion, using triangle inequalities and the Fubini theorem yields \begin{multline*} \intR\snor{\B\LRp{\xn,\h}}\,d\xn \le \\ \frac{\snor{\h}^\k}{\LRp{\k-1}!}\sum_{i=0}^\k \begin{pmatrix} \k \\ i \end{pmatrix} \snor{\frac{\k}{2}-i}^\k \int_0^1\LRp{1-s}^{\k-1}\intR\snor{\sigma^{(\k)}\LRp{\xn + s\LRp{\frac{\k}{2}-i}\h}}d\xn\,ds \\= \frac{N}{M} \nor{\sigma^{(\k)}}_{\L^1\LRp{\R}}\snor{\h}^\k, \end{multline*} which, by defining $C = \frac{N}{M} \nor{\sigma^{(\k)}}_{\L^1\LRp{\R}}$, ends the proof owing to assumption \ref{enum:AC}. The proof of the last assertion is the same as the proof of Theorem \theoref{GELUnD} and hence is omitted. In particular, we have \[ \int_{\R^\n}\snor{\Bc\LRp{\x}}\,d\x \le C^\n M^{\frac{\k^\n-1}{\k-1}-\n} < \infty. \] \end{proof} \begin{remark} Note that Theorem \theoref{generalLCAI} is valid for all activation functions considered in Section \secref{manyActivations} with appropriate $\k$: for example, $\k = \q + 1$ for RePU of order $\q$, $\k = 1$ for generalized sigmoidal functions, $\k = 2$ for ELU, GELU, SiLU, and Mish, $\k = 0$ for Gaussian, etc. \end{remark} \begin{remark} Suppose a function $\sigma\LRp{\xn}$ satisfies both conditions \ref{enum:AC} and \ref{enum:UB}. Theorem \theoref{generalLCAI} implies that $\sigma\LRp{\xn}$ and all of its $j$th derivatives, $j = 1,\hdots,\k$, are not only valid activation functions but also universal. \end{remark} \begin{remark} In one dimension, if $\sigma$ is of bounded variation, i.e. its total variation $TV\LRp{\sigma}$ is finite, then $\B\LRp{\xn,\h}$ in \eqnref{kCD} resides in $\L^1\LRp{\R}\cap\L^\infty\LRp{\R}$ by taking $\k = 1$. A simple proof of this fact can be found in \cite[Corollary 3]{SIEGEL2020313}. Thus, $\sigma$ is an nAI in this case. \end{remark} \begin{remark} We have used central finite differences for convenience, but Lemma \lemref{BellGeneral}, and hence Theorem \theoref{generalLCAI}, also holds for $\k$th-order forward and backward finite differences.
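The $\k$th-order central difference \eqnref{kCD} and the alternating-sum identity behind Lemma \lemref{BellGeneral} are easy to check computationally (our own sketch, not part of the paper); in particular, $\deltah^\k$ annihilates polynomials of degree below $\k$:

```python
from math import comb

def central_diff(sigma, x, h, k):
    # k-th order central finite difference delta_h^k[sigma](x), as in eq. (kCD)
    return sum((-1) ** i * comb(k, i) * sigma(x + (k / 2.0 - i) * h)
               for i in range(k + 1))

def alt_sum(k, j):
    # alternating-sum identity: equals 0 for all j = 0, ..., k - 1
    return sum((-1) ** i * comb(k, i) * i ** j for i in range(k + 1))

identity_ok = all(alt_sum(k, j) == 0 for k in range(1, 8) for j in range(k))
# delta_h^k applied to a cubic with k = 4 should vanish
cubic_residual = central_diff(lambda t: t ** 3, 0.0, 1.0, 4)
```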
The proofs are indeed almost identical. \remalab{FD} \end{remark} \section{Conclusions} \seclab{conclusion} We have presented a unified and constructive framework for neural network universality. At the heart of the framework are the concept of a neural network approximate identity and the approximation of integrals with Riemann sums or the Monte Carlo method. We have shown that most existing activations are neural network approximate identities, and thus universal in the space of continuous functions on compacta. The framework has several advantages compared to contemporary approaches. First, our approach is constructive, using elementary means from functional analysis, probability theory, and numerical analysis. Second, it is the first attempt to unify the universality proofs for most existing activations. Third, as a by-product, the framework provides the first universality proof for some existing activation functions, including Mish, SiLU, ELU, and GELU. Fourth, it discovers new activations with a guaranteed universality property. Indeed, any activation\textemdash whose $\k$th derivative, for some $\k \in \mathbb{N}$, is integrable and essentially bounded\textemdash is universal. Fifth, for each activation, the framework provides precisely the architecture of the one-hidden-layer neural network with a predetermined number of neurons and the values of the weights/biases. \bibliographystyle{siamplain} \bibliography{references,NewRefs} \end{document}
TITLE: Combinatorics squares QUESTION [1 upvotes]: Calculate the number of smaller rectangles (including squares) that are contained in an n × n grid (such as the smaller grey rectangle in the 5 × 5 grid below), by first considering how to choose the sides of the rectangles. How do I answer this using the stated method? The image is attached: https://i.stack.imgur.com/ngn07.jpg This is my attempt at an answer: https://i.stack.imgur.com/epJuK.jpg REPLY [0 votes]: I cannot see an image, but I assume you are meant to first choose $2$ vertical edges from the $n+1$ available. This can be done in $\binom{n+1}{2}$ ways. Doing the same for horizontal edges gives the same number of choices, and so there are $$\binom{n+1}{2}^2$$ rectangles.
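To double-check the closed form, here is a small brute-force enumeration (my own addition, not from the original answer) that counts rectangles directly by choosing pairs of grid lines:

```python
from math import comb

def count_rectangles(n):
    # brute force: a rectangle is a pair of distinct vertical grid lines
    # together with a pair of distinct horizontal grid lines (n + 1 each)
    total = 0
    for x1 in range(n + 1):
        for x2 in range(x1 + 1, n + 1):
            for y1 in range(n + 1):
                for y2 in range(y1 + 1, n + 1):
                    total += 1
    return total

def formula(n):
    return comb(n + 1, 2) ** 2

agree = all(count_rectangles(n) == formula(n) for n in range(1, 7))
```

For the 5 × 5 grid in the question this gives C(6, 2)² = 15² = 225 rectangles.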
\begin{document} \begin{center} {\bf\Large Jet Bundles and the Formal Theory of \\ Partial Differential Equations} \vspace{0.4cm} Richard Baker\footnote{e-mail: \texttt{[email protected]}} and Chris Doran\footnote{e-mail: \texttt{[email protected]}, \texttt{http://www.mrao.cam.ac.uk/$\sim$cjld1/}} \vspace{0.4cm} Astrophysics Group, Cavendish Laboratory, Madingley Road, \\ Cambridge CB3 0HE, UK. \vspace{0.4cm} \begin{abstract} Systems of partial differential equations lie at the heart of physics. Despite this, the general theory of these systems has remained rather obscure in comparison to numerical approaches such as finite element models and various other discretisation schemes. There are, however, several theoretical approaches to systems of PDEs, including schemes based on differential algebra and geometric approaches including the theory of exterior differential systems~\cite{amp} and the so-called ``formal theory''~\cite{th:seiler} built on the jet bundle formalism. This paper is a brief introduction to jet bundles focusing on the completion of systems to equivalent involutive systems for which power series solutions may be constructed order by order. We will not consider the mathematical underpinnings of involution (which lie in the theory of combinatorial decompositions of polynomial modules~\cite{calmet,seiler}) nor other applications of the theory of jet bundles such as the theory of symmetries of systems of PDEs~\cite{olver} or discretisation schemes based on discrete approximations to jet bundles~\cite{marsden}. \end{abstract} \end{center} \section{Fibre Bundles and Sections} A bundle is a triple $\left( M, X, \pi \right)$, where $M$ is a manifold called the total space, $X$ is a manifold called the base space, and $\pi : M \rightarrow X$ is a continuous surjective mapping called the projection. Where no confusion can arise, we shall often find it convenient to denote the bundle either by its total space or its projection. 
A trivial bundle is a bundle whose total space $M$ is homeomorphic to $X\times U$, where $U$ is a manifold called the fibre. A bundle which is locally a trivial bundle is called a fibre bundle (an example of a fibre bundle which is not trivial is the M\"obius band). In the following we shall only be concerned with trivial bundles. We will denote the coordinates on the base manifold by $x = \{x^i, i=1, \dots ,p\}$, where $p$ is the dimension of the base manifold, and the coordinates on the fibre by $u = \{u^\alpha, \alpha=1, \dots, q\}$, where $q$ is the dimension of the fibre (when we consider jet bundles, we will need to extend this notation slightly). In the case of a trivial bundle, or in a local coordinate patch on a fibre bundle, the projection takes the simple form \[ \pi : \left\{ \begin{array}{l} X \times U \rightarrow X \\ (x,u) \mapsto (x) \end{array} \right. \] A section of a fibre bundle is a map \[ \Phi_{f} : \left\{ \begin{array}{ll} X \rightarrow X \times U \\ x \mapsto \left( x, f \left( x \right) \right) \end{array} \right. \] such that $\pi \circ \Phi_{f}$ is the identity map on $X$. In other words, a section assigns to each point in $X$ a point in the fibre over that point. The graph of the function $f(x)$ is: \[ \Gamma_{f} = \left\{ \left( x,f(x) \right) : x \in \Omega \right\} \subset X \times U \] where $\Omega$ is the domain of definition of $f$. We will find it convenient to refer to sections, functions and graphs interchangeably. \section{Jet Bundles} We define a multi-index $J$ as a $p$-tuple $[j_1,j_2,\dots,j_p]$ with $j_i \in \mathbb{N}$. The order of the multi-index $J$, denoted $|J|$, is given by the sum of the $j_i$. We will often find it more convenient to use a repeated-index notation for $J$. In this notation $J$ is represented by a string of $|J|$ independent coordinate labels, with $j_i$ copies of the $i$-th coordinate label.
For example, if $p=3$ and the coordinates are labelled $x$, $y$ and $z$, then the second order multi-indices in repeated-index notation are $xx$, $xy$, $xz$, $yy$, $yz$ and $zz$. We introduce the special notation $J,i$, where $i$ is an independent coordinate label, for the multi-index given by $[j_1,\dots,j_i+1,\dots,j_p]$. For example, $xyy,x = xxyy$. If our independent variables are $x^i$ and our dependent variables are $u^\alpha$ then we introduce jet variables $u^\alpha_J$ where $J$ is a multi-index. Notice that we can put the jet variables of order $n$ in one-to-one correspondence with the derivatives of the dependent variables of order $n$. We will later introduce further structures that enable us to make a full correspondence between jet variables and derivatives. Associated with these jet variables we introduce a set of Euclidean spaces $U_i$, whose coordinates are $u^\alpha_J$ with $|J|=i$. We call the space $M^{(1)} = X \times U \times U_1$ the first order jet bundle over the space $M = X \times U$. We now introduce the notation \[ U^{(n)} = U \times U_1 \times \dots \times U_n \] and call the space $M^{(n)} = X \times U^{(n)}$ the $n$-th order jet bundle over $M$. \begin{example} Let $p=2$ and $q=1$. Label the independent variables $x$ and $y$ and the dependent variable $u$. The first order jet bundle, $M^{(1)}$, then has coordinates $(x,y,u,u_x,u_y)$, the second order jet bundle, $M^{(2)}$, has coordinates $(x,y,u,u_x,u_y,u_{xx},u_{xy},u_{yy})$, and so on. \end{example} We will often consider a jet bundle as a bundle over a lower order jet bundle. We denote the natural projection between the $(m+n)$-th order jet bundle and the $n$-th order jet bundle as \[ \pi^{m+n}_{n} : M^{(m+n)} \rightarrow M^{(n)} \] Note that although $M^{(m+n)}$ is a bundle over $M^{(n)}$ it is not a jet bundle over $M^{(n)}$, but rather a subset of such a bundle. 
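The multi-index bookkeeping above is mechanical enough to sketch in a few lines of Python (a hedged illustration; the helper names \texttt{order}, \texttt{raise\_index} and \texttt{multi\_indices} are ours). A multi-index is stored as a sorted string of coordinate labels in repeated-index notation, its order is the string length, and $J,i$ appends one copy of the $i$-th label.

```python
from itertools import combinations_with_replacement

labels = ('x', 'y', 'z')   # independent coordinate labels, p = 3

def order(J):
    """|J|: the total number of (repeated) labels in J."""
    return len(J)

def raise_index(J, i):
    """The multi-index J,i: add one copy of the label i, kept sorted."""
    return ''.join(sorted(J + i))

def multi_indices(n):
    """All multi-indices of order n in repeated-index notation."""
    return [''.join(c) for c in combinations_with_replacement(labels, n)]
```

For $p=3$, \texttt{multi\_indices(2)} reproduces the six second order multi-indices $xx, xy, xz, yy, yz, zz$ listed above, and \texttt{raise\_index('xyy', 'x')} gives $xxyy$, matching the example in the text.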
\section{Differential Functions and Formal Derivatives} A differential function is a smooth, real-valued function defined on $M^{(n)}$. We denote the algebra of differential functions defined on $M^{(n)}$ by $\mathcal{A}_{(n)}$. If $F \in \mathcal{A}_{(n)}$ then $F \in \mathcal{A}_{(m+n)}$ too, as the coordinates on $M^{(n)}$ are a subset of the coordinates on $M^{(m+n)}$. If the lowest order space on which $F$ is defined is $M^{(n)}$ then we will say that $F$ is an $n$-th order differential function. Differential functions will be used to describe sections of $M$ and differential equations. The most fundamental maps between lower order and higher order jet bundles are provided by formal derivatives. The formal derivative with respect to $x^i$, denoted $D_i$, maps each differential function $F \in \mathcal{A}_{(n)}$ to a differential function $D_iF \in \mathcal{A}_{(n+1)}$ via \[ D_iF = \rcdbpd{F}{x^i} + \sum_{\alpha=1}^q \sum_J \rcdbpd{F}{u^\alpha_J} \, u^\alpha_{J,i} \] It is convenient to extend the notation for formal and partial derivatives to encompass our multi-index notation: \[ D_J = (D_1)^{j_1} (D_2)^{j_2} \dots (D_p)^{j_p} \] and similarly for partial derivatives. Clearly if $F \in \mathcal{A}_{(n)}$ then $D_JF \in \mathcal{A}_{(n+|J|)}$. \section{Prolongation of Sections} If $\Gamma_f \subset M$ is a section defined by $u = f(x)$, where $f$ is a smooth function of $x$, then we can use the formal derivative to prolong it to a section $\Gamma_f^{(1)} \subset M^{(1)}$. The equations defining $\Gamma_f^{(1)}$ are simply $u^\alpha = f^\alpha(x)$ and the equations found by applying each of the $D_i$ to $u^\alpha = f^\alpha(x)$: \[ \Gamma_f^{(1)} = \left\{ \begin{array}{ccc} u^\alpha & = & f^\alpha(x) \\ u^\alpha_i & = & \partial_i f^\alpha(x) \end{array} \right. 
\] Similarly we can use the $D_i$ multiple times to prolong $\Gamma_f$ to a section $\Gamma_f^{(n)} \subset M^{(n)}$ defined by the equations \[ u_J^\alpha = \partial_J f^\alpha(x) \] where $J$ ranges over all multi-indices such that $0 \leqslant |J| \leqslant n$. We will sometimes talk about the $n$-th prolongation of a function $f(x)$, and write this prolongation as $f^{(n)}(x)$. \section{Differential Equations and Solutions} We intend to view systems of PDEs as geometric objects. In keeping with this programme, we will simply call such a system a ``differential equation''. An $n$-th order differential equation, $I_\Delta$, is a fibred submanifold of $M^{(n)}$. The differential equation is often stated as the kernel of a set of differential functions $\Delta_\nu \in \mathcal{A}_{(n)}$: \[ \begin{array}{lr} \Delta_\nu(x,u^{(n)}) = 0, & \nu = 1,\dots,l \end{array} \] We can map any system of partial differential equations onto such a submanifold, simply by replacing all of the derivatives of dependent variables by the corresponding jet variables. Indeed we have chosen our notation in such a way that this process is entirely transparent. \begin{example} The two dimensional wave equation \[ \frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0 \] maps onto the submanifold of $M^{(2)}$ (with the obvious coordinates) determined by \[ u_{tt} - u_{xx} - u_{yy} = 0 \] \end{example} In keeping with our compact notations $x$, $u$ and $f$, we shall write \[ \Delta\left(x,u^{(n)}\right) = \left(\Delta_1(x,u^{(n)}),\dots,\Delta_l(x,u^{(n)})\right) \in \mathcal{A}_{(n)}^l \] $\Delta$ is therefore a map from $M^{(n)}$ to $\mathbb{R}^l$. 
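Stepping back to the formal derivative defined in the previous section, its chain-rule formula lends itself to a direct symbolic sketch. In the following (our construction, assuming \texttt{sympy} is available), jet variables up to second order are represented by plain symbols keyed by their repeated-index multi-index, and $D_i F = \partial F/\partial x^i + \sum_{\alpha,J} (\partial F/\partial u^\alpha_J)\, u^\alpha_{J,i}$ is applied term by term; the truncation at order two (so inputs must be of order at most one) is our simplification.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Jet variables up to order 2, keyed by multi-index in repeated-index notation.
jet = {J: sp.Symbol('u_' + J) if J else sp.Symbol('u')
       for J in ['', 'x', 'y', 'xx', 'xy', 'yy']}

def D(F, i):
    """Formal derivative D_i F = dF/dx^i + sum_J (dF/du_J) * u_{J,i}."""
    base = {'x': x, 'y': y}[i]
    out = sp.diff(F, base)
    for J, uJ in jet.items():
        Ji = ''.join(sorted(J + i))   # the multi-index J,i in sorted form
        if Ji in jet:                  # truncation: only jets up to order 2
            out += sp.diff(F, uJ) * jet[Ji]
    return sp.expand(out)
```

For example, $D_x(x\,u) = u + x\,u_x$ and $D_x(u\,u_y) = u_x u_y + u\,u_{xy}$, exactly as the defining formula prescribes.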
The differential equation $I_\Delta$ is the submanifold on which the map $\Delta$ vanishes: \[ I_\Delta = \left\{ (x,u^{(n)}) : \Delta(x,u^{(n)}) = 0 \right\} \] A smooth solution of $I_\Delta \subset M^{(n)}$ is a smooth function $f(x)$ such that \[ \Delta(x,f^{(n)}(x)) = 0 \] or, in terms of our geometric formulation, \[ \Gamma_f^{(n)} \subset I_\Delta \] Note that not every section of $M^{(n)}$ which lies entirely within $I_\Delta$ is a prolongation of a section of $M$: the prolongation of a section of $M$ automatically respects the correspondence between jet variables and derivatives, whereas an arbitrary section of $M^{(n)}$ need not. \section{Prolongation and Projection of Differential \\ Equations} The $k$-th prolongation of the differential equation \[ I_\Delta = \left\{ (x,u^{(n)}) : \Delta_\nu(x,u^{(n)})=0 \right\} \subset M^{(n)} \] is \[ I^{(k)}_\Delta = \left\{ (x,u^{(n+k)}) : (D_J\Delta_\nu)(x,u^{(n+k)})=0 \right\} \subset M^{(n+k)} \] where $J$ runs over all multi-indices up to order $k$. Differential equations may be projected along the fibres onto lower order jet bundles. In general, this is a complicated procedure in local coordinates, but it is much easier if the differential equation is known to be the prolongation of a lower order system. For the remainder of this paper this will always be the case. \begin{example} Let the differential equation $\mathcal{I} \subset M^{(2)}$ be defined by \[ \mathcal{I} : \left\{ \begin{array}{r} u_{zz} + u_{xy} + u = 0 \\ u_x - u = 0 \\ u_y - u^2 = 0 \\ \end{array} \right. \] The projection of $\mathcal{I}$ into $M^{(1)}$ is \[ \pi^2_1 (\mathcal{I}) : \left\{ \begin{array}{r} u_x - u = 0 \\ u_y - u^2 = 0 \\ \end{array} \right. 
\] \end{example} \section{Power Series Solutions} A smooth solution of $\mathcal{I} \subset M^{(n)}$ in a neighbourhood of $x_0$ may be written as the power series \[ u^\alpha = f^\alpha (x) = \sum_{|J|=0}^{\infty} \frac{a^\alpha_J}{J!} \left( x-x_0 \right)^J \] for some constants $a^\alpha_J$. Here $J! = j_1! \; j_2! \; \dots \; j_p!$ and \[ \left( x-x_0 \right)^J = \prod_{i=1}^p \left( x^i-x^i_0 \right)^{j_i} \] All we have to do is choose the values of the $a^\alpha_J$ so that $\Gamma_f^{(n)} \subset \mathcal{I}$. It will, however, prove easiest to fix the $a^\alpha_J$ by using the condition that $\Gamma_f^{(n+k)} \subset \mathcal{I}^{(k)}$ for all $k$ and working entirely at the point $x_0$. By applying the formal derivative repeatedly to the power series we can obtain power series expressions for each of the $u^\alpha_J(x)$. Evaluating each of these at $x_0$ shows that $a^\alpha_J = u^\alpha_J|_{x_0}$. Therefore we require that \[ D_J \Delta_\nu(x_0,a^\alpha_J) = 0 \] for all $J$. We have exchanged the solution of a set of partial differential equations for the solution of an infinite number of algebraic equations. For a class of systems known as formally integrable differential equations we can construct a power series solution order by order. We first substitute the general form of the power series into each of the equations and evaluate at $x=x_0$. This gives us a set of algebraic equations for the $a^\alpha_J$ with $|J| \leqslant n$. We then partition the jet variables into parametric derivatives, whose values we can choose, and principal derivatives, whose values are then fixed by the system, and solve for the latter in terms of the former. We then prolong the differential equation and repeat the process. This time the equations of order less than $n+1$ will automatically be satisfied by the previously chosen constants, and we will be left with a new set of equations $D_i \Delta_\nu (x_0,a^\alpha_J) = 0$ for $|J|=n+1$. 
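To make the order-by-order construction concrete, consider the deliberately minimal one-dimensional case $u_x - u = 0$ (our simplification, not an example from the text). Prolonging $k$ times gives $D_x^k(u_x - u) = 0$, which at $x_0$ fixes each principal coefficient from the previous one, $a_{k+1} = a_k$; only $a_0$ is parametric, and the resulting series is $a_0 e^{x - x_0}$. The sketch below implements this recursion.

```python
from fractions import Fraction
from math import factorial

def taylor_coefficients(a0, n):
    """Order-by-order solution of u_x - u = 0 about x0.

    Each prolongation D_x^k(u_x - u) = 0, evaluated at x0, forces the
    principal coefficient a_{k+1} to equal a_k; only a0 (the value of
    u at x0) is parametric.  Returns the Taylor coefficients a_k / k!.
    """
    a = [Fraction(a0)]
    for k in range(n):
        a.append(a[-1])          # a_{k+1} = a_k, forced by the equation
    return [ak / factorial(k) for k, ak in enumerate(a)]
```

With $a_0 = 1$ the coefficients $1, 1, \tfrac{1}{2}, \tfrac{1}{6}, \dots$ are those of $e^{x-x_0}$, as expected.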
The nature of the formal derivative means that these equations will be linear. We may repeat the procedure to calculate ever higher terms in the power series. \section{Integrability Conditions} If a differential equation is not formally integrable the solutions are subject to constraints which we call integrability conditions. These are extra equations that are differential rather than algebraic consequences of the equations $\Delta = 0$. In other words, projecting the prolongation of a differential equation may not return the original equation but only a proper subset thereof: \[ \pi^{n+k}_n ( \mathcal{I}^{(k)}_\Delta ) \varsubsetneq \mathcal{I}_\Delta \subset M^{(n)} \] and so the order by order construction of a power series solution will be disrupted. To streamline the notation, we will write the $j$-th projection of the $k$-th prolongation of $\mathcal{I}$ as $\mathcal{I}^{(k)}_j$. For example, the expression above may be rewritten $\mathcal{I}^{(k)}_k \varsubsetneq \mathcal{I}$. Integrability conditions arise in two ways: through the differentiation of equations with order less than $n$ in $\mathcal{I} \subset M^{(n)}$, and through the effects of cross-derivatives, as shown in the following example: \begin{example} \label{ex:integrability} Let $p=3$ with coordinates $x$, $y$ and $z$, and $q=1$ with coordinate $u$, and consider the differential equation \[ \mathcal{I} : \left\{ \begin{array}{r} u_z + y u_x = 0 \\ u_y = 0 \end{array} \right. \] which prolongs to \[ \mathcal{I}^{(1)} : \left\{ \begin{array}{r} u_{xz} + y u_{xx} = 0 \\ u_{yz} + u_x + y u_{xy} = 0 \\ u_{zz} + y u_{xz} = 0 \\ u_{xy} = 0 \\ u_{yy} = 0 \\ u_{yz} = 0 \\ u_z + y u_x = 0 \\ u_y = 0 \end{array} \right. \] We see that the equations $u_{yz} = 0$ and $u_{xy} = 0$ substituted into the equation $u_{yz} + u_x + y u_{xy} = 0$ imply that $u_x = 0$. This is a first order equation and so forms part of $\mathcal{I}^{(1)}_1$ on projection. Hence $\mathcal{I}^{(1)}_1 \varsubsetneq \mathcal{I}$. 
\end{example} A differential equation $\mathcal{I} \subset M^{(n)}$ is formally integrable if $\mathcal{I}^{(k+1)}_1 = \mathcal{I}^{(k)}$ for all $k$. Notice that checking for formal integrability requires an infinite number of operations, for integrability conditions may in general arise after an arbitrarily large number of prolongations. \section{Involutive Differential Equations} We now turn to the consideration of a subset of formally integrable differential equations known as involutive equations. Two facts make this class of equations interesting and useful. Firstly, it is possible to determine whether a given differential equation is involutive using only a finite number of operations. Secondly, for any differential equation it is possible to produce an involutive equation with the same solution space using only a finite number of operations. There is a more systematic method for determining the integrability conditions that arise upon a single prolongation and projection. Let us look at the Jacobi matrix of $\mathcal{I}^{(1)} \subset M^{(n+1)}$. This matrix can be divided into four blocks: \[ \left( \, \begin{array}{|c|c|} \hline \begin{array}{lr} \rcdbdpd{D_i\Delta_\nu}{u^\alpha_J}, & 0 \leqslant |J| \leqslant n \end{array} & \begin{array}{lr} \rcdbdpd{D_i\Delta_\nu}{u^\alpha_J}, & |J| = n+1 \end{array} \\ \hline \begin{array}{lr} \rcdbdpd{\Delta_\nu}{u^\alpha_J}, & 0 \leqslant |J| \leqslant n \end{array} & 0 \\ \hline \end{array} \, \right) \] We order the columns according to increasing $|J|$, and within each order by the first non-vanishing component of $J$ (we will call this component the class of $J$). When we project $\mathcal{I}^{(1)}$ into $M^{(n)}$ to form $\mathcal{I}^{(1)}_1$ we must include only those equations that are independent of the $u^\alpha_J$ with $|J|=n+1$. In other words we must include all those equations which have a full row of zeros in the right hand block. 
This clearly includes the equations corresponding to rows in the bottom part of the Jacobi matrix. However, if the upper right submatrix is not of maximal rank then we may be able to form integrability conditions. If, for a row with all zeros in the right hand section, we find that the left hand part is independent of the rows in the lower part of the matrix, then there is indeed an integrability condition, which can be determined by performing the same operations on the full equations $D_i \Delta_\nu = 0$. We will call the system of equations defined by the upper right block of the Jacobi matrix the symbol of $\mathcal{I}$, and denote this $\mathsf{Sym}\,\mathcal{I}$: \[ \mathsf{Sym}\,\mathcal{I} : \left\{ \sum_{\alpha,|J|=n} \left( \rcdbpd{\Delta_\nu}{u^\alpha_J} \right) v^\alpha_J = 0 \right. \] where the $v^\alpha_J$ are a new set of variables, which we order in the same way as the $u^\alpha_J$ when displaying the symbol as a matrix. Notice that the entries in the matrix of $\mathsf{Sym}\,\mathcal{I}$ are the coefficients of the highest order jet variables in the equations defining $\mathcal{I}^{(1)}$, as can be seen by comparison with the formal derivative. Comparison of the ranks of $\mathcal{I}$, $\mathcal{I}^{(1)}$ and $\mathsf{Sym}\,\mathcal{I}^{(1)}$ will enable us to determine if an integrability condition will occur on a single prolongation and projection. There is an integrability condition if $\mathsf{rank}\, \mathcal{I}^{(1)}_1 > \mathsf{rank}\, \mathcal{I}$, or equivalently if $\mathsf{dim}\, \mathcal{I}^{(1)}_1 < \mathsf{dim}\, \mathcal{I}$. Furthermore, from inspection of the Jacobi matrix of $\mathcal{I}^{(1)}$, \[ \mathsf{rank}\, \mathcal{I}^{(1)}_1 = \mathsf{rank}\, \mathcal{I}^{(1)} - \mathsf{rank}\, \mathsf{Sym}\,\mathcal{I}^{(1)} \] We can thus systematically determine if integrability conditions arise from a single prolongation, and if necessary find the new equations. Henceforth we will always consider the row echelon form of the symbol. 
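This rank criterion can be checked mechanically on Example \ref{ex:integrability} ($u_z + y\,u_x = 0$, $u_y = 0$). In the sketch below (ours, assuming \texttt{sympy}), the Jacobi matrix of $\mathcal{I}^{(1)}$ and the symbol matrix are assembled by differentiation with respect to the jet variables, and the ranks give $\mathsf{rank}\,\mathcal{I}^{(1)}_1 = 8 - 5 = 3 > 2 = \mathsf{rank}\,\mathcal{I}$, signalling the integrability condition $u_x = 0$ found in the example.

```python
import sympy as sp

y = sp.Symbol('y')
# Jet variables: order <= 1, then order 2, ordered by increasing |J|.
u, ux, uy, uz = sp.symbols('u u_x u_y u_z')
uxx, uxy, uxz, uyy, uyz, uzz = sp.symbols('u_xx u_xy u_xz u_yy u_yz u_zz')
low = [u, ux, uy, uz]
high = [uxx, uxy, uxz, uyy, uyz, uzz]

# I : u_z + y u_x = 0, u_y = 0, and its prolongation I^(1).
I = [uz + y*ux, uy]
I1 = [uxz + y*uxx,            # D_x of the first equation
      uyz + ux + y*uxy,       # D_y
      uzz + y*uxz,            # D_z
      uxy, uyy, uyz,          # D_x, D_y, D_z of the second
      uz + y*ux, uy]          # the original equations

jacobi = sp.Matrix([[sp.diff(e, v) for v in low + high] for e in I1])
symbol = sp.Matrix([[sp.diff(e, v) for v in high] for e in I1])
rank_I = sp.Matrix([[sp.diff(e, v) for v in low] for e in I]).rank()
rank_proj = jacobi.rank() - symbol.rank()   # rank of I^(1)_1
```

Here the symbol matrix has generic rank $5$ (the row from $D_y$ of the first equation is $y$ times the $u_{xy}$ row plus the $u_{yz}$ row), so projection leaves $8 - 5 = 3$ independent first order equations against the original $2$.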
We call $\mathsf{Sym}\,\mathcal{I}$ involutive if \[ \mathsf{rank}\,\mathsf{Sym}\,\mathcal{I}^{(1)} = \sum_k k \beta_k \] where $\beta_k$ is the number of rows of class $k$ in $\mathsf{Sym}\,\mathcal{I}$. For a row of class $k$ we call the variables $x^1,x^2,\dots,x^k$ multiplicative variables. We now consider prolonging each equation by its multiplicative variables only. The equations obtained in this manner will be independent, as they will have distinct pivots in $\mathsf{Sym}\,\mathcal{I}^{(1)}$. As there are $\beta_k$ equations of class $k$, and each has $k$ multiplicative variables, there will be at least $\sum_k k \beta_k$ independent equations of order $n+1$ in $\mathcal{I}^{(1)}$. If $\mathsf{Sym}\,\mathcal{I}$ is involutive then we obtain all the independent equations of order $n+1$ in this manner. The equations obtained from the other prolongations required to prolong $\mathcal{I}$ to $\mathcal{I}^{(1)}$ will thus be dependent, of lower order, or both. The importance of involutive symbols arises from the following theorem, which provides a criterion for involution that can be tested in a finite number of operations: \begin{theorem} $\mathcal{I}$ is involutive if and only if $\mathsf{Sym}\,\mathcal{I}$ is involutive and $\mathcal{I}^{(1)}_1 = \mathcal{I}$. \end{theorem} \section{Cartan-Kuranishi Completion} The central theorem of the theory of involutive systems of differential equations is the Cartan-Kuranishi theorem: \begin{theorem} For every differential equation $\mathcal{I}$ there are two integers $j$, $k$ such that $\mathcal{I}^{(k)}_j$ is an involutive equation with the same solution space. 
\end{theorem} The Cartan-Kuranishi completion algorithm is a straightforward application of the two previous theorems: \begin{tabbing} \textbf{input} $\mathcal{I}$ \\ \textbf{repeat} \\ \qquad \=\textbf{while} $\mathsf{Sym}\,\mathcal{I}$ is not involutive \textbf{repeat} $\mathcal{I} := \mathcal{I}^{(1)}$ \\ \>\textbf{while} $\mathcal{I} \ne \mathcal{I}^{(1)}_1$ \textbf{repeat} $\mathcal{I} := \mathcal{I}^{(1)}_1$ \\ \textbf{until} $\mathsf{Sym}\,\mathcal{I}$ is involutive and $\mathcal{I} = \mathcal{I}^{(1)}_1$ \\ \textbf{output} involutive $\mathcal{I}$ \end{tabbing} Therefore, given a differential equation we may first complete it to an involutive system (if it is not already involutive) and then construct a power series solution order by order using the algorithm described earlier. \section{Conclusion} Although the algorithms described in this paper may be used to construct formal solutions to systems of partial differential equations, they suffer from several shortcomings in practice. Firstly, there is the problem of setting the values of the parametric derivatives. Typically, these must be calculated from the values of functions on submanifolds of the base space or from values of those functions at the points of a lattice within the base space. Secondly, many terms of the power series must be calculated to provide solutions of comparable accuracy to those produced by discretisation schemes, and the symbolic manipulations involved rapidly become computationally intensive as the order increases. Thirdly, there is the problem of the convergence of the series. A promising approach to the circumvention of these difficulties is the use of a hybrid method that first uses a discretisation scheme to calculate values at lattice points, uses these values to determine approximate values of the parametric derivatives and then constructs power series about each of the points and smoothly joins them together using a functional interpolation scheme. 
This method is currently being implemented. Completion to involution and the construction of power series solutions are far from the only applications of the jet bundle formalism. As mentioned in the introduction, jet bundles also provide the natural setting for the analysis of the symmetry groups of systems of PDEs and of the variational symmetries of Lagrangian systems (which are linked by Noether's theorem to conservation laws). Symmetry analysis is also closely related to the construction of solutions possessing specified symmetries. Unfortunately, these fascinating and important subjects are beyond the scope of the current paper.

\end{document}