We consider the theory of very weak solutions of the stationary Stokes system with nonhomogeneous boundary data and divergence in domains of half-space type, such as $\mathbb R^n_+$, bent half spaces whose boundary can be written as the graph of a Lipschitz function, perturbed half spaces given by local but possibly large perturbations of $\mathbb R^n_+$, and aperture domains. The proofs are based on duality arguments and corresponding results for strong solutions in these domains, which have to be constructed in homogeneous Sobolev spaces. In addition to very weak solutions we also construct corresponding pressure functions in negative homogeneous Sobolev spaces.
CommonCrawl
Is my model a mixed model? The joint distribution of Y=AX and Z=BX given a projection matrix A and residual maker matrix B, and a random vector X with known pdf? What does the pmf of a discrete random variable look like if it can take on the value $\infty$? How to derive the joint distribution of Y=AX and Z=BX given a random vector X with known pdf? How is the %IncMSE importance measure for random forests calculated, exactly? How can I prove the following relation between the probability of X and its expectation using the Cauchy-Schwarz inequality? How can random forest variable importance of a pooled sample be greater than variable importance of individual samples?
CommonCrawl
Baydar N., Fošner A., Strašek R. Let $n$ be a fixed positive integer, let $R$ be a $(2n)!$-torsion-free semiprime ring, let $\alpha$ be an automorphism or an anti-automorphism of $R$, and let $D_1, D_2 : R \to R$ be derivations. We prove the following result: if $(D_1^2(x) + D_2(x))^n \circ \alpha(x)^n = 0$ holds for all $x \in R$, then $D_1 = D_2 = 0$. The same is true if $R$ is a $2$-torsion-free semiprime ring and $F(x) \circ \beta(x) = 0$ for all $x \in R$, where $F(x) = (D_1^2(x) + D_2(x)) \circ \alpha(x)$ for $x \in R$, and $\beta$ is any automorphism or anti-automorphism of $R$.
CommonCrawl
$\# A(x) \gg x / \log x$ and $\# A(x) = o(x)$ as $x \to \infty$. As a consequence, we obtain that the set of all integers $n$ such that $n$ divides $F_n$ has zero asymptotic density relative to $A$. Remark: Paolo Leonetti has been a PostDoc researcher at the Institute of Analysis and Number Theory since February 2019.
CommonCrawl
Doctoral Committee Chair(s): Kapoor, Shiv G. Abstract: In today's competitive marketplace, companies must continually strive for improved quality and productivity of their products and processes just to maintain their competitive position. For this reason, quality systems were developed to improve quality through the design and manipulation of process resources over time. When examining a specific portion of a process, called a QC window, statistical process control methods, such as Shewhart charts, are responsible for the observation, evaluation, and diagnosis of processes to aid in the decision and implementation of corrective action. This dissertation develops methods to improve the evaluation and diagnostic elements of the QC window. To enhance the evaluation capability of Shewhart control charts, performance measures, in terms of operational characteristics, are defined and developed for the X chart, the $S^2$ chart, and both charts combined. These performance measures are analyzed with respect to process shifts in both mean and variance. In addition, performance is enhanced by: (a) enforcing four rules on the $S^2$ chart to improve sensitivity to variance shifts; and (b) modifying the control limits on both the X and $S^2$ charts to capitalize on the distinct relationships between rules, as well as quantifying the tradeoff between the probability of a false alarm ($\alpha$) and the probability of a false indication of control ($\beta$). To support the diagnostic capability of Shewhart charts, a sequential diagnostic procedure is developed that estimates the mean and variance of a process given observed control chart performance. Once the current state of the process is identified, the decision and implementation of corrective action can then be performed to position the process at its desired state. By improving the evaluation and diagnostic capability of control charts, the work presented in this thesis expands the role of control charts, thus enabling them to reach their full potential.
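As a concrete illustration of such an operational characteristic (my own minimal sketch, not material from the dissertation): for an X chart with standard $3\sigma$ limits, the probability $\beta$ of a false indication of control after the process mean shifts by $\delta$ standard errors is $\Phi(3-\delta)-\Phi(-3-\delta)$.

```python
# A minimal sketch (not from the dissertation): the operating characteristic
# of a Shewhart X chart with 3-sigma limits under a mean shift of delta
# standard errors. beta = P(no signal | shift), 1 - beta = detection probability.
from statistics import NormalDist

def x_chart_beta(delta, L=3.0):
    """P(plotted point stays inside the control limits | shift of delta)."""
    phi = NormalDist().cdf
    return phi(L - delta) - phi(-L - delta)

for delta in (0.0, 1.0, 2.0, 3.0):
    beta = x_chart_beta(delta)
    print(f"shift={delta:.1f}  beta={beta:.4f}  detect={1 - beta:.4f}")
```

At $\delta = 0$ this recovers the familiar false-alarm rate $\alpha \approx 0.0027$ for $3\sigma$ limits.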
CommonCrawl
If you need any more rolls to determine the answer, comment them. Please try not to look it up, and if you do, don't answer it, it'll ruin the fun for others. Only answer if YOU figured it out. Also note that the name is especially important. Guess who took A LONG TIME to figure this out? The name "Petals Around the Rose" is especially important. The correct number is the number of "petals" (pips on the dice) that are around "roses" (pips in the center of a die). Even numbers have no center pip, therefore no petals. Odd numbers have pips in the center and thus have roses whose petals are counted. However, 1 has no non-rose pips. That leaves 3, with 2 other pips, and 5, with 4 other pips. In effect, each 3 rolled adds 2 to the count and each 5 adds 4 to the count. This generalises to any number of rolls: the score is $\sum_{i=1}^{k} (x_i - 1)\,[x_i \text{ odd}]$, where the sum goes from 1 to the number of rolls $k$, $x_i$ is the result of each roll, and $x_i - 1$ is the number of "petals around the rose" contributed by each odd roll.
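In code, the scoring rule is one line (standard six-sided dice assumed):

```python
# A quick sketch of the scoring rule described above: each odd roll
# contributes its pip count minus one; even rolls contribute nothing.
def petals(rolls):
    return sum(x - 1 for x in rolls if x % 2 == 1)

print(petals([3, 5, 1, 2, 6]))  # 2 + 4 + 0 + 0 + 0 = 6
```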
CommonCrawl
Every three-connected planar graph with $n$ vertices has a drawing on an $O(n^2) \times O(n^2)$ grid in which all faces are strictly convex polygons. These drawings are obtained by perturbing (not strictly) convex drawings on $O(n) \times O(n)$ grids. Tighter bounds are obtained when the faces have fewer sides. In the proof, we derive an explicit lower bound on the number of primitive vectors in a triangle. 2000 Mathematics Subject Classification: Primary 05C62; Secondary 52C05. Keywords and Phrases: Graph drawing, planar graphs.
CommonCrawl
At each step, choose two adjacent elements and swap them. At each step, choose any two elements and swap them. At each step, choose any element and move it to another position. At each step, choose any element and move it to the front of the array. Given a permutation of the numbers $1,2,\ldots,n$, calculate the minimum number of steps to sort the array using each of the above methods. The first input line contains an integer $n$; the second line contains $n$ integers that describe the permutation. Print four numbers: the minimum number of steps using each method.
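The statement doesn't spell out the intended algorithms, but each operation has a standard characterization: adjacent swaps count inversions; arbitrary swaps give $n$ minus the number of permutation cycles; move-anywhere gives $n$ minus the length of a longest increasing subsequence; and move-to-front counts the values below the longest run of top values already in order. A minimal sketch under those assumptions:

```python
import bisect

def min_steps(p):
    n = len(p)

    # Method 1: adjacent swaps = number of inversions (Fenwick tree, O(n log n))
    tree = [0] * (n + 1)
    def add(i):
        while i <= n:
            tree[i] += 1
            i += i & (-i)
    def prefix(i):
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s
    inversions = 0
    for seen_so_far, v in enumerate(p):
        inversions += seen_so_far - prefix(v)  # earlier values greater than v
        add(v)

    # Method 2: arbitrary swaps = n - number of cycles
    visited = [False] * (n + 1)
    cycles = 0
    for v in p:
        if not visited[v]:
            cycles += 1
            while not visited[v]:
                visited[v] = True
                v = p[v - 1]
    arbitrary_swaps = n - cycles

    # Method 3: move anywhere = n - LIS (untouched elements must be increasing)
    tails = []
    for v in p:
        i = bisect.bisect_left(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    move_any = n - len(tails)

    # Method 4: move to front = v - 1, where v..n is the longest run of top
    # values already appearing in increasing order of position
    pos = [0] * (n + 1)
    for i, v in enumerate(p):
        pos[v] = i
    v = n
    while v > 1 and pos[v - 1] < pos[v]:
        v -= 1
    move_front = v - 1

    return inversions, arbitrary_swaps, move_any, move_front

print(*min_steps([3, 1, 4, 2]))  # 3 3 2 2
```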
CommonCrawl
I heard of this fun little combinatorics problem from David Dralle, who (I think) heard about it from Sean Rule. Imagine a group of $n$ people is sitting in a circle. Let's label the participants from 1 to $n$ counterclockwise. A person on the outside of the circle begins by eliminating person 1. She then skips one remaining person and eliminates the next remaining person. She repeats this process until only one person remains. For a given value of $n$, where is the last remaining person sitting? Let's denote the position of this person $f(n)$. Let's do an example to get our bearings. Here's one with $n = 8$ people. If the first person chosen is in position 1 and we number people counterclockwise, then we have $f(8) = 8$. Here's another with $n = 20$ people. Here we have $f(20) = 8$. Notice that the last remaining person is located in very different parts of the circle in the two examples. One thing you might have noticed is that during the first pass around the circle, we simply eliminate alternating people without much fuss. Once we're done with this first pass, we're left with $\lfloor n/2 \rfloor$ remaining people in the circle. We could also confirm that if $n$ is odd, then we begin the next revolution around the circle by skipping person 2, and if $n$ is even, we begin by eliminating person 2. Let's continue with the $n$ even case first. There are now $n/2$ remaining people in the circle, namely those indexed $2,4,\ldots,n$. Since all eliminated people are ignored in the choosing process, this setup is nearly identical to starting a new game of duck-duck-goose with $n/2$ people. The only difference is the indexing; the fresh game has people indexed $1,2,\ldots,n/2$ and the original game has participants indexed $2,4,\ldots,n$. If we imagine starting a fresh game with $n/2$ people and finishing with a goose at position $f(n/2)$, then the goose will have been sitting at position $2 f(n/2)$ in the original game. So for $n$ even, we have the recursive definition $f(n) = 2 f(n/2)$. What if $n$ is odd? Here, we skip person 2 at the beginning of our second pass around the circle. Again, since eliminated participants are ignored when choosing the next person to eliminate, we can consider starting a new game with $\lfloor n/2 \rfloor$ people, with participants indexed $1,2,\ldots,\lfloor n/2 \rfloor$. Note that these new indices correspond to indices $4,6,\ldots,n-1,2$ in the original game. The conversion between new indices and old indices is $i \mapsto 2\bigl((i \bmod \lfloor n/2 \rfloor) + 1\bigr)$. So, if we finish the fresh game on a goose at position $f(\lfloor n/2 \rfloor)$, then the goose will have been sitting in position $2\bigl((f(\lfloor n/2 \rfloor) \bmod \lfloor n/2 \rfloor) + 1\bigr)$.
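Putting the two cases together gives a complete recursion. Here is a direct transcription (the base cases $f(1) = 1$ and $f(2) = 2$ are my addition, found by tracing tiny circles by hand; the printed values match the worked examples above):

```python
def f(n):
    """Position of the last remaining person in a circle of n people,
    using the recursion derived above (base cases traced by hand)."""
    if n <= 2:
        return n
    m = n // 2
    if n % 2 == 0:                   # even: survivor of half-size game, doubled
        return 2 * f(m)
    return 2 * ((f(m) % m) + 1)      # odd: re-index through 4,6,...,n-1,2

print(f(8), f(20))  # 8 8
```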
CommonCrawl
The processing of dietary lipids can be divided into several sequential steps, including their emulsification, hydrolysis and micellization, before they are absorbed by the enterocytes. Emulsification of lipids starts in the stomach and is mediated by physical forces and favoured by the partial lipolysis of the dietary lipids due to the activity of gastric lipase. The process of lipid digestion continues in the duodenum, where pancreatic triacylglycerol lipase (PTL) releases 50 to 70% of dietary fatty acids. Bile salts at low concentrations stimulate PTL activity, but higher concentrations inhibit PTL activity. Pancreatic triacylglycerol lipase activity is regulated by colipase, which interacts with bile salts and PTL and can relieve bile salt-mediated PTL inhibition. Without colipase, PTL is unable to hydrolyse fatty acids from dietary triacylglycerols, resulting in fat malabsorption with severe consequences for the bioavailability of dietary lipids and fat-soluble vitamins. Furthermore, carboxyl ester lipase, a bile salt-stimulated pancreatic enzyme with broad substrate reactivity, is involved in lipid digestion. The products of lipolysis are removed from the water-oil interface by incorporation into mixed micelles that are formed spontaneously by the interaction of bile salts. Monoacylglycerols and phospholipids enhance the ability of bile salts to form mixed micelles. Formation of mixed micelles is necessary to move the non-polar lipids across the unstirred water layer adjacent to the mucosal cells, thereby facilitating absorption.
CommonCrawl
Farmer John is attempting to take a photograph of his herd of cows. From past experience, he knows this particular endeavor rarely ends well. His $N \times N$ grid of cows is described by an array of characters. Here, an 'R' means a cow facing right, and an 'L' means a cow facing left. Since the cows are packed together, Farmer John cannot walk up to an individual cow to make it turn around. All he can do is shout at any row or column of cows to turn around, causing L's to change to R's and R's to L's within the row or column in question. Farmer John can yell at as many rows or columns as he wants, even at the same row or column more than once. As expected, Farmer John observes that he is unable to make his cows all face one common direction. The best he can do is get all but one of the cows to face the same direction. Please determine the identity of such a cow. The first line contains $N$. The next $N$ lines describe rows $1 \ldots N$ in the grid of cows, each containing a string of length $N$. Print the row and column index of a cow such that if that cow were flipped, Farmer John could make all his cows face the same direction. If no such cow exists, print -1. If multiple such cows exist, print the one with the smallest row index, or if multiple such cows have the same smallest row index, print the one with the smallest column index. In the example above, the cow in row 1, column 1 (the upper-left corner) is the offending cow, since Farmer John can shout at row 2 and column 3 to make all other cows face left, with just this cow facing right.
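One way to see the structure (my own sketch, not the official solution): shouting toggles whole rows and columns, so all cows can be aligned exactly when the grid is an XOR combination of a row pattern and a column pattern, i.e. when $b_{ij} \oplus b_{i1} \oplus b_{1j} \oplus b_{11} = 0$ for every cell. A brute-force $O(N^4)$ sketch that flips each cow in turn and tests this condition (fine only for small $N$; the sample grid is reconstructed from the example described above):

```python
def solve(grid):
    n = len(grid)
    b = [[1 if ch == 'R' else 0 for ch in row] for row in grid]

    def alignable():
        # all cows can face one way iff b is an XOR of row/column patterns
        return all(b[i][j] ^ b[i][0] ^ b[0][j] ^ b[0][0] == 0
                   for i in range(n) for j in range(n))

    for i in range(n):          # smallest row index first,
        for j in range(n):      # then smallest column index
            b[i][j] ^= 1        # tentatively flip this cow
            if alignable():
                return (i + 1, j + 1)
            b[i][j] ^= 1        # undo the flip
    return -1

print(solve(["RLR", "RRL", "LLR"]))  # (1, 1)
```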
CommonCrawl
Abstract: In this talk, we will investigate a system of the 2D magnetohydrodynamic (MHD) equations with the kinematic dissipation given by the fractional operator $(-\Delta)^\alpha$ and the magnetic diffusion by a partial Laplacian. We will show that this system with any $\alpha>0$ always possesses a unique global smooth solution when the initial data is sufficiently smooth. In addition, we make a detailed study of the large-time behavior of these smooth solutions and obtain optimal large-time decay rates.
CommonCrawl
Abstract: QCD evolution equations can be recast in terms of parton branching processes. We present a new numerical solution of the equations. We show that this parton-branching solution can be applied to analyze infrared contributions to evolution, order-by-order in the strong coupling $\alpha_s$, as a function of the soft-gluon resolution scale parameter. We examine the cases of transverse-momentum ordering and angular ordering. We illustrate that this approach can be used to treat distributions which depend both on longitudinal and on transverse momenta.
CommonCrawl
Abstract: The Carroll group was originally introduced by Lévy-Leblond by considering the limit of the Poincaré group as $c\to0$. In this paper an alternative definition is proposed, based on the geometric properties of a non-Minkowskian, non-Galilean, but nevertheless boost-invariant space-time structure. A "duality" with the Galilean limit $c\to\infty$ is established. Our theory is illustrated by Carrollian electromagnetism.
CommonCrawl
The notion of an $\alpha$-topological vector space is introduced and several properties are studied. A complete comparison between this class and the class of topological vector spaces is presented. In particular, $\alpha$-topological vector spaces are shown to be independent of topological vector spaces. Finally, a sufficient condition for $\alpha$-regularity of $\alpha$-topological vector spaces is given.
CommonCrawl
What is nearest neighbors search? In the world of deep learning, we often use neural networks to learn representations of objects as vectors. We can then use these vector representations for a myriad of useful tasks. All of these vectors were extracted from a ResNet50 model. Notice how the values in the query vector are quite similar to the vector in the top left of known identities. The process of finding vectors that are close to our query is known as nearest neighbors search. A naive implementation of nearest neighbors search is to simply calculate the distance between the query vector and every vector in our collection (commonly referred to as the reference set). However, calculating these distances in a brute force manner quickly becomes infeasible as your reference set grows to millions of objects. Imagine if Facebook had to compare each face in a new photo against all of its users every time it suggested who to tag; this would be computationally infeasible! A class of methods known as approximate nearest neighbors search offers a solution to our scaling dilemma by partitioning the vector space in a clever way such that we only need to examine a small subset of the overall reference set. In this blog post, I'll cover a couple of techniques used for approximate nearest neighbors search. This post will not cover approximate nearest neighbors methods exhaustively, but hopefully you'll be able to understand how people generally approach this problem and how to apply these techniques in your own work. The first approximate nearest neighbors method we'll cover is a tree-based approach. K-dimensional trees generalize the concept of a binary search tree into multiple dimensions. A toy 2-dimensional example is visualized below. At the top level, we select a random dimension (out of the two possible dimensions, $x_0$ and $x_1$) and calculate the median. Then, we follow the same procedure of picking a dimension and calculating the median for each path independently. This process is repeated until some stopping criterion is satisfied; each leaf node in the tree contains a subset of vectors from our reference set. We can view how the two-dimensional vectors are partitioned at each level of the k-d tree in the figure below. Take a minute to verify that this visualization matches what is described in the tree above. At the top level, we look at the first dimension of the query vector and ask whether or not its value is greater than or equal to 1. Since 4 is greater than 1, we walk down the "yes" path to the next level down. We can safely ignore any of the nodes that follow the first "no" path. Now we look at the second dimension of the vector and ask whether its value is greater than or equal to 0. Since -2 is less than 0, we now walk down the "no" path. Notice again how the area of interest in our overall vector space continues to shrink. Finally, once we reach the bottom of the tree, we are left with a collection of vectors. Thankfully, this is a small subset relative to the overall size of the reference set, so calculating the distance between the query vector and each vector in this subset is computationally feasible. K-d trees are popular due to their simplicity; however, this technique struggles to perform well when dealing with high-dimensional data.
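To make the construction and query walk concrete, here is a compact sketch (illustrative only, not a production index; the leaf size and random data are arbitrary choices of mine). Note that descending to just one leaf is exactly what makes the search approximate:

```python
import numpy as np

def build(points, depth=0, leaf_size=16):
    if len(points) <= leaf_size:
        return points                        # leaf: a small bucket of vectors
    dim = depth % points.shape[1]            # cycle through the dimensions
    median = np.median(points[:, dim])
    left = points[points[:, dim] < median]
    right = points[points[:, dim] >= median]
    if len(left) == 0 or len(right) == 0:    # degenerate split: stop early
        return points
    return (dim, median,
            build(left, depth + 1, leaf_size),
            build(right, depth + 1, leaf_size))

def query(tree, q, k=5):
    node = tree
    while isinstance(node, tuple):           # walk down to a single leaf --
        dim, median, left, right = node      # this is the approximate part
        node = right if q[dim] >= median else left
    dists = np.linalg.norm(node - q, axis=1) # brute force within the leaf
    return node[np.argsort(dists)[:k]]

rng = np.random.default_rng(0)
tree = build(rng.normal(size=(10_000, 2)))
print(query(tree, np.array([0.5, -0.5])))
```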
Further notice how we only returned vectors which are found in the same cell as the query point. In this example, the query vector happened to fall in the middle of a cell, but you could imagine a scenario where the query vector lies near the edge of a cell and we miss out on vectors which lie just outside of the cell. Another approach to the approximate nearest neighbors problem is to collapse our reference set into a smaller collection of representative vectors. We can find these "representative" vectors by simply running the k-means algorithm on our data. In the literature, this collection of "representative" vectors is commonly referred to as the codebook. The right figure displays a Voronoi diagram, which essentially partitions the space according to the set of points for which a given centroid is closest. We'll then "map" all of our data onto these centroids. By doing this, we can represent our reference set of a couple hundred vectors with only 7 representative centroids. This greatly reduces the number of distance computations we need to perform (only 7!) when making a nearest neighbors query. We can then maintain an inverted list to keep track of all of the original objects in relation to which centroid represents the quantized vector. You can optionally retrieve the full vectors for all of the ids maintained in the inverted list for a given centroid, calculating the true distances between each vector and our query. This is a process known as re-ranking and can improve your query performance. Similar to before, let's now look at how we can use this method to perform a query. For a given query vector, we'll calculate the distances between the query vector and each centroid in order to find the closest centroid. We can then look up the centroid in our inverted list in order to find all of the nearest vectors. Unfortunately, in order to get good performance using quantization, you typically need to use a very large number of centroids; this impedes our original goal of alleviating the computational burden of calculating too many distances. Product quantization addresses this problem by first subdividing the original vectors into subcomponents and then quantizing (i.e., running k-means on) each subcomponent separately. A single vector is now represented by a collection of centroids, one for each subcomponent. To illustrate this, I've provided two examples. In the 8D case, you can see how our vector is divided into subcomponents and each subcomponent is represented by some centroid value. However, the 2D example shows us the benefit of this approach. In this case, we can only split our 2D vector into a maximum of two components. We'll then quantize each dimension separately, squashing all of the data onto the horizontal axis and running k-means, then squashing all of the data onto the vertical axis and running k-means again. We find 3 centroids for each subcomponent, with a total of 6 centroids. However, the total set of all possible quantized states for the overall vector is the Cartesian product of the subcomponent centroids. In other words, if we divide our vector into $m$ subcomponents and find $k$ centroids, we can represent $k^m$ possible quantizations using only $km$ vectors! The chart below shows how many centroids are needed in order to get 90% of the top 5 search results correct for an approximate nearest neighbors query. Notice how using product quantization ($m>1$) vastly reduces the number of centroids needed to represent our data.
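Here is a minimal product-quantization sketch of the $m$-subcomponent idea above (my own illustration, not code from any particular library; it leans on SciPy's kmeans2 for brevity, and the sizes $m$ and $k$ are arbitrary). Queries use per-subcomponent lookup tables, so scoring each database vector costs only $m$ table lookups:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pq_train(X, m=4, k=64):
    subs = np.split(X, m, axis=1)                         # m subcomponents
    return [kmeans2(s, k, minit='++')[0] for s in subs]   # m codebooks

def pq_encode(X, codebooks):
    codes = []
    for s, C in zip(np.split(X, len(codebooks), axis=1), codebooks):
        d = ((s[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes.append(d.argmin(1))                         # nearest centroid id
    return np.stack(codes, axis=1)                        # (n, m) integer codes

def pq_search(q, codes, codebooks, topk=5):
    qs = np.split(q, len(codebooks))
    # distance from each query subcomponent to every centroid: m small tables
    tables = [((C - qq) ** 2).sum(1) for qq, C in zip(qs, codebooks)]
    dists = sum(t[codes[:, j]] for j, t in enumerate(tables))
    return np.argsort(dists)[:topk]

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32)).astype(np.float32)
books = pq_train(X)
codes = pq_encode(X, books)
print(pq_search(X[0], codes, books))   # X[0] should rank at or near the top
```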
One of the reasons why I love this idea so much is that we've effectively turned the curse of dimensionality into something highly beneficial! Product quantization alone works great when our data is distributed relatively evenly across the vector space. However, in reality our data is usually multi-modal. To handle this, a common technique involves first training a coarse quantizer to roughly "slice" up the vector space, and then running product quantization on each individual coarse cell. Below, I've visualized the data that falls within a single coarse cell. We'll use product quantization to find a set of centroids which describe this local subset of data, and then repeat for each coarse cell. Commonly, people encode the vector residuals (the difference between the original vector and the closest coarse centroid), since the residuals tend to have smaller magnitudes and thus lead to less lossy compression when running product quantization. In simple terms, we treat each coarse centroid as a local origin and run product quantization on the data with respect to the local origin rather than the global origin. Pro-tip: If you want to scale to really large datasets, you can use product quantization as both the coarse quantizer and the fine-grained quantizer within each coarse cell. See this paper for the details. The ideal goal for quantization is to develop a codebook which is (1) concise and (2) highly representative of our data. More specifically, we'd like all of the vectors in our codebook to represent dense regions of our data in vector space. A centroid in a low-density area of our data is inefficient at representing data and introduces high distortion error for any vectors which fall in its Voronoi cell. One potential way we can attempt to avoid these inefficient centroids is to add an alignment step to our product quantization. This allows our product quantizers to better cover the local data for each coarse Voronoi cell. We can do this by applying a transformation to our data such that we minimize our quantization distortion error. One simple way to minimize this quantization distortion error is to apply PCA in order to mean-center the data and rotate it such that the axes capture most of the variance within the data. Recall my earlier example where we ran product quantization on a toy 2D dataset. In doing so, we effectively squashed all of the data onto the horizontal axis and ran k-means and then repeated this for the vertical axis. By rotating the data such that the axes capture most of the variance, we can more effectively cover our data when using product quantization. This technique is known as locally optimized product quantization, since we're manipulating the local data within each coarse Voronoi cell in order to optimize the product quantization performance. The authors who introduced this technique have a great illustrative example of how this technique can better fit a given set of vectors. The authors who introduced product quantization noted that the technique works best when the vector subcomponents had similar variance. A nice side effect of doing PCA alignment is that during the process we get a matrix of eigenvalues which describe the variance of each principal component. We can use this to our advantage by allocating principal components into buckets of equal variance. I didn't cover binary codes in this post - but I should have! I may come back and edit the post to include more information soon. Until then, enjoy this paper.
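A sketch of just the alignment step described above (my simplification; a full locally optimized pipeline would do this per coarse cell and feed the rotated residuals into product quantization):

```python
# PCA alignment for one coarse cell: mean-center and rotate so the axes
# capture the variance; the eigenvalues can then be used to allocate
# dimensions into subcomponents of roughly equal variance.
import numpy as np

def pca_align(X):
    mu = X.mean(0)
    vals, vecs = np.linalg.eigh(np.cov((X - mu).T))
    order = np.argsort(vals)[::-1]           # sort axes by variance
    return mu, vecs[:, order], vals[order]

rng = np.random.default_rng(1)
cell = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))  # correlated data
mu, R, variances = pca_align(cell)
aligned = (cell - mu) @ R    # rotated copy, ready for product quantization
print(variances.round(2))    # guide for equal-variance bucket allocation
```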
CommonCrawl
I have a not-secret love affair with blogging the curve complex: I (intro), II (dead ends), III (connected). I'm surprised I didn't blog the surprising and cute and wonderful proof that the curve complex is hyperbolic, which came out two years ago. Maybe I'll do that next math post (but I have a large backlog of math I want to blog). Anyways, I was idly scrolling through arXiv (where mathematicians put their papers before they're published) and saw a new paper by the two who did the dead ends paper, plus a new co-author. So I thought I'd tell you about it! If you don't remember or know what the curve complex is, you'd better check out that blog post I (intro) above (it is also here in case you didn't want to reread the last paragraph). Remember that we look at curves (loops) up to homotopy, or wriggling. In this post we'll also talk about arcs, which have two different endpoints (so they're lines instead of loops), still defined up to homotopy. The main things we'll be looking at in this post are geodesics, which are the shortest paths between two points in a space. There might be more than one geodesic between two points, like in the taxicab metric. In fact, in the curve complex there are infinitely many geodesics between any two points. Infinity is sort of a lot, so we'll be considering specific types of geodesics instead. First we need a little bit more vocabulary. Let's say I give you an arc and a simple (doesn't self-intersect) closed curve (loop) in a surface, and you wriggle them around up to homotopy. If you give me a drawing of the two of them, I'll tell you that they're in minimal position if the drawings you give me intersect the least number of times of all such drawings. All three toruses have the same red and green homotopy classes of curves, but only the top right is in minimal position – you can homotope the red curve in the other two pictures to decrease the number of times red and green intersect. I just couldn't make a picture w/out a cute blushing square. If you have three curves a, b, c all in minimal position with each other, then a reference arc for a, b, c is an arc which is in minimal position with b, and whose interior is disjoint from both a and c. Now if you give me a series of curves on a surface, I can hop over to the curve complex of that surface and see that series as a path. If the path $v_0,v_1,\ldots,v_n$ is a geodesic, then we say it is initially efficient if any choice of reference arc for $v_0, v_1, v_n$ intersects $v_0$ at most $n-1$ times. The geodesic is an efficient geodesic if all of the truncated geodesics $v_k, v_{k+1}, \ldots, v_n$ (for $0 \leq k \leq n-3$) are initially efficient. In this paper, Birman, Margalit, and Menasco prove that efficient geodesics always exist if $v_0$ and $v_n$ have distance at least three. Note that there are a bunch of choices for reference arcs, even in the picture above, and at first glance that "bunch" looks like "infinitely many," which sort of puts us back where we started (infinity is a lot). Turns out that there's only finitely many reference arcs we have to consider, as long as the two end curves have distance at least three. Remember, if you've got two curves that are distance three from each other, they have to fill the surface: that means if you cut along both of them, you'll end up with a big pile of topological disks. In this case, they take this pile and make them actual polygons with straight sides labeled by the cut curves.
A bit more topology shows that you only end up with finitely many reference arcs that matter (essentially, there's only finitely many interesting polygons, and then there are only so many ways to draw lines across a polygon). So the main theorem of the paper is that efficient geodesics exist. The reason why we'd care about them is the second part of the theorem: there's an explicit bound on the number of curves that can appear as the first vertex in such a geodesic, which means that there are finitely many efficient geodesics between any two vertices where they exist. Here's the link to the paper if you feel like checking it out. I DID NOT MAKE THIS PICTURE IT IS FROM BIRMAN, MARGALIT, MENASCO. But look at how cool it is!!! Look at this picture! The red curve and blue curve are both vertices in the curve complex, and they have distance 4 in the curve complex, and here they are on a surface! So pretty! If you feel like wikipedia-ing, check out one of the authors on this paper. Birman got her Ph.D. when she was 41 and is still active today (she's 88 and a badass and I want to be as cool as she is when I grow up). Great post! Does the minimality condition on reference arcs also require that you can't reduce the number of crossings between green and orange by sliding the endpoints of the green arc around? For example, in your picture, there's a triangle between green, yellow and orange. If I take that triangle literally (which I'm probably not supposed to do) then sliding the green arc to the right along yellow will eliminate that point of intersection. Would that make the original green not a reference arc? Thanks, Jesse! My picture is misleading because the three original curves aren't pairwise in minimal position, which means that green isn't even a candidate to be a reference arc. But ignoring that to answer your question, I looked up Leasure's thesis (https://www.lib.utexas.edu/etd/d/2002/leasurejp46295/leasurejp46295.pdf) where he defines reference arcs, and I think that endpoints are fixed (otherwise you can move arcs to be disjoint from whatever you want). In practice, since the interior of the arc is disjoint from the filling 1st and 3rd curves, any arc has to live in one of those polygons with sides labeled by 1st and 3rd. BMM consider (the finitely many) arcs with endpoints at the midpoints of those sides labeled by 1st, and say that any other potential reference arc encodes the same (intersection) info as these finitely many ones do.
CommonCrawl
A semi-group in which each monogenic sub-semi-group (cf. Monogenic semi-group) is finite (in other words, each element has finite order). Every periodic semi-group has idempotents. The set $K_e$ of all elements in a periodic semi-group some power (depending on the element) of which is equal to a given idempotent $e$ is called the torsion class corresponding to that idempotent. The set $G_e$ of all elements from $K_e$ for which $e$ serves as the unit is an $\mathcal H$-class (see Green equivalence relations). It is the largest subgroup in $K_e$ and an ideal in the sub-semi-group $\langle K_e\rangle$ generated by $K_e$; therefore, $\langle K_e\rangle$ is a homogroup (see Minimal ideal). A periodic semi-group containing a unique idempotent is called unipotent. The unipotency of a periodic semi-group $S$ is equivalent to each of the following conditions: $S$ is an ideal extension of a group by a nil semi-group, or $S$ is a subdirect product of a group and a nil semi-group. The decomposition of a periodic semi-group into torsion classes plays a decisive part in the study of many aspects of periodic semi-groups. An arbitrary torsion class is not necessarily a sub-semi-group: A minimal counterexample is the five-element Brandt semi-group $B_2$, which is isomorphic to a Rees semi-group of matrix type over the unit group, with the identity sandwich matrix of order two. In a periodic semi-group $S$, all torsion classes are sub-semi-groups if and only if $S$ does not contain sub-semi-groups that are ideal extensions of a unipotent semi-group by $B_2$; in this case, the decomposition of $S$ into torsion classes is not necessarily a band of semi-groups. Various conditions are known (including necessary and sufficient ones) under which a periodic semi-group is a band of torsion classes; this clearly occurs for commutative semi-groups, and it is true for periodic semi-groups having two idempotents. The Green relations $\mathcal D$ and $\mathcal J$ coincide in any periodic semi-group; a $0$-simple periodic semi-group is completely $0$-simple. The following conditions are equivalent for a periodic semi-group $S$: 1) $S$ is an Archimedean semi-group; 2) all idempotents in $S$ are pairwise incomparable with respect to the natural partial order (see Idempotent); and 3) $S$ is an ideal extension of a completely-simple semi-group by a nil semi-group. Many conditions equivalent to the fact that a periodic semi-group $S$ decomposes into a band (and then also into a semi-lattice) of Archimedean semi-groups are known; they include the following: a) for any $a\in S$ and for any idempotent $e\in S$, if $e\in SaS$, then $e\in Sa^2S$; b) in $S$, each regular $\mathcal D$-class is a sub-semi-group; and c) each regular element of $S$ is a group element. Let $S$ be an infinite periodic semi-group and let $E_S$ be the set of all its idempotents. If $E_S$ is finite, $S$ contains an infinite unipotent sub-semi-group, while if $E_S$ is infinite, $S$ contains an infinite sub-semi-group that is a nilpotent semi-group or a semi-group of idempotents (cf. Idempotents, semi-group of). An important subclass of periodic semi-groups is constituted by the locally finite semi-groups (cf. Locally finite semi-group). A more extensive class is constituted by the quasi-periodic semi-groups ($S$ is called quasi-periodic if some power of each of its elements lies in a subgroup $G\subseteq S$). Many properties of periodic semi-groups can be transferred to quasi-periodic ones. Quasi-periodic semi-groups are also called epigroups.
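For concreteness, here is a standard realization of $B_2$ (my addition, not from the article text): identify $B_2$ with the set of $2\times2$ matrix units together with zero, $B_2 = \{e_{11}, e_{12}, e_{21}, e_{22}, 0\}$, where $e_{ij}$ has a single nonzero entry $1$ in position $(i,j)$ and multiplication is given by $e_{ij}e_{kl} = e_{il}$ if $j = k$ and $e_{ij}e_{kl} = 0$ otherwise. This semi-group is periodic: $e_{11}$ and $e_{22}$ are idempotents, while $e_{12}^2 = e_{21}^2 = 0$, so every monogenic sub-semi-group is finite.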
CommonCrawl
The use of futures provides a flexible way to express parallelism and can generate arbitrary dependences among parallel subcomputations. The additional flexibility that futures provide comes with a cost, however. When scheduled using classic work stealing, a program with futures, compared to a program that uses only fork-join parallelism, can incur a much higher number of "deviations," a metric for evaluating the performance of parallel executions. All prior works assume a parsimonious work-stealing scheduler, where a worker thread (a surrogate of a processor) steals work only when its local deque becomes empty. In this work, we investigate an alternative scheduling approach, called ProWS, where the workers perform proactive work stealing when handling future operations. We show that ProWS, for programs that use futures, can provide provably efficient execution time and equal or better bounds on the number of deviations compared to classic parsimonious work stealing. Given a computation with $T_1$ work and $T_\infty$ span, ProWS executes the computation on $P$ processors in expected time $O(T_1 / P + T_\infty \lg P)$, with an additional $\lg P$ overhead on the span term compared to the parsimonious variant. For structured use of futures, where each future is single touch with no race on the future handle, the algorithm incurs $O(P T_\infty^2)$ deviations, matching that of the parsimonious variant. For general use of futures, the algorithm incurs $O(m_k T_\infty + P T_\infty \lg P)$ deviations, where $m_k$ is the maximum number of future touches that are logically parallel. Compared to the bound for the parsimonious variant, $O(k T_\infty + P T_\infty)$, with $k$ being the total number of touches in the entire computation, this bound is better assuming $m_k = \Omega(P \lg P)$ and is smaller than $k$, which holds true for all the benchmarks we examined.
CommonCrawl
While maximum likelihood exploratory factor analysis (EFA) provides a statistical test that $k$ dimensions are sufficient to account for the observed correlations among a set of variables, determining the required number of factors in least-squares based EFA has essentially relied on heuristic procedures. Two methods, Revised Parallel Analysis (R-PA) and Comparison Data (CD), were recently proposed that generate surrogate data based on an increasing number of principal axis factors in order to compare their sequence of eigenvalues with that from the data. The latter should be unremarkable among the former if enough dimensions are included. While CD looks for a balance between efficiency and parsimony, R-PA strictly tests whether $k$ dimensions are sufficient by ranking the next eigenvalue, i.e., the one at rank $k+1$, of the actual data among those from the surrogate data. Importing two features of CD into R-PA defines four variants that are here collectively termed Next Eigenvalue Sufficiency Tests (NESTs). Simulations implementing 144 sets of parameters, including correlated factors and presence of a doublet factor, show that all four NESTs largely outperform CD, the standard Parallel Analysis, the Minimum Average Partial method and even the maximum likelihood approach, in identifying the correct number of common factors. The recommended, most successful NEST variant is also the only one that never overestimates the correct number of dimensions beyond its nominal $\alpha$ level. This variant is made available as R and MATLAB code as well as a complement incorporated in a Microsoft Excel file.
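The core of such a next-eigenvalue test is easy to sketch. Below is a simplified Python illustration (not the authors' implementation; the names and the surrogate-generation shortcut, which uses a truncated eigendecomposition rather than proper principal-axis factoring, are assumptions made for brevity):

```python
import numpy as np

def next_eigenvalue_test(data, k, n_surrogates=500, alpha=0.05, seed=None):
    """Test whether k factors suffice: rank the (k+1)-th eigenvalue of the
    observed correlation matrix among those of surrogate datasets generated
    under a k-factor model."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    observed = np.sort(np.linalg.eigvalsh(R))[::-1][k]  # eigenvalue at rank k+1

    # Rank-k loading matrix from the top-k eigenpairs (illustrative shortcut).
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]
    L = vecs[:, order[:k]] * np.sqrt(np.clip(vals[order[:k]], 0, None))
    uniq = np.clip(1.0 - (L**2).sum(axis=1), 1e-6, None)  # uniquenesses

    surrogate = np.empty(n_surrogates)
    for s in range(n_surrogates):
        F = rng.standard_normal((n, k))                   # common factors
        E = rng.standard_normal((n, p)) * np.sqrt(uniq)   # unique parts
        Rs = np.corrcoef(F @ L.T + E, rowvar=False)
        surrogate[s] = np.sort(np.linalg.eigvalsh(Rs))[::-1][k]

    # k is judged sufficient if the observed next eigenvalue is not extreme
    # relative to the surrogate distribution.
    p_value = (np.sum(surrogate >= observed) + 1) / (n_surrogates + 1)
    return p_value > alpha
```

Running this for k = 0, 1, 2, ... and stopping at the first k for which the test passes then gives the estimated number of common factors.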
CommonCrawl
@Xiao-Gang Wen Thank you very much. We see the RVB state does not refer to one state; it can refer to many different states (with different topological orders). The RVB states with different topological orders can have different properties. There is a RVB state (with a $Z_2$ topological order) where the spinons are fermions while the holons are bosons. There is another RVB state (with a different $Z_2$ topological order) where spinons are bosons while holons are fermions. There is a third RVB state (chiral spin liquid) where spinons and holons are both semions obeying fractional statistics. @Xiao-Gang Wen Thank you, Prof. Wen. What about a chiral spin liquid with a $Z_2$ topological order? Chiral spin liquid can also be described by the slave-fermion formalism, if you put the fermions into a mean-field ansatz with a nontrivial topological band structure (i.e., a nonzero Chern number). @Meng Yes, I agree with you. Do you know of some existing studies on the $Z_2$ chiral spin liquid (a $Z_2$ spin liquid with broken time-reversal symmetry)? Thanks a lot! There is a large body of literature on chiral spin liquid. A recent VMC study is http://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.041124, and you can find in the references of the paper many recent (exact) numerical studies of chiral spin liquid in Heisenberg models on the kagome lattice. As to my first comment, I think I can answer it now: the TR symmetry is NOT essential to the topological order for a $Z_2$ SL, since the topological degeneracy of a $Z_2$ SL on a torus is always 4, while the total ground-state degeneracy (TGSD) is TGSD$=2\times 4$ for the spontaneously TR-breaking $Z_2$ SL ($Z_2$ CSL) and TGSD$=4$ for the TR-symmetric $Z_2$ SL.
CommonCrawl
Sorry if this topic has already been treated. It's my first time here. I've been searching for a solution for five days but can't find it. My problem: I am trying to create a Facebook application that takes your profile picture and inserts it into a PNG picture. Everything is working fine with some profile pictures, and with others I'm getting a black screen where the profile picture is supposed to be inserted. But the variables are OK. The file paths are OK. I tried with many different profile pictures. With original photos, it's OK. With some cropped pics, it's OK also. And some others won't work. Square cropped pics won't work. //this is the core function. Have you turned error reporting on and seen what it says? From a quick glance at your script, it appears that if $x_dim < 150 or $y_dim < 170, then $final_x and $final_y are never set. This could cause problems. Also, make sure you're receiving an image in the function. Is $_POST['image'] actually an image? Does getimagesize() fail on getimagesize($url)? Thank you already for your help. You're right about the problem! But my $_POST['image'] is returning my image and getimagesize($url) is also returning a size. That's OK. Now I already know where I have to work to get it fixed. But if you have any advice, it would be more than welcome! Anyway, thank you already very much! Well, I think I'm getting it working right!! THANK YOU SOOOOO MUCH!!! Put $mult inside of the else too. Thank you again. Everything seems to be OK now.
CommonCrawl
Bolsinov A. V., Borisov A. V., Mamaev I. S. In the paper we consider a system of a ball that rolls without slipping on a plane. The ball is assumed to be inhomogeneous and its center of mass does not necessarily coincide with its geometric center. We have proved that the governing equations can be recast into a system of six ODEs that admits four integrals of motion. Thus, the phase space of the system is foliated by invariant 2-tori; moreover, this foliation is equivalent to the Liouville foliation encountered in the Euler case of rigid body dynamics. However, the system cannot be solved in terms of quadratures because there is no invariant measure, which we proved by finding limit cycles. The paper is devoted to the bifurcation analysis and the Conley index in Hamiltonian dynamical systems. We discuss the phenomenon of appearance (disappearance) of equilibrium points under the change of the Morse index of a critical point of a Hamiltonian. As an application of these techniques we find new relative equilibria in the problem of the motion of three point vortices of equal intensity in a circular domain. The problem of Hamiltonization of nonholonomic systems, both integrable and non-integrable, is considered. This question is important in the qualitative analysis of such systems and it enables one to determine possible dynamical effects. The first part of the paper is devoted to representing integrable systems in a conformally Hamiltonian form. In the second part, the existence of a conformally Hamiltonian representation in a neighborhood of a periodic solution is proved for an arbitrary (including integrable) system preserving an invariant measure. Throughout the paper, general constructions are illustrated by examples in nonholonomic mechanics. Bolsinov A. V., Oshemkov A. A. A Hamiltonian system on a Poisson manifold $M$ is called integrable if it possesses sufficiently many commuting first integrals $f_1, \ldots, f_s$ which are functionally independent on $M$ almost everywhere. We study the structure of the singular set $K$ where the differentials $df_1, \ldots, df_s$ become linearly dependent and show that in the case of bi-Hamiltonian systems this structure is closely related to the properties of the corresponding pencil of compatible Poisson brackets. The main goal of the paper is to illustrate this relationship and to show that the bi-Hamiltonian approach can be extremely effective in the study of singularities of integrable systems, especially in the case of many degrees of freedom, when using other methods leads to serious computational problems. Since in many examples the underlying bi-Hamiltonian structure has a natural algebraic interpretation, the technology developed in this paper allows one to reformulate analytic and topological questions related to the dynamics of a given system into pure algebraic language, which leads to simple and natural answers. The work introduces a naive description of the dynamics of point vortices on a plane in terms of variables of distances and areas which generate a Lie–Poisson structure. Using this approach a qualitative description of the dynamics of point vortices on a plane and a sphere is obtained in the works [14,15]. In this paper we consider more formal constructions of the general problem of $n$ vortices on a plane and a sphere. The developed methods of algebraization are also applied to the classical problem of the reduction in the three-body problem. Bolsinov A. V., Dullin H. R.
Using two classical integrable problems, we demonstrate some methods of a new theory of orbital classification for integrable Hamiltonian systems with two degrees of freedom. We show that the Liouville foliations (i.e., decompositions of the phase space into Liouville tori) of the two systems under consideration are diffeomorphic. Moreover, these systems are orbitally topologically equivalent, but this equivalence cannot be made smooth.
CommonCrawl
I am reading the Rational Canonical Form from the Abstract Algebra book by Dummit and Foote. I have some doubt about the Smith normal form. The Smith normal form says that for any $n\times n$ square matrix $A$ over an arbitrary field $F$, $xI-A$ is equivalent to a diagonal matrix in $F[x]$ whose diagonal elements are either $1$ or the invariant factors of the pair $(F^n,A)$. But after looking at other references it seems to me that $xI-A$ is not only equivalent but similar to such a diagonal matrix in $F[x]$. I can't understand how they are similar. I need some help to understand the similarity. And I also want to know if there are references for the canonical form in the modern approach, by which I mean using the results on modules over a PID. I don't want to set up all the machinery related to this problem but only give some basic ideas. Let $A\in M_n(F)$, where $F$ is a field, let $m$ be its minimal polynomial and $\chi_A$ be its characteristic polynomial. If $p\in F[x]$, then $C_p$ denotes the companion matrix of $p$. Note that $F^n$ can be viewed as a finitely generated module over the PID $F[x]$ via $p(x)\cdot v = p(A)v$. Then there is the so-called "structure theorem": $F^n \cong F[x]/(p_1) \oplus \cdots \oplus F[x]/(p_k)$ with $p_1 \mid p_2 \mid \cdots \mid p_k$, and (*) the polynomials $(p_i)_i$ uniquely define the similarity class of $A$ over $F$. Roughly speaking, Frobenius and Smith say pretty much the same thing: $xI-A, xI-B\in M_n(F[x])$ have the same Smith normal form iff $A, B$ are similar over $F$. Note that $xI-A$ is absolutely not similar to its Smith normal form (in general $xI-A$ is not diagonalizable).
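As a concrete check of the equivalence (not similarity) statement, take the nilpotent Jordan block. Equivalence means $U(x)(xI - A)V(x) = D(x)$ with $U, V$ unimodular over $F[x]$; it is not a conjugation $V^{-1}(xI-A)V$:

\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\qquad
xI - A = \begin{pmatrix} x & -1 \\ 0 & x \end{pmatrix}
\;\sim\;
\begin{pmatrix} 1 & 0 \\ 0 & x^2 \end{pmatrix}.
\]

The single nontrivial invariant factor $x^2$ is here both the minimal and the characteristic polynomial of $A$, consistent with the structure theorem; note also that $\det \operatorname{diag}(1, x^2) = x^2 = \det(xI-A)$ up to a unit, as it must be, since unimodular transformations preserve the determinant up to units.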
CommonCrawl
Two parties wish to carry out certain distributed computational tasks, and they are given access to a source of correlated random bits. It allows the parties to act in a correlated manner, which can be quite useful. But what happens if the shared randomness is not perfect? We study shared randomness in the context of multi-party number-in-hand communication protocols in the simultaneous message passing model. We show that with three or more players, shared randomness exhibits new interesting properties that have no direct analogues in the two-party case. We prove a Chernoff-like large deviation bound on the sum of non-independent random variables that have the following dependence structure: the variables $Y_1,\ldots,Y_r$ are arbitrary Boolean functions of independent random variables $X_1,\ldots,X_m$, modulo a restriction that every $X_i$ influences at most $k$ of the variables $Y_1,\ldots,Y_r$. We introduce a new type of cryptographic primitive that we call hiding fingerprinting. No classical fingerprinting scheme is hiding. We construct quantum hiding fingerprinting schemes and argue their optimality. The problem was open for $k\geq3$. We demonstrate a two-player communication problem that can be solved in the one-way quantum model by a 0-error protocol of cost $O(\log n)$ but requires exponentially more communication in the classical interactive (two-way) model.
CommonCrawl
A question that's been on my mind for a while is whether any precise statement to the effect of "Heegaard Floer homology is a TQFT," for some reasonable definition of TQFT, can be made. Of course, a lot of effort is being spent right now on HF as an extended TQFT (e.g. the bordered theory of Lipshitz/Ozsvath/Thurston and, more recently, the cornered theory of Douglas/Lipshitz/Manolescu). But right now I'm just wondering about the 3+1 structure. The issue in 3+1 dimensions (leaving aside the mixed invariants and how to derive them from a TQFT framework) is that only cobordisms between connected 3-manifolds induce maps on HF. This is, in some sense, a fundamental feature of the theory, since the induced maps for closed 4-manifolds are zero. This was discussed in the MO question Seiberg-Witten theory on 4-manifolds with boundary. So, what if one were to make this feature into a definition? "Some variant of TQFT" := a functor which only allows these cobordisms with connected inputs and outputs? Does this correspond to some definition that's already out there? Is it a reasonable thing to consider in the framework of, e.g., Lurie's classification of fully extended TQFTs? Or is there some other definition which could be used instead, more amenable to this framework? I'm putting a "reference-request" tag on this question, because answering it as stated probably would consist of pointing out a relevant paper or two, but I'd be interested more generally in anything that continues the discussion from the MO question I linked above. Katrin Wehrheim has this issue too for 2+1 dimensions; she's referred to it as "connected TFT" (at least in private communication). She and Chris Woodward are currently working on it (using Lagrangian correspondences and Cerf theory), and she has posted on her website the preprint: Floer Field Theory. Look at Definition 2.2.1. It is related to her other notes on the Symplectic 2-Category. In particular, because the surfaces are required to be connected, you don't have the product axiom. Morphisms are assigned to the cobordisms through the Cerf relations. This is spelled out in their other note (available on her website): Connected Cerf Theory.
CommonCrawl
When downsampling an image by an integer factor $n$, the obvious method is to set the pixels of the output image to the average of the corresponding $n \times n$ blocks in the input image. Is it true that there is a better method (and if so, where does the above method fail, although it seems "obviously" correct)? I do not know a lot about signal processing; this question just interests me. Downsampling an image reduces the number of samples that can represent the signal. In terms of the frequency domain, when a signal is downsampled, the high-frequency portion of the signal will be aliased with the low-frequency portion. When applied to image processing, the desired outcome is to preserve only the low-frequency portion. In order to do this, the original image needs to be preprocessed (alias-filtered) to remove the high-frequency portion so that aliasing will not occur. The optimal digital filter to remove the high-frequency portion (with the sharpest cutoff) is the sinc function. The reason is that the sinc function's frequency-domain representation is a nearly constant 1 over the entire low-frequency region, and nearly constant 0 over the entire high-frequency region. The impulse response of the sinc filter is infinitely long. The Lanczos filter is a modified sinc filter which attenuates the sinc coefficients and truncates them once the values drop to insignificance. However, being optimal in the frequency domain does not imply being optimal to human eyes. There are upsampling and downsampling methods that do not obey linear transformations but produce better results than linear ones. With regard to the statement about $n \times n$, it is important to keep in mind that during image sampling, the choice of coordinate correspondence between the high-resolution signal and the low-resolution signal is not arbitrary, nor is it sufficient to align them to the same origin (0) on the real or discrete number line. Upsampling an image containing arbitrary random values by an integer factor, then downsampling by the same integer factor, should result in the same image with minimal numerical change. Upsampling/downsampling an image consisting of just one uniform value, followed by the opposite operation, should result in an image consisting of the same value uniformly, with minimal numerical deviations. Repeatedly applying pairs of upsampling/downsampling should minimize the shift in image content as much as possible. If you want to improve on this, you need to first accept the fact that it's impossible to reduce blurring in some cases, so the only way to get uniform output involves increasing the blurring. The ideal way is to use a Gaussian kernel with radius larger than N/2, rather than a step function, as the convolution function with the source image. A cheap way to tack on an approximation, however, if you already have your N-by-N area averaging implementation, is just to apply a (1/4,1/2,1/4) blur convolution to the resulting downsampled image.
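A minimal Python sketch of the two approaches under discussion, block averaging versus a Gaussian anti-alias pre-filter (parameter choices here are illustrative, not tuned):

```python
import numpy as np
from scipy import ndimage

def downsample_box(img, n):
    """Plain n-by-n block averaging (the 'obvious' method)."""
    h, w = img.shape
    h, w = h - h % n, w - w % n          # crop so dimensions divide evenly
    blocks = img[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))

def downsample_gaussian(img, n, sigma_scale=0.5):
    """Gaussian anti-alias pre-filter, then take every n-th sample.
    sigma ~ n/2 is a common rule of thumb; wider sigma means more
    blur but less aliasing."""
    blurred = ndimage.gaussian_filter(img, sigma=sigma_scale * n)
    return blurred[n // 2::n, n // 2::n]

# A high-frequency test pattern makes the difference visible: under plain
# block averaging, near-Nyquist diagonal stripes alias into low-frequency
# banding; the pre-filtered version suppresses them instead.
x, y = np.meshgrid(np.arange(256), np.arange(256))
stripes = np.sin(0.9 * np.pi * (x + y)).astype(np.float64)
small_box = downsample_box(stripes, 4)
small_gauss = downsample_gaussian(stripes, 4)
```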
CommonCrawl
Constructing Voronoi diagrams in L(1) and L(infinity) metrics with two plane sweeps. A plane-sweep method without using transformation but using two sweeps is used to construct the Voronoi diagram in the $L_1$ ($L_\infty$, respectively) metric of a set of $n$ point sites in $O(n \log n)$ time and $O(n)$ space. The two sweeps advance from opposite directions and produce two symmetrical data structures called the Left-to-Right Shortest-Path-Map and the Right-to-Left Shortest-Path-Map. The two maps are then tailored to produce the desired Voronoi diagram. Source: Masters Abstracts International, Volume: 34-02, page: 0796. Adviser: Y. H. Tsin. Thesis (M.Sc.)--University of Windsor (Canada), 1994. Wang, Jianan., "Constructing Voronoi diagrams in L(1) and L(infinity) metrics with two plane sweeps." (1994). Electronic Theses and Dissertations. 1555.
CommonCrawl
K-Means Clustering is a machine learning technique for classifying data. It's best explained with a simple example. Below is some (fictitious) data comparing elephants and penguins. We've plotted 20 animals, and each one is represented by a (weight, height) coordinate. You can see that the coordinate points of the elephants and penguins form two clusters: elephants are bigger and heavier, penguins are smaller and lighter. Now suppose we've got one more datapoint, but we've forgotten whether it's an elephant or a penguin. Let's plot it, too. We've marked it in orange. If you were to make a guess, you'd say that the orange datapoint probably belongs to an elephant, and not to a penguin. We say this because the orange datapoint seems to belong to the elephant cluster, not to the penguin cluster. This is the essence of clustering. We take some labelled data — like heights and weights of animals, where each animal is labeled as either a penguin or an elephant. We use an algorithm to figure out which datapoints belong to which (weight, height) clusters. We look at the labels of the clusters to understand what label each cluster corresponds to. Then we take an unlabelled datapoint, see into which cluster it fits best, and thereby assign the unlabelled datapoint a label.

We call the process k-means clustering because we assume that there are $k$ clusters, and each cluster is defined by its center point — its mean. To find these clusters, we use Lloyd's Algorithm: we start out with $k$ random centroids. A centroid is simply a datapoint around which we form a cluster. For each centroid, we find the datapoints that are closer to that centroid than to any other centroid. We call that set of datapoints its cluster. Then we take the mean of the cluster, and let that be the new centroid. We repeat this process (using the new centroids to form clusters, etc.) until the algorithm stops moving the centroids. We do this in order to minimize the total sum of distances from every centroid to the points in its cluster — that is our metric for how well the clusters split up the data.

For every digit, each pixel can be represented as an integer in the range [0,255], where 0 corresponds to the pixel being completely white, and 255 corresponds to the pixel being completely black. This gives us a 28 $\times$ 28 matrix of integers. We can then flatten this matrix into a 784 $\times$ 1 vector, which is like a coordinate pair, except that instead of 2 coordinates it has 784. Now that the data is in coordinate form, we can run k-means clustering. Let's do it. I will be using Python 2.7 in an iPython notebook. We start by importing all the libraries we will use. Next, we write a function to read in the MNIST data. Then we use that function to read in the data. We read every datapoint into a tuple containing a labelled digit and its vector representation. Then we split the data into a training and a validation set. We'll construct our clusters with the training set, and then use those clusters to classify the datapoints in the validation set, i.e. to assign labels to these datapoints. We can then check those inferred labels against the known labels to see how often the algorithm misclassifies a datapoint. Now we write a function to take a datapoint and display the digit. This is mostly for debugging and checking our results. Now we begin writing Lloyd's algorithm.
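For reference, a compact NumPy version of the algorithm's core (a sketch with illustrative names, not the post's original listing, which worked with plain Python tuples and lists):

```python
import numpy as np

def lloyds(data, k, n_iter=100, seed=None):
    """Minimal k-means (Lloyd's algorithm): data is an (n, d) array.
    Returns the final centroids and each point's cluster assignment."""
    rng = np.random.default_rng(seed)
    # start from k randomly chosen datapoints as the initial centroids
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # form clusters: assign each point to its nearest centroid
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # move centroids: each becomes the mean of its cluster
        new_centroids = np.array([
            data[assign == j].mean(axis=0) if np.any(assign == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: the centroids stopped moving
        centroids = new_centroids
    return centroids, assign
```

Labelling each centroid with the most common label in its cluster, as the post goes on to do, is then a one-liner with collections.Counter.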
There are many libraries that have already implemented this algorithm, but it's good practice to write it by hand. Notice that the "means" in k-means clustering come from taking the mean of a cluster and relocating the centroid to that mean. A mean, however, is not robust to outliers. It's possible to take the median instead of the mean — that's known as k-medians clustering. As usual, there are many variants of this algorithm for various use cases.

Following are helper functions for Lloyd's algorithm. For clarity, I've written labelled_x when we can expect x to be a tuple of (label,data), or a list of such tuples. The first helper element-wise sums a list of arrays; the second takes that sum and divides it by the size of the cluster, giving the cluster's mean. The main parts of Lloyd's algorithm are forming clusters and moving centroids. To form clusters, we assign each datapoint to its closest centroid: for each datapoint, we pick the centroid it is closer to than to any other, and allocate the datapoint to that centroid's cluster. To move centroids, we compute the list of mean centroids corresponding to the clusters. We want to repeat the forming and moving steps until the algorithm converges — when the movements of centroids are arbitrarily small. In this case, I've chosen to determine convergence by when the movements are zero, i.e. when the difference between centroid positions no longer changes between iterations. This takes longer depending on the size of $k$, so some implementations don't wait for convergence, and instead run some sufficiently large number of iterations (e.g. 100) to get appropriately close.

A final function ties these pieces together and runs k-means clustering on the data. However, our centroids aren't labelled yet. We'll write a function to label each centroid with the most common label in its cluster; this depends on clusters and centroids being in the same order. But we're not just interested in clustering the known data; we want to classify unknown data using those clusters! So let's write a function to classify an unlabelled digit, by finding the closest centroid and using its label, and let's write another function that classifies a list of labelled digits and returns the error rate, which tells us about the performance of our algorithm.

We're done implementing this tool. Let's test it out. We'll try clustering with k=16, and we'll display the 16 centroids. We see a few interesting results. Most people tend to draw the figure eight the same way, so there's only one centroid for it. On the other hand, there are three centroids (and clusters) for the figure zero, even though they don't look very different. The centroids for the figure two reflect that some people draw their twos with a kind of cursive loop, and some people draw their twos without a loop. Notice that there's no centroid for the figure five. Thus, we never classify any digit as a five. Running the clustering a second time (remember that the random initial centroids make every run different) gives another set of 16 centroids. In this set, we actually have a "five" centroid. Note that it's pretty messy — it seems that fives are drawn least consistently, so their centroid (average) is the least clear, and as we saw in the first set, they're apparently easily misclassified. Beyond that, though the two sets of centroids seem quite different, their error rates in classifying the validation set are not: the first set classifies with an error rate of 0.342, the second with an error rate of 0.304. On the other hand, if we run the same code to see which digits are classified into the "nine" cluster, the results aren't as good.
There is a surprising number of digits that you would very clearly expect to have been classified in another cluster, such as the top-left one, the two in the left column, or the seven in the bottom right. Looking at these digits and the centroids further up, it seems as if those centroids would be a much better match than the centroid for the figure nine. I can currently only guess as to why the classification didn't work better. Now let us iterate over various values for $k$, and see how the performance improves as we use more clusters. However, since Lloyd's algorithm's time complexity is polynomial in $k$ and I did not constrain the number of iterations, I only ran one trial for a large $k$ (of 100). We can see that increasing the number of clusters steadily improves the error rate, and that with $k=100$, we get an error rate around 0.12. It is conceivable that we could further decrease the error with larger $k$. We can also make many heuristic improvements to k-means: for example, the weakness of randomly selecting initial centroids, which may lead to suboptimal clusterings, is addressed by the k-means++ algorithm. Nonetheless, despite many possible improvements, it is rare to find a dataset in which k-means is competitive with more advanced machine learning techniques. Convolutional neural networks can classify the MNIST data with error rates below 0.01. However, for purposes of education, k-means clustering is a great way to introduce machine learning. It is technically reasonably accessible, and it illustrates in broad strokes how machine learning and data mining techniques are used in practice.

This project was inspired by a homework assignment in John Lafferty's Large-Scale Data Analysis course that I took at UChicago in the Spring of 2015. I collaborated with Elliott Ding on that assignment. In the class, we used Apache Spark and a map-reduce framework on AWS to take advantage of parallelization. To make the algorithm more accessible, I've rewritten the code for this article to not use distributed systems. A GitHub repository containing the iPython notebook, dataset, etc. is available here.

We say that the algorithm converges when the centroids cease moving. Note that Lloyd's algorithm converges only to a local optimum; it does not guarantee finding a global optimum. This can be a critical pitfall. For this reason, different runs of the k-means algorithm with the same settings will often result in different clusterings. Using more clusters raises the perennial danger of overfitting.
CommonCrawl
We consider the problem of placing $k$ queens on an $n \times n$ chessboard so that the number of unattacked squares is as large as possible. We focus on the domain where $k$ is small relative to $n$. We are able to solve this problem by relating it to problems in additive combinatorics.
CommonCrawl
When looking at the discrete model of a Sigma-Delta Modulator as shown below, we can see that the quantizer is modelled as a white-noise source $e[n]$. From this model, we can derive the noise shaping property of the modulator. In a typical Sigma-Delta Modulator, the quantizer is realized by a 1-bit ADC, i.e. a comparator. However, the discrete model does not require 1 bit. So why only use an ADC with a resolution of one bit, and not e.g. two or three bits? The noise is shaped to be mostly present at higher frequencies, yes, but reducing the noise even further by choosing an ADC with greater resolution would be an advantage anyhow. First of all, because it's easy to build a 1-bit ADC. It's a comparator. It's literally the easiest ADC you can build. The $\Delta\Sigma$ ADC was invented (or, rather, published) in 1962¹! The 2-bit ADC is more than twice as complex as that: you need some window decision. So if you have the choice of making your 1-bit ADC run faster or building a somewhat exact 2-bit ADC, there's a solid chance you'd go for the 1-bit ADC, simply because there's less analog semiconductor design to do, and, more importantly, to go wrong! Imagine the 1-bit ADC doing a sign decision ("is the analog voltage > 0V?"). No matter how you scale the analog voltage (multiply it with a factor $\alpha$), the result will always be the same. It's absolutely not obvious how you'd change the above diagram to fit a 2-bit ADC! But the above diagram is very close to the original publication¹, so by abstraction the threshold is just a single-bit quantizer. However, it's been done; there are multiple patents² covering multi-bit quantization methods. You'd have to limit the bandwidth of the forward chain³, and, depending on what you want to achieve (class-D amplifier, faster ADC, lower power consumption…), you'd then have a bit of clever logic that translates the ADC output into different kinds of feedback "pulses" and different kinds of counter increments. So, your original claim "we don't use more than one bit in the quantization within the loop of a $\Delta\Sigma$ ADC" is wrong; it is only right for the very classical implementation of the original inventors. To totally disprove your point: there are a lot of $\Delta\Sigma$ ADCs on the market that actually do higher-resolution quantization; for example, the Analog Devices AD9267 uses a nine-level quantizer⁴ (3.something bits, which was a pretty impressive feat on its own to integrate, considering the speed of 640 MS/s of that quantizer). ¹ Inose, H., Yasuda, Y. and Murakami, J., 1962. A telemetering system by Code Modulation – $\Delta$-$\Sigma$ Modulation. IRE Transactions on Space Electronics and Telemetry, (3), pp.204-209. ² WO2008028142, which already cites multi-bit feedback as "prior art". ³ otherwise, you'd always only get the lowest ADC output before you could get any of the higher ADC outputs, and with infinite bandwidth, that would mean you'd already emit a pulse to "reset" the integrator, and you'd never use any but 1 bit of the ADC. That bandwidth-limiting typically happens "involuntarily" because electronics are limited by physics. ⁴ AD9267 Datasheet, p. 13, "Theory of operation"
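To see the noise shaping concretely, here is a small Python sketch of a first-order modulator with a 1-bit quantizer (an idealized discrete-time model, not a circuit simulation; the oversampling ratio and test signal are arbitrary choices):

```python
import numpy as np

def first_order_sigma_delta(x):
    """First-order discrete-time sigma-delta modulator with a 1-bit
    quantizer: integrate the error between the input and the fed-back
    output, then take the sign."""
    y = np.zeros_like(x)
    integrator = 0.0
    for n in range(len(x)):
        integrator += x[n] - (y[n - 1] if n > 0 else 0.0)
        y[n] = 1.0 if integrator >= 0 else -1.0
    return y

# Heavily oversampled sine input, far below the Nyquist rate.
fs, f0, N = 1_000_000, 1_000, 2**16
t = np.arange(N) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)
y = first_order_sigma_delta(x)

# The spectrum of the 1-bit output shows the input tone plus quantization
# noise pushed toward high frequencies (roughly a 20 dB/decade rise for a
# first-order loop), which is exactly the noise shaping property.
spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(N))) + 1e-12)
```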
CommonCrawl
UMZh Author: Nikmehr M. J. For a monoid $M$, we introduce strongly $M$-semicommutative rings, obtained as a generalization of strongly semicommutative rings, and investigate their properties. We show that if $G$ is a finitely generated Abelian group, then $G$ is torsion free if and only if there exists a ring $R$ with $|R| \geq 2$ such that $R$ is strongly $G$-semicommutative. We generalize the concepts of semicommutative, skew Armendariz, Abelian, reduced, and symmetric left ideals and study the relationships between these concepts. Nikmehr M. J., Pazoki M., Tavallaee H. A. We introduce the concept of weak $\alpha$-skew Armendariz ideals and investigate their properties. Moreover, we prove that $I$ is a weak $\alpha$-skew Armendariz ideal if and only if $I[x]$ is a weak $\alpha$-skew Armendariz ideal. As a consequence, we show that $R$ is a weak $\alpha$-skew Armendariz ring if and only if $R[x]$ is a weak $\alpha$-skew Armendariz ring. Heidari S., Nikandish R., Nikmehr M. J. Let $R$ be a commutative ring with identity, $M$ an $R$-module and $K_1, \ldots, K_n$ submodules of $M$. In this article, we construct an algebraic object, called the product of $K_1, \ldots, K_n$. We equip this structure with appropriate operations to get an $R(M)$-module. It is shown that the $R(M)$-module $M^n = M \times \cdots \times M$ and the $R$-module $M$ inherit some of the most important properties of each other. For example, we show that $M$ is a projective (flat) $R$-module if and only if $M^n$ is a projective (flat) $R(M)$-module.
CommonCrawl
How does the Higgs field relate to Aether theories? Does $SO(32) \sim_T E_8 \times E_8$ relate to some group-theoretical fact? How much radiation does the moon emit? How does the instability of a radioactive atom relate to quantum mechanics? In a popular sense, it is like the emission of photons by an excited atom during its transition to lower energy levels. There are other online platforms that value such questions; try asking at https://physics.stackexchange.com/ .
CommonCrawl
Here's a variation of Discrete Peaceful Encampments: 9 queens on a chessboard (which itself is a variation of Peaceful Encampments). Can you find a way to place more than 4 queens of each color "peacefully" on an 8x8 chessboard? There's no way to get more than 4 of every colour. Also, there is no simple way to prove this. Annoyingly, adding any two colours always excludes every option of adding the third colour, no matter how much you shuffle the pieces around. You can raise any two colours to 5, but not all three. Also, you could get white to 5 (e1) and red to 6 (g2, g7), but black still stays at 4, so you get a 4-5-6 solution. There's so much wiggle room in the above diagram, and you can get so very very close to a 5-5-5, that any simple impossibility proof (like "there aren't enough diagonals on the chess board") is not going to work. This all is a result of feeding this problem into a highly sophisticated self-learning neural network *, making it start from random (and later self-selected) positions, where every improvement path always led to this position, or one of its descendants, showing that this position is at least a local optimum. b) a 5-5-5 solution, or a simple proof of its impossibility. If anyone can provide case b, I'll happily buy that person a beer, after a solid stint of banging my head against a wall. [partial board diagram omitted] The orange piece can be any color. One of the yellow squares can be the 9th queen of the appropriate color; this then gives $9+9+8$ on an $11\times11$ board.
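A small Python sketch of the kind of checker such a search needs (coordinates are 0-indexed (row, col) pairs; this validates a candidate placement, it does not search, and it ignores blocking, which is the usual convention in the peaceable-armies literature):

```python
def attacks(a, b):
    """True if queens on squares a and b attack each other
    (same row, column, or diagonal)."""
    (r1, c1), (r2, c2) = a, b
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def peaceful(armies):
    """armies: list of lists of squares, one list per color.
    Peaceful means no queen attacks a queen of a different color;
    same-color queens may share lines freely."""
    for i in range(len(armies)):
        for j in range(i + 1, len(armies)):
            if any(attacks(a, b) for a in armies[i] for b in armies[j]):
                return False
    return True
```

Calling peaceful([white, red, black]) on a candidate 5-5-5 placement then immediately tells you whether it is valid.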
CommonCrawl
Kokoro Connect's premise made a lot of people raise their eyebrows, because really, what good can come from body-switching shenanigans? Well, let's think about this for a second. We have a group of five kids, and every once in a while they switch into each others' bodies at random. What does that sound like? That's right, a permutation! Interestingly enough, the idea of connecting body-switching with permutations isn't new. The Futurama writers did it and apparently got a new theorem out of it. What differs between Kokoro Connect and Futurama is that in Futurama, the body-switching could only happen in twos. These are called transpositions. Obviously, this isn't the case for Kokoro Connect. This doesn't make too much of a difference since it turns out we can write out any permutation we want as a series of transpositions, but that wouldn't be very fun for Heartseed. A permutation can be written in two-line notation, listing $1\,2\,3\,4\,5$ in the first line and the images of those elements underneath in the second. While it's helpful for seeing exactly what goes where, especially when we start dealing with multiple permutations, this notation is a bit cumbersome, so we'll only write the second line ($(12354)$) to specify a permutation. For the purposes of this little exercise, we'll consider applying a permutation as taking whoever's currently in a given body. That is, say we permute Aoki and Taichi to get $(4 2 3 1 5)$. In order to get everyone back into their own bodies, we have to apply $(4 2 3 1 5)$ again, which takes Aoki, who's in Taichi's body, back into Aoki's body. So let's begin with something simple. How many different ways are there for the characters to body switch? Both who is switched and who they switch with are entirely random. Again, since the switches aren't necessarily transpositions, this means that we can end up with cycles like in episode 2, when Yui, Inaban, and Aoki all get switched at the same time. This can be written as $(1 2 4 5 3)$. But this is just the number of permutations that can happen on a set of five elements, which is just 5! = 120. Of course, that includes the identity permutation, which just takes all elements to themselves, so the actual number of different ways the characters can be swapped is actually 119. In this case, we can think of the permutations themselves as elements of a group and we take permutation composition as the group operation. Recall the group axioms: closure, associativity, identity, and inverses. Let's go through these axioms. Closure says that if we have two different configurations of body swaps, say Taichi and Iori ($(2 1 3 4 5)$) and Iori and Yui ($(1 5 3 4 2)$), then we can apply them one after the other and we'd still have a body swap configuration: $(2 5 3 4 1)$. That is, we won't end up with something that's not a body swap. This seems like a weird distinction to make, but it's possible to define a set that doesn't qualify as a group. Say I want to take the integers under division as a group ($(\mathbb Z, \div)$). Well, it breaks closure because 1 is an integer and 2 is an integer but $1 \div 2$ is not an integer. The identity means that there's a configuration that we can apply and nothing will change. That'd be $(12345)$. And inverse means that there's always a single body swap that we can make to get everyone back in their own bodies. As it turns out, the group of all permutations on $n$ objects is a pretty fundamental group. These groups are called the symmetric groups and are denoted by $S_n$. So the particular group we're working with is $S_5$. So what's so special about $S_5$? Well, as it turns out, it's the first symmetric group that's not solvable, a result that's from Galois theory and has a surprising consequence.
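A quick Python sketch of this bookkeeping, treating a configuration as a tuple in one-line notation (the convention below, "body i receives whoever was in body swap[i]," is chosen to reproduce the post's composition example):

```python
from itertools import permutations

# One-line notation, 0-indexed: config[i] = who is currently in body i.
identity = (0, 1, 2, 3, 4)

def then(config, swap):
    """Apply a swap to the current configuration: body i ends up
    holding whoever was in body swap[i] beforehand."""
    return tuple(config[swap[i]] for i in range(len(swap)))

taichi_iori = (1, 0, 2, 3, 4)   # the post's (2 1 3 4 5)
iori_yui = (0, 4, 2, 3, 1)      # the post's (1 5 3 4 2)

combined = then(then(identity, taichi_iori), iori_yui)
print(combined)   # (1, 4, 2, 3, 0), i.e. (2 5 3 4 1) in 1-indexed notation

# 5! = 120 configurations in total; 119 excluding the identity.
print(sum(1 for p in permutations(range(5))) - 1)   # 119
```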
Évariste Galois was a cool dude, proving a bunch of neat stuff up until he was 20, when he got killed in a duel because of some drama which is speculated to be of the relationship kind, maybe not unlike Kokoro Connect (it probably wasn't anything like Kokoro Connect at all). Among the things that he developed was the field that's now known as Galois theory, which is named after him. What's cool about Galois theory is that it connects two previously unrelated concepts in algebra: groups and fields. Recall the quadratic formula, $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. This neat little formula gives us an easy way to find the complex roots of any second degree polynomial. It's not too difficult to derive. And we can do that for cubic polynomials too, which takes a bit more work to derive. And if we want to really get our hands dirty, we could try deriving the general form of roots for polynomials of degree four. And then wait until you try to do it for degree five polynomials: eventually, you'll give up. Why? Well, it's not just hard, but it's impossible. There is no general formula using radicals and standard arithmetic operations for the roots of any fifth degree (or higher!) polynomial. The reason behind this is that $S_5$ is the Galois group of the general polynomial of degree 5. Unfortunately, proving that fact is a bit of a challenge to do here since it took about 11 weeks of Galois theory and group theory to get all the machinery in place, so we'll have to leave it at that.
CommonCrawl
UV resonance Raman spectroscopy is a well established technique for probing peptide and protein secondary structure. Excitation between 180 and 215 nm, within the $\pi\to\pi^*$ electronic transitions of the peptide backbone, results in the enhancement of amide vibrations. We use UVRR excitation profiles and depolarization ratios to examine the underlying peptide bond electronic transitions. The present consensus is that three electronic transitions (one $n\to\pi^*$ and two $\pi\to\pi^*$) occur in simple amides between 230 and 130 nm. In $\alpha$-helices a weak $n\to\pi^*$ electronic transition occurs at 220 nm, while a higher-frequency $\pi\to\pi^*$ transition occurs at 190 nm. This $\pi\to\pi^*$ transition undergoes exciton splitting, giving rise to two dipole-allowed transitions: one perpendicular to the helical axis (190 nm) and the second parallel to the axis (205 nm). The melted state of $\alpha$-helices resembles left-handed poly-proline II (PPII) helices. The PPII helix electronic transitions have been defined as an $n\to\pi^*$ transition at $\sim$220 nm and a $\pi\to\pi^*$ transition at $\sim$200 nm. For $\beta$-sheets, the $\pi\to\pi^*$ transition occurs at $\sim$194 nm for parallel and $\sim$196 nm for anti-parallel sheets. The $n\to\pi^*$ transition occurs at $\sim$217 nm for both.
CommonCrawl
The KArlsruhe TRItium Neutrino (KATRIN) experiment aims at the model-independent measurement of the electron neutrino mass. KATRIN is designed for a neutrino mass sensitivity of 0.2 eV (90% CL) after three years of measurement time. In May 2018, KATRIN performed its First Tritium measurements. Along with the beta electrons, tritium beta decay creates ions inside the tritium source. The tritium ions are guided by the magnetic field to the Pre- and Main Spectrometer and could create background there. Preventing ion-induced background is imperative for KATRIN. Therefore, the ions are blocked by ring electrodes at positive potential and removed by electric dipole electrodes via the $\vec E\times\vec B$ drift. Various ion detectors continuously monitor the ion blocking and removal. The results of the ion monitoring during the First Tritium measurements will be presented in this poster.
CommonCrawl
I apologize if this question doesn't make any sense. I'll just go ahead and delete it if that's the case. But the question is just the title. Is there a notion of forcing in homotopy type theory? Presumably we do homotopy type theory in some $(\infty,1)$-topos, so we can axiomatize the notions accordingly? Does anyone know of a reference for this kind of thing if it does exist or makes sense? Insofar as we regard forcing as forming internal sheaves, the question is asking how to say "internal category of sheaves" in homotopy type theory. One has an adjoint triple of modalities: reduction modality $\dashv$ infinitesimal shape modality $\dashv$ infinitesimal flat modality. Using this we say: a function is formally étale if the naturality square of the unit of the infinitesimal shape modality at that function is a homotopy pullback square. Then we have available in the homotopy type theory the sub-slice over any $X$ on those maps that are formally étale. This is internally the $\infty$-topos of $\infty$-stacks over $X$, hence the "forcing of $X$" in terms of the standard interpretation of forcing as passing to sheaves.
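Concretely, writing $\Im$ for the infinitesimal shape modality and $\eta$ for its unit (standard notation for this modality, not taken from the question above), the condition on a map $f \colon X \to Y$ is that the following naturality square is a homotopy pullback:

\[
\begin{array}{ccc}
X & \xrightarrow{\;\eta_X\;} & \Im X \\
{\scriptstyle f}\downarrow & & \downarrow{\scriptstyle \Im f} \\
Y & \xrightarrow{\;\eta_Y\;} & \Im Y
\end{array}
\]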
CommonCrawl
Fitting models to large datasets and/or models involving a large number of random effects (for the rma.mv() function) can be time consuming. Admittedly, some routines in the metafor package are not optimized for speed and efficient memory usage by default. However, there are various ways of speeding up the model fitting, which are discussed below. Most meta-analytic datasets are relatively small, with less than a few dozen studies (and often even much smaller than that). However, occasionally, one may deal with a much larger dataset, at which point the rma() (i.e., rma.uni()) function may start to behave sluggishly. To illustrate this, let's simulate a large dataset with $k = 4000$ studies based on a random-effects model (with $\mu = 0.50$ and $\tau^2 = 0.25$). We then measure how long it takes to fit a random-effects model to these data. The values are given in seconds and the one we are interested in is the last one (the elapsed time). So, it took almost 10 minutes to fit this model, although this was on a pretty outdated workstation with an Intel Xeon E5430 CPU running at 2.67 GHz. A more modern/faster CPU will crunch through this much quicker, but even larger values of $k$ will eventually lead to the same problem. In fact, the model fitting time will tend to increase at a roughly quadratic rate as a function of $k$, as can be seen in the figure below. The reason for this is that rma() carries out some computations that involve $k \times k$ matrices. As these matrices grow in size, the time it takes to carry out the matrix algebra increases at a roughly quadratic rate. Note that this generally won't be an issue unless the number of studies is in the thousands, but at that point, model fitting can become really slow. One option is to fit the same model with rma.mv(..., sparse=TRUE), which stores the underlying matrices in a sparse format. So, instead of almost ten minutes, it now only took about 40 seconds, which is quite an improvement. An alternative approach for speeding up the model fitting is to make use of optimized routines for the matrix algebra. The BLAS (Basic Linear Algebra Subprograms) library supplied with "vanilla R" is quite good, but enhanced linear algebra routines can be quite a bit faster. It is beyond the scope of this note to discuss how we can get R to make use of such enhanced routines, but the interested reader should take a look at the relevant section in the R Installation and Administration manual. However, one of the easiest ways of getting the benefits of enhanced math routines is to install Microsoft R Open (MRO), which is a 100% compatible distribution of R (with a few added components); most importantly, it provides the option to automatically install Intel's Math Kernel Library (MKL) alongside MRO. As the name implies, MKL will be most beneficial on Intel CPUs, so those with AMD processors may not see as much of a performance boost (or none at all? haven't tested this). At any rate, here are the results when fitting the model above with MRO+MKL using the rma() function. So, about 45 seconds, as opposed to almost 10 minutes with vanilla R. Interestingly, MRO+MKL is about as quick for this example as using vanilla R with rma.mv(..., sparse=TRUE). However, we can get even better performance with MKL if we allow for multicore processing (one of the benefits of MKL is that it can run multithreaded and hence make use of multiple cores). The workstation I am using for these analyses actually has two quad-core CPUs, so there are 8 cores available we can make use of. Using the setMKLthreads() command, we can set the number of cores that MKL is allowed to make use of.
Let's see what happens if we give it all 8 cores. So now we are down to less than 12 seconds! That's almost 50 times faster than what we started out with (567 seconds). I also examined the model fitting time as a function of the number of cores we make available to MKL. The results are shown in the graph below. With more cores, things do speed up, but the gains diminish as we add more cores. Also, with just a single core, MKL took about 45 seconds as opposed to vanilla R, which took almost 10 minutes. So the largest gains come from switching to MKL in the first place, not the use of multiple cores. One can even try combining the use of MRO+MKL with rma.mv(..., sparse=TRUE). In this particular example, this didn't yield any additional performance benefits (and actually slowed things a little bit). I suspect that the performance benefits of MKL in this example are actually related to using numerical routines that can take advantage of the sparseness of the matrices automatically. Hence, there is no benefit in trying to exploit this characteristic of the data twice. However, with even larger values of $k$, use of rma.mv(..., sparse=TRUE) together with MKL can yield some additional benefits due to more efficient storage of the underlying matrices. First, let's look at the first 10 rows of the dataset. So, we have experiments nested within studies that were conducted with various combinations of plant species and fungi. We can familiarize ourselves a bit more with this dataset using the following commands. The output (not shown) indicates that there are 2000 observed outcomes, 868 studies, and most studies included between 1 and 4 experiments. Furthermore, 35 different plant species were studied and 25 different fungi, each with different frequencies. We will fit a three-level meta-analytic model to these data (with experiments nested within studies) with crossed random effects for the plant and fungus factors. In addition, the R.plant and R.fungus correlation matrices loaded earlier reflect phylogenetic correlations for the various plant species and fungi studied. Hence, we will include phylogenetic random effects in the model based on these correlation matrices. For more details on models of this type, see Konstantopoulos (2011) and Nakagawa and Santos (2012). The code for fitting such a model is shown below. So, using vanilla R, fitting this model on my workstation took about 95 minutes. Ouch! So now it took almost 7 hours to fit the same model, so this attempt really backfired. The reason for this is that the underlying matrices that the optimization routine has to deal with are not sparse at all (due to the crossed and correlated random effects). Forcing the use of sparse matrices then creates additional (and unnecessary) overhead, leading to a substantial slowdown. So, even with just one core, MRO+MKL took about 20 minutes. Again, we see diminishing returns as we make more cores available to MKL, but with all 8 cores, the model fitting was down to less than 8 minutes. This is more than 12 times faster than vanilla R. Quite a difference. A few other adjustments can be tried to speed up the model fitting. First of all, the rma.mv() function provides a lot of control over the optimization routine (see help(rma.mv) and especially the information about the control argument). For example, switching to a different optimizer may speed up the model fitting – but could also slow things down. So, whether this is useful is a matter of trial and error. 
Furthermore, the starting values for the optimization are not chosen in a terribly clever way at the moment and could be far off, in which case convergence may be slow. One can set the starting values manually via the control argument. This could be useful, for example, when fitting several different but similar models to the same dataset. One thing I would like to clarify. The comparison between "vanilla R" and "MRO+MKL" is really a comparison between the reference BLAS library of R versus MKL (so vanilla R versus MRO is not the issue here). Note that there are other linear algebra libraries (especially ATLAS and OpenBLAS) that can also be used with R and that may provide similar speedups. If anybody has made comparisons between these different options in conjunction with metafor, I would be curious to hear about it. Konstantopoulos, S. (2011). Fixed effects and variance components estimation in three-level meta-analysis. Research Synthesis Methods, 2(1), 61–76. Nakagawa, S., & Santos, E. S. A. (2012). Methodological issues and advances in biological meta-analysis. Evolutionary Ecology, 26(5), 1253–1274.
CommonCrawl
This function returns a pointer to an accelerator object, which is a kind of iterator for interpolation lookups. It tracks the state of lookups, thus allowing for the application of various acceleration strategies. This function performs a lookup action on the data array $x_array of size $size, using the given accelerator $a. This is how lookups are performed during evaluation of an interpolation. The function returns an index i such that $x_array[i] <= $x < $x_array[i+1]. This function frees the accelerator object $a. This function returns a newly allocated interpolation object of type $T for $size data-points. $T must be one of the constants below. This function initializes the interpolation object $interp for the data ($xa,$ya) where $xa and $ya are arrays of size $size. The interpolation object (gsl_interp) does not save the data arrays $xa and $ya and only stores the static state computed from the data. The $xa data array is always assumed to be strictly ordered, with increasing x values; the behavior for other arrangements is not defined. This function returns the name of the interpolation type used by $interp. This function returns the minimum number of points required by the interpolation type of $interp. For example, Akima spline interpolation requires a minimum of 5 points. This function returns the interpolated value of y for a given point $x, using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. The function returns 0 if the operation succeeded and 1 otherwise, together with the y value. This function returns the interpolated value of y for a given point $x, using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. This function computes the derivative value of y for a given point $x, using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. The function returns 0 if the operation succeeded and 1 otherwise, together with the d value. This function returns the derivative d of an interpolated function for a given point $x, using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. This function computes the second derivative d2 of an interpolated function for a given point $x, using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. The function returns 0 if the operation succeeded and 1 otherwise, together with the d2 value. This function returns the second derivative d2 of an interpolated function for a given point $x, using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. This function computes the numerical integral result of an interpolated function over the range [$a, $b], using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. The function returns 0 if the operation succeeded and 1 otherwise, together with the result value. This function returns the numerical integral result of an interpolated function over the range [$a, $b], using the interpolation object $interp, data arrays $xa and $ya and the accelerator $acc. gsl_interp_free($interp) - This function frees the interpolation object $interp. This function returns the index i of the array $x_array such that $x_array[i] <= x < $x_array[i+1]. The index is searched for in the range [$index_lo,$index_hi]. Polynomial interpolation. This method should only be used for interpolating small numbers of points because polynomial interpolation introduces large oscillations, even for well-behaved datasets.
The number of terms in the interpolating polynomial is equal to the number of points. Cubic spline with natural boundary conditions. The resulting curve is piecewise cubic on each interval, with matching first and second derivatives at the supplied data-points. The second derivative is chosen to be zero at the first point and last point. Cubic spline with periodic boundary conditions. The resulting curve is piecewise cubic on each interval, with matching first and second derivatives at the supplied data-points. The derivatives at the first and last points are also matched. Note that the last point in the data must have the same y-value as the first point, otherwise the resulting periodic interpolation will have a discontinuity at the boundary. Non-rounded Akima spline with natural boundary conditions. This method uses the non-rounded corner algorithm of Wodicka. Non-rounded Akima spline with periodic boundary conditions. This method uses the non-rounded corner algorithm of Wodicka.
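The functions above belong to a Perl binding of the GNU Scientific Library. For readers who want to experiment without it, here is a rough SciPy-based analogue of the same semantics (this is not the GSL API; the sample data are made up for illustration):

```python
# Not the GSL binding documented above -- a SciPy sketch of the same ideas.
import numpy as np
from scipy.interpolate import CubicSpline

xa = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # strictly increasing, as GSL requires
ya = np.sin(xa)

# Accelerator-style lookup: the index i with xa[i] <= x < xa[i+1].
x = 2.5
i = int(np.searchsorted(xa, x, side="right")) - 1
assert xa[i] <= x < xa[i + 1]

# Analogue of a natural cubic spline object (cf. gsl_interp_cspline).
cs = CubicSpline(xa, ya, bc_type="natural")
y = cs(x)                      # cf. gsl_interp_eval
d = cs(x, 1)                   # first derivative, cf. gsl_interp_eval_deriv
d2 = cs(x, 2)                  # second derivative, cf. gsl_interp_eval_deriv2
area = cs.integrate(0.0, 4.0)  # cf. gsl_interp_eval_integ
print(i, y, d, d2, area)
```

Here np.searchsorted plays the role of the accelerator/bsearch lookup, and bc_type="natural" matches the natural-boundary cubic spline described above.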
CommonCrawl
Chebfun is very good at solving eigenvalue problems in one dimension defined by smooth or piecewise-smooth coefficients. An important example of such problems is the determination of eigenstates of the Schrödinger operator, which correspond to energy levels of quantum systems. There is a special Chebfun command, quantumstates, for computing and plotting such functions; in the convention it uses, the operator is
$$ L u = -h^2 u'' + V(x)\, u. $$
Here $h$ is a small positive parameter with default value $h=0.1$ and $V(x)$ is a potential function. The quantumstates command assumes that $V$ is a Chebfun, whose domain defines the interval the problem is posed on.

Here is a famous example, the harmonic oscillator, with $V(x)=x^2$. All our plots make use of a standard convention: each eigenfunction is plotted raised by a distance equal to its eigenvalue $\lambda$, so that one can see the eigenvalue by looking at the height. Note that the first eigenfunction is of one sign, the second has one zero, the third has two zeros, and so on. Notice that the eigenvalues take the regularly spaced values $h[1, 3, 5, \dots]$.

The quantumstates command permits various outputs, including just eigenvalues or eigenvalues and eigenfunctions, and it is also possible to suppress the plot with the string noplot; see the help text. For the rest of this example, however, we shall just look at plots and suppress all output with a semicolon.

Here is an effectively infinite square well. The eigenvalues are spaced quadratically. Since we are working on a finite interval $[-L,L]$, the spectrum is discrete both below and above the level $1$, but the spacing will get closer as $L$ is increased, and it is easy to imagine that for $L=\infty$ one gets a continuum of eigenvalues above $1$ -- more precisely, a continuous spectrum. The discrete eigenfunctions below level $1$ are called bound states, whereas the states above level $1$ (in the limit $L=\infty$) are continuous states.

In a further example with a potential barrier (plot omitted), notice that each lower eigenfunction is localized on one or the other side of the barrier, whereas the higher eigenfunctions are not localized. Inside the barrier, the eigenfunction is nonzero -- this is quantum tunnelling -- but its amplitude decreases exponentially with distance inside the barrier.

One can learn about the physics of these quantum mechanical problems in innumerable books and other sources. One reference we have consulted is the textbook by Robinett [1].

[1] Richard W. Robinett, Quantum Mechanics, 2nd ed., Oxford University Press, 2006.
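Chebfun is a MATLAB system, so the session itself is not reproduced here. As an independent cross-check of the quoted spacing $h[1, 3, 5, \dots]$, here is a small finite-difference sketch in Python (my own construction, not Chebfun; the box size and grid resolution are arbitrary choices):

```python
# Finite-difference eigenvalues of L u = -h^2 u'' + x^2 u, truncating the
# real line to an (assumed) "effectively infinite" box [-8, 8].
import numpy as np
from scipy.linalg import eigh_tridiagonal

h, L, N = 0.1, 8.0, 4000
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

diag = 2.0 * h**2 / dx**2 + x**2         # main diagonal of -h^2 D2 + V(x)
off = np.full(N - 1, -(h**2) / dx**2)    # sub/super-diagonals from D2

vals = eigh_tridiagonal(diag, off, eigvals_only=True,
                        select="i", select_range=(0, 5))
print(np.round(vals, 3))  # ~ [0.1, 0.3, 0.5, 0.7, 0.9, 1.1], i.e. h*[1, 3, 5, ...]
```

The low-lying eigenfunctions decay so quickly that truncating the line to a box introduces negligible error at this resolution.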
CommonCrawl
A discrete signal or discrete-time signal is a time series consisting of a sequence of quantities.

Are there alternatives to the bilinear transform?
Is wavelet analysis useful for 1D signals?
What is the difference between linear and non-linear filters?
Why is convolution required, or what is the philosophy behind convolution?
Where is the flaw in this derivation of the DTFT of the unit step sequence $u[n]$?
Is there an algorithm to compute the phase for a single frequency?
What is meant by a 20 dB signal-to-noise ratio?
I have just started studying Digital Signal Processing. Can someone explain in simple words what the difference is between a discrete signal and a digital signal? Thanks in advance!
Is there a way to obtain the impulse response of a discrete system by just knowing its response to the discrete unit step function? (A short sketch follows this list.)
I asked myself how to compute dBFS (dB full scale) from a sample value between 1 and -1, and in general. (See the sketch below.)
Scalogram (and related nomenclatures) for DWT?
How does the $\mathcal Z$-transform's "region of convergence" work?
How can I automatically classify peaks of signals measured at different positions?
What is the difference between the normalized peak of correlation and the peak of correlation divided by its average?
How can I decompose a signal into square waves?
Why do I get this crackling noise on zeroing out the high frequencies?
What are high frequencies and low frequencies in a signal?
Open-access signal processing journal?
What is the effect of aliasing on the magnitude of the autocorrelation?
Compress a signal by storing the signal diff instead of the actual samples - is there such a thing?
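Two of the questions above have compact answers. For a discrete LTI system, the unit impulse is the first difference of the unit step, $\delta[n] = u[n] - u[n-1]$, so by linearity the impulse response is the first difference of the step response; and a full-scale-normalized sample $x \in [-1, 1]$ sits at $20\log_{10}|x|$ dBFS, with $0$ dBFS at $|x| = 1$. A minimal Python sketch (the step-response values are made up for illustration):

```python
import numpy as np

# Impulse response from step response: delta[n] = u[n] - u[n-1]
# implies h[n] = s[n] - s[n-1] for any LTI system.
s = np.array([0.5, 0.75, 0.875, 0.9375])  # hypothetical measured step response
h = np.diff(s, prepend=0.0)               # recovered impulse response

# dBFS of a sample value in [-1, 1]:
x = 0.25
level_dbfs = 20.0 * np.log10(abs(x))      # about -12.04 dBFS
print(h, level_dbfs)
```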
CommonCrawl
It is well-known that the commutator of the Riesz transform (the Hilbert transform in dimension 1) and a symbol $b$ is bounded on $L^2(\mathbb R^n)$ if and only if $b$ is in the BMO space BMO$(\mathbb R^n)$ (Coifman-Rochberg-Weiss). Inspired by this result, it is natural to ask whether the same holds for the commutator of the Riesz transform on the Heisenberg groups $\mathbb H^n$. Note that in the setting of several complex variables, the Heisenberg group $\mathbb H^n$ is the boundary of the Siegel upper half space; these are holomorphically equivalent to the unit sphere and the unit ball in $\mathbb C^n$, respectively, and hence the role of the Riesz transform on $\mathbb H^n$ is similar to that of the Hilbert transform on the real line $\mathbb R$. We answer this question in the affirmative in a more general setting: stratified Lie groups $\mathcal G$, which are more general than the Heisenberg group $\mathbb H^n$. We first obtain a suitable version of a lower bound for the kernel of the Riesz transform on $\mathcal G$, and then establish a characterisation of the boundedness of the Riesz commutator: the commutator of the Riesz transform and a symbol $b$ is bounded on $L^2(\mathcal G)$ if and only if $b$ is in the BMO space BMO$(\mathcal G)$ studied by Folland and Stein. We also establish characterisations of the endpoint boundedness of Riesz commutators, including the weak type $(1,1)$, $H^1(\mathcal G)\to L^1(\mathcal G)$, and $L^\infty(\mathcal G)\to$ BMO$(\mathcal G)$, where $H^1(\mathcal G)$ is the Hardy space studied by Folland and Stein. Further ingredients are a suitable curl operator that we introduced on $\mathcal G$, and estimates for singular integrals with non-smooth kernels (beyond the standard frame of Calderón-Zygmund operators). The results we provide here are based on recent joint works with Xuan Thinh Duong, Michael Lacey, Hong-Quan Li and Brett D. Wick.
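For background (a standard definition in this literature, not a claim specific to the abstract above): given a singular integral operator $R$, such as a Riesz transform, and a function $b$ acting by pointwise multiplication, the commutator is
$$ [b, R]f \;=\; b\,(Rf) \;-\; R(bf), $$
and the Coifman-Rochberg-Weiss theorem says this operator is bounded on $L^2(\mathbb R^n)$ exactly when $b \in \mathrm{BMO}(\mathbb R^n)$.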
CommonCrawl
Abstract: We construct new families of smooth Fano fourfolds with Picard rank 1 which contain cylinders, i.e., Zariski open subsets of the form $Z\times \mathbb{A}^1$, where $Z$ is a quasiprojective variety. The affine cones over such fourfolds admit effective $\mathbb{G}_a$-actions. Similar constructions of cylindrical Fano threefolds and fourfolds were carried out previously in [KPZ11, KPZ14, PZ15].
CommonCrawl
Rafael López, Zeljka Milin Sipus, Ljiljana Primorac Gajcic, Ivana Protrka, Harmonic evolute of B-scrolls with constant mean curvature in Lorentz-Minkowski space, International Journal of Geometric Methods in Modern Physics, to appear.
Thomas Hasanis, Rafael López, Translation surfaces in Euclidean space with constant Gaussian curvature, Communications in Analysis and Geometry, to appear.
Rafael López, Gabriel Ruiz, Surfaces with a canonical principal direction and prescribed mean curvature, Annali di Matematica Pura ed Applicata, to appear.
Rafael López, The Dirichlet problem on a strip for the $\alpha$-translating soliton equation, Comptes Rendus Mathématique, 356 (2018), 1179-1187.
Rafael López, Compact singular minimal surfaces with boundary, American Journal of Mathematics, to appear.
Rafael López, Uniqueness of critical points and maximum principles of the singular minimal surface equation, Journal of Differential Equations, 266 (2019), no. 7, 3927-3941.
Rafael López, Some geometric properties of translating solitons in Euclidean space, Journal of Geometry, to appear.
Rafael López, Compact $\lambda$-translating solitons with boundary, Mediterranean Journal of Mathematics, to appear.
Rafael López, The one dimensional case of the singular minimal surfaces with density, Geometriae Dedicata, to appear.
Seher Kaya, Rafael López, Solutions of the Björling problem for timelike surfaces in the Lorentz-Minkowski space, Turkish Journal of Mathematics, 42 (2018), 2186-2201.
Rafael López, Matthias Weber, Explicit Björling surfaces with prescribed geometry, Michigan Mathematical Journal, 67 (2018), 561-584.
Antonio Bueno, Rafael López, Translation surfaces of linear Weingarten type, An. Stiint. Univ. Al. I. Cuza Iasi. Mat. (N.S.), 64 (2018), 151-160.
Rafael López, Seher Kaya, On the duality between rotational minimal surfaces and maximal surfaces, Journal of Mathematical Analysis and Applications, 458 (2018), no. 1, 345-360.
Rafael López, Constant mean curvature hypersurfaces in the steady state space: a survey, in VIII International Meeting on Lorentzian Geometry (GELOMA, Málaga, September 20-23, 2016), M. A. Cañadas-Pinedo, J. L. Flores, F. J. Palomo (eds.), Springer Proceedings in Mathematics and Statistics, Berlin, 2017, pp. 185-212.
Shintaro Akamine, Rafael López, The number of catenoids connecting two coaxial circles in Lorentz-Minkowski space, Journal of Geometry and Physics, 121 (2017), 386-395.
Rafael López, Óscar Perdomo, Minimal translation surfaces in Euclidean space, The Journal of Geometric Analysis, 27 (2017), 2926-2937.
David Brander, Rafael López, Remarks on the boundary curve of a constant mean curvature topological disc, Complex Variables and Elliptic Equations, 62 (2017), 1037-1043.
Rafael López, Stability and bifurcation of a capillary surface on a cylinder, SIAM Journal on Applied Mathematics, 77 (2017), no. 1, 108-127.
Rafael López, A necessary condition for the existence of a doubly connected minimal surface, Annali di Matematica Pura ed Applicata, 196 (2017), 1513-1524.
Rafael López, Spacelike graphs of prescribed mean curvature in the steady state space, Advanced Nonlinear Studies, 16 (2016), no. 4, 807-819.
Rafael López, ¿Cómo un topólogo clasifica las letras del alfabeto?, Miscelánea Matemática, 61 (2016), 57-73.
Rafael López, Separation of variables in equation of mean curvature type, Proceedings of the Royal Society of Edinburgh: Section A Mathematics, 146 (2016), no. 5, 1017-1035.
Rafael López, Capillary surfaces modeling liquid drops on wetting phenomena, MI Lecture Notes, 65 (2015), 41-43.
Rafael López, A characterization of hyperbolic caps in the steady state space, Journal of Geometry and Physics, 98 (2015), 214-226.
Rafael López, Marilena Moruz, Translation and homothetical surfaces in Euclidean space with constant curvature, Journal of the Korean Mathematical Society, 52 (2015), no. 3, 523-535.
Alma L. Albujer, Magdalena Caballero, Rafael López, Convexity of the solutions to the constant mean curvature spacelike surface equation in the Lorentz-Minkowski space, Journal of Differential Equations, 258 (2015), no. 7, 2364-2374.
Mohamed Jleli, Rafael López, Bifurcating nodoids in hyperbolic space, Advanced Nonlinear Studies, 15 (2015), 849-865.
Rafael López, Gabriel Ruiz, A characterization of isoparametric surfaces in R^3 via normal surfaces, Results in Mathematics, 67 (2015), 87-94.
Rafael López, Differential geometry of curves and surfaces in Lorentz-Minkowski space, International Electronic Journal of Geometry, 7 (2014), 44-107.
Rafael López, Capillary surfaces with free boundary in a wedge, Advances in Mathematics, 262 (2014), 476-483.
Rafael López, Esma Demir, Helicoidal surfaces in Minkowski space with constant mean curvature and constant Gauss curvature, Central European Journal of Mathematics, 12 (2014), no. 9, 1349-1361.
Rafael López, Invariant surfaces in Sol_3 with constant mean curvature and their computer graphics, Advances in Geometry, 14 (2014), no. 1, 31-48.
Rafael López, Juncheol Pyo, Capillary surfaces in a cone, Journal of Geometry and Physics, 76 (2014), 256-262.
Rafael López, Juncheol Pyo, Capillary surfaces of constant mean curvature in a right solid cylinder, Mathematische Nachrichten, 287 (2014), no. 11-12, 1312-1319.
Rafael López, Marian I. Munteanu, Invariant surfaces in the homogeneous space Sol with constant curvature, Mathematische Nachrichten, 287 (2014), no. 8-9, 1013-1024.
Rafael López, Ana Nistor, Surfaces in Sol_3 space foliated by circles, Results in Mathematics, 64 (2013), no. 3-4, 319-330.
Rafael López, Juncheol Pyo, Constant mean curvature surfaces with boundary on a sphere, Applied Mathematics and Computation, 220 (2013), 316-323.
Rafael López, Level curves of constant mean curvature graphs over convex domains, Journal of Differential Equations, 254 (2013), no. 7, 3081-3087.
Rafael López, Bifurcation of cylinders for wetting and dewetting models with striped geometry, SIAM Journal on Mathematical Analysis, 44 (2012), 946-965.
Ahmad Ali, Rafael López, Melih Turgut, k-type partially null and pseudo null slant helices in Minkowski 4-space, Mathematical Communications, 17 (2012), no. 1, 93-103.
Rafael López, Marian I. Munteanu, Minimal translation surfaces in Sol_3, Journal of the Mathematical Society of Japan, 64 (2012), no. 3, 985-1003.
Rafael López, Marian I. Munteanu, Surfaces with constant mean curvature in Sol geometry, Differential Geometry and its Applications, 29 (2011), Supplement 1, S238-S245.
Özgür Boyacioglu Kalkan, Rafael López, Derya Saglam, Non-degenerate surfaces of revolution in Minkowski space that satisfy the relation aH+bK=c, Acta Mathematica Universitatis Comenianae, LXXX (2011), no. 2, 201-212.
Özgür Boyacioglu Kalkan, Rafael López, Derya Saglam, Linear Weingarten surfaces foliated by circles in Minkowski space, Taiwanese Journal of Mathematics, 15 (2011), no. 5, 1897-1917.
Rafael López, Marian Munteanu, Constant angle surfaces in Minkowski space, Bulletin of the Belgian Mathematical Society - Simon Stevin, 18 (2011), 271-286.
Özgür Boyacioglu Kalkan, Rafael López, Spacelike surfaces in Minkowski space satisfying a linear relation between their principal curvatures, Differential Geometry-Dynamical Systems, 13 (2011), 120-129.
Rafael López, Minimal translation surfaces in hyperbolic space, Beiträge zur Algebra und Geometrie (Contributions to Algebra and Geometry), 52 (2011), no. 1, 105-112, DOI: 10.1007/s13366-011-0008-z.
Ahmad Ali, Rafael López, Slant helices in Minkowski space E_1^3, Journal of the Korean Mathematical Society, 48 (2011), 159-167.
Rafael López, Surfaces with constant mean curvature in Euclidean space, International Electronic Journal of Geometry, 3 (2010), no. 2, 67-101.
Ahmad T. Ali, Rafael López, Timelike B2-slant helices in Minkowski space E_1^4, Archivum Mathematicum, 46 (2010), 39-46.
Ahmad T. Ali, Rafael López, Slant helices in Euclidean 4-space E^4, Journal of the Egyptian Mathematical Society, 18 (2010), no. 2, 223-230.
Rafael López, A new proof of a characterization of small spherical caps, Results in Mathematics, 55 (2009), 427-436.
Rafael López, Parabolic Weingarten surfaces in hyperbolic space, Publicationes Mathematicae Debrecen, 74 (2009), no. 1-2, 59-80.
Rafael López, A comparison result for radial solutions of the mean curvature equation, Applied Mathematics Letters, 22 (2009), 860-864.
Rafael López, Parabolic surfaces in hyperbolic space with constant Gaussian curvature, Bulletin of the Belgian Mathematical Society - Simon Stevin, 16 (2009), no. 2, 337-349.
Rafael López, Stationary bands in three-dimensional Minkowski space, Osaka Journal of Mathematics, 46 (2009), no. 1, 1-20.
Rafael López, Linear Weingarten surfaces in Euclidean and hyperbolic space, Matemática Contemporanea, 35 (2008), 95-113.
Rafael López, Stationary surfaces in Lorentz-Minkowski space, Proceedings of the Royal Society of Edinburgh: Section A Mathematics, 138 (2008), no. 5, 1067-1096.
Rafael López, Rotational linear Weingarten surfaces of hyperbolic type, Israel Journal of Mathematics, 167 (2008), 283-301.
Rafael López, Special Weingarten surfaces foliated by circles, Monatshefte für Mathematik, 154 (2008), no. 4, 289-302.
Rafael López, An exterior boundary value problem in Minkowski space, Mathematische Nachrichten, 281 (2008), no. 8, 1169-1181.
Rafael López, On linear Weingarten surfaces, International Journal of Mathematics, 19 (2008), no. 4, 439-448.
Rafael López, Parabolic surfaces in hyperbolic space with constant curvature, in Pure and Applied Differential Geometry, PADGE 2007 (F. Dillen, I. Van de Woestyne, eds.), Shaker Verlag, Aachen, 2007, pp. 162-170.
Rafael López, On the existence of spacelike constant mean curvature surfaces spanning two circular contours in Minkowski space, Journal of Geometry and Physics, 57 (2007), no. 11, 2178-2186.
Rafael López, Capillary channels in a gravitational field, Nonlinearity, 20 (2007), no. 7, 1573-1600.
Rafael López, On uniqueness of graphs with constant mean curvature, Journal of Mathematics of Kyoto University, 46 (2006), no. 4, 771-787.
Rafael López, Superficies con curvatura media constante cuyo borde es un círculo, Divulgaciones Matemáticas, 14 (2006), no. 2, 121-140.
Rafael López, Spacelike hypersurfaces with free boundary in the Minkowski space under the effect of a timelike potential, Communications in Mathematical Physics, 266 (2006), no. 2, 331-342.
Rafael López, Symmetry of stationary hypersurfaces in hyperbolic space, Geometriae Dedicata, 119 (2006), no. 1, 35-47.
Rafael López, A characterization of hemispheres, Differential Geometry and its Applications, 24 (2006), no. 4, 398-402.
Rafael López, Wetting phenomena and constant mean curvature surfaces with boundary, Reviews in Mathematical Physics, 17 (2005), no. 7, 769-792.
Rafael López, Area monotonicity for spacelike surfaces with constant mean curvature, Journal of Geometry and Physics, 52 (2004), no. 3, 353-363.
Rafael López, Some a priori bounds for solutions of the constant Gauss curvature equation, Journal of Differential Equations, 194 (2003), no. 1, 185-197.
Rafael López, A note on radial graphs with constant mean curvature, Manuscripta Mathematica, 110 (2003), no. 1, 45-54.
Rafael López, Surfaces of constant Gauss curvature in Lorentz-Minkowski 3-space, Rocky Mountain Journal of Mathematics, 33 (2003), no. 3, 971-993.
Rafael López, Cyclic hypersurfaces of constant curvature, Advanced Studies in Pure Mathematics 34 (2002), Minimal Surfaces, Geometric Analysis and Symplectic Geometry, 185-199.
Rafael López, An existence theorem of constant mean curvature graphs in Euclidean space, Glasgow Mathematical Journal, 44 (2002), no. 3, 455-461.
Rafael López, Constant mean curvature graphs in a strip of R^2, Pacific Journal of Mathematics, 206 (2002), no. 2, 359-374.
Rafael López, How to use MATHEMATICA to find cyclic surfaces of constant curvature in Lorentz-Minkowski space, in Global Differential Geometry: The Mathematical Legacy of Alfred Gray (M. Fernández, J. Wolf, eds.), Contemporary Mathematics, 288, American Mathematical Society, Providence, 2001, pp. 371-375.
Rafael López, Graphs of constant mean curvature in hyperbolic space, Annals of Global Analysis and Geometry, 20 (2001), no. 1, 59-75.
Rafael López, Cyclic surfaces of constant Gauss curvature, Houston Journal of Mathematics, 27 (2001), no. 4, 799-805.
Rafael López, Constant mean curvature graphs on unbounded convex domains, Journal of Differential Equations, 171 (2001), no. 1, 54-62.
Francisco López, Rafael López, Rabah Souam, Maximal surfaces of Riemann type in Lorentz-Minkowski space L^3, Michigan Mathematical Journal, 47 (2000), no. 3, 469-497.
Rafael López, Timelike surfaces with constant mean curvature in Lorentz three-space, Tohoku Mathematical Journal, 52 (2000), no. 4, 515-532.
Rafael López, Hypersurfaces with constant mean curvature in hyperbolic space, Hokkaido Mathematical Journal, 29 (2000), no. 2, 229-245.
Rafael López, On uniqueness of constant mean curvature surfaces with planar boundary, in New Developments in Differential Geometry (Budapest, 1996), Kluwer Acad. Publ., Dordrecht, 1999, pp. 235-242.
Rafael López, Constant mean curvature hypersurfaces foliated by spheres, Differential Geometry and its Applications, 11 (1999), no. 3, 245-256.
Rafael López, Constant mean curvature surfaces foliated by circles in Lorentz-Minkowski space, Geometriae Dedicata, 76 (1999), no. 1, 81-95.
Rafael López, Constant mean curvature surfaces with boundary in Euclidean three-space, Tsukuba Journal of Mathematics, 23 (1999), no. 1, 27-36.
Rafael López, Constant mean curvature surfaces bounded by a circle, Rocky Mountain Journal of Mathematics, 29 (1999), no. 3, 971-978.
Luis Alías, Rafael López, Bennett Palmer, Stable constant mean curvature surfaces with circular boundary, Proceedings of the American Mathematical Society, 127 (1999), no. 4, 1195-1200.
Rafael López, Sebastián Montiel, Existence of constant mean curvature graphs in hyperbolic space, Calculus of Variations and Partial Differential Equations, 8 (1999), no. 2, 177-190.
Rafael López, Constant mean curvature surfaces with boundary in hyperbolic space, Monatshefte für Mathematik, 127 (1999), no. 2, 155-169.
Luis Alías, Rafael López, José Pastor, Compact spacelike surfaces with constant mean curvature in the Lorentz-Minkowski 3-space, Tohoku Mathematical Journal, 50 (1998), no. 4, 491-501.
Rafael López, Surfaces of constant mean curvature with boundary in a sphere, Osaka Journal of Mathematics, 34 (1997), no. 3, 573-577.
Rafael López, A note on H-surfaces with boundary, Journal of Geometry, 60 (1997), no. 1-2, 80-84.
Rafael López, Recientes avances en superficies de curvatura media constante con frontera plana, Publicaciones del Departamento de Matemáticas de la Universidad de Murcia, 19 (1997), 1-29.
Rafael López, Surfaces of constant mean curvature bounded by convex curves, Geometriae Dedicata, 66 (1997), no. 3, 255-263.
Rafael López, Surfaces of constant mean curvature bounded by two planar curves, Annals of Global Analysis and Geometry, 15 (1997), no. 3, 201-210.
J. M. Sullivan, F. Morgan, et al., Open problems in soap bubble geometry, International Journal of Mathematics, 7 (1996), no. 6, 833-842.
Rafael López, Sebastián Montiel, Constant mean curvature surfaces with planar boundary, Duke Mathematical Journal, 85 (1996), no. 3, 583-604.
Rafael López, Constant mean curvature surfaces with boundary, Abstracts of Papers Presented to the American Mathematical Society, 16 (1995), 648.
Rafael López, Sebastián Montiel, Constant mean curvature discs with bounded area, Proceedings of the American Mathematical Society, 123 (1995), no. 5, 1555-1558.
CommonCrawl
The instructional materials reviewed for Common Core Coach Suite Grade 5 do not meet the expectations for alignment to the CCSSM. In Gateway 1, the instructional materials partially meet the expectations for focus and coherence: they meet the expectations for focus, but they do not meet the expectations for coherence. In Gateway 2, the instructional materials do not meet the expectations for rigor and the mathematical practices: they partially meet the expectations for rigor and balance, but they do not meet the expectations for practice-content connections. Since the materials do not meet the expectations for alignment to the CCSSM, they were not reviewed for usability in Gateway 3.

The instructional materials reviewed for Common Core Coach Suite Grade 5 partially meet the expectations for focus and coherence in Gateway 1. The instructional materials do not assess topics before the grade level in which the topic should be introduced, and the materials do spend at least 65% of instructional time on the major work of the grade. The instructional materials do not meet the expectations for being coherent and consistent with the Standards, as they only partially have: supporting content that enhances focus and coherence by engaging students in the major work of the grade; consistency with the progressions in the Standards; and coherence through connections at a single grade.

The instructional materials reviewed for Common Core Coach Suite Grade 5 meet the expectations for not assessing topics before the grade level in which the topic should be introduced, and they meet the expectations for assessing grade-level content. Most of the assessments include material that is appropriate for Grade 5, although some content from future grades is assessed. In the instances where future content is assessed, the items could be easily omitted or modified by a teacher without impacting the structure and grade-level content of the overall assessment.

The instructional materials reviewed for Common Core Coach Suite Grade 5 meet the expectations for students and teachers using the materials as designed, devoting the large majority of class time to the major work of the grade.

The instructional materials reviewed for Common Core Coach Suite Grade 5 meet expectations for spending a majority of instructional time on major work of the grade. Overall, approximately 71 percent of instructional time is spent on major work. Common Core Coach contains approximately 20 of 28 lessons focused on major work or supporting work connected to the major work of the grade (71 percent). In Common Core Coach, approximately 2450 of 3380 minutes (about 72 percent) are spent on major work or work that supports major work. In Common Core Support Coach, approximately 2200 of 2960 minutes (about 75 percent) are spent on major work or work that supports major work. In Common Core Performance Coach, approximately 2000 of 2590 minutes (about 77 percent) are spent on major work or work that supports major work. It is important to note that Common Core Support Coach does not contain lessons that address two standards that are major work of the grade (5.NBT.4 and 5.MD.5c), so they are unaccounted for in the calculations of instructional time.

The instructional materials reviewed for Common Core Coach Suite Grade 5 do not meet the expectations for being coherent and consistent with the Standards. The instructional materials only partially have: supporting content that enhances focus and coherence by engaging students in the major work of the grade; consistency with the progressions in the Standards; and coherence through connections at a single grade.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that supporting work enhances focus and coherence simultaneously by engaging students in the major work of the grade. Throughout the Common Core Suite of books, standards are mostly taught in isolation from other standards. Each lesson focuses on one standard without referencing connections to major work. Additionally, the teacher edition does not provide explicit connections from supporting work to major work; however, some natural connections are made.

Converting like measurements (5.MD.1) is addressed in Common Core Coach Lesson 21, Common Core Support Coach Lesson 15, and Common Core Performance Coach Lesson 23. However, no lesson connects this work to multiplying and dividing fractions (5.NF.4). Making line plots to display data sets (5.MD.2) is addressed in Common Core Coach Lesson 22, Common Core Support Coach Lesson 16, and Common Core Performance Coach Lesson 24. There are no connections to using equivalent fractions to add and subtract fractions (5.NF.1) or to multiplying and dividing fractions (5.NF.B).

Common Core Coach Lesson 1 Evaluating Numerical Expressions and Lesson 2 Writing and Interpreting Numerical Expressions address using parentheses, brackets, or braces to write, interpret, and evaluate numerical expressions (5.OA.A). Both lessons connect the order of operations to understandings of operations with whole numbers (5.NBT.B). Common Core Performance Coach Lesson 1 Writing Numerical Expressions and Lesson 2 Evaluating Numerical Expressions support and extend understandings of these concepts.

The instructional materials for Common Core Coach Suite Grade 5 do not meet the expectation that the amount of content designated is viable for one school year in order to foster coherence between grades. The materials consist of three components: Common Core Coach, Common Core Support Coach, and Common Core Performance Coach. These three together make up the Common Core Coach Suite.

Common Core Coach contains the core instruction and practice elements of the suite. There are 28 lessons spread across the five domains, each designed to be taught over three to six days, for a total of 132 instructional days. Lessons are broken into smaller components scheduled to last 20 to 30 minutes each day. Additionally, each domain contains a Domain Assessment, given over two 40-minute periods, for an additional ten days.

Common Core Support Coach contains scaffolded lessons for students struggling with concepts taught during core instruction. There are 20 lessons, each designed to be supplemental by making explicit connections between prior knowledge and current grade-level concepts. Each lesson is designed to be taught over three to six days, for 10 to 20 minutes following the corresponding core instruction. Additionally, there are two Practice Test Assessments, given over two days at the end of the year.

Common Core Performance Coach extends skill development for on-level students and provides practice with a variety of item types for reinforcement and test preparation. There are 30 lessons, each designed to be taught over three to six days, for 10 to 20 minutes following the corresponding core instruction. Additionally, each domain contains a Domain Review, completed over two days as time permits, for ten days.

In Common Core Coach Lesson 14, Problem Solving: Adding and Subtracting Fractions and Mixed Numbers, students work through four guided examples and five practice problems over five days to complete the lesson, at 20 minutes per session (100 minutes total). This is an insufficient number of problems and an insufficient amount of practice for the lesson. Adding the differentiation components of Common Core Support Coach or Common Core Performance Coach accounts for another 20 minutes of instruction over five days (another 100 minutes). Again, these two components consist of guided examples and little practice with grade-level work, so the lessons in these components would add little instructional practice.

Common Core Coach Lesson 26 Graphing Points on the Coordinate Plane spans four days (80 minutes) for three scaffolded examples, a "Mystery Graph" problem, and 26 practice problems.
Common Core Support Coach Lesson 1 Analyzing Numerical Patterns spans four days (80 minutes) of supplemental learning through five scaffolded examples and 24 practice problems.
Common Core Support Coach Lesson 15 Converting Measurements spans five days (100 minutes) of supplemental learning using six scaffolded examples and 32 practice problems.
Common Core Performance Coach Lesson 3 Relating Numerical Expressions spans four days (80 minutes) for extending understandings of numerical expressions using three scaffolded examples, one coached example, and 10 practice problems.
Common Core Performance Coach Lesson 10 Dividing Whole Numbers spans six days (120 minutes) for extending understandings of dividing whole numbers through four scaffolded examples, one coached example, and 11 practice problems.
Common Core Support Coach Practice Tests take place over two days (80 minutes), with teachers selecting key questions for students based on need; neither Practice Test is given in its entirety.
Common Core Performance Coach Domain Reviews and Performance Tasks take place over two days (80 minutes) for 29-35 problems and one performance task per assessment.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that the materials are consistent with the progressions in the standards. Two components of the suite develop according to the grade-by-grade progressions in the standards, and most content from prior or future grades is clearly identified and connected to current grade-level work. However, the materials do not provide students with extensive work with grade-level problems, and the materials do not meet the full depth of the standards.

In Domain 2, Number and Operations in Base Ten, Grade 4 Lesson 2 Problem Solving: Using Multiplication and Division to Make Comparisons (4.OA.2) connects with Grade 5 Lesson 4 Multiplying Whole Numbers (5.NBT.5), which in turn connects to Grade 6 Lesson 4 Problem Solving: Unit Rates (6.RP.3b), Grade 6 Lesson 27 Finding the Area of Triangles and Quadrilaterals (6.G.1), and Grade 6 Lesson 28 Finding the Volume of Rectangular Prisms (6.G.2). In Domain 4, Measurement and Data, Grade 4 Lesson 10 Multiplying Whole Numbers (4.NBT.5) connects to Grade 5 Lesson 24 Finding the Volume of Rectangular Prisms (5.MD.5a, 5.MD.5b) and progresses to Grade 6 Lesson 28 Finding the Volume of Rectangular Prisms (6.G.2).
Powers of Ten, Lesson 2 supports students in multiplying and dividing by powers of ten (Common Core Coach Lesson 4) (5.NBT.2). The "Plug In" draws on previous understandings of place value (4.NBT.1, 5.NBT.1), the "Power Up" reviews multiplying by multiples of ten (3.NBT.3, 4.NBT.5), and the "Ready to Go" practice gives students opportunities to use whole-number exponents to multiply and divide by powers of ten (5.NBT.2).

Dividing Whole Numbers, Lesson 6 supports students in dividing multi-digit numbers with up to four-digit dividends and two-digit divisors (Common Core Coach Lesson 9) (5.NBT.6). The "Plug In" reviews using place value to multiply and divide (4.NBT.6), the "Power Up" reviews multiplying multi-digit numbers (4.NBT.5), and the "Ready to Go" practice provides opportunities for students to apply knowledge of place value and the relationship between multiplication and division to divide multi-digit numbers (5.NBT.6).

Multiplying Fractions, Lesson 10 supports students in using models to multiply fractions in order to solve real-world problems (5.NF.4a). The "Plug In" reviews understanding fractions as multiples (4.NF.4a), the "Power Up" reviews multiplying fractions by whole numbers (4.NF.4b), and the "Ready to Go" practice provides opportunities for students to use models to multiply fractions (5.NF.5a).

In Common Core Performance Coach Multiplying Fractions, Lesson 17, students work with the grade-level standard for multiplying fractions (5.NF.4a). Guidance is not provided within Common Core Performance Coach for students or teachers as to how this connects to previous understandings of multiplying fractions by a whole number (4.NF.4) and extends to future learning involving dividing fractions in Grade 6 (6.NS.1).

Dividing Whole Numbers, Lesson 9 addresses 5.NBT.6, a major standard for Grade 5. In this lesson, students use a place-value model to find whole-number quotients and have no opportunities to use other strategies to find whole-number quotients, so the lesson does not meet the full intent of this standard. Recognizing Volume as Additive, Lesson 25 teaches the standard algorithm for multiplication but does not provide students opportunities to relate volume to the operations of multiplication and addition in real-world problems (5.MD.5). Additionally, the nine practice items do not provide students extensive work with grade-level problems.

Common Core Support Coach Adding and Subtracting Fractions with Unlike Denominators, Lesson 8 addresses two standards (5.NF.1 and 5.NF.2) and contains 17 practice problems. Common Core Support Coach Measuring Volume of Rectangular Prisms, Lesson 17 addresses all of 5.MD.3 (5.MD.3a, 5.MD.3b, and 5.MD.3c) and contains 14 practice problems. Common Core Performance Coach Dividing Decimals, Lesson 13 is the only lesson that attends to 5.NBT.7, major work for Grade 5; it presents 10 practice problems.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that materials foster coherence through connections at a single grade, where appropriate and required by the standards. Materials are clearly shaped by domain headings but do not connect two or more domains or clusters. For example:
Domain 1: Operations and Algebraic Thinking: Lesson 2 Writing and Interpreting Numerical Expressions (5.OA.2).
Domain 3: Number and Operations - Fractions: Lesson 16 Multiplying Fractions (5.NF.4).
Domain 4: Measurement and Data: Lesson 23 Understanding and Measuring Volume (5.MD.3).
Common Core Coach Lesson 21 Converting Units of Measure to Solve Problems, Common Core Support Coach Lesson 15 Converting Measurements, and Common Core Performance Coach Lesson 23 Converting Measurement Units do not connect 5.NBT.5 and 5.NBT.6 to place value (5.NBT.1). Common Core Coach Lesson 24 Finding Volume of Rectangular Prisms, Lesson 25 Recognizing Volume as Additive, Common Core Support Coach Lesson 18 Formulas for Volumes of Rectangular Prisms, and Common Core Performance Coach Lesson 26 Volume of Rectangular Prisms all address finding the volume of a right rectangular prism (5.MD.5a) but do not connect to multiplying fractions (5.NF.4).

The instructional materials reviewed for Common Core Coach Suite Grade 5 do not meet the expectations for rigor and mathematical practices. The instructional materials partially reflect the balances in the Standards and help students meet the Standards' rigorous expectations by developing conceptual understanding, procedural skill and fluency, and application, but the instructional materials do not meet the expectations for meaningfully connecting the Standards for Mathematical Content and the Standards for Mathematical Practice.

The instructional materials reviewed for Common Core Coach Suite Grade 5 partially meet the expectations for reflecting the balances in the Standards and helping students meet the Standards' rigorous expectations by developing conceptual understanding, procedural skill and fluency, and application. The instructional materials partially attend to each aspect of rigor, and they also partially attend to balance among the three aspects of rigor.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that the materials develop conceptual understanding of key mathematical concepts, especially where called for in specific standards or cluster headings. Common Core Performance Coach Lesson 2 Evaluating Numerical Expressions addresses order of operations (5.OA.1); however, students are given the mnemonic "PEMDAS" and do not have opportunities to learn how the operations are related to one another. Students are given few opportunities to demonstrate conceptual understanding independently. During independent practice, students solve problems similar to the examples from class instruction, with slight differences in context and/or numbers. Students rarely create visual representations on their own or explain concepts to demonstrate understanding. There are Practice questions with labels such as "Write," "Draw," or "Prove," where students explain mathematical concepts, but these questions elicit students' ability to restate the mathematical ideas addressed by the teacher during class instruction. The materials address conceptual understanding standards in a proceduralized way and do not enhance students' ability to form a conceptual understanding of major work within the grade.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that they attend to those standards that set an expectation of procedural skill and fluency. Specific lessons in the suite address the fluency standards in the CCSSM. All lessons in the suite provide students opportunities to use computation skills. Common Core Coach lessons conclude with two pages of Practice problems, Common Core Support Coach lessons conclude with three practice problems, and Common Core Performance Coach lessons conclude with independent practice problems.
Additional fluency practice pages are found in Appendix A of the Common Core Coach Teacher's Guide. Since very few lessons are specifically identified as addressing fluency standards in the suite, there are few opportunities for students to practice fluency skills throughout the entire year. Common Core Support Coach and Common Core Performance Coach do not identify specific fluency components; lessons in these components are developed around the fluency standards. However, additional procedural practice is not provided outside of those specific lessons. In the Common Core Coach Teacher's Guide, teachers are instructed to assign various pages for fluency practice throughout the year. No further instructions are given for demonstrating mastery of procedural skill and/or fluency on these pages.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that the materials are designed so that teachers and students spend sufficient time working with engaging applications of the mathematics, without losing focus on the major work of the grade. Engaging applications include single- and multi-step problems, routine and non-routine, presented in a context in which the mathematics is applied. In the Common Core Coach Teacher's Edition, the Table of Contents denotes lessons that apply skills to real-world problems. Common Core Support Coach does not label specific lessons as application or provide performance tasks that apply skills to real-world situations. In Common Core Performance Coach, there is one Performance Task at the end of each domain that applies concepts and skills to real-world problems. Non-routine problems are addressed in the Performance Tasks, and there are five Performance Tasks throughout the year.

Common Core Coach contains 28 lessons; 11 are identified as "Problem Solving." Each "Problem Solving" lesson follows a specific, scaffolded procedure: "Read, Plan, Solve, & Check." Students are then given five practice problems to solve, each with diminishing scaffolds. In Lesson 7 Rounding Decimals Using Place Value, students use number lines and place value to round. There is one practice problem, and students are given a place-value chart along with blank spaces to fill in to determine how to round the number. During independent practice, students solve 22 procedural fluency problems related to rounding and four routine story problems (5.NBT.4).

In Common Core Support Coach, lessons are scaffolded for students and include a checklist for students to follow when solving problems. Application standards and clusters are therefore not presented appropriately. For example, Lesson 14 Dividing Unit Fractions and Whole Numbers (5.NF.7a, 5.NF.7b, and 5.NF.7c) contains nine questions within the main lesson, four of which are scaffolded and do not provide students opportunities to apply an understanding of multiplication. There are an additional three "Ready to Go" problems to solve that contain an example of a procedural process, "Read, Plan, Solve, and Check," with a checklist for students to reference throughout the procedure. Common Core Performance Coach provides three to five worked problems as examples for solving the subsequent routine problems.

The instructional materials for Common Core Coach Suite Grade 5 partially meet expectations that the three aspects of rigor are not always treated together and are not always treated separately.
All three aspects of rigor are present in the program; however, they are mostly treated separately, and there is an emphasis on procedural skill and fluency over the other aspects of rigor. Common Core Coach designates lessons that are specifically identified as fluency, concept, or problem solving (application) lessons. However, the majority of the materials present the mathematics procedurally.

In Common Core Coach Lesson 10 Adding and Subtracting Decimals, students use models and place-value charts to add decimals. As the lesson progresses, students use the traditional algorithm for adding decimals. Students then subtract decimals using place value and the traditional algorithm, but no models are given. Students use the standard algorithm to solve practice problems. In Common Core Performance Coach Lesson 14 Adding and Subtracting Fractions and Mixed Numbers, students use models to add fractions, then use the algorithm for finding common denominators, creating equivalent fractions, and adding them. The lesson encourages students to use the algorithm and reinforces this in practice problems.

The instructional materials reviewed for Common Core Coach Suite Grade 5 do not meet the expectations for meaningfully connecting the Standards for Mathematical Content and the Standards for Mathematical Practice. The instructional materials partially attend to: identifying the mathematical practices and using them to enrich mathematics content; prompting students to construct viable arguments and analyze the arguments of others; assisting teachers in engaging students to construct viable arguments and analyze the arguments of others; and explicitly attending to the specialized language of mathematics.

The instructional materials reviewed for Common Core Coach Suite Grade 5 partially meet expectations that the Standards for Mathematical Practice are identified and used to enrich mathematics content within and throughout the grade level. The Standards for Mathematical Practice (MPs) are identified in the Teacher Editions of all three components of the suite. The MPs are identified throughout the "teacher notes" and are mostly found during the discussion portion of the lessons. The MPs are identified for each lesson, and guidance is given to teachers as to where they are woven into lessons; however, meaningful connections to the mathematical content are missing.

Lesson 2 Writing and Interpreting Numerical Expressions and Lesson 10 Adding and Subtracting Decimals are the two lessons identified as addressing MP2. These two lessons contain three example problems that align to MP2. MP2 is not identified in any other lesson during the year. No MPs are identified for Lesson 14 Problem Solving: Adding and Subtracting Fractions and Mixed Numbers, Lesson 15 Problem Solving: Interpreting Fractions as Division, Lesson 18 Problem Solving: Multiplying Fractions and Mixed Numbers, and Lesson 20 Problem Solving: Dividing with Unit Fractions. MP3 and MP6 are identified in every lesson. In Lesson 3 Reading and Writing Decimals, MPs 1-7 are identified as being present in the lesson. There is no guidance for teachers as to how the seven MPs meaningfully connect to the content within this lesson.

The instructional materials reviewed for Common Core Coach Suite Grade 5 do not meet expectations that the instructional materials carefully attend to the full meaning of each practice standard.

MP1: In Common Core Coach Lesson 19 Dividing with Unit Fractions and Whole Numbers, teachers are instructed to "Encourage students to ask: Into how many 1/4's can 6 be divided? This question may help them think of a word problem." Students do not need to make sense of this problem.

MP2: In Common Core Coach Lesson 2 Writing and Interpreting Numerical Expressions, the teacher is instructed to "Encourage students to include parentheses in the numerical expressions they write. Emphasize that including the parentheses makes it very clear which operation is to be performed first." Students are not independently demonstrating abstract thinking to solve the problem quantitatively but are being told how to approach the problem.

In Lesson 1 Analyzing Numerical Patterns, students discuss additional examples of real-world situations that involve creating number or shape patterns. In Lesson 10 Multiplying Fractions, students discuss additional examples of real-world situations that involve using unit fractions to describe fractions. In Lesson 13 Multiplying Fractions and Mixed Numbers, the Problem Solving section prompts teachers to model the 4-step method to problem solving and point out the multiplication clue words "1/2 as much"; students fill in the blanks on a worksheet and then transition to three basic story problems involving multiplication. In Lesson 17 Measuring Volume of Rectangular Prisms, students discuss additional examples of real objects that are cubes.

MP5: In Common Core Coach Lesson 8 Multiplying Whole Numbers Example 2, the teacher is instructed, "A place-value chart may help students see why the product has a digit in the hundred thousands place." Students are not given the opportunity to choose the model they use but instead are told to use a place-value chart. Additionally, Lesson 24 Finding Volume of Rectangular Prisms addresses MP5 through a discussion of how students could use the volume formula $V = B \times h$. This question does not allow students to make their own choices of an appropriate tool and/or model in order to solve the problem efficiently and accurately.

The instructional materials reviewed for Common Core Coach Suite Grade 5 partially meet expectations that the instructional materials prompt students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics. In Common Core Coach Lesson 12 Dividing Decimals, after completing Example A, teachers are instructed "To reinforce the relationship between multiplication and division, have students use the multiplication sentence provided to check their work." While students are encouraged to discuss their own work, using inverse operations is not an example of constructing an argument or critiquing the work of other students.

The instructional materials reviewed for Common Core Coach Suite Grade 5 partially meet expectations that the instructional materials assist teachers in engaging students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics. There is little teacher guidance on how to lead discussions beyond the provided discussion questions, and there are missed opportunities to guide students in analyzing the arguments of others. Common Core Support Coach provides limited assistance to teachers in engaging students in both constructing viable arguments and analyzing the arguments of others. Most often when MP3 is identified, teachers are directed to "Have partners discuss briefly before group discussion."
Some lessons contain a section titled "Spotlight on Mathematics" that offers additional support for teachers in developing critical thinking by offering probing questions to use with students. In addition, teachers are frequently provided a prompt and sentence starter to assist students. However, these probing questions and prompts do not allow students to construct arguments or critique the reasoning of others.

Lesson 4 Comparing Decimals directs teachers to "Have partners discuss briefly before group discussion. As needed, have students look back at the Instruction box to review expanded form of decimal numbers." This does not explicitly instruct teachers how to support students in constructing arguments about number sense or in analyzing the arguments of others. In Lesson 5 Multiplying Whole Numbers, students discuss with each other before group discussion. In addition, the Teacher Edition provides the prompt "What steps do you take to find the product by using partial products and place value?" and the sentence starter "I find the product by….". However, there are no instructions to have students analyze the reasoning of others.

In Common Core Performance Coach, there are no directions to assist teachers in engaging students in constructing arguments or analyzing the arguments of others. Although discussion questions and journal prompts are provided, there are no prompts for teachers, or example student answers, to guide the teacher. MP3 is addressed within the discussion questions at the beginning of lessons and within the journal prompt that accompanies most lessons. Additional support for the teacher related to MP3 is not present within the lessons.

Lesson 8 Rounding Decimals addresses MP3 with the discussion question, "How can you tell when the rounded decimal will be less than or greater than the given decimal?" No additional support is offered for the teacher, and students do not need to construct an argument or analyze others' arguments based on this prompt. In Lesson 19 Comparing Products to Factors, teachers are prompted to ask, "How can multiplication be used for scaling or resizing?" This question serves as a prompt, and students do not need to construct an argument or analyze the reasoning of others to provide an answer.

The instructional materials reviewed for Common Core Coach Suite Grade 5 partially meet expectations that materials use accurate mathematical terminology. A glossary is available in the student edition of the materials, allowing students to look up definitions for highlighted terms they find in lessons, but these terms are not part of the lesson practice, nor do they appear on assessments in any component of the materials.

In Common Core Coach Lesson 16 Multiplying Fractions, teachers are instructed, "Show how to plot (3, 5) and (5, 3) on the same grid. This will help students understand that the two ordered pairs represent different locations on the coordinate plane." It is unclear how the student demonstrates precise language. Other than highlighting key vocabulary and providing a glossary, there is little additional instruction on using accurate mathematical vocabulary. Common Core Coach Lesson 28 Extending Classification of Two-Dimensional Figures Example A introduces the concept that figures can be classified and sorted according to the types of sides they have. Students are instructed to use the glossary to help review terms such as perpendicular and parallel, but no instruction is given for students to use these mathematical terms throughout the lesson.
Common Core Support Coach Lesson 7 Dividing Decimals directs teachers to "Have partners discuss briefly before group discussion. As needed, direct students' attention to the models in [example] DO A. Have students compare the two sets of models." There is no direction on how students are to use precise language during the discussion.

While Common Core Performance Coach lessons never specifically identify key vocabulary, the ELL Support component does suggest students keep dictionaries and use the Frayer Model to define terms, use them in a sentence, and give examples and non-examples. Sentence frames are also provided for teachers so they can assist students in understanding the concepts of mathematical terms. However, there is little to no instruction for teachers on how any student should use the language of mathematics.

Lesson 12 Multiplying Decimals directs the teacher to ask, "How can you make sure you have placed the decimal point in the correct location of a product when multiplying decimal factors?" This does not support students' use of precise language around number sense; rather, students could provide a rote response.
CommonCrawl
Banach spaces of continuous functions; tensor products; operator ideals; $p$-summing operators.

Let $\Omega$ be a compact Hausdorff space, $X$ a Banach space, $C(\Omega,X)$ the Banach space of continuous $X$-valued functions on $\Omega$ under the uniform norm, $U:C(\Omega,X)\to Y$ a bounded linear operator, and $U^\#, U_\#$ two natural operators associated to $U$. For each $1\leq s <\infty$, consider the conditions
($\alpha$) $U\in \Pi_s(C(\Omega, X), Y)$;
($\beta$) $U^\#\in \Pi_s(C(\Omega), \Pi_s(X, Y))$;
($\gamma$) $U_\#\in \Pi_s(X, \Pi_s(C(\Omega), Y))$.
A general result [10, 13] asserts that ($\alpha$) implies ($\beta$) and ($\gamma$). In this paper, in the case $s=2$, we give necessary and sufficient conditions for natural operators on $C([0,1],l_p)$ with values in $l_1$ to satisfy ($\alpha$), ($\beta$) and ($\gamma$), which show that the above implication is the best possible result.
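For context, the standard definition behind the notation (general background, not specific to this paper): an operator $U: X \to Y$ is $s$-summing, written $U \in \Pi_s(X, Y)$, if there is a constant $C$ such that for every finite family $x_1, \dots, x_n \in X$,
$$ \Big(\sum_{i=1}^{n} \|U x_i\|^s\Big)^{1/s} \;\leq\; C \sup_{\|x^{*}\|\leq 1} \Big(\sum_{i=1}^{n} |x^{*}(x_i)|^s\Big)^{1/s}. $$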
CommonCrawl
So I have Messages in iCloud enabled on my iMac (High Sierra, 10.13.6), my iPhone (version 12.2), and my iPad (version 12.1.4). The issue I'd like to solve is this: on the iPhone and iPad, when I delete a message, I get the warning/confirmation that if I delete a message it will delete across all devices; do I want to delete? When I answer yes, the message will delete from the iPhone and iPad but not the iMac. When I go to delete a message on my iMac, I get no prompt/confirmation; the message will just delete. I have sent numerous logs to Apple, enabled a PIN code for them to reset my iCloud account, and sent them some sort of profile info, but they have been unable to get my messages to sync when deleting across all the devices. I have also disabled and re-enabled iMessages in the cloud on all devices with no luck. Can anyone help me get iMessages deleting across all devices? I am almost certain it did work properly at some point, but it started doing this after an update. Sorry not to be more specific about what I did with Apple, but I did not write down all the info; they have collected a lot of info, and I do not remember for 100% sure when it started to act up. This has been ongoing for a while now, and they just won't respond any longer. Thanks much!

The initial cost of a machine is $3. The lifetime, T, of that machine has an exponential distribution with an expected value equal to 3 years. The maker wants to offer a warranty that pays $3 if the machine gets broken during the first year, $2 if the machine gets broken during the second year, and $1 if the machine gets broken during the third year. To find E[X] I tried the following: 3*P(T <= 1) + 2*P(1 < T <= 2) + 1*P(2 < T <= 3), which gives me approximately 1.402. However, I don't know if that is the right way to do it.

Never mind, I fixed my problem: I had addresses: listed twice in the YAML file.

I have a random $m \times n$ matrix over $F_2$ of full rank (i.e., all columns are independent). Now, I wish to randomly choose $k$ rows such that the corresponding sub-matrix (made only of the chosen rows) will be of full rank (i.e., rank $m$) with high probability, and I wonder how large $k$ should be in order for that to happen with constant probability.

I always hear that in a terminal (no matter whether in macOS or Linux), the hotkey Ctrl+U will erase the text JUST BEFORE THE CURSOR. However, I find it erases EVERYTHING in my macOS terminals, no matter where the cursor is. So I am just wondering: do only my Macs behave like this, or does every Mac, and if they do, how can I configure it to behave as in Linux?

My first post on here, and my math skills are more than a little rusty. I have a simple question for you: assume Y is a mean-preserving spread of X. Is it always true that E(X | X > Y) < E(Y | Y < X)? How do I prove it? Is it also true that, for some value of C > 0, Var(Y | Y > C) < Var(X | X > C)?

There is a game involving opening doors. There are 10 doors; 3 contain normal balls while one contains a gold ball. The gold ball is worth 3 points while the normal ones are worth one point. Using hypergeometric distribution.
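For the warranty question above, a quick numerical check (a sketch that just follows the question's own setup) reproduces the 1.402 figure, which supports splitting the expectation over the disjoint year-by-year events:

```python
import math

lam = 1.0 / 3.0                          # rate of an Exponential with mean 3 years
F = lambda t: 1.0 - math.exp(-lam * t)   # CDF of the lifetime T

# E[payout] = 3*P(T <= 1) + 2*P(1 < T <= 2) + 1*P(2 < T <= 3)
expected_payout = 3 * F(1) + 2 * (F(2) - F(1)) + 1 * (F(3) - F(2))
print(round(expected_payout, 3))  # 1.402
```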
CommonCrawl
Abstract : [en] We show that the colour-singlet contributions to the hadroproduction of $J/\Psi$ in association with a W boson are sizable, if not dominant over the colour-octet contributions. They are of two kinds, $sg\to J/\Psi+c+W$ at order $\alpha_s^3\alpha$, and $q\overline q' \to \gamma^\star/Z^\star\, W \to J/\Psi\, W$ at order $\alpha^3$. These have not been considered in the literature until now. Our conclusion is that the hadroproduction of a $J/\Psi$ in association with a W boson cannot be claimed as a clean probe of the colour-octet mechanism. The rates are small even at the LHC and it will be very delicate to disentangle the colour-octet contributions from the sizable colour-singlet ones and from the possibly large double-parton-scattering contributions. During this analysis, we have also noted that, for reactions such as the production of a $J/\Psi$ by light quark–antiquark fusion, the colour-singlet contribution via an off-shell photon is of the order of the expectation from the colour-octet contribution via an off-shell gluon. This is relevant for inclusive production at low energies close to the threshold. Such an observation also likely extends to other processes naturally involving light-quark annihilation.
CommonCrawl
In this post I'll demonstrate one way to use Bayesian methods to build a 'dynamic' or 'learning' regression model. When I say 'learning,' I mean that it can elegantly adapt to new observations without requiring a refit.

Today we lay our scene in London between the years 1760-1850. The good people of London have bread and they have wheat, which is, of course, what bread is made from. One would expect that the price of bread and the price of wheat are closely correlated and generally they are, though in periods of turmoil things can come unstuck. And Europe, between 1760-1850, was a very eventful place.

Our dataset is sourced from the collection, Consumer price indices, nominal / real wages and welfare ratios of building craftsmen and labourers, 1260-1913, kindly compiled by Robert C. Allen at the University of Oxford's Department of Economics, and it can be downloaded from the International Institute of Social History. (Notice that I'm quoting prices in grams of silver, rather than in the local sterling. This is a lazy attempt to remove (somewhat) the effect of currency devaluation and fluctuation).

Now: from 1760, through the American Revolutionary War, up until the mid-1790s, the prices of wheat and bread in London are fairly steady. And so is the relationship between those prices: see the compact cluster of blue points on the scatterplot. In a typical year, the price of a kilogram of bread is roughly 50g of silver on top of the price of a litre of wheat. Then the chaos: the French Revolutionary Wars and their offspring, the Napoleonic Wars; a series of very poor wheat harvests in the 1790s; Napoleon's attempt to lock Britain out of European trade with his Continental System. In London, wheat and bread prices go haywire between 1795-1820. After Waterloo, and Britain's subsequent assumption of global maritime dominance, prices steady again. But the relationship between them has changed. The red points and the red best-fit line show that from ~1820, London bread now tends to cost a higher multiple of the cost of wheat.

What are the causes for these fluctuations in prices? There'd be many influences on the supply & demand of both wheat and bread, but that's a question for other blogs. Our goal here is to develop a regression model that predicts the price of bread and, as the years roll by, will automatically adapt to the changing economic relationship with the price of wheat. Basically, we want a model which initially follows the blue line, but will move to the red line as post-war economic conditions settle in.

Probabilistic Programming & Bayesian Methods for Hackers: a free, practical guide with lots of worked examples in PyMC. Comes in iPython notebook format so you can easily tinker with it. Think Bayes: Those who are new to Bayes may find this book of worked examples helpful; I did. Problems are solved in Python through direct use of Bayes' rule (no MCMC).

It's easier to understand when you see the output, so skip on down to the scatterplot with overlaid regression lines. The coloured lines show the regression lines defined by the posterior $\alpha$ and $\beta$ at each decade. At 1790 we start where the dark blue line is. Over the decades the line gradually moves upward and the gradient becomes more positive, eventually settling where the dark red line is in 1850. The gradient is determined by the posterior distribution of $\beta$: you can see the distribution edging rightward in the histograms.
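The original implementation is in PyMC (per the resources above); as a self-contained stand-in, here is a sketch of the same idea — each decade's posterior over $(\alpha, \beta)$ becomes the next decade's prior — using the closed-form conjugate update for Bayesian linear regression with Gaussian noise. Everything below is my own illustration: the data is synthetic and the parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_update(mu0, cov0, X, y, noise_var):
    """Conjugate update for Bayesian linear regression with a Gaussian prior
    N(mu0, cov0) on the coefficients and known Gaussian noise variance."""
    prec0 = np.linalg.inv(cov0)
    prec_n = prec0 + X.T @ X / noise_var       # precisions add
    cov_n = np.linalg.inv(prec_n)
    mu_n = cov_n @ (prec0 @ mu0 + X.T @ y / noise_var)
    return mu_n, cov_n

# Synthetic stand-in for the wheat/bread data: the true intercept and slope
# drift between "decades", as in the post's story.
true_params = [(50.0, 1.0), (50.0, 1.0), (60.0, 1.3), (70.0, 1.5)]
mu, cov = np.array([0.0, 0.0]), np.diag([100.0**2, 10.0**2])   # wide initial prior

for alpha, beta in true_params:
    wheat = rng.uniform(20, 120, size=10)                      # price of wheat
    bread = alpha + beta * wheat + rng.normal(0, 5, size=10)   # price of bread
    X = np.column_stack([np.ones_like(wheat), wheat])          # [intercept, slope]
    mu, cov = bayes_update(mu, cov, X, bread, noise_var=25.0)
    print("posterior mean of [alpha, beta]:", np.round(mu, 2))
```

Because the posterior precision carries over from decade to decade, the fitted line drifts gradually toward the new regime rather than jumping — the "learning without refitting" behaviour described above.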
Did you notice that the final red line for 1840-1850 has a different slope to the simple OLS regression line I fit at the start of this post? This brings us to an interesting capability of Bayesian modelling: our ability to constrain the way that the model adapts via the limits we place on the prior. If the prior distribution is very narrow, then it constrains the dimensions of the posterior: basically, it limits how far the regression line can move in a decade. In the above example, I've kept the prior distributions fairly narrow at each step, and that's why you see only very small changes in the gradient of the line. This constraint can be applied via the precision/variance of the prior distribution.

Let's try the exercise again, but let's make the variance of the priors for $\alpha$ and $\beta$ larger, and thereby give them more room to move in each step. Here, we're running the exact same code, except with the stdAlpha & stdBeta variables, which set the standard deviation of those distributions, enlarged. Now you can see that the gradient of subsequent regression lines varies. The histograms of posteriors are also wider, and their means move further in each decade.

Some readers may find it distasteful that the results of this "mathematical modelling" vary with subjective or arbitrary choices like the prior standard deviation. We might like to think that it should be somehow "objective." But the prior exerts a powerful influence over Bayesian modelling. A prior must be specified, and we often have no choice but to specify it with guesswork and/or parameters that are more pragmatic than "correct." It's worth noting that choices made by the Frequentist modeller are no less subjective or influential on their results. Although they need not specify a prior, they still must make choices like, for example, specifying a model formula or governing distributions.

At any rate, that's missing the point. We should daily remind ourselves of George E. P. Box's dictum that, "Essentially, all models are wrong, but some are useful." I think the task of the applied statistician / machine learning practitioner is to be as useful as possible, not as correct as possible. Being able to control the "rate of learning" is a very useful thing, even if our justification for our choice of parameters may be extraneous to the data, such as contextual knowledge or even, horror of horrors, intuition.

A model that adapts gradually can be thought of as one that is less influenced by outliers. By controlling the rate of learning, we implement models that adapt as rapidly or as slowly as we judge appropriate to circumstances. In the case of London's bread prices, I would suggest that the chaotic prices we see between 1800-1820 do not reflect long term (or short term) trends and we don't want our predictions to be tossed about by the storm. We want the model to adapt gracefully and gradually, because that is how we understand/imagine that macroeconomic forces work: slowly. (Economists, feel free to weigh in on this...) We believe that the war years - where prices go dramatically off the recent trend - are outliers, and we don't want them to exert a strong influence over our model. We expect things will settle down again into a steady state, likely not too far from where we started. And so we find that constraining that by, in this case, the precision parameter of our priors, is very useful.
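To make that "rate of learning" knob concrete, here is a variant of the earlier sketch (reusing its bayes_update; the two std arguments mirror the role of the post's stdAlpha/stdBeta variables — again my own illustration, not the post's code):

```python
def fit_decades(data, std_alpha, std_beta, noise_var=25.0):
    """Refit decade by decade, resetting the prior width around the current
    posterior mean each step: the chosen standard deviations -- not the
    accumulated precision -- cap how far the line can move per decade."""
    mu = np.array([0.0, 0.0])
    for wheat, bread in data:
        cov = np.diag([std_alpha**2, std_beta**2])   # fixed-width prior around mu
        X = np.column_stack([np.ones_like(wheat), wheat])
        mu, _ = bayes_update(mu, cov, X, bread, noise_var)
    return mu

# Narrow priors adapt slowly; wide priors let the line jump between decades:
# fit_decades(decades, std_alpha=1.0, std_beta=0.01)    # conservative
# fit_decades(decades, std_alpha=20.0, std_beta=0.50)   # responsive
# ('decades' stands for an iterable of per-decade (wheat, bread) arrays.)
```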
CommonCrawl
$f_1, \ldots , f_ r$ is an $M$-quasi-regular sequence. In particular the sequence $f_1, \ldots , f_ r$ is a regular sequence in $R$ if and only if it is a Koszul regular sequence, if and only if it is an $H_1$-regular sequence, if and only if it is a quasi-regular sequence. (This is tag 09CC.)
CommonCrawl
Ruben and Albert are what you can call abnormally smart. They are also both very fond of mathematically inspired games. Their only problem is that most games are too easy for them, and they end up beating everyone who dares challenge them. Because of that, they're now mostly playing against each other. To make things interesting, they had a professor design a new game for them. This new game was interesting at first. Nowadays, however, Albert often complains that it is impossible for him to win a particular round. After long discussions, they've now decided to take this a step further, and actually figure out who'd win if they both played optimally. They need you to write a computer program that does this for them. A state in the game consists of one or more $x\times y\times z$ cuboids. A (legal) move is choosing a cuboid, then a value for each of the three axes (basically choosing three planes), and then cutting the cuboid along these (thus removing a $1\times y\times z$, an $x\times 1\times z$ and an $x\times y\times 1$ cuboid, all overlapping). In effect you've created between $0$ and $8$ (inclusive) smaller cuboids. All three planes cut from the cuboid need to be on the cuboid (you can't cut away a hypothetical cuboid on the outside of the real one). An example might be in order. You've chosen a $3\times 5\times 4$ cuboid, and are about to cut it. You now need to choose the three planes. This means you need an $x$ between $1$ and $3$, a $y$ between $1$ and $5$ and a $z$ between $1$ and $4$. Say you choose $2$, $1$ and $3$, respectively. The first cut would alone cut the cuboid into two $1\times 5\times 4$ cuboids, the second into a single $3\times 4\times 4$ cuboid, while the third would alone cut the cuboid into a $3\times 5\times 1$ and a $3\times 5\times 2$ cuboid. Put together, these cuts produce $4$ new smaller cuboids, of sizes $1\times 4\times 1$, $1\times 4\times 1$, $1\times 4\times 2$ and $1\times 4\times 2$. Note that cutting a cuboid with an axis of size $1$ would remove it altogether. The players take turns making a move. The winner is the player that removes the last cuboid. The first line of input is a line containing either RUBEN or ALBERT, the name of the player who starts that particular round. Then follows a line containing $N$, the number of cuboids that particular game starts with. $N$ lines follow, each describing a cuboid. A cuboid description consists of three numbers, $x$, $y$ and $z$, the size of that particular cuboid. Output the name of the player that wins the game (either RUBEN or ALBERT).
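The statement doesn't prescribe a method, but last-player-wins games that split positions into independent pieces are standard Sprague-Grundy territory. A minimal sketch (mine, not a reference solution — and exponential in the cuboid sizes, so only for small inputs):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(x, y, z):
    """Sprague-Grundy value of a single x-by-y-by-z cuboid."""
    seen = set()
    for i in range(1, x + 1):          # cut plane along each axis
        for j in range(1, y + 1):
            for k in range(1, z + 1):
                xs = [s for s in (i - 1, x - i) if s > 0]
                ys = [s for s in (j - 1, y - j) if s > 0]
                zs = [s for s in (k - 1, z - k) if s > 0]
                g = 0                  # XOR over the resulting sub-cuboids
                for a in xs:
                    for b in ys:
                        for c in zs:
                            g ^= grundy(a, b, c)
                seen.add(g)
    m = 0                              # mex of the reachable values
    while m in seen:
        m += 1
    return m

def winner(first, cuboids):
    total = 0
    for x, y, z in cuboids:
        total ^= grundy(x, y, z)       # XOR across independent cuboids
    return first if total else ('ALBERT' if first == 'RUBEN' else 'RUBEN')

print(winner('RUBEN', [(3, 5, 4)]))    # made-up example input
```

The first player wins exactly when the XOR of the cuboids' Grundy values is nonzero, which is the standard normal-play criterion.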
CommonCrawl
I am reading Guillemin and Pollack's Differential Topology. For the proof on page 164, I was not able to get through the last step. According to Daniel Robert-Nicoud's answer to "$f: X \to Y$ is a smooth map and $\omega$ is a $p$-form on $Y$, what is $\omega[f(x)]$?", and following James S. Cook's very brilliant answer "Pullback expanded form": here I got stuck — I don't really know how to move $\alpha, \beta$ around under $df^*$ to get close to the left-hand side expression.
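For reference, the standard pointwise definition and the two identities that usually let one move $\alpha$ and $\beta$ across the pullback (textbook facts, not specific to the linked answers):

```latex
(f^*\omega)_x(v_1,\dots,v_p)=\omega_{f(x)}\bigl(df_x(v_1),\dots,df_x(v_p)\bigr),\qquad
f^*(\alpha\wedge\beta)=f^*\alpha\wedge f^*\beta,\qquad
f^*(d\omega)=d(f^*\omega).
```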
CommonCrawl
The iron mediated (1,5)-homologous Michael reaction as a route to spirocyclic ring systems. The reaction of $\gamma$-acetoxy-$\alpha,\beta$-unsaturated esters with diiron nonacarbonyl produces the $\eta^2$-iron tetracarbonyl complexes. Addition of a Lewis acid, such as boron trifluoride etherate, produces $\eta^3$-iron allyl cationic complexes. These complexes react with silyl enol ethers and silyl ketene acetals to afford the addition products. The regiochemistry is such that the addition occurs at the terminus of the allyl fragment remote from the ester function to produce $\gamma$-substitution products. The geometric stability of the iron allyl cation allows the stereochemical integrity of the double bond to remain intact during the course of the reaction. Cyclic silyl ketene acetals and silyl enol ethers provide difunctionalized cyclic products containing a quaternary centre. These products are precursors to a variety of spirocyclic ring systems. Three types of spirocyclizations were performed with the iron allyl addition products. Attempts at cyclizing the alkene failed to produce any product cleanly, so the compounds were first hydrogenated. The Dieckmann and the acyloin condensations produced the corresponding spirocycles. Cyclization by metal-halogen exchange also formed a spirocycle in low yield. Further investigation of this reaction and its application to natural product synthesis will be discussed. Source: Masters Abstracts International, Volume: 34-06, page: 2378. Adviser: James R. Green. Thesis (M.Sc.)--University of Windsor (Canada), 1996. Charlton, Margaret Anne., "The iron mediated (1,5)-homologous Michael reaction as a route to spirocyclic ring systems." (1996). Electronic Theses and Dissertations. 4077.
CommonCrawl
My textbook says it converges for all values of $z$, but there is a pole at $z=0$ — so which one is true? When we speak about convergence in all of the $z$-plane, the points $z=0$ and $z=\infty$ are not considered. Note that by time-shifting the signal, you include/exclude zeros and poles at these 2 special points.
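A concrete instance (my own example, not from the thread) of how a pole at the origin coexists with "converges for all $z$":

```latex
x[n]=\delta[n-1] \quad\Longrightarrow\quad X(z)=\sum_{n} x[n]\,z^{-n}=z^{-1},
```

which has a pole at $z=0$, yet the sum is finite for every $z\neq 0$; under the convention above, the region of convergence is still described as "the entire $z$-plane".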
CommonCrawl
Abstract : Finite topological spaces are in bijective correspondence with preorders on finite sets. We undertake their study using combinatorial tools that have been developed to investigate general discrete structures. A particular emphasis will be put on recent topological and combinatorial Hopf algebra techniques. We will show that the linear span of finite spaces carries generalized Hopf algebraic structures that are closely connected with familiar constructions and structures in topology (such as the one of cogroups in the category of associative algebras that has appeared e.g. in the study of loop spaces of suspensions). The most striking results that we obtain are certainly that the linear span of finite spaces carries the structure of the enveloping algebra of a $B_\infty$--algebra, and that there are natural (Hopf algebraic) morphisms between finite spaces and quasi-symmetric functions. In the process, we introduce the notion of Schur-Weyl categories in order to describe rigidity theorems for cogroups in the category of associative algebras and related structures, as well as to account for the existence of natural operations (graded permutations) on them.
CommonCrawl
Problem. Is it consistent with ZFC that $\mathfrak t=\omega_1$ and each $\omega_1$-generated tall $P$-ideal is of the second Baire category? (Asked 01.10.2016 by David Chodounsky on page 20 of Volume 1 of the Lviv Scottish Book). Prize: A bottle of Becherovka.
CommonCrawl
Cells sense forces from the extracellular matrix (ECM) and transduce them into biochemical signals. The molecules produced cause in turn remodeling of the ECM. Altered molecular expression will affect this force-sensing mechanism, changing cellular properties such as migration, differentiation, etc. Therefore, cells' mechanical properties can be used as a marker for the early diagnosis of pathologies such as cancer or cardiovascular diseases. In this framework, Atomic Force Microscopy (AFM) represents an excellent tool to evaluate the mechanical properties of different cellular systems. In this talk, we will analyze the mechanical properties of aortic valve interstitial cells (VICs), the predominant constituent of aortic valves, governing ECM structure and composition, in the onset of calcific aortic valve disease (CAVD). In particular, we obtained adhesion polymeric substrates with different stiffness onto which human AoV VICs were plated, and subsequently investigated for the cytoskeleton dynamics and the activity of the mechanosensing-activated transcription factor YAP. We found that cells were subject to a reversible stiffness-dependent nuclear translocation of the transcription factor in concert with an increase in cytoskeleton tensioning and loading of the myofibroblast-specific protein $\alpha$SMA onto the F-actin cytoskeleton.

Then, we studied the interaction between porcine VICs and optically transparent, vertically aligned carbon nanotube (CNT) substrates, mimicking the chemical/morphological role of natural ECM. Here we found that the number of myofibroblasts (correlated with a disease-associated phenotype) was similar to the case of healthy valves, and that fibroblasts on the CNT matrix showed higher stiffness and a higher number of focal adhesions with respect to the reference glass. AFM imaging of the inner membrane of VICs broken up by osmotic shock allowed us to observe that CNTs pierce and pinch the plasma membrane, in this way facilitating the creation of clusters of FAs that contribute to increased cellular rigidity.
CommonCrawl
is "by general category theory" "a fibre product diagram". I tried to show this using the universal property, but didn't obtain anything useful. How do you prove $X\times_TY$ is a fiber product of $X\times_SY$ and $T$ with respect to $T\times_ST$? I hope this answer is clear -- fundamentally, the solution is to make a little doodle, which is difficult to TeX up. You have to show that if an arbitrary scheme $P$ (weird name but out of letters!) maps to $X \times_S Y$ and to $T$, commuting with the given maps to $T \times_S T$, then this map factors through a map $X \times_T Y$. To get a map $P \to X \times_T Y$, you'd better make a map $P \to X$ and a map $P \to Y$, commuting with the given map to $T$. There's only one reasonable guess for the maps $P \to X$ and $P \to Y$; namely the factors of the given map $P \to X \times_S Y$, so you're forced to check that these commute with the given map $P \to T$. To check this, build a square diagonally to the bottom right of your picture, giving the definition of $T \times_S T$, and note the definition of the map $\Delta$ implies the two compositions $T \to T$ are both the identity. Now add a copy of $X$ and $Y$ to your picture, mapping to the two different $T$s. I claim these copies receive maps from the $X \times_S Y$ in your picture making everything commute. This is because $Y$ and $X$ both receive their $S$-scheme structure via a given map $T \to S$ (as can be confirmed by reading the link in your question). Not the answer you're looking for? Browse other questions tagged category-theory schemes products diagram-chasing or ask your own question. Finite fiber of scheme morphism is zero-dimensional? Fiber product of non-abelian groups. If a morphism has a section then it is an effective epimorphism?
CommonCrawl
For each genus $g$, there are many curves of genus $g$ defined over $\mathbb Q$. How many? We might study this question by considering the rational points of the Deligne-Mumford moduli space of curves $\mathcal M_g$. What is the dimension of the largest subvariety of $\mathcal M_g$ with Zariski dense rational points? What is the dimension of the largest subvariety of $\mathcal M_g$ that has no dominant rational maps to a variety of general type? I'd be happy to see a conjectural and/or asymptotic answer to either question. For a lower bound, observe that the trigonal locus is unirational, hence has Zariski dense rational points, and has dimension $2g+1$. There are many other kinds of obvious rational subvarieties in the moduli space of curves (e.g. parameterizing complete intersections), but they all seem to have lower dimensions. For large $g$, is the trigonal locus the largest such subvariety? Edit: Felipe pointed out Jason Starr's comment that the trigonal locus is actually larger than the hyperelliptic locus, and has Zariski dense rational points, so I switched hyperelliptic to trigonal in my best guess for the largest subvariety.
CommonCrawl
A reanalysis of collinear factorization for inclusive Deep Inelastic Scattering shows that a novel, non-perturbative spin-flip term associated with the invariant mass of the produced hadrons couples, at large enough Bjorken $x_B$, to the target's transversity distribution function. The resulting new contribution to the $g_2$ structure function can potentially explain the discrepancy between recent calculations and fits of this quantity. The new term also breaks the Burkhardt-Cottingham sum rule, now featuring an interplay between the $g_2$ and $h_1$ functions that calls for a re-examination of their small-$x$ behavior. As part of the calculation leading to these results, a new set of TMD sum rules is derived by relating the single-hadron quark fragmentation correlator to the fully dressed quark propagator by means of integration over the hadronic momenta and spins. A complete set of momentum sum rules is obtained for transverse-momentum-dependent quark fragmentation functions up to next-to-leading twist. Accardi, Alberto, and Signori, Andrea. Transversity in inclusive DIS and novel TMD sum rules. United States: N. p., 2018. Web. doi:10.22323/1.316.0158. https://www.osti.gov/servlets/purl/1492007. The unpolarised transverse momentum dependent distribution and fragmentation functions (TMDs) are extracted from HERMES and COMPASS experimental measurements of semi-inclusive deep inelastic scattering multiplicities for charged hadron production. A simple factorised functional form of the TMDs is adopted, with a Gaussian dependence on the intrinsic transverse momentum, which turns out to be quite adequate in shape.
CommonCrawl
Abstract. We study the integrability of intermediate distributions for Anosov diffeomorphisms and provide an example of a $C^\infty$ Anosov diffeomorphism on the three-dimensional torus whose intermediate stable foliation has leaves that admit only a finite number of derivatives. We also show that this phenomenon is quite abundant. In dimension four or higher this can happen even if the Lyapunov exponents at periodic orbits are constant.
CommonCrawl
Šárka Nečasová, Reimund Rautmann, Werner Varnhorn. Preface. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): i-ii. doi: 10.3934/dcdss.2010.3.2i.
Helmut Abels. Nonstationary Stokes system with variable viscosity in bounded and unbounded domains. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 141-157. doi: 10.3934/dcdss.2010.3.141.
Chérif Amrouche, María Ángeles Rodríguez-Bellido. On the very weak solution for the Oseen and Navier-Stokes equations. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 159-183. doi: 10.3934/dcdss.2010.3.159.
Claude Bardos, E. S. Titi. Loss of smoothness and energy conserving rough weak solutions for the $3d$ Euler equations. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 185-197. doi: 10.3934/dcdss.2010.3.185.
Luigi C. Berselli. An elementary approach to the 3D Navier-Stokes equations with Navier boundary conditions: Existence and uniqueness of various classes of solutions in the flat boundary case. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 199-219. doi: 10.3934/dcdss.2010.3.199.
Ihsane Bikri, Ronald B. Guenther, Enrique A. Thomann. The Dirichlet to Neumann map - An application to the Stokes problem in half space. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 221-230. doi: 10.3934/dcdss.2010.3.221.
Hugo Beirão da Veiga. A challenging open problem: The inviscid limit under slip-type boundary conditions. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 231-236. doi: 10.3934/dcdss.2010.3.231.
Paul Deuring, Stanislav Kračmar, Šárka Nečasová. A representation formula for linearized stationary incompressible viscous flows around rotating and translating bodies. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 237-253. doi: 10.3934/dcdss.2010.3.237.
Lars Diening, Michael Růžička. An existence result for non-Newtonian fluids in non-regular domains. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 255-268. doi: 10.3934/dcdss.2010.3.255.
Andrei Fursikov. Local existence theorems with unbounded set of input data and unboundedness of stable invariant manifolds for 3D Navier-Stokes equations. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 269-289. doi: 10.3934/dcdss.2010.3.269.
Matthias Geissert, Horst Heck, Matthias Hieber, Okihiro Sawada. Remarks on the $L^p$-approach to the Stokes equation on unbounded domains. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 291-297. doi: 10.3934/dcdss.2010.3.291.
Horst Heck, Matthias Hieber, Kyriakos Stavrakidis. $L^\infty$-estimates for parabolic systems with VMO-coefficients. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 299-309. doi: 10.3934/dcdss.2010.3.299.
Ondřej Kreml, Milan Pokorný. On the local strong solutions for the FENE dumbbell model. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 311-324. doi: 10.3934/dcdss.2010.3.311.
Petr Kučera. The time-periodic solutions of the Navier-Stokes equations with mixed boundary conditions. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 325-337. doi: 10.3934/dcdss.2010.3.325.
Rainer Picard. On a comprehensive class of linear material laws in classical mathematical physics. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 339-349. doi: 10.3934/dcdss.2010.3.339.
Paolo Secchi. An alpha model for compressible fluids. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 351-359. doi: 10.3934/dcdss.2010.3.351.
Zdeněk Skalák. On the asymptotic decay of higher-order norms of the solutions to the Navier-Stokes equations in $R^3$. Discrete & Continuous Dynamical Systems - S, 2010, 3(2): 361-370. doi: 10.3934/dcdss.2010.3.361.
CommonCrawl
Detexify - A webservice for finding LaTeX symbols.
$\leadsto$ \leadsto
$\mapsto$ \mapsto
$\Phi$ \Phi
$\varphi$ \varphi
$\Lambda$ \Lambda
$\Delta$ \Delta
I've recently needed \(\dots, \ddots, \vdots\) (\dots, \ddots, \vdots) for a visualization in a matrix. Note that you can write ... instead of \dots, but you'll lose the semantics.
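A small illustration (mine, not from the page) of those three dot commands inside a matrix:

```latex
A=\begin{pmatrix}
a_{11} & \dots  & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \dots  & a_{mn}
\end{pmatrix}
```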
CommonCrawl
Determinant of a particular matrix. What is the best way to find the determinant of the following matrix? I thought it looked like a Vandermonde matrix, but not exactly. I can't use $|A+B|=|A|+|B|$ (which does not hold in general) to split it into a Vandermonde matrix. Please suggest. Thanks.
CommonCrawl
Defines simple macros for Greek letters. Defines macros using § to type Greek letters, so that the user may (for example) type §a to get the effect of $\alpha$. The author is Yvon Henel. The package is Copyright © 2004, 2008 Yvon Henel. Visit a nearby CTAN mirror: /macros/latex/contrib/paresse.
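A minimal usage sketch based only on the description above (untested; the mapping beyond §a → $\alpha$ is my assumption that the package follows the obvious letter correspondence):

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{paresse}
\begin{document}
% Typing §a has the effect of $\alpha$ per the catalogue description;
% the other Greek letters presumably follow the same pattern.
The angle §a is small, so $\sin\alpha \approx$ §a.
\end{document}
```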
CommonCrawl
where $R_0$ and $R_a$ are the inner and outer radius of the tube, respectively, $h_1$ and $h_2$ are the inner and outer heat transfer coefficients, and $k$ is the thermal conductivity of the wall. How does Eq. (1) simplify for the $T$-profile shown on the left? I had seen a similar $T$-profile in one of our exercises and there they used Eq. (1); however, when selecting option 1, I get "incorrect answer". How can you tell from looking at the $T$-profile what the equation for the overall heat transfer coefficient will be? All help and hints are appreciated!

This is the heat-equation analogue of Ohm's law, in which resistances are additive (in series), and hence the inverse conductances are additive. The way to reason is that the full equation (1) for the overall heat transfer coefficient contains three terms: one for the interior of the tube, one for the tube wall, and a third for the exterior. When you look at the profile, it becomes evident that, since the temperatures immediately adjacent inside and outside the tube wall are equal, the wall has infinite heat conductance ($k=\infty$). The right answer is therefore 3.
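Eq. (1) itself did not survive here; a standard form of the overall heat transfer coefficient for a tube wall, referenced to the inner surface (my reconstruction from the variables listed above — the original may normalize differently), is

```latex
\frac{1}{U} \;=\; \frac{1}{h_1} \;+\; \frac{R_0\,\ln(R_a/R_0)}{k} \;+\; \frac{R_0}{R_a\,h_2}.
```

With $k=\infty$ the wall term drops out, leaving $1/U = 1/h_1 + R_0/(R_a h_2)$ — consistent with the answer that option 3 is correct.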
CommonCrawl
Inspired by Timothy Gowers's example, here is my transfinite epistemic logic problem. First, let's begin with a simple finite example.

Cheryl Hello Albert and Bernard! I have given you each a different natural number ($0, 1, 2, \ldots$). Who of you has the larger number?
Albert I don't know.
Bernard I don't know either.
Albert Even though you say that, I still don't know.
Bernard And still neither do I.
Albert Alas, I continue not to know.
Bernard And also I do not know.
Albert Yet, I still do not know.
Bernard Aha! Now I know which of us has the larger number.
Albert In that case, I know both our numbers.
Bernard And now I also know both numbers.

Question: What numbers do Albert and Bernard have?

Now, let us consider a transfinite instance. Consider the following conversation.

Cheryl I have given you each a different ordinal number, possibly transfinite, but possibly finite. Who of you has the larger ordinal?
Albert I don't know.
Bernard I don't know, either.
Albert Alas, I still don't know.
Bernard And yet, neither do I.
Cheryl Well, this is becoming boring. Let me tell you that no matter how much longer you continue that back-and-forth, you still will not know the answer.
Albert Well, thank you, Cheryl, for that new information. However, I still do not know who has the larger ordinal.
Bernard And yet still neither do I.
Albert Alas, even now I do not know!
Bernard And neither do I!
Cheryl Excuse me; you two can go back and forth like this again, but let me tell you that no matter how much longer you continue in that pattern, you will not know.
Albert Well, 'tis a pity, since even with this further information, I still do not know.
Bernard Aha! Now at last I know who of us has the larger ordinal.
Albert In that case, I know both our ordinals.
Bernard And now I also know both ordinals.

Question: What ordinals do they have? See my next transfinite epistemic logic puzzle challenge!

For the first problem, with natural numbers, let us call the numbers $a$ and $b$, respectively, for Albert and Bernard. Since Albert doesn't know at the first step, it means that $a\neq 0$, and so $a$ is at least $1$. And since Bernard can make this conclusion, when he announces that he doesn't know, it must mean that $b$ is not $0$ or $1$, for otherwise he would know, and so $b\geq 2$. On the next round, since Albert still doesn't know, it follows that $a$ must be at least $3$, for otherwise he would know; and then, because Bernard still doesn't know, it follows that $b$ is at least $4$. The next round similarly yields that $a$ is at least $5$ and then that $b$ is at least $6$. Because Albert can undertake all this reasoning, it follows that $a$ is at least $7$ on account of Albert's penultimate remark. Since Bernard announces at this point that he knows who has the larger number, it must be that Bernard has $6$ or $7$ and that Albert has the larger number. And since Albert now announces that he knows the numbers, it must be because Albert has $7$ and Bernard has $6$.

For the transfinite problem, let us call the ordinal numbers $\alpha$ and $\beta$, respectively, for Albert and Bernard. Since Albert doesn't know at the first step, it means that $\alpha\neq 0$ and so $\alpha\geq 1$. Similarly, $\beta\geq 2$ after Bernard's remark, and then $\alpha\geq 3$ and $\beta\geq 4$, and this would continue for some time. Because Cheryl says that no matter how long they continue, they will not know, it follows that both $\alpha$ and $\beta$ are infinite ordinals, at least $\omega$.
But since Albert does not know at this stage, it means $\alpha\geq\omega+1$, and then $\beta\geq \omega+2$. Since Cheryl says again that no matter how long they continue that, they will not know, it means that $\alpha$ and $\beta$ must both exceed $\omega+k$ for every finite $k$, and so $\alpha$ and $\beta$ are both at least $\omega\cdot 2$. Since Albert still doesn't know after that remark, it means $\alpha\geq\omega\cdot 2+1$. But now, since Bernard knows at this point, it must be that $\beta=\omega\cdot 2$ or $\omega\cdot 2+1$, since otherwise he couldn't know. So Albert's ordinal is larger. Since at this point Albert knows both the ordinals, it must be because Albert has $\omega\cdot 2+1$ and Bernard has $\omega\cdot 2$.

It is clear that one may continue in this way through larger transfinite ordinals. When the ordinals become appreciable in size, then it will get harder to turn it into a totally finite conversation, by means of Cheryl's remarks, but one may succeed at this for quite some way with suitably obscure pronouncements by Cheryl describing various limiting processes of the ordinals. To handle any given (possibly uncountable) ordinal, it seems best that we should consider conversations of transfinite length.

This entry was posted in Exposition and tagged common knowledge, epistemic logic, ordinals, transfinite recursion by Joel David Hamkins.

I changed the discussion slightly from the first posting, so that in each case both numbers are determined.

I have a (maybe silly) question. Why is Albert's number – or ordinal, for the transfinite case – determined? For instance, consider the finite case: wouldn't the dialogue be just the same if b=6 and a>7? In particular, suppose b=6 and a=100. In this case, Bernard can still announce that he knows who has the larger number (knowing that 6, at that point, is necessarily the minimum), and then Albert can still say that he knows both the numbers, for he has deduced Bernard's number and he clearly knows his own — and similarly for the transfinite version.

Since Bernard's number could be either 6 or 7 until the final remark, the only way that Albert can know both numbers is if he can rule one of these out, and the only way that that can happen is if Albert himself has 7, so that he knows Bernard has 6. If Albert had 100, he wouldn't know both numbers, since he wouldn't know whether Bernard had 6 or 7.

I can imagine how this would work for ordinals up to, say, omega^omega or even epsilon_0, by Cheryl short-circuiting conversations including not only Albert and Bernard but herself as well (by admitting that she will interrupt their exchange infinitely often). But I do not see a way of getting past even omega_1^CK which does not amount to Cheryl starting the conversation by saying how many times she will interrupt the exchange and reducing the game to the finite case.

Miha, I agree that for larger ordinals, it becomes increasingly difficult for Cheryl to carry them through the limit stages in that fashion. I can imagine her pointing to an oracle for some large ordinal saying that, "even if you go *that* far, you will not know." For this reason, I find it more natural to consider an actually infinite transfinite conversation purely between Albert and Bernard, as I mention at the end.

I have no idea what you are talking about, but this quote from a Jane Austen novel comes to mind.
Elizabeth: There is, I believe, in every disposition a tendency to some particular evil, a natural defect, which not even the best education can overcome. And your defect is a propensity to hate everybody. Mr. Darcy: And yours (he replied with a smile) is to willfully misunderstand them.

I like to give an interesting related puzzle. Give two successive natural numbers greater than 10 to A and B; each one only knows his/her number, and both of them know that the numbers are successive. 1) Both players are honest, and they only say "I don't know what your number is." Now the puzzle starts: a conversation begins between A and B. A: I don't know what your number is. B: I don't know what your number is. The game continues, but suddenly one of them says "Now I can say what your number is, and your number is ….." and then the game ends. Question: How is it possible? This problem shows that you can get a lot of information from nothing (they only say "I don't know")!!!!!! Note that I didn't think about the transfinite situation.

I like your puzzle, and actually I had considered exactly that version before deciding to present the current one, where Albert and Bernard do not know that the numbers are necessarily successive, but the pattern of reasoning still works. It is indeed amazing to me!

So if it's convenient, delete my post; I have another one. If you know it I won't leave a comment, otherwise it's more interesting. Summary of the puzzle: the situation is more complicated — one of them has a number that is the product of a and b (a·b) and the other one the sum of a and b (a+b), but the numbers are less than 100. It's important who starts first, and in 4 steps the second one says the answer. Can this be generalized to the transfinite case? Also the relationship between the least given number and the number of steps needed to reach the answer is interesting.

Why does it have to start at 0? Why does the conversation have to go sequentially?

Now start a conversation to determine x and y. Albert: I cannot determine x and y from the given number. Bernard: Yes, I knew that. Albert: Now I can determine x and y. Bernard: Now I also can determine x and y. Question: What are x and y?

I left a comment and explained the full story.
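For readers who want to check the finite reasoning mechanically, here is a small simulation (my own sketch, not from the post or its comments). Each "I don't know" raises a common-knowledge lower bound; with $a=7$ and $b=6$ it reproduces the transcript above — four "I don't know"s from Albert, three from Bernard, then Bernard's announcement:

```python
def conversation(a, b):
    """Replay the finite puzzle for distinct naturals a (Albert), b (Bernard)."""
    low = {'Albert': 0, 'Bernard': 0}          # common-knowledge lower bounds
    mine, other = ('Albert', a), ('Bernard', b)
    while True:
        name, n = mine
        oname = other[0]
        if n <= low[oname]:                    # the other's number must be larger
            yield f"{name}: Now I know -- {oname} has the larger number."
            return
        yield f"{name}: I don't know."
        low[name] = low[oname] + 1             # "I don't know" reveals n > low[other]
        mine, other = other, mine

for line in conversation(7, 6):
    print(line)
```

At Bernard's final announcement his number is pinned to $\{6, 7\}$, and only $a=7$ lets Albert rule one of these out — matching the argument in the solution.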
CommonCrawl
Sesana, A., Barausse, E., Dotti, M., & Rossi, E. M. (2014). Linking the spin evolution of massive black holes to galaxy kinematics. Astrophysical Journal, 794: 104. doi:10.1088/0004-637X/794/2/104. Abstract: We present the results of a semianalytical model that evolves the masses and spins of massive black holes together with the properties of their host galaxies along the cosmic history. As a consistency check, our model broadly reproduces a number of observations, e.g. the cosmic star formation history, the black hole mass and luminosity function and the galaxy mass function at low redshift, the black hole to bulge mass relation, and the morphological distribution at low redshift. For the first time in a semianalytical investigation, we relax the simplifying assumptions of perfect coherency or perfect isotropy of the gas fueling the black holes. The dynamics of the gas is instead linked to the morphological properties of the host galaxies, resulting in different spin distributions for black holes hosted in different galaxy types. We compare our results with the observed sample of spin measurements obtained through broad K$\alpha$ iron line fitting. The observational data disfavor both accretion along a fixed direction and isotropic fueling. Conversely, when the properties of the accretion flow are anchored to the kinematics of the host galaxy, we obtain a good match between theoretical expectations and observations. A mixture of coherent accretion and phases of activity in which the gas dynamics is similar to that of the stars in bulges (i.e., with a significant velocity dispersion superimposed on a net rotation) best describes the data, adding further evidence in support of the coevolution of massive black holes and their hosts.
CommonCrawl
Lemma 9.28.2. Let $K$ be a field of characteristic $p > 0$. Let $K \subset L$ be a separable algebraic extension. Let $\alpha \in L$. If the coefficients of the minimal polynomial of $\alpha $ over $K$ are $p$th powers in $K$ then $\alpha $ is a $p$th power in $L$. More generally, if $P \in K[T]$ is a polynomial such that (a) $\alpha $ is a root of $P$, (b) $P$ has pairwise distinct roots in an algebraic closure, and (c) all coefficients of $P$ are $p$th powers, then $\alpha $ is a $p$th power in $L$. (This is tag 031V.)
CommonCrawl
The OpenKIM Directory of Interatomic Model Developers lists researchers engaged in the development of interatomic potentials and force fields. This is a free resource provided by OpenKIM to help researchers engaged in molecular simulation to connect with model developers in order to find interatomic models and to form collaborations. Click here to add your name to the Directory. If you are listed in the directory and are an OpenKIM Member, you can edit your entry on your OpenKIM profile page; sign in to access it. If you are not an OpenKIM Member, contact us with your requested changes. Ti and Zr from G.J.Ackland, Phil.Mag.A, 66, 917 (1992) and G.J.Ackland, S.J.Wooding and D.J.Bacon, Phil. Mag. A 71 553-565 (1995). Note typos in the journal version of zirconium. Pt unpublished, but made for someone who never got back to me. Cs K Li Mo Na Nb Rb Ta V W: some other metals in ATVF format. Vanadium is published in Journal of Applied Physics, Vol. 93, No. 6, pp. 3328. Others unpublished and untested: let me know if you try them and find anything! Newer iron for point defects: alpha-Fe potential 2 (two potentials are given here, 2 and 4 in the paper). Here also is alpha-Fe potential 5, optimised for surfaces (note there is a minus sign typo in ...). M.I.Mendelev, G.J.Ackland, A.Barashev, DJ Srolovitz and SW Han. Phil.Mag.A, 83 3977-3994 (2003). alpha-Fe + P from G.J.Ackland, M.I.Mendelev, DJ Srolovitz, SW Han and AV Barashev. J.Phys.CM 16 S2629 (2004). The iron potential here is slightly improved from the 2003 version to eliminate negative thermal expansion. It has a melting point of 1796 K. alpha-Fe + V by M.I.Mendelev and G.J.Ackland, PRB 76 214105 (2007). For alloys, you need a cross potential for input stream 27. Units are now eV and A. Contact me for more details. Cu-Ti by D.T.Kulp, G.J.Ackland, M.M.Sob, V.Vitek and T.Egami, Modelling and Simulation in Materials Science and Engineering, 1, 315 (1992). Al-Ti + V by M.I.Mendelev and G.J.Ackland, PRB 76 214105 (2007). Our focus is on Tersoff-Brenner-type analytical bond-order potentials, including a variant with charge transfer. Dynamic charge-transfer bond-order potential for gallium nitride. Thermodynamics of L1(0) ordering in FePt nanoparticles studied by Monte Carlo simulations based on an analytic bond-order potential. Analytic bond-order potential for bcc and fcc iron - comparison with established embedded-atom method potentials. Analytic Interatomic Potentials for Atomic-Scale Simulations of Metals and Metal Compounds: A Brief Overview. In: Integral Materials Modeling: Towards Physics-Based Through-Process Models (16) pp. 197-206. Analytical interatomic potential for modeling nonequilibrium processes in the W-C-H system. Analytical potential for atomistic simulations of silicon, carbon, and silicon carbide. Modelling of compound semiconductors: analytical bond-order potential for gallium, nitrogen and gallium nitride. Modeling of compound semiconductors: Analytical bond-order potential for Ga, As, and GaAs. Modeling the metal-semiconductor interaction: Analytical bond-order potential for platinum-carbon. We develop reactive interatomic potentials using machine learning techniques and in particular artificial neural networks. J. Behler and M. Parrinello, Phys. Rev. Lett. 98 (2007) 146401. J. Behler, J. Chem. Phys. 134 (2011) 074106. J. Behler, Phys. Chem. Chem. Phys. 13 (2011) 17930. J. Behler, J. Phys.: Condensed Matter 26 (2014) 183001. J. Behler, Int. J. Quantum Chem. 115 (2015) 1032. - G. Bonny, D. Terentyev, A. Bakaev, E.E. Zhurkin, M. Hou, D.
Van Neck, L. Malerba, "On the thermal stability of late blooming phases in reactor pressure vessel steels: An atomistic study", J. Nucl. Mater. 442 (2013) 282. - G. Bonny, N. Castin, D. Terentyev, "Interatomic potential to study aging under irradiation in stainless steels: the FeNiCr model alloy", Model. Simul. Mater. Sci. Eng. 21 (2013) 085004. - G. Bonny, N. Castin, J. Bullens, A. Bakaev, T.C.P. Klaver, D. Terentyev, "On the Mobility of Vacancy Clusters in Reduced Activation Steels: An Atomistic Study in the FeCrW Model Alloy", J. Phys. Condens. Matter 25 (2013) 315401. - G. Bonny, D. Terentyev, R.C. Pasianot, S. Poncé, A. Bakaev, "Interatomic potential to study plasticity in stainless steels: the FeNiCr model alloy", Model. Simul. Mater. Sci. Eng. 19 (2011) 085008. model high-chromium ferritic alloys", Philos. Mag. 91 (2011) 1724. - G. Bonny, R.C. Pasianot, N. Castin, L. Malerba, "Ternary Fe-Cu-Ni many-body potential to model reactor pressure vessel steels: First validation by simulated thermal annealing" Philos. Mag. 89 (2009) 3531. - G. Bonny, R.C. Pasianot, L. Malerba, "Fe-Ni many-body potential for metallurgical applications", Model. Simul. Mater. Sci. Eng. 17 (2009) 025010. - M.I. Pascuet, G. Bonny, J.R. Fernández, "Many-body interatomic U and Al-U potentials", J. Nucl. Mater. 424 (2012) 158. * Pair, EAM potentials (interpolated or fixed functional form). * Long-range potentials: Coulomb, induced dipole interactions. * Angular dependent potentials (MEAM, ADP, Stillinger-Weber, Tersoff). * Temperature-dependent potentials (alpha functionality). The program is open-source, but feel free to contact the mailing list http://potfit.net/wiki/doku.php?id=mailinglist if you need any help or want to engage in a collaboration to create potentials. See the potfit homepage http://potfit.net/wiki/doku.php for comprehensive information. The "potentials database" http://potfit.net/wiki/doku.php?id=potentials contains a number of potfit-created potentials. The Publications page http://potfit.net/wiki/doku.php?id=references tries to maintain an up-to-date list of all publications of potentials created with potfit. Seunghwa Ryu and Wei Cai, "A Gold-Silicon Potential Fitted to the Binary Phase Diagram", Journal of Physics Condensed Matter, 22, 055401 (2010). Working on a class of materials modelling techniques called electronic coarse graining. The fundamental idea is to replace the electrons of a molecule with a simpler system that is efficient to simulate yet has rich physics, such as a quantum harmonic oscillator (known as a Quantum Drude Oscillator). This technique has two main advantages: the model is parameterised from the properties of isolated molecules and its dynamics can be sampled using an order-N method. Used electronic coarse graining to create a transferable molecular model of water. I have extensive experience with several different types of interatomic potential formats, in particular for multi-component systems. Specifically, I have developed a number of potentials based on the analytic bond-order potential (ABOP) formalism as well as the versions of the embedded atom method (EAM) suitable for alloys. I also work on the implementation and application of these potentials not only in molecular dynamics and statics simulations but also Monte Carlo simulations. More recently, our activities in this area are focused on the development of efficient codes for potential construction that enable the rapid and at least semi-automated generation of potentials for targeted applications.
In particular, we have developed the atomicrex code (https://atomicrex.org), which provides a powerful tool for constructing various interatomic potential models including but not limited to e.g., EAM, MEAM, ABOP, and Stillinger-Weber potentials. Several interatomic potentials have been developed to investigate the structural properties of materials in various forms, such as cluster, surface, bulk. S. Erkoc, Physics Report 278, 79-105 (1997). S. Erkoc, Annual Review of Computational Physics IX, 1-103 (2001). Ed. D. Stauffer, World Scientific. Modified embedded atom method (MEAM) potentials. "Interatomic potentials for ionic systems with density functional accuracy based on charge densities obtained by a neural network" Physical Review B 92, 045131 (2015). Reactive force fields for C-B-N-H and Si-O-Li-P-F systems. The C-B-N-H is for design of liquid hydrogen storage compounds including C-B-N systems. And the Si-O-Li-P-F is for Li-ion batteries. Interlayer potentials for layered materials. Registry Index methods for layered materials. Force Fields: graphene bilayers, hexagonal boron nitride bilayers, graphene/hexagonal boron nitride bilayers, double walled carbon nanotubes, double walled boron nitride nanotubes, double walled carbon/boron nitride nanotubes. Registry Index: graphene bilayers, hexagonal boron nitride bilayers, graphene/hexagonal boron nitride junctions, hexagonal molybdenum disulfide, nanotubes rolling on surfaces. I. Leven, T. Maaravi, I. Azuri, L. Kronik, and O. Hod, "Inter-Layer Potential for Graphene/h-BN Heterostructures", submitted (2016). I. Leven, I. Azuri, L. Kronik, and O. Hod, "Inter-Layer Potential for Hexagonal Boron Nitride", J. Chem. Phys. 140, 104106 (2014). O. Hod, "The Registry Index: A Quantitative Measure of Materials Interfacial Commensurability", ChemPhysChem 14, 2376-2391 (2013). I. Oz, I. Leven, Y. Itkin, A. Buchwalter, K. Akulov, and O. Hod, "Nanotubes Motion on Layered Materials: A Registry Perspective", J. Phys. Chem. C 120, 4466-4470 (2016). Coarse-grained: RDX, Polymers, Nitromethane, Alcohols, Ionic Liquids. Journal of Chemical Physics 143(24): 244506-244518. Journal of Chemical Physics 134(19): 194109-194114. Physical Review E 87(4): 042606-042614. Journal of Chemical Physics 135(4): 044112-044117. Journal of Chemical Physics 123(13): 134105-134113. Journal of Physical Chemistry B 109(7): 2469-2473. Journal of Chemical Physics 120(23): 10896-10913. We work on the development of first-principles based adiabatic reactive potentials, primarily ReaxFF, and newer generation reactive potentials, polarizable charge equilibration potentials (pQeq), and polarizable Gaussian-based ReaxFF, and on non-adiabatic explicit-electron quantum-based potentials for systems with a high number of electronically excited states, including the electron force field, eFF, for systems with low Z numbers, and the Gaussian Hartree Approximated (GHA) method with angular momentum projection operators, for systems containing high Z elements. Most elements in the periodic table, up to and including d-block. We parametrize the Stillinger-Weber potential for MoS2 and black phosphorus using the lattice dynamical properties. Area of research in our group: Lattice dynamics and nanomechanics. Particularly interested in understanding fundamental relations between the phonon modes and some mechanics phenomena in nanomaterials, including the negative Poisson's ratio effect in nanostructures. Phys. Rev. B 56, 8542 (1997). Phys. Rev. B 58, 2539 (1998). Phys. Rev. B 58, 8323 (1998). J. Appl.
Phys. 86, 1843 (1999). Tersoff-Brenner type reactive bond order potentials with screening functions for realistic description of bond breaking forces. Second moment approximation for strongly correlated metals (Gutzwiller approximation). EAM potentials for sputtering of metals and etching of silicon surfaces. EAM plus molecular mechanics force fields for alkanethiols on gold surfaces. 1. "Explicit inclusion of electronic correlation effects in molecular dynamics," J.-P. Julien, J. D. Kress, and J.-X. Zhu, arXiv:1503.00933 (2015). 2. B. Jeon, J. D. Kress, and N. Gronbech-Jensen, "Thiol density dependent empirical potential for methyl-thiol on a Au(111) surface," Phys. Rev. B 76, 155120-1-7 (2007). 3. Y. Mishin, M. J. Mehl, D. A. Papaconstantopoulous, A. F. Voter, and J. D. Kress, "Structural Stability and Lattice Defects in Copper: Investigation by ab initio, tight-binding, and embedded-atom methods," Phys. Rev. B 63, 224106 (2001). 4. T. J. Lenosky, B. Sadigh, E. Alonso, V. V. Bulatov, T. Diaz de la Rubia, A. F. Voter, J. D. Kress, D. F. Richards, and J. B. Adams, "Highly optimized empirical potential model of silicon," Modelling and Simulation in Mats. Sci. and Eng. 8, 825-841 (2000). 5. J. D. Kress, D. E. Hanson, A. F. Voter, C. L. Liu, X.-Y. Liu, and D. G. Coronell, "Molecular Dynamics Simulations of Cu and Ar Ion Sputtering of Cu (111) Surfaces," J. Vac. Sci. Tech. A 17, 2819-2825 (1999). 6. D. E. Hanson, J. D. Kress, and A. F. Voter, "Reactive ion etching of Si by Cl, Cl2 and Ar ions: molecular dynamics simulations with comparisons to experiment," J. Vac. Sci. Tech. A 17, 1510-1513 (1999). 7. K. M. Beardmore, J. D. Kress, N. Gronbech-Jensen, A. R. Bishop, "Determination of the headgroup-gold(111) potential surface for alkanethiol self-assembled monolayers by ab-initio calculation," Chem. Phys. Lett. 286, 40-45 (1998). T. Kumagai, S. Hara, S. Izumi, S. Sakai, "Development of a bond-order type interatomic potential for Si-B systems", Modeling and Simulation in Materials Science and Engineering, Vol. 14, pp. S29-S37 (2006). T. Kumagai, S. Izumi, S. Hara, S. Sakai, "Development of bond-order potentials that can reproduce the elastic constants and melting point of silicon for classical molecular dynamics simulation", Computational Materials Science, Vol. 39, pp. 457-464 (2007). T. Kumagai, D. Nikkuni, S. Hara, S. Izumi, S. Sakai, "Development of interatomic potential for Zr-Ni amorphous systems", Materials Transactions, Vol. 48, pp. 1313-1321 (2007). T. Kumagai, S. Hara, J. Choi, S. Izumi, T. Kato, "Development of empirical bond-order-type interatomic potential for amorphous carbon structures", Journal of Applied Physics, Vol. 105, article number 64310 (2009). S. Hara, T. Kumagai, S. Izumi, S. Sakai, "Multiscale analysis on the onset of nanoindentation-induced delamination: Effect of high-modulus Ru overlayer", Acta Materialia, Vol. 57, pp. 4209-4216 (2009). T. Kumagai, K. Nakamura, S. Yamada, T. Ohnuma, "Simple bond-order-type interatomic potential for an intermixed Fe-Cr-C system of metallic and covalent bondings in heat-resistant ferritic steels", Journal of Applied Physics, Vol. 116, article number 4904447 (2014). M. Arai, Y. Takahashi, T. Kumagai, "Determination of high-temperature elastoplastic properties of welded joints by indentation test", Materials at High Temperatures, Vol. 32, pp. 475-482 (2015). T. Kumagai, K. Nakamura, S. Yamada, T.
Ohnuma, "Effects of guest atomic species on the lattice thermal conductivity of type-I silicon clathrate studied via classical molecular dynamics", Journal of Chemical Physics, Vol.145, article number 64702 (2016). A. Landa, P. Wynblatt, D. J. Siegel, J.B. Adams, O.N. Mryasov, and X.Y. Liu, Development of Glue-type Potentials for the Al-Pb System: Phase Diagram Calculation, Acta Mater. 48, 1753 (2000). DOI:10.1016/S1359-6454(00)00002-1. A. Landa, P. Wynblatt, A. Girshick, V. Vitek, A. Ruban, and H. Skriver, Development of Finnis–Sinclair type Potentials for Pb, Pb–Bi, and Pb–Ni Systems: Application to Surface Segregation, Acta Mater. 46, 3027 (1998). DOI:10.1016/S1359-6454(97)00496-5. A. Landa, P. Wynblatt, A. Girshick, V, Vitek, A. Ruban, and H. Skriver, Development of Finnis–Sinclair type potentials for the Pb–Bi–Ni system—II. Application to surface co-segregation, Acta Mater. 47, 2477 (1999). DOI:10.1016/S1359-6454(99)00105-6. I got started fitting tight-binding models in 1995, doing a postdoc at Los Alamos. I fitted tight-binding parameters for silicon with 36 adjustable parameters to a DFT database using force-matching techniques. That code has been extended over the years to fit spline-based pair potentials, EAM, MEAM, SW (Stillinger-Weber), MEAM+SW, and EAM+SW models. The MEAM+SW model consists of a MEAM term added to a SW term. More recently we have experimented with MEAM(n) and MEAM(n)+SW models which contain multiple MEAM terms. My fitting code contains sophisticated local and global optimizers, and can fit models with several dozen splines or several hundred parameters to large databases, using an ordinary workstation-class computer for fitting. We use statistical techniques and uncertainty quantification to validate model performance. Over the last seven years, I have been self-employed and have had contracts with Los Alamos National Laboratory and Lawrence Livermore National Laboratory. This has resulted in a variety of unpublished work. Please contact me if you are interested in any other contract work, consulting, or commercial ventures. The EDIP potential for carbon was developed with amorphous carbon in mind, but has since been applied to fullerenes, nanotubes, nanoporous carbon and nanodiamond. It contains a long-ranged repulsion but does not have a corresponding attractive term to capture van der Waals forces. It is available in both a stand-alone Fortran program, as well a LAMMPS module. The potentials for oxide systems, namely SrTiO3, Sr(La)TiO3 and Y2Ti2O7, are conventional Buckingham-type potentials suitable for in packages such as DLPOLY, GULP and LAMMPS. All relevant parameters are available in publications. The only exception is the potential developed for MgO which follows the compressible ion formalism developed by Mark Wilson and Paul Madden. This requires a dedicated code. All potentials are of the EAM or Finnis-Sinclair types. A special attention is paid to the liquid structure, crystal defects and phase transformation data. Al, Cu, Mg, Fe, Zr, Ti, Na, Ni, V, Sm. Binary alloys of containing elements above. M.I. Mendelev, F. Zhang, Z. Ye, Y. Sun, M.C. Nguyen, S.R. Wilson, C.Z. Wang and K.M. Ho, MSMSE 23, 045013 (2015). S.R. Wilson and M.I. Mendelev, Philosophical Magazine 95, 224 - 241 (2015).M.I. Mendelev, M.J. Kramer, S.G. Hao, K.M. Ho and C.Z. Wang, Phil. Mag 92, 4454-4469 (2012). M.I. Mendelev, M. Asta, M.J. Rahman and J.J. Hoyt, Phil. Mag. 89, 3269-3285 (2009). M.I. Mendelev, M.J. Kramer, R.T. Ott, D.J. Sordelet, D. Yagodin and P. Popel, Phil. Mag. 
89, 967-987 (2009). M.I. Mendelev, M.J. Kramer, C.A. Becker and M. Asta, Phil. Mag. 88, 1723-1750 (2008). M.I. Mendelev and G.J. Ackland, Phil. Mag. Letters 87, 349-359 (2007). G.J. Ackland, M.I. Mendelev, D.J. Srolovitz, S. Han and A.V. Barashev, J. Phys.: Condens. Matter 16, S2629-S2642 (2004). M.I. Mendelev, S. Han, D.J. Srolovitz, G.J. Ackland, D.Y. Sun and M. Asta, Phil. Mag. 83, 3977-3994 (2003). M.I. Mendelev and D.J. Srolovitz, Phys. Rev. B 66, 014205 (2002). A. Fortini, M.I. Mendelev, S. Buldyrev and D.J. Srolovitz, J. Appl. Phys. 104, 074320 (2008). The potential may also be adapted to model other semiconductors. Quantum-based GPT and MGPT potentials for metals and alloys. Generalized pseudopotential theory (GPT) provides a first-principles approach to transferable multi-ion interatomic potentials for transition metals within DFT quantum mechanics. In mid-period transition metals, a simplified model GPT (MGPT) has been developed using canonical d bands to allow analytic forms and large-scale atomistic simulations. Recent advances have led to a more general matrix representation of MGPT beyond canonical bands, allowing improved accuracy, extensions to f-electron actinide metals and series-end transition metals, an order of magnitude increase in computational speed for MD simulations, and the development of electron-temperature-dependent potentials. In addition, in the appropriate limit, GPT can also be used to calculate first-principles many-body central-force potentials for non-transition metals as well. The fast matrix MGPT is now implemented as a USER-MGPT package on LAMMPS. This package can also run non-transition-metal GPT potentials. Most elemental metals and Al-TM alloys for 3d transition metals. We develop Tersoff-Brenner-like potentials and EAM-like potentials. PtC, GaAs, GaN, ZnO, WBeCH(He), WN, FeCrC, FeH, Fe-He. The potentials are all reactive and include potentials for the pure elements. We develop interatomic potentials using two approaches: artificial neural networks (ANNs), which represent the potential energy surface by capturing any type of bonding in the material of interest, and the embedded-atom method (EAM), which is generally used for describing the interactions between the atoms in metals and their alloys. A many-body potential for $\alpha$-Zr. Application to defect properties. R.C. Pasianot, A.M. Monti, J. Nucl. Mater. Vol. 264, 198 (1999). Interatomic potentials consistent with thermodynamics: The Fe-Cu system. R.C. Pasianot, L. Malerba, J. Nucl. Mater. Vol. 360, 118 (2007). * J. A. Martinez, A. Chernatynskiy, D. E. Yilmaz, T. Liang (梁涛), S. B. Sinnott and S. R. Phillpot, Potential Optimization Software for Materials (POSMat), Computer Physics Communications (to be submitted, July 10 2015; returned for major revision October 18 2015; resubmitted November 30 2015; accepted January 31, 2016). doi:10.1016/j.cpc.2016.01.015. * A. Kumar, A. Chernatynskiy, T. Liang, K. Choudhary, M. Noordhoek, Y.-T. Cheng, S. R. Phillpot and S. B. Sinnott, Charge Optimized Many Body (COMB) Potential for Dynamical Simulation of the Ni-Al Phases, Journal of Physics Condensed Matter 27, 336302 (2015). http://dx.doi.org/10.1088/0953-8984/27/33/336302. * Tao Liang, Tzu-Ray Shan, Yu-Ting Cheng, Bryce D. Devine, Mark Noordhoek, Yangzhong Li, Zizhe Lu, Simon R. Phillpot and Susan B. Sinnott, Classical Atomistic Simulations of Surfaces and Heterogeneous Interfaces with Charge-Optimized Many Body Potentials, Materials Science and Engineering Reports, 74, 255-279 (2013).
* Yangzhong Li (李扬中), Tao Liang (梁涛), Susan B. Sinnott and Simon R. Phillpot, A Charge Optimized Many-Body (COMB) Potential for the U-UO2 System. Journal of Physics: Condensed Matter, 25, 505401 (2013). * J. A. Martinez, D. Yilmaz, T. Liang, S. B. Sinnott and S. R. Phillpot, Fitting Interatomic Potentials, Current Opinion in Solid State and Materials Science, 17, 263-270 (2013). * Mark J. Noordhoek, Tao Liang, Zizhe Lu, Tzu-Ray Shan, Susan B. Sinnott, and Simon R. Phillpot, Charge-Optimized Many-Body (COMB) Potential for Zirconium, Journal of Nuclear Materials 441, 274-279 (2013). * Tao Liang, Yun Kyung Shin, Yu-Ting Cheng, Dundar E. Yilmaz, Karthik Vishnu, Osvalds Verners, Chenyu Zou, Simon R. Phillpot, Susan B. Sinnott and Adri C. T. van Duin, Reactive Potentials for Advanced Atomistic Simulations, Annual Review of Materials Research 43, 109-130 (2013). DOI: 10.1146/annurev-matsci-071312-121610. * T. Liang, B. Devine, S. R. Phillpot and S. B. Sinnott, A variable charge reactive potential for hydrocarbons to simulate organic metal interactions, Journal of Physical Chemistry A 116, 7976 (2012). * Y.-T. Cheng, T.-R. Shan, B. Devine, D. W. Lee, T. Liang, B. Brooks-Hinojosa, S. R. Phillpot, A. R. Asthagiri, and S. B. Sinnott, Atomistic Simulations of the Adsorption and Mobility of Cu Adatoms on ZnO Surfaces using COMB Potentials, Surface Science 606, 1280 (2012). * Yun Kyung Shin, Tzu-Ray Shan, Tao Liang, Mark Noordhoek, Susan B. Sinnott, Adri C. T. van Duin and Simon R. Phillpot, Variable Charge Many-Body Interatomic Potentials, MRS Bulletin 37, 504-512 (2012). * Yangzhong Li (李扬中), Tzu-Ray Shan (單子睿), Tao Liang (梁涛), Susan B. Sinnott, and Simon R. Phillpot, Classical Interatomic Potential for Uranium Metal, Journal of Physics: Condensed Matter 24, 235403 (2012). * B. Devine, T.-R. Shan, Y.-T. Cheng, A. J. H. McGaughey, M. Lee, S. R. Phillpot and S. B. Sinnott, Atomistic Simulations of Copper Oxidation and Cu/Cu2O Interfaces Using COMB Potentials, Physical Review B 84, 125308 (2011). * Tzu-Ray Shan (單子睿), Bryce D. Devine, Simon R. Phillpot, and Susan B. Sinnott, Molecular Dynamics Study of the Adhesion of Cu/SiO2 Interfaces using a Variable Charge Interatomic Potential, Physical Review B 83, 115327 (2011). * T.-R. Shan, B. D. Devine, J. M. Hawkins, A. Asthagiri, S. R. Phillpot and S. B. Sinnott, Second Generation Charge Optimized Many-Body (COMB) Potential for Si/SiO2 and Amorphous Silica, Physical Review B 82, 235302 (2010). * T.-R. Shan, T. K. Kemper, S. B. Sinnott and S. R. Phillpot, "Empirical Charge Optimized Many Body Potential for Hafnium and Hafnium Oxide Systems", Physical Review B 81, 125328 (2010). * T. Liang, S. R. Phillpot, and S. B. Sinnott, "Parameterization of a Many-Body Potential for Mo-S Systems", Physical Review B 79, 245110 (2009), 14 pages. * J. Yu, S. B. Sinnott and S. R. Phillpot, "Optimized Many Body Potentials for fcc Metals", Phil. Mag. Lett. 89, 136-144 (2009). * S. R. Phillpot and S. B. Sinnott, "Simulating Multifunctional Structures", Science 325, 1634-1635 (2009). The embedded atom method and the angular dependent potentials for metals and alloys are being developed by our group. Kernel-based machine learning models for fast and accurate estimation of electronic-structure calculation outcomes. Effective Medium Theory potentials for metallic systems. Similar in spirit to the Embedded Atom Method. Currently also looking at COMB-type potentials with a student. Ni, Cu, Pd, Ag, Pt, Au.
Charge-Optimized Many-Body Potentials (COMB): Developed the 2nd generation formalism and implemented in LAMMPS. Implemented the 3rd generation in LAMMPS. Reactive Force Field (ReaxFF): Parameterized for a couple of organic/inorganic energetic materials. ReaxFF: Ammonium Nitrate; HNS; CL20. Charge optimized many-body (COMB) potentials and reactive empirical bond-order (REBO) potentials. We have developed a new Multi-State MEAM model for Ti. It is now included in KIM. EAM-, ADP-, MEAM-potentials created by the force-matching method (using the potfit code). Electron-temperature-dependent potential for gold. We have developed potentials for oxides and metals in the past. Currently we have a small effort on ReaxFF potentials for electrochemistry. My group develops a versatile potential fitting code (for, e.g., EAM, MEAM, Analytic Bond Order, Tersoff, and user-defined potential models) in collaboration with Professor Paul Erhart, Chalmers University. Finnis-Sinclair type potentials for FCC metals and alloys. They were developed for computer simulations in which van der Waals type interactions between well separated atomic clusters are as important as the description of metallic bonding at short range. The potentials always favour f.c.c. and h.c.p. structures over the b.c.c. structure. They display convenient scaling properties for both length and energy, and a number of properties of the perfect crystal may be derived analytically. "Long-range Finnis-Sinclair potentials", A.P. Sutton and J. Chen, Phil. Mag. Letts., vol. 61, 139 (1990). "Long-range Finnis-Sinclair potentials for f.c.c. metallic alloys", H. Rafii-Tabar and A.P. Sutton, Phil. Mag. Letts., vol. 63, 217 (1991). M. Wen, S. M. Whalen, R. S. Elliott and E. B. Tadmor, "Interpolation Effects in Tabulated Interatomic Potentials", Model. Simul. Mater. Sci. Eng., Vol. 23, 074008 (2015). Style snap computes interactions using the spectral neighbor analysis potential (SNAP) (Thompson). Like the GAP framework of Bartok et al. (Bartok2010), (Bartok2013) it uses bispectrum components to characterize the local neighborhood of each atom in a very general way. The mathematical definition of the bispectrum calculation used by SNAP is identical to that used by compute sna/atom. In SNAP, the total energy is decomposed into a sum over atom energies. The energy of atom i is expressed as a weighted sum over bispectrum components. See the publication and LAMMPS doc page listed below for more information. A. P. Thompson, L.P. Swiler, C.R. Trott, S.M. Foiles, and G.J. Tucker, "Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials," J. Comp. Phys., 285, 316 (2015). "Shell model potential for PbTiO3 and its applicability to surfaces and domain walls" "Development of interatomic potential for Nd-Fe-B permanent magnet and evaluation of magnetic anisotropy near interface and grain boundary" "Development of a new dipole model: interatomic potential for yttria-stabilized zirconia for bulk and surface" ReaxFF reactive force fields - bond order-based reactive force fields that include a polarizable charge calculation. Senftle, T., Hong, S., Islam, M., Kylasa, S. B., Zheng, Y., Shin, Y. K., Junkermeier, C., Engel-Herbert, R., Janik, M., Aktulga, H. M., Verstraelen, T., Grama, A. Y., and van Duin, A. C. T., 2016. The ReaxFF Reactive Force-field: Development, Applications, and Future Directions. npj Computational Materials 2, 15011. van Duin, A. C. T., Dasgupta, S., Lorant, F., and Goddard, W. A., 2001.
ReaxFF: A reactive force field for hydrocarbons. Journal of Physical Chemistry A 105, 9396-9409. Chenoweth, K., van Duin, A. C. T., and Goddard, W. A., 2008. ReaxFF reactive force field for molecular dynamics simulations of hydrocarbon oxidation. Journal of Physical Chemistry A 112, 1040-1053. Bond order potentials for BCC transition metals. In the case of Fe the ferromagnetism is included via Stoner's model of itinerant magnetism. The potentials are based on tight-binding but in real space, and thus no periodic boundary conditions are required. Only dd bonds are included explicitly, but the effect of s electrons is included via screening of dd bond integrals. At present the potentials are numerical and thus the relevant code has to be obtained from the authors. M. Aoki, D. Nguyen-Manh, D. G. Pettifor, and V. Vitek, Prog. Mater. Sci. 52, 154 (2007). We develop Machine Learning models of atomic forces for arbitrary chemical systems. The models are trained on electronic structure results obtained for representative and relevant samples of chemical space. The underlying goal is to increasingly replace the ab initio calculation of forces by successively trained machine learning models. We have developed such models for carbon and hydrogen atoms in organic materials. Other materials will follow. A MEAM potential for the Au-Ge system that is fitted to the binary phase diagram. - EAM potential for studying the atomic scale structure of sputtered multilayers and misfit-energy-increasing dislocations in vapor-deposited CoFe/NiFe multilayers. - Stillinger-Weber potential for the II-VI elements Zn-Cd-Hg-S-Se-Te. - A modified Stillinger-Weber potential for TlBr, and its polymorphic extension. - Analytical bond order potentials for various materials. - X. W. Zhou, H. N. G. Wadley, R. A. Johnson, D. J. Larson, N. Tabat, A. Cerezo, A. K. Petford-Long, G. D. W. Smith, P. H. Clifton, R. L. Martens, and T. F. Kelly, "Atomic scale structure of sputtered metal multilayers", Acta Mater., Vol. 49, 4005-4015, 2001. - X. W. Zhou, H. N. G. Wadley, J.-S. Filhol, and M. N. Neurock, "Modified charge transfer–embedded atom method potential for metal/metal oxide systems", Phys. Rev. B, Vol. 69, 035402, 2004. - X. W. Zhou, R. A. Johnson, and H. N. G. Wadley, "Misfit-energy-increasing dislocations in vapor-deposited CoFe/NiFe multilayers", Phys. Rev. B, Vol. 69, 144113, 2004. - D. K. Ward, X. W. Zhou, B. M. Wong, F. P. Doty, and J. A. Zimmerman, "Analytical bond-order potential for the cadmium telluride binary system", Phys. Rev. B, Vol. 85, 115206, 2012. - D. K. Ward, X. W. Zhou, B. M. Wong, F. P. Doty, and J. A. Zimmerman, "Analytical bond-order potential for the Cd-Zn-Te ternary system", Phys. Rev. B, Vol. 86, 245203, 2012. - Donald K. Ward, Xiaowang Zhou, Bryan M. Wong, and F. Patrick Doty, "A refined parameterization of the analytical Cd–Zn–Te bond-order potential", J. Mol. Model., Vol. 19, 5469-5477, 2013. - X. W. Zhou, D. K. Ward, J. E. Martin, F. B. van Swol, J. L. Cruz-Campa, and D. Zubia, "Stillinger-Weber potential for the II-VI elements Zn-Cd-Hg-S-Se-Te", Phys. Rev. B, Vol. 88, 085309, 2013. - X. W. Zhou, M. E. Foster, F. B. van Swol, J. E. Martin, and Bryan M. Wong, "Analytical Bond-Order Potential for the Cd−Te−Se Ternary System", J. Phys. Chem., Vol. 118, 20661−20679, 2014. - X. W. Zhou, M. E. Foster, Reese Jones, P. Yang, H. Fan, and F. P. Doty, "A Modified Stillinger-Weber Potential for TlBr, and Its Polymorphic Extension", J. Mater. Sci. Res., Vol. 4, 15-32, 2015.
CommonCrawl
Analysis of one-way layout of count data. Inadequacy of the Poisson assumption, due to the presence of overdispersion, in analysing count data has been reported by several authors (see McCaughran and Arnold (1976), Bliss and Owen (1958), etc.). The negative binomial distribution has been widely used to incorporate overdispersion in analysing count data. Several test statistics for detecting negative binomial variation are presented: C($\alpha$) tests and range-justified tests (appealing to the nonnegativity of the dispersion parameter) are compared with the statistic presented by Collings and Margolin (1985). One-way layout of data in the form of counts is often reported as a result of laboratory experiments or field work. Assuming the underlying distribution for the groups to be negative binomial with a common dispersion parameter, two C($\alpha$) tests are developed for comparing the means of the groups. Their performance is compared, in terms of level and power, with the likelihood ratio test and a test based on a variance stabilizing transformation (Anscombe (1948)). A test for checking the validity of the assumption of common dispersion is also developed. In several situations the assumption of a common dispersion parameter might not be tenable. A C($\alpha$) test is derived for comparing the means of negative binomial distributions with unequal dispersion parameters. For two groups, this test is compared with Welch's approximate degrees of freedom formula and Banerji's procedure (1960) for empirical level and power. Methods for testing the presence of an outlier in data coming from a population following a Poisson distribution have also been derived. In deriving the C($\alpha$) test statistics for the above problems, the method presented by Neyman (1959) is developed under a more general setting, which covers many situations concerning inferences on several parameters in the presence of nuisance parameters. Dept. of Mathematics and Statistics. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1990 .B276. Source: Dissertation Abstracts International, Volume: 52-11, Section: B, page: 5913. Supervisor: S. R. Paul. Thesis (Ph.D.)--University of Windsor (Canada), 1989. Barnwal, Rajesh Kumar., "Analysis of one-way layout of count data." (1989). Electronic Theses and Dissertations. 1170.
CommonCrawl
Abstract: In 2009 Lurie published an expository article outlining a proof for a higher version of the cobordism hypothesis conjectured by Baez and Dolan in 1995. In this note we give a proof for the 1-dimensional case of this conjecture. The proof follows most of the outline given in Lurie's paper, but differs in a few crucial details. In particular, the proof makes use of the theory of quasi-unital $\infty$-categories as developed by the author in a previous note.
CommonCrawl
be a commutative diagram with exact rows. If $\beta, \delta$ are isomorphisms, $\epsilon$ is injective, and $\alpha$ is surjective, then $\gamma$ is an isomorphism. (This is tag 05QB in the Stacks project.)
CommonCrawl
How to find a 2D coordinate field's corners in a 3D coordinate field if I have 3x 3D points with 3x 2D points? In order to solve "this" problem, I have to transform my corner-points from a 2D space to my 3D space. The simplest transformation between the two coordinate systems, and the one I suspect you've got in mind, is an affine map. It's convenient to work in homogeneous coordinates, which allows this map to be represented by a $4\times3$ matrix $M$. I'll use lower-case letters for points in the 2-D $u$-$v$ coordinate system and upper-case for points in the 3-D coordinate system to avoid ambiguity. The images of the corners of the 2-D unit square are then found by multiplying their homogeneous coordinate vectors by $M$, but the results will be simple combinations of the columns of $M$: $$(0,0) \mapsto M_3 \\ (1,0) \mapsto M_1+M_3 \\ (0,1) \mapsto M_2+M_3 \\ (1,1) \mapsto M_1+M_2+M_3.$$ You need to dehomogenize, of course, but if you've done this correctly, the last coordinate will always be $1$, so all you need to do to get the corresponding inhomogeneous Cartesian coordinates is to drop it. † In fact, in practice you can drop the last row of $M$ so that you get dehomogenized coordinates directly when you multiply a homogeneous coordinate vector by $M$. If the last element of the homogeneous coordinate vector $\mathbf p$ is $1$, then so will be the last element of $M\mathbf p$, and dehomogenizing the result is a matter of dropping this $1$, as noted elsewhere.
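For readers who want the bookkeeping spelled out, here is a minimal NumPy sketch of the corner computation. The matrix entries are made up purely for illustration; any affine map written in this homogeneous form behaves the same way:

    import numpy as np

    # Hypothetical affine map. Columns M1 and M2 are the images of the u- and
    # v-axis directions; column M3 is the image of the 2-D origin. The last
    # row (0, 0, 1) keeps the result homogeneous, so the final coordinate of
    # M @ p is always 1 and dehomogenizing is just dropping it.
    M = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 3.0],
                  [0.5, 0.5, 1.0],
                  [0.0, 0.0, 1.0]])

    for u, v in [(0, 0), (1, 0), (0, 1), (1, 1)]:  # corners of the unit square
        p = np.array([u, v, 1.0])                  # homogeneous 2-D coordinates
        P = M @ p                                  # homogeneous 3-D coordinates
        print((u, v), "->", P[:3] / P[3])          # dehomogenize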
CommonCrawl
An older term, hardly used nowadays (2000), for an isolated point, or hermit point, of a plane algebraic curve (cf. also Algebraic curve). For instance, the point $(0,0)$ is an acnode of the curve $X^3+X^2+Y^2=0$ in $\mathbf R^2$.
CommonCrawl
Abstract : This paper addresses the problem of observer-based stabilization of discrete-time linear systems in presence of parameter uncertainties and $\ell_2$-bounded disturbances. We propose a new variant of the classical two-step LMI approach. In the first step, we use a slack variable technique to solve the optimization problem resulting from the stabilization problem by a static state feedback. In the second step, a part of the slack variable obtained is incorporated in the $H_\infty$ observer-based stabilization problem, to calculate simultaneously the Lyapunov matrix and the observer-based controller gains. A numerical evaluation is presented to show the superiority of the proposed Modified Two-Steps Method (MTSM) from an LMI feasibility point of view.
CommonCrawl
For mathematical questions about Octave; questions purely about the language, syntax, or runtime errors would likely be better received on Stack Overflow. Octave is a high-level interpreted language for numerical computations. Use either the (octave) tag or the (matlab) tag, unless your question involves both packages. Converting system from continuous time to discrete time with restricted time? How do I complete the steps of finding the Jordan of this $5\times 5$ matrix (with Octave)? Solve 2 equations in 2 unknowns in octave? How to make "sigma" summation of a function by i variable in GNU Octave? Why does the multiplication in a division algebra depends on every component? Octave tf2ss: no way to build a system with multiple outputs? Conversion from state space back to transfer function in octave. How do I plot this graph in octave? The decision boundary found by your classifier? Histogram: What is wrong with this code? Octave - Why $0.6-0.2-0.2-0.2 \neq 0$, but $0.4-0.2-0.2 = 0$?
CommonCrawl
Two concurrent processes $P1$ and $P2$ want to use resources $R1$ and $R2$ in a mutually exclusive manner. Initially, $R1$ and $R2$ are free. The programs executed by the two processes are given below. Is mutual exclusion guaranteed for $R1$ and $R2$? If not, show a possible interleaving of the statements of $P1$ and $P2$ such that mutual exclusion is violated (i.e., both $P1$ and $P2$ use $R1$ and $R2$ at the same time). Can deadlock occur in the above program? If yes, show a possible interleaving of the statements of $P1$ and $P2$ leading to deadlock. Exchange the statements $Q1$ and $Q3$ and the statements $Q2$ and $Q4$. Is mutual exclusion guaranteed now? Can deadlock occur? I didn't understand the last question: exchange the statements Q1 and Q3 and statements Q2 and Q4 — is mutual exclusion guaranteed now, and can deadlock occur? There is some mistake in the question for the second part. Suppose both processes P1 and P2 reach S4 and Q4 respectively. Now in part (b) deadlock is not possible, because whichever process executes its next line will use both of the resources first. Am I correct, sir? Both set $R2=$ busy and enter the critical section together. Hence, mutual exclusion is not guaranteed. Here, deadlock is not possible, because at least one process is able to proceed and enter the critical section. If $Q1$ and $Q3$, and $Q2$ and $Q4$, are interchanged, then mutual exclusion is guaranteed but deadlock is possible. Here, both processes will not be able to enter the critical section together: if $P1$ sets $R1=$ busy and is then preempted, and $P2$ sets $R2=$ busy and is then preempted, no process can proceed further, as each holds the resource that the other requires to enter its CS. Hence, deadlock will occur. @Arjun the question and solution are not matching; it seems someone updated the question after the solution was posted. As per the current question, ME is satisfied. @Arjun I think there is a printing mistake in the question; please find some reliable source and match the question. What is the mistake? I don't think there are any reliable sources for GATE questions before 2006. @Arjun now given the above solution is correct, please select it as best! In this question: (a) mutual exclusion can be violated; (b) deadlock is not possible; (c) ME is satisfied and deadlock is possible. In the second part, after exchanging, I think that mutual exclusion can still be violated. Please check: say P1 executed S1 and then a context switch happens; now P2 executes its first 4 statements, so P2 has got control over R1 and R2; but now again a context switch happens and P1 resumes, and it will hold R1. Won't it? Please confirm. As both processes are using the resources simultaneously in Q5 and S5, mutual exclusion is violated. For example, if the order of execution is: P1 executes up to S3 and is preempted, then P2 executes Q1 and is preempted, then P1 resumes and executes S4 and is preempted, and P2 executes Q2 — at this point both P1 and P2 have access to R2. General statement: "If there is no mutual exclusion, then deadlock is not possible; and if there is deadlock, then there must be mutual exclusion."
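To make the violating interleaving concrete, here is a deterministic replay in Python. This is a sketch: the statement bodies (busy-wait test, then set) are assumed from the canonical form of this exercise, since the full program listing is not reproduced above.

    # Shared flags; False means "free", True means "busy".
    R1_busy = R2_busy = False

    # Each process passes its busy-wait test just before the other sets the
    # corresponding flag, because the test and the set are not atomic.
    assert not R1_busy        # S1: P1 sees R1 free
    assert not R1_busy        # Q1: P2 sees R1 free (P1 preempted after its test)
    R1_busy = True            # S2: P1 sets R1 = busy
    R1_busy = True            # Q2: P2 sets R1 = busy
    assert not R2_busy        # S3: P1 sees R2 free
    assert not R2_busy        # Q3: P2 sees R2 free
    R2_busy = True            # S4: P1 sets R2 = busy
    R2_busy = True            # Q4: P2 sets R2 = busy
    print("S5 and Q5 now run together: both processes use R1 and R2 at once")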
CommonCrawl
Abstract: We approximate the uniform measure on an equilateral triangle by a measure supported on $n$ points. We find the optimal sets of points ($n$-means) and corresponding approximation (quantization) error for $n\leq4$, give numerical optimization results for $n\leq 21$, and a bound on the quantization error for $n\to\infty$. The equilateral triangle has particularly efficient quantizations due to its connection with the triangular lattice. Our methods can be applied to the uniform distributions on general sets with piecewise smooth boundaries.
CommonCrawl
Recall from The Lebesgue Measure page that the Lebesgue measure $m$ is a set function defined on the set of all Lebesgue measurable sets $\mathcal M$ that is identically the Lebesgue outer measure set function restricted to $\mathcal M$. We now prove an important property of $m$ - the excision property. Theorem 1 (The Excision Property of the Lebesgue Measure): Let $A$ and $B$ be Lebesgue measurable sets with $A \subseteq B$ and $m(A) < \infty$. Then $m(B \setminus A) = m(B) - m(A)$.
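A sketch of the standard argument behind Theorem 1: since $A$ and $B$ are measurable, so is $B \setminus A$, and $B$ is the disjoint union $B = A \cup (B \setminus A)$. Additivity of $m$ then gives
$$m(B) = m(A) + m(B \setminus A),$$
and because $m(A) < \infty$ it may be subtracted from both sides, yielding
$$m(B \setminus A) = m(B) - m(A).$$
(The hypothesis $m(A) < \infty$ is exactly what rules out the meaningless expression $\infty - \infty$.)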
CommonCrawl
Takahashi, Ryoji, Víctor A. Gil, and Victor Guallar, Journal of Chemical Theory and Computation 2014, 10, 282−288. This study uses Monte Carlo (MC) sampling, and a Markov state model analysis of the resulting trajectories, to compute absolute binding free energies for four benzamidine ligands binding to trypsin that are in good agreement with experiment. The measured binding free energies for the same ligand vary a bit, and the mean absolute deviation ranges from 0.9 to 1.4 kcal/mol. The binding free energy for each ligand is derived from a Markov state model analysis of 840 MC trajectories constructed using six different random initial ligand positions - all well away from the protein surface. Each MC trajectory is constructed using the protein energy landscape exploration (PELE) method. There are three kinds of PELE MC moves: (1) the ligand can be translated or rotated rigidly, (2) the internal ligand geometry can be changed using a ligand-specific rotamer library, and (3) all protein atoms are displaced along a randomly picked mode derived from an anisotropic network model, followed by minimization of all atoms except the $\alpha$-carbons. After each move is made, the side-chain orientations close to the ligands are sampled from a rotamer library, followed by an OPLS-AA/SGB energy minimization of all atoms affected by the move. The resulting "super move" is accepted or rejected based on a Metropolis criterion. The total simulation time for a ligand is about 1 week using 64 cores. However, the binding site of each ligand could be identified using only 20-30 trajectories in 5-10 CPU hours. In fact, such a binding site search can be performed using the PELE web server developed by the authors. With its use of "super moves" with extensive energy minimization, this method strikes me as an excellent way to generate snapshots for QM/MM calculations, and it seems to me it could be easily adapted to look at enzyme catalysis.
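The accept/reject logic at the end of each "super move" is ordinary Metropolis sampling. A minimal Python sketch of that step follows; the energy and proposal functions are placeholders standing in for the elaborate PELE/OPLS-AA machinery described above, not the actual implementation:

    import math, random

    def metropolis_step(state, energy, propose, kT):
        # One Metropolis accept/reject step: propose a candidate (here this
        # would be the fully relaxed "super move"), then accept it with
        # probability min(1, exp(-dE/kT)).
        candidate = propose(state)
        dE = energy(candidate) - energy(state)
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            return candidate, True     # move accepted
        return state, False            # move rejected; keep the old state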
CommonCrawl
I want to solve a problem. There are $n$ points above the line $y = 0$ (none lie on it). We can place circles of radius $r$ with their centers on the x-axis, and we must use the minimum number of circles to cover all the points. A point is covered if it lies on or inside a circle. The input gives $n$, the $n$ points, and $r$. We have a one-second time limit, so we cannot do too much work. I tried a greedy approach: I start from the leftmost point and choose a circle on whose boundary that point lies, which I believed was optimal; then I skip all points covered by that circle and repeat until all points are handled. However, I get a wrong answer. I have thought about it a lot and found some problems with this idea, but I couldn't find a better way. Is it possible to help me? I suspect this problem maps to some famous algorithm, but I couldn't find it! I'm sorry for my bad English too. Thanks. If you're sorting points only by their x-coordinate, it may be that the leftmost point doesn't give you the leftmost circle that you need to place. Say for example that you have a radius 100 and you have points (1, 1) and (2, 100). If you center your circle at (2, 0) it will cover both points, but if you consider (1, 1) first you'll place your circle somewhere between (98, 0) and (100, 0) and it will miss the second point. Yes, I found this problem and changed my approach. As before, after finishing one circle I start from the left for the next circle, but I no longer choose the leftmost point to lie on the circle. I move right as long as there is a higher point (with larger $y$ than before); after finding the last point with this property, I choose it to lie on my circle. Then I test whether that circle covers all the earlier points. If yes, I continue; otherwise I skip that point, go back one level to the point just before it, and try again. With this approach, for your example, when I go right (because there is a higher point), I will choose that point, and after testing I will see that it covers the first point too, so I finish with only one circle. I couldn't find any problem with this, but I still get some wrong answers and I can't see why. Is it possible to help me? I'm so sorry for my bad English. You can use ternary search for the x-coordinate of the minimal circle center. Can you explain more? I have read about ternary search; I should study its implementation more carefully, but I understand its purpose. However, I couldn't find any mapping between these two problems. Choosing the value for $x_c$ according to the ternary search algorithm, you can find the minimum radius for the covering circle. Oh, I'm sorry — I think you assumed we must choose $r$ so as to cover all points, but the question is something else. As I mentioned: we must use the minimum number of circles to cover all points. $r$ is given to us along with the $n$ points, and we must find the minimum number of circles of radius $r$ centered on the x-axis that cover all of them. Oh, sorry. So you need to find the minimum number of circles with fixed radius $r$. Then of course a greedy approach. Can the $y$-coordinate of some point be larger than $r$? No, all y-coordinates are smaller than or equal to $r$; the problem guarantees that for us.
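As an aside not taken from the thread, the standard reduction that makes a provably correct greedy possible (quite likely the "famous algorithm" the poster was after): a point $(x, y)$ with $y \le r$ is covered by a circle centered at $(c, 0)$ exactly when $c$ lies in the interval $[x - \sqrt{r^2 - y^2},\ x + \sqrt{r^2 - y^2}]$, so the task becomes stabbing all intervals with as few points as possible, which is solved by sorting the intervals by right endpoint. A sketch:

    import math

    def min_circles(points, r):
        # Minimum number of radius-r circles centered on the x-axis that
        # cover all points above it; returns -1 if some point is farther
        # than r from the axis and can never be covered.
        intervals = []
        for x, y in points:
            if y > r:
                return -1
            half = math.sqrt(r * r - y * y)       # allowed center offsets
            intervals.append((x - half, x + half))
        intervals.sort(key=lambda iv: iv[1])      # sort by right endpoint
        count, last_center = 0, -math.inf
        for lo, hi in intervals:
            if last_center < lo:                  # existing centers miss this point
                last_center = hi                  # place a center as far right as possible
                count += 1
        return count

    print(min_circles([(1, 1), (2, 100)], 100))   # 1: a single circle suffices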
CommonCrawl
Given a set of times that errors occurred, how can I identify the beginning of a spike in errors (in real time)? We can calculate periodically or on each error occurrence. I'd like to be able to adjust the sensitivity of the algorithm based on feedback from the sysadmins. For now, they'd like it to be fairly sensitive, even though we know we can expect some false positives. I am not a statistician, which I'm sure is obvious, and implementing this needs to be relatively simple with our existing tools: SQL Server and old-school ASP JScript. I'm not looking for an answer in code, but if it requires additional software, it probably won't work for us (though I welcome impractical but ideal solutions as a comment, for my own curiosity). It has been 5 months since you asked this question, and hopefully you figured something out. I'm going to make a few different suggestions here, hoping that you find some use for them in other scenarios. For your use-case I don't think you need to look at spike-detection algorithms. What you want is a numerical indicator, a "measure" of how fast the errors are coming. And this measure should be amenable to thresholding - your sysadmins should be able to set limits which control with what sensitivity errors turn into warnings. Measure 1: count the errors falling into fixed 20-minute bins, like the bars of a histogram. Your sysadmins would set the sensitivity based on the heights of the bars, i.e. the most errors tolerable in a 20-minute interval. What's the problem with this method for your particular scenario? Well, your variable is an integer, probably less than 3. You wouldn't set your threshold to 1, since that just means "every error is a warning", which doesn't require an algorithm. So your choices for the threshold are going to be 2 and 3. This doesn't give your sysadmins a whole lot of fine-grained control. Measure 2: instead of counting errors in a time window, keep track of the number of minutes between the current and last errors. When this value gets too small, it means your errors are getting too frequent and you need to raise a warning. Your sysadmins will probably set the limit at 10 (i.e. if errors are happening less than 10 minutes apart, it's a problem) or 20 minutes. Maybe 30 minutes for a less mission-critical system. This measure provides more flexibility. Unlike Measure 1, for which there was a small set of values you could work with, now you have a measure which provides a good 20-30 values. Your sysadmins will therefore have more scope for fine-tuning. There is another way to approach this problem. Rather than looking at the error frequencies, it may be possible to predict the errors before they occur. You mentioned that this behavior was occurring on a single server, which is known to have performance issues. You could monitor certain Key Performance Indicators on that machine, and have them tell you when an error is going to happen. Specifically, you would look at CPU usage, memory usage, and KPIs relating to disk I/O. If your CPU usage crosses 80%, the system's going to slow down. To smooth such a noisy signal you can keep an exponentially weighted moving average, $$Z_k = \alpha x_k + (1-\alpha) Z_{k-1},$$ where the $\alpha$ would determine how much weight to give the latest value of $x_k$. When the smoothed value $Z_k$ crosses a threshold, then you raise a warning. Moving averages are nice when working with real-time data. But suppose you already have a bunch of data in a table, and you just want to run SQL queries against it to find the spikes. Many real-world time-series exhibit cyclic behavior. There is a model called ARIMA which helps you extract these cycles from your time-series. A search for online detection algorithms would be a start.
+1 for Statistical process control; there's some useful information here on Step Detection. For SPC it's not too hard to write an implementation of either the Western Electric Rules or the Nelson Rules. Just make a USP in SQL Server that will iterate through a data set and ping each point against the rules using its neighbouring points. Maybe sum up the number of errors by hour (depending on your needs). You may want to look at statistical process control, or time series monitoring. There is a ton of work in this direction, and the optimal answer probably depends a lot on what exactly you are doing (do you need to filter out yearly or weekly seasonalities in load before detecting anomalies, etc.).
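As a concrete illustration of the moving-average suggestion above, here is a small Python sketch of EWMA-based spike detection; the smoothing factor and threshold are made-up values that the sysadmins would tune:

    def ewma_alarm(samples, alpha=0.3, threshold=5.0):
        # Exponentially weighted moving average: z = alpha*x + (1 - alpha)*z.
        # Yields (index, smoothed value) whenever the smoothed value crosses
        # the threshold, i.e. whenever a warning should be raised.
        z = 0.0
        for k, x in enumerate(samples):
            z = alpha * x + (1 - alpha) * z
            if z > threshold:
                yield k, z

    errors_per_bin = [1, 0, 2, 1, 0, 9, 12, 11, 3, 1]   # toy error counts
    for k, z in ewma_alarm(errors_per_bin):
        print(f"warning at bin {k}: smoothed rate {z:.1f}")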
CommonCrawl
You are given a map of a building, and your task is to count the number of rooms. The size of the map is $n \times m$ squares, and each square is either floor or wall. You can walk left, right, up, and down through the floor squares. The first input line has two integers $n$ and $m$: the height and width of the map. Then there are $n$ lines of $m$ characters that describe the map. Each character is . (floor) or # (wall). Print one integer: the number of rooms.
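One standard way to solve this (not part of the problem statement) is a breadth-first flood fill; BFS is preferred over recursive DFS here because a large map can exceed the recursion depth limit. A Python sketch:

    from collections import deque

    def count_rooms(grid):
        # Count connected components of '.' squares under 4-directional moves.
        n, m = len(grid), len(grid[0])
        seen = [[False] * m for _ in range(n)]
        rooms = 0
        for i in range(n):
            for j in range(m):
                if grid[i][j] == '.' and not seen[i][j]:
                    rooms += 1
                    seen[i][j] = True
                    q = deque([(i, j)])
                    while q:                       # flood-fill one room
                        y, x = q.popleft()
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < n and 0 <= nx < m and \
                               grid[ny][nx] == '.' and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
        return rooms

    print(count_rooms(["..#", "#.#", "#.."]))   # 1: all floor squares connect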
CommonCrawl
In the Princeton Companion to Mathematics one reads that even pure mathematicians should know some theoretical physics and applied mathematics. What are some well-organized comprehensive companions to theoretical physics for working mathematicians? I have heard of Armin Wachter and Henning Hoeber's, but I don't know if it is rigorous enough (that is, for example, whether enough proofs of the theorems are given). Now let's try to break down the subjects. If you are not interested too much in details, the following book can play the role of a comprehensive companion: http://www.amazon.com/Unified-Grand-Theoretical-Physics-Edition/dp/1439884463 (A Unified Grand Tour of Theoretical Physics, by Ian D. Lawrie). The answer to this question depends sensitively on how much physics you want to learn. For a brief overview of all of physics, two good choices are The Six Core Theories of Modern Physics by Charles Stevens and The Theoretical Minimum by Leonard Susskind. If you want to delve more deeply then I think it is best to go for a book that treats just one subfield of physics, such as classical mechanics or quantum field theory. Some good suggestions are listed in the related Physics StackExchange question. Take a look at Physics and Partial Differential Equations, by Tatsien Li and Tiehu Qin, published by SIAM. Try Eberhard Zeidler's multi-volume Quantum Field Theory. This is extremely comprehensive. To give a partial answer: this is a nice companion to quantum physics for mathematicians (especially those that are into category theory and/or operator algebras): Deep Beauty: Understanding the Quantum World through Mathematical Innovation, ed Hans Halvorson.
CommonCrawl
This tag is for questions concerning point processes such as Poisson point processes or any other point process. Are all cluster point processes considered as inhomogeneous? How to test if an intensity function is a conditional intensity function? Intensity function $\lambda(u)$ of non-stationary Matérn I hard-core point process? What is the intensity measure of a thinned Poisson point process? Why this definition of spherical contact distribution function is $1 - N(b(o,r) =0)$ and not $N(b(o,r) =0)$? Does there exist a known non-homogeneous point process with a fixed number of points? Poisson process uniquely identified proof: what is $\Gamma_r((\Theta ∟ A_i)^r)$? What does the weak convergence of stochastic intensity tell us about the point process? Here, $\Phi_e$ is a Poisson point process and $\eta_k$ a random variable having an exponential distribution. I'm having trouble understanding how this equality holds. What is the space of all possible counting measures? How to compute the probability of $P(N_A = 1)$ considering an area $A$ in a Poisson point process? When does a stationary point process on group $G$ have $0$ or $\infty$ many points a.s.? Why can we choose a sequence of points uniformly?
CommonCrawl
Why is Marginal Cost = Price better than Marginal Cost > Price for maximizing profit? Isn't MC > P a better aim, given that the revenue you earn from each unit is more than the cost of producing each unit? Why do some perfectly competitive, loss-making firms shut down and others don't? Why is the Walras equilibrium inefficient when we are dealing with public goods? I know that when we have public goods we have: $$MRT = MRS_a + MRS_b$$ Though I fail to understand why this makes the Walras equilibrium inefficient. Thank you very much for your help! Guess 2/3 of the average with integers - mixed strategy equilibria? Difference between Giffen and inferior goods. Why aren't all inferior goods Giffen goods? Why would minimum rent under franchise arrangements for McDonald's decrease year by year? I didn't understand: if the change in output was zero, why weren't the Marginal Product of Capital and the Marginal Product of Labour zero? So how could they predict that the MPL and MPK were varying? Can anybody explain to me the income offer curve? Is it a relationship between $x_1$ and $m$, if the budget equation is $p_1x_1+p_2x_2=m$, with income being the variable $m$?
CommonCrawl
Semiprimes (pq-numbers) guarantee that if p and q are prime numbers, the only divisors of the result of p*q are p and q. My question is: does this hold true for p*q*r as well? For example 3*5*7=105. Are there any three numbers other than 3, 5 and 7 such that x*y*z=105? Of course if there are none, the follow-up question is: what about the other prime multiplications like p*q*r*s? I have this question because I came across an article about sending messages to space, and the Arecibo message used a semiprime to transmit information about the layout of the message. If we could do the same with three numbers we could make a 3D layout. The cardinality of 1,679 was chosen because it is a semiprime (the product of two prime numbers), to be arranged rectangularly as 73 rows by 23 columns. The alternative arrangement, 23 rows by 73 columns, produces jumbled nonsense (as do all other X/Y formats). The message forms the image shown on the right, or its inverse, when translated into graphics, characters and spaces. If $p$ and $q$ are distinct prime numbers, then $pq$ actually has four divisors in $\mathbb Z^+$: $1, p, q, pq$. What you want to know about are what we might call "nontrivial" divisors, divisors other than 1 and the number itself. In which case $pq$ does indeed have two nontrivial divisors: $p$ and $q$. If $p$, $q$ and $r$ are distinct prime numbers, then $pqr$ does indeed have $p$, $q$ and $r$ among its nontrivial divisors. But it also has $pq$, $pr$ and $qr$ among its nontrivial divisors. To use your example of 105: its nontrivial divisors are 3, 5, 7, 15, 21, 35. So if we sent the aliens a 3 by 5 by 7 cuboid array, they might misunderstand it as a 3 by 35 rectangular array, or a 5 by 21, or 7 by 15. Even with a $p$ by $q$ array, there is the danger they might misunderstand it. With more possible ways to interpret the message, the potential for misunderstanding increases, along with the potential for star wars.
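A quick check of the divisor counts quoted in the answer:

    def nontrivial_divisors(n):
        # Divisors other than 1 and n itself.
        return [d for d in range(2, n) if n % d == 0]

    print(nontrivial_divisors(15))    # [3, 5]
    print(nontrivial_divisors(105))   # [3, 5, 7, 15, 21, 35]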
CommonCrawl
The integers $1,2,3,\ldots,n$ are to be arranged clockwise around a circle, such that adjacent integers always share a common digit (in their decimal representations). (a) Find the smallest integer $n\ge3$ for which such an arrangement does exist. (b) Find the largest integer $n\ge3$ for which such an arrangement does not exist. (a) It's easy to see that n must be at least 29: every number less than 10 must share a digit with each of its 2 neighbours, and below 29 the digit 9 appears only in 9 itself and 19, so you need 29 as 9's second neighbour. So 9 can be paired with 19 and 29, and then 8 with 18 and 28, etc., up to 1 with 11 and 21. If you put the number less than 10 in the middle, those groups of 3 can be easily chained together, because they all have a 1 on one end and a 2 on the other end, so you get the 1-group, 2-group, 3-group, etc. Then 10 and 20 are left, which can be inserted on each end; they form the 0-group, with 0 as the shared digit. (b) If we take the sequence of (a), we can always insert the next number: 30 between 10 and 20, 31 between 11 and 21, 32 between 12 and 22, etc. There is no number you can never fit. Every number ending in x can always be inserted into the x-group, because x is common inside that entire group. Since n must be at least 29, the answer here is 28. I think the answer to (a) is $29$. Each digit from 3 to 9 must be bracketed by 2 other numbers containing that digit, the teens giving one and the next available being the 20's. So, you'd have 13-3-23-24-4-14-15-5-25 etc. You have to do a little juggling with 1, 2, 10, 20, but those are easy to insert into the chain. Same for 11, 12, 21, 22. "The integers 1,2,3,…,n are to be arranged clockwise around a circle, such that adjacent integers always share a common digit" B.) adjacent numbers must share exactly one digit. Using the same reasoning as the posters above, a number satisfying the constraints can be inserted anywhere, and all numbers greater than the min(n) are possible. Therefore, the largest n not satisfying the constraints would be n=2 for A.) and n=3 for B.).
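For concreteness, here is one explicit arrangement of 1 through 29 built along the lines of the accepted answer, together with a small verification script:

    def shares_digit(a, b):
        return bool(set(str(a)) & set(str(b)))

    circle = [11, 1, 21, 22, 2, 12, 13, 3, 23, 24, 4, 14, 15, 5, 25,
              26, 6, 16, 17, 7, 27, 28, 8, 18, 19, 9, 29, 20, 10]

    assert sorted(circle) == list(range(1, 30))        # uses each of 1..29 once
    assert all(shares_digit(circle[i], circle[(i + 1) % len(circle)])
               for i in range(len(circle)))            # adjacency, including wraparound
    print("1..29 can be arranged as required")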
CommonCrawl
Turn a glass of water upside down without letting the water fall out. The secret to this trick involves some basic lessons in air pressure. Best performed over a teacher's head. Fig. 1: The upside-down water trick. Fill a glass part way with water. Turn it upside-down. You now have water on the floor. Why did you listen to me? Pour water in the same glass again. Put an index card over the mouth of the glass and press the palm of your hand on the index card, pressing the card against the rim of the glass and depressing it slightly into the glass in the center (this part is very important). While your hand is on the index card over the mouth of the glass, invert the glass and slowly take your hand away. If you hold the glass steady and level, the water should remain in the glass (Fig. 1). Why doesn't the water fall out of the glass with the index card? The answer has to do with air pressure. Any object in air is subject to pressure from air molecules colliding with it. At sea level, the mean air pressure is one "atmosphere" (=101,325 Pascals in standard metric units). This air pressure is pushing up on the card from below, while the water is pushing down on the card from above. The force on the card is just the pressure times the area over which the pressure is applied; that's the definition of pressure. $$Force=Pressure\times Area$$ If you've done the trick correctly, the force from the air below exactly counteracts the force from the water above, and the card stays in place. Fig. 2: Diagram showing the relevant forces on the water. The blue arrows indicate the forces due to air pressure above and below the water. The red arrow indicates the force of gravity. Together, the three forces balance out to cancel each other. The details of this delicate balance are more easily understood by looking at the forces on the water, rather than on the card (see Figure 2). The card transfers the force of the air pressure upward to the water, so there is a pressure of (almost [1]) one atmosphere pushing up on the water from below. Of course there is also pressure from the air inside the glass pushing down on the water from above. The air inside the glass was originally at one atmosphere of pressure when you put the card over it, but when you inverted the glass and removed your hand, the water moved downward a very slight amount (perhaps making the card sag ever so slightly), thereby increasing the volume allotted to the air. As the air expands to fill this increased volume, several things happen at once. The air molecules spread out so that fewer of them hit the edges of the volume each second, and they slow down so that they don't collide with the container quite as forcefully. As a result, the air pressure goes down a tiny bit according to Boyle's Law. Now the pressure inside the glass pushing down is not as great as the outside pressure pushing up, and this pressure difference is enough to counteract the gravitational force pulling down on the water. Once the card sags enough so that these three forces balance, everything will stay put. For a typical-sized glass about half full of air, an air volume increase of less than 1% generates a big enough pressure difference to support the weight of the water. There is another separate effect that helps keep the water in the glass. Water molecules have a strong attractive "cohesive" force between them due to the fact that each water molecule can make four hydrogen bonds with other water molecules. (This cohesive force is the origin of surface tension.)
In the upside-down glass, it helps prevent the first water drop from separating from the rest of the water volume. As a result, the pressure difference required to keep the water in the glass is less than would be needed if there were no cohesive force. In containers with a small opening, like a straw, cohesion has a bigger relative effect. This is why you can keep water in a straw just by putting your finger over the top, leaving the bottom open. Cohesion adds the extra force necessary to overcome small instabilities in the water. Why doesn't the water stay in the glass when we don't use the index card? This is really an issue of stability. In principle, if we could invert the glass of water so that the glass was perfectly level and the water was perfectly still, the forces would balance as before and the water would stay in the glass. In practice, it's impossible to achieve these conditions without the help of the card. If the glass is tilted ever so slightly to one side, or if there is a tiny ripple in the surface of the water, a drop of water will fall out of the glass on the low side, and a bubble of air will enter on the high side to make up the missing volume. Then another drop of water will fall out and another bubble of air will enter, and the process will accelerate until all the water is emptied out of the glass. With the index card in place, the water surface is kept flat and the pressure is evenly distributed over the entire mouth of the glass. For much smaller openings, surface tension is enough to stabilize the surface, and we actually don't need the index card. Surface tension demands a certain minimum size for a drop to form; as the first water molecules begin to fall, they pull other molecules along with them until there is enough weight to overcome surface tension and separate a drop. In a narrow straw, there isn't enough room in the opening for both a drop of water to fall out and a bubble of air to flow in at the same time. Does the shape of the glass matter? Only to a small extent. A glass that is tapered, with the base smaller than the mouth as in Fig. 2, is a little easier than a bottle with a narrow mouth and a wide base. The reason for this is that in the case of the bottle, the card has to sag by a bigger amount in order to generate the necessary volume (and pressure) change. If the card sags too much, it is likely that some water will dribble out the crack on one side and some air will bubble in on the other, and the balance will become unstable. Note for geeks: In the case of the tapered glass, it might be tempting to think that even if the air pressure were the same on top and bottom, the force pushing down on the water from above is smaller than the force pushing up from below because the area is smaller above the water than below. However, this argument fails to take into account the force from the sides of the glass. If the glass is tapered, the sides of the glass exert a force that has a small downward component, and this component exactly makes up for the reduced area directly above the water. If the air pressure above the water is exactly equal to the air pressure below the water, the upward and downward forces (counting the sides of the glass) are also exactly equal. Does the water always fall out of your glass? Try using a lighter, more flexible material across the mouth of the glass. A heavy, very rigid plate won't work very well. Remember to press into the glass a little bit before you turn it over. Make sure the glass is perfectly rigid.
If you use a soft plastic cup, the cup will compress as the water sags, preventing a pressure difference from building up. Use a glass that has a mouth bigger than the base (see "Does the shape of the glass matter?" above). Does the water soak through the index card too quickly and make a mess? Try using a foam picnic plate instead of an index card. The foam plate is impervious to water, but it still provides the flexibility needed to depress the plate slightly into the glass before turning it over. For students who already have the concept of air pressure, it's often worthwhile to let the class brainstorm about why the water stays in the glass before leading them through an explanation. In this case, you might let them experiment with both a rigid glass and a soft plastic cup (which won't hold the water — see "troubleshooting" above) in order to identify the important difference. Give the plastic cup to your most troublesome student and stand back. For students who know calculus, it might be a good exercise for them to try to calculate the optimal amount of air to leave in the glass. Use a cylindrical glass instead of a tapered glass to make the calculation a little easier. Have them derive an expression for the distance the water must fall in order to balance forces. They will want to minimize this distance as a function of the height of the air column. The solution is a somewhat messy quadratic equation, but they can plug in typical numbers for the height of the glass, the density of water, the density of air, and assorted physical constants, to get a numeric result. 1. Strictly speaking, the upward force on the water is actually the upward force of the air pressure on the card, reduced by the weight of the card, which is assumed to be very light. This is explained very well. Great lesson here, saving this for sure!
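A quick numerical check of the "less than 1%" claim above (assuming 5 cm of water in the glass; any similar depth gives the same order of magnitude):

    rho, g, P_atm = 1000.0, 9.81, 101325.0  # water density, gravity, 1 atm (SI units)
    h_water = 0.05                          # assumed water depth in metres

    dP = rho * g * h_water   # pressure difference that must support the water column
    frac = dP / P_atm        # fractional expansion of the trapped air
                             # (Boyle's law, to first order in dP)
    print(f"{dP:.0f} Pa needed, i.e. about {100 * frac:.2f}% expansion")
    # prints roughly "490 Pa needed, i.e. about 0.48% expansion"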
CommonCrawl
Keen to make a skeletal octahedron? Here you can find pictures of the model itself and the module which is used to make it, along with videos of how to make the module and how to put them together to create the skeletal octahedron.... We are going to investigate truncations of a cube and an octahedron. They are related because cube and octahedron are dual polyhedra. Starting with a cube you can remove the corners to make equilateral triangles. Using the same three golden rectangles at right-angles to each other, we can also make an octahedron. If we put a square as shown around each rectangle, the squares will also be at right angles to each other and form the edges of an octahedron.... The 2v Octahedron Dome has 8 struts in a circle for the bottom foundation. Each of the 8 hubs on the bottom has to provide a 45 degree joint to make a 360 degree circle (45 x 8 = 360). To make folding easier and the final product more professional-looking, you can score along the lines before folding. This means to carefully scratch along the line with a knife, nail or similar object (use a ruler to keep the line straight), just don't cut through! You can make a dodecahedron as follows: Start with an octahedron $\mathcal O$ with edges of length $1$. You can color its faces in black and white in such a way that no two faces of the same color share an edge, and then you can orient each edge of $\mathcal O$ so that when you move along it, with your head pointing towards the outside of... Learn how to make a little three-dimensional origami octahedron! Hang these up as decorations or use them as a gift box! These little origami gems would look awesome on your Christmas tree! Then do what you did to make an octahedron out of a cube: press Tab to switch to Edit mode. All the vertices should already be selected. Press W to bring up the Specials menu, and select the Bevel function. A polyhedron is a 3-dimensional solid that contains flat faces and straight edges. You may be familiar with the five shapes known as the Platonic solids: the tetrahedron (4-sided pyramid), the cube, the octahedron, the dodecahedron, and the icosahedron.
CommonCrawl
Abstract: Based on Wigner unitary representations for the covering group $ISL(2,\mathbb C)$ of the Poincaré group, we obtain spin-tensor wave functions of free massive particles with an arbitrary spin that satisfy the Dirac–Pauli–Fierz equations. In the framework of a two-spinor formalism, we construct spin-polarization vectors and obtain conditions that fix the corresponding density matrices (the Berends–Fronsdal projection operators) determining the numerators in the propagators of the fields of such particles. Using these conditions, we find explicit expressions for the particle density matrices with integer (Berends–Fronsdal projection operators) and half-integer spin. We obtain a generalization of the Berends–Fronsdal projection operators to the case of an arbitrary number $D$ of space–time dimensions. Keywords: Wigner unitary representation, Poincaré group, Berends–Fronsdal projection operator, Dirac–Pauli–Fierz equation.
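For orientation, the lowest-spin case of the projection operators mentioned in the abstract can be written down directly (the sign and metric conventions here are my own choice, with signature $(+,-,-,-)$): for a massive spin-$1$ particle,

\[
\theta_{\mu\nu}(p) \;=\; \eta_{\mu\nu} - \frac{p_\mu p_\nu}{m^2},
\qquad p^\mu\,\theta_{\mu\nu}(p) = 0 \ \text{ on shell } (p^2 = m^2),
\]

and the higher-spin Berends–Fronsdal projectors are built from symmetrized, traceless products of $\theta_{\mu\nu}$.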
CommonCrawl
The Siruseri Economic Survey has done a thorough feasibility study of the different stations and documented the expected profits (or losses) for the eateries in all the railway stations on this route. The authorities would like to ensure that every station is catered to. To prevent caterers from bidding only for profitable stations, the authorities have decided to give out catering contracts for contiguous segments of stations.

The minister in charge realises that one of the bidders is his bitter adversary, and he has decided to hand out as useless a segment as possible to him. On the other hand, he does not want to be seen to be blatantly unfair by handing out a large loss-making section to the adversary. Instead he wants to find the largest segment whose sum is closest to 0, so that his adversary spends all his time running a large number of canteens and makes either a small loss or a small profit or, even better, nothing at all!

In other words, if the profits/losses at the stations are \$p_1, p_2, \ldots, p_N\$, the minister would like to hand over a sequence \$i, i+1, \ldots, j\$ such that the absolute value of \$p_i + p_{i+1} + \cdots + p_j\$ is minimized. If there is more than one sequence with this minimum absolute value, then he would like to hand over the longest one.

If the adversary is awarded the section 1 through 4, he will make a net profit of 20. On the other hand, if he is given stations 6, 7 and 8, he will make a loss of 5 rupees. This is the best possible value.

My first approach was to make a 2D array and calculate the sums by first adding the number adjacent to the current index, then using that result to add the next adjacent number, and so on until the whole matrix is filled, which eventually gives the number nearest to 0 and its position. But due to the huge number of stations, the code gave runtime errors. Hence I changed to a simpler solution where you don't have to save every intermediate result, and you can also break out of the loop when the output reaches the minimum possible value, that is, 0.

The code ran fine apart from a couple of time-limit-exceeded cases, but the most astonishing fact is that it ended up with 4 wrong answers. The TLEs were expected, but wrong answers? I then tested against the test cases: yes, a couple of them take a lot of time, about 5–6 s, but there were no wrong answers (the endpoints I print differ, but the question says I can print any endpoints as long as the value is minimal), so probably it's a bug in the server. Does anyone have a better approach for this problem?

The main bug that is probably causing your incorrect answers is due to your misreading of the problem. The problem says that if two or more segments have the same score, then you need to return the longest segment. Then, if there are multiple segments of the same score and length, you can return any of these segments. Currently, your code only finds the first segment with the lowest score. You could trivially modify your program to also record the best segment length and use it as a tiebreaker when you find a new segment of the same score.

Depending on the problem intention, it could be a bug that your program does not consider segments of length 0 (i.e. a segment containing just one station). Your program currently only considers segments with a minimum of 2 stations. Thus, given the input 1 2 3 4 5, your program would find the segment 1 2 instead of the segment 1 1. Of course, if 0-length segments are not allowed, then your program is fine.

There is also an overflow problem: if the addition of prevCost and profit[j] overflows past INT_MAX, then the sum will turn negative when it actually should be a large positive value.
For example, if prevCost and profit[j] were both 0x7fffffff, then the addition will result in the value 0xfffffffe, which should be over 4 billion but, when treated as a signed int, is -2.

@PeterTaylor already demonstrated an \$O(n \log n)\$ solution that is probably the simplest to understand. I had come up with another \$O(n \log n)\$ algorithm that works in a similar way. Both are based on the fact that \$S(j) - S(i)\$ gives you the profit of a segment.

1. Create a std::map which uses running sums (sum[i]) as keys and indices (i) as values. Note that std::map has guaranteed logarithmic insertion and find time, because it uses some form of balanced binary tree implementation.
2. Loop i from 0..n, keeping a running sum of profit[0..i], starting at the first station.
3. Search the map for the closest match to the current sum. You need to search twice: use lower_bound() to find the first stored sum that is not less than the current one, if there is one, and check the entry just below it by decrementing the lower_bound iterator.
4. If this closest match is better than the previous best match (by both score and by segment length), then record it as the new best match.
5. If sum does not exist in the map, insert it. If it already exists, do not insert it, because the earlier index with the same sum will give a longer segment, so we can throw away the current index. Then go back to step #2.

(A consolidated sketch of this map-based approach appears after the answers below.)

Out of curiosity, I wrote a solution using @PeterTaylor's algorithm, based on a sort: create a vector of prefix sums, sort it by sum and then by element index, scan adjacent entries, and swap start and end indices if necessary. You can decide whether you think this one is easier to understand than the one using a map. There is one tricky part here: if there are multiple answers all with score 0, you need to handle that specially in order to find the longest segment with score 0.

The server uses 32-bit integers and does not account for overflow. That is to say, some of the input files create segments with sums that overflow a 32-bit integer. But if you correctly solve the problem using 64-bit integers, the correct answers are marked wrong by the server. So you are expected to overflow your 32-bit integers and get the wrong answer.

After accounting for the 32-bit issue, the server clearly has the wrong answer for input sets 1 and 8. For input set 1, you can find the answer of 6 18 19 by visual inspection, because the segment between stations 18 and 19 adds up to 6. The server expects the answer -48 6 8. Input set 8 should have answer 1 39396 47087, but the server expects answer -3 1021 21224. My guess is that whoever "solved" the problem to create the "correct answers" used a buggy program to do it. The trick now is to recreate the same bug to get the same "correct" answers. I was able to submit a program that passed all tests by adding some code to the map implementation that prefers a positive profit answer, to match the server bug.

Define \$S(i) = p_1 + p_2 + \ldots + p_i\$. Then you're trying to find \$s\$ and \$e\$ to minimise \$|S(e) - S(s)|\$. If you sort the values of \$S(i)\$ in \$O(n \log n)\$ time, the minimum difference will be between two consecutive values, so you can do a linear scan to find it.

So here would be my take for the code using two std::vectors. The idea is that the first vector simply stores the individual profits, whereas the second vector starts with all the profits of the 2-segment chains, aka [i, i+1].
We determine the minimum of that vector. Now, in the next step, we increase vector2 by element i+2, so that it holds all the segments of length 3 (obviously, in every step the last element gets ignored). We continue until we reach the full segment.
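Since the code blocks in these answers did not survive extraction, here is a minimal self-contained sketch of the map-based approach described above. It is my reconstruction, not the answerer's original code; in particular, the output format (absolute score, then the segment endpoints) is an assumption and would need to match the judge, which appears to expect the signed segment sum instead.

#include <cstdio>
#include <cstdlib>
#include <iterator>
#include <map>

int main() {
    int n;
    if (std::scanf("%d", &n) != 1) return 1;

    std::map<long long, int> firstIndex;  // prefix sum -> earliest index where it occurs
    firstIndex[0] = 0;                    // empty prefix, so a segment may start at station 1

    long long sum = 0, bestScore = -1;    // bestScore < 0 means "no candidate yet"
    int bestStart = 1, bestEnd = 1;

    for (int i = 1; i <= n; i++) {
        long long p;
        std::scanf("%lld", &p);
        sum += p;

        // Consider one stored prefix sum as the segment's left boundary.
        auto consider = [&](std::map<long long, int>::iterator it) {
            if (it == firstIndex.end()) return;
            long long score = std::llabs(sum - it->first);
            int len = i - it->second;
            if (bestScore < 0 || score < bestScore ||
                (score == bestScore && len > bestEnd - bestStart + 1)) {
                bestScore = score;
                bestStart = it->second + 1;
                bestEnd = i;
            }
        };

        // Closest matches to `sum`: the first stored sum >= sum, and the one just below it.
        auto lo = firstIndex.lower_bound(sum);
        consider(lo);
        if (lo != firstIndex.begin()) consider(std::prev(lo));

        // Keep only the earliest index per sum; emplace is a no-op if the key already exists.
        firstIndex.emplace(sum, i);
    }

    std::printf("%lld %d %d\n", bestScore, bestStart, bestEnd);
    return 0;
}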
CommonCrawl
If differential operators are linear operators, what might it mean to apply a differential operator to a function on its left?

Given a differential operator like the ordinary derivative, or grad or curl or div etc., it can act on a function to its right to yield a new function. Because it is linear, it is effectively like an infinite matrix acting on an infinite column vector (roughly; obviously the space of smooth functions is uncountably infinite-dimensional, plus other differences). But matrices can also be applied on the other side, to a row vector. What would be the analogue for differential operators?

There is no natural "action on a function to its left" that can be assigned to a differential operator. For a linear operator, what matters is the result of its action; what does not matter is how we write it. The fact that we say "a matrix applied on the other side to a row vector" is just our usage of the language. It does not create a new kind of action of the linear operator represented by the matrix.
CommonCrawl
Quasar absorption systems (QASs) offer a way to spectroscopically study chemical evolution in galaxies, allowing one to better understand important astrophysical processes like stellar evolution, planet formation, and the development of life. Because their rich H I content can produce a substantial fraction of observable stars, classes of QASs including Lyman limit systems (LLSs) and damped Lyman-$\alpha$ absorbers (DLAs) provide a direct probe for analyzing the chemical evolution of metals in galaxies. QASs exhibiting dust absorption lines, known as ``dusty'' galaxies, are hypothesized to be more metallically enriched than similarly-redshifted QASs lacking dust-related absorption. Using the IRAF data reduction package and the apparent optical depth method, the quasar spectra of two candidate dusty QASs, a DLA at $z=0.692$ toward the quasar 3C 286 and an LLS at $z=1.795$ toward the quasar Ton 618, were analyzed. A search for rare elements led to a novel identification of Ga in the LLS. Relative to comparable dust-free QASs, at $-1.34\pm0.05$ dex, the DLA's metallicity was significantly lower than the mean metallicity at its redshift, whereas the LLS's metallicity, $0.86\pm0.12$ dex, was much higher than the mean metallicity at its redshift. However, due to the insufficient sample size of this study, more data is needed to determine a definite trend. Raw data for six additional QASs located along the sightline toward the quasar Q1246-057 ($z=2.247$) are provided as well, and will be analyzed in full in a future study. Singh, Ishrat, "HIRES Analysis of Eight Candidate Dusty Absorbers: Implications for Chemical Evolution in Galaxies" (2018). South Carolina Junior Academy of Science. 73.
CommonCrawl
Two friends, Blake and Denise, go out to eat at the Disaster Zone. But it's no ordinary restaurant... Customers at the Disaster Zone can experience natural disasters while they eat, just like in nature! The two friends have finished eating and are now just waiting for their bill. Oh, their bill's all wet and some of the numbers are smeared. This is perfect! Blake's been dying to try out his new calculator app, but Denise believes she can calculate the missing information quicker than Blake and his app. In order to help Denise with her calculations, she needs to know how to do clever calculations with money.

Let's take a look at their bill. Together, Blake and Denise ordered 5 drinks. Each drink costs $1.25. What is the total cost for the drinks? We can write this as a math expression by setting up an equation to look like this. If you see a question like this on your homework, you might be tempted to reach for your calculator, just like Blake. But if we think of this multiplication problem in terms of money, we can imagine what the answer will be. Five times one is five. That leaves us with 5 times 0.25. What do you know that has the value 0.25? If you said a quarter, you're exactly right! This problem just got a whole lot easier! If you have 4 quarters, you have a dollar, and five quarters give you $1.25. We just add this to 5 and we have our answer: $6.25.

After multiplying the menu prices by how many of each menu item was ordered, the two friends decide to tally the total. Uh-oh! EARTHQUAKE! Good luck hitting those buttons, Blake! Getting back to the bill, they ordered $6.25 in drinks, a total of $13.85 in appetizers, and $26.10 for the main courses, and don't forget their coupon for $1! If we write a mathematical expression for their bill, we get 6.25 plus 13.85 plus 26.10 minus 1.

First, let's look at the cents. We can group the 2 and 8, giving us one dollar. Since our coupon is worth $1, we can ignore this part. That leaves us with 5 cents plus 5 cents plus 10 cents. Five plus five is 10... 10 plus 10 is 20 cents. Let's write this down. Next, we have to look at the dollars. Now we just have to add 6, 13, and 26. 6 and 13 is 19. Now we have 19 plus 26. Taking away 1 from 26 and giving it to 19 gives us 20 plus 25, which is 45, and adding our 0.20 from earlier leaves us with a total of 45.20. Group the numbers in the way that makes the most sense to you!

Since Blake and Denise want to split the bill evenly, we have to divide by 2. Let's break this up again into dollars and cents. Just looking at the dollars, let's break up 45 into 40 and 5. 40 divided by 2 is 20, and 5 divided by 2 is 2.50, so 45 divided by 2 is 22.50. 20 cents divided by 2 is 10, so adding that to our previous answer, we get 22.60 each. Remember, group the numbers in the way that makes the most sense to you!

Getting back to the Disaster Zone, Denise is finished with her calculations, but what about Blake? I guess you can't always rely on technology!

After this lesson, you will be able to solve real-world currency-related problems that require fast calculations. The lesson begins by teaching you to note if certain amounts are linked to specific money values. It leads you to learn how to group the given amounts by place or money value. It concludes with a firm reminder to use the grouping method that makes the most sense to you. Learn about clever money calculations by watching Denise outsmart Blake in computing their restaurant bill.
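In compact form (my transcription of the on-screen arithmetic, not part of the original video), the grouping works out as

\[
6.25 + 13.85 + 26.10 - 1 \;=\; (6+13+26) + (0.25+0.85+0.10) - 1 \;=\; 45 + 1.20 - 1 \;=\; 45.20,
\]
\[
\frac{45.20}{2} \;=\; \frac{40}{2} + \frac{5}{2} + \frac{0.20}{2} \;=\; 20 + 2.50 + 0.10 \;=\; 22.60.
\]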
This video introduces new concepts, notation, and vocabulary, such as the grouping of place and money values (cents and dollars, quarters and dimes, even tens and hundreds) as a proper application of the mathematical properties of the basic operations required to solve the problem. Before watching this video, you should already be familiar with currency values and terms, the four basic operations, and the mathematical properties. After watching this video, you will be prepared to better solve other real-world problems that involve units of measurement aside from currency, like time, distance and volume.

Would you like to apply what you've learned? With the exercises for the video Clever Calculations with Money, you can review and practice it.

Explain how to use mental math using quarters. You can do this more quickly by using multiplication. One quarter is worth $0.25$ dollars.

What a disaster: Blake and Denise can't make out the total for their five drinks, which cost $\$1.25$ each. First, the keyword each indicates multiplication, so they have to multiply $5\times 1.25$. For this they first multiply $5\times 1=5$. That's quite easy. Next comes $5\times 0.25$: a quarter is worth $0.25$ dollars, and five quarters make $1.25$. Almost done: they still have to add the results, $5+1.25=6.25$, to get the total for all their drinks: $6.25$ dollars.

Count the number of the notes as well as the coins. Multiply the value of any bill or coin by the number of coins or bills of that type. Then add the resulting values.

Two $20$ dollar bills and one $5$ dollar bill, plus two $10$ cent coins.
Two $10$ dollar bills and one $5$ dollar bill, plus one $50$ cent piece and three $10$ cent coins.
One $20$ dollar bill, one $10$ dollar bill, one $5$ dollar bill, and three $1$ dollar bills, as well as one $50$ cent coin.
Two $10$ dollar bills, one $5$ dollar bill, and two $1$ dollar bills, plus one $50$ cent piece and one $10$ cent coin.

Explain grouping mental math strategies for addition and subtraction. Mental math is sometimes made easier with the help of a few tricks. Identify correct mental math techniques.

The last digit is $7$. The middle digit is $2$ (from $5+7=12$: write the $2$, carry the $1$). The first digit is $5+1=6$. So we get $57\times 11=627$. Check the statements with a few examples. You can always decide which calculation seems most comfortable for you. The last digit is $9$. The middle digit is $3$ (from $4+9=13$: write the $3$, carry the $1$). The first digit is $4+1=5$. So we get $49\times 11=539$ as the result.

Explain how to group division problems. Solve the following problems involving money. A quarter is $25$ cents, a dime $10$ cents, and a nickel $5$ cents. First decide if you have to add, subtract, multiply, or divide.

$0.25$ for each of the $8$ hats leads to multiplication: $8\times 0.25$. Because $8$ is double $4$, it's easier to multiply a quarter by $4$ to get $1$ dollar and then double the result to get $2$ dollars.

Just add the values: $6+2\times 0.25+5\times 0.10+8\times 0.05=6+0.50+0.50+0.40=6+1+0.40=7.40$ dollars. Adding these two results leads to the total of $17$ dollars.

Here we have to divide: we know the total $33.50$ for $5$ burgers. Split $33.50$ into $30+3+0.50$, so that $30\div 5=6$, $3\div 5=0.60$, and $0.50\div 5=0.10$. Lastly, we add the results: $6+0.60+0.10=6.70$ dollars, the price for one burger.
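The multiply-by-eleven shortcut used in the hints above can be stated in general (my formulation, not from the original exercises): for a two-digit number with tens digit $a$ and ones digit $b$,

\[
\overline{ab} \times 11 \;=\; 100a + 10(a+b) + b,
\]

carrying into the hundreds place when $a+b \ge 10$. For example, $57 \times 11 = 100\cdot 5 + 10\cdot 12 + 7 = 627$.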
CommonCrawl
What is the expectation value of the 3D delta function for the hydrogen atom ground state? I can't evaluate the delta function at zero, since it is at the endpoint of the integration limits [not inside the interval $(0,\infty)$]... of the 3D Dirac delta function.

Hope that's correct, and sorry if this was trivial; it is certainly the expected answer. I still don't fully understand why it's not possible to directly solve this in spherical coordinates, but I understand it is related to the difficulty in evaluating $\delta(r)$ at the endpoint $r=0$ (see CuriousKev's comment).
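For reference, the standard route to the number (assuming the usual normalization of the ground state, $\psi_{100}(\mathbf r) = e^{-r/a_0}/\sqrt{\pi a_0^3}$): the expectation value is evaluated over all of $\mathbb R^3$, so the endpoint issue never arises:

\[
\langle \delta^3(\mathbf r) \rangle \;=\; \int_{\mathbb R^3} |\psi_{100}(\mathbf r)|^2 \,\delta^3(\mathbf r)\, d^3r \;=\; |\psi_{100}(0)|^2 \;=\; \frac{1}{\pi a_0^3}.
\]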
CommonCrawl
You are provided with video segments with one or many fish in each video segment. These videos are gathered on different boats that are fishing for groundfish in the Gulf of Maine. The videos are collected from fixed-position cameras that are placed to look down on a ruler. A fish is placed on the ruler, the fisherman removes their hands from the ruler, and then either discards or keeps the fish based on the species and the size. The camera captures 5 frames per second. Currently, these videos are manually reviewed for both scientific and compliance purposes.

The ultimate goal of this competition is to create an algorithm that can identify the number and length of fish, and what species those fish are, in the videos. This will significantly reduce the human review time for the videos and increase the volume of data available to manage and protect fisheries.

Note: It is strictly prohibited to add additional annotations or hand-label the dataset. Algorithms must be created using the annotations as provided. Data augmentation that generates new videos and transforms the annotations is permitted.

We also provide the annotations for these videos marking the locations of the fish. We provide these annotations in two formats: a csv with all of the annotations for all of the videos, which is used for the purposes of the competition, and json files for each video in the training set with the same annotations. These json files can be loaded using the annotation software to view the annotations.

For the training data, we have provided the x,y coordinates for both the start and end of the line drawn along the fish. This can be used to localize where in the image a fish appears when training models. In addition to the fish, we have annotated some "no-fish" video frames. These frames can be used to help ensure that your algorithm correctly identifies when there is not a fish present on the ruler. These appear in the data as rows where all of the species columns are set equal to zero.

The data included in this competition can be visualized using the open source video annotation software provided by CVision, which utilizes FFMPEG for video decoding. This software guarantees frame-level accuracy of annotations by constructing a map between decoding timestamps (DTS) and frame numbers when a video is loaded. This map is subsequently referenced when seeking and stepping forward and backward in the video. The imagery associated with each frame has been verified to match the imagery from decoding sequentially with both OpenCV and Scikit-Video.

When developing your algorithms, you can use the same tool to visualize your outputs. To do this, simply follow the JSON file format described in the manual. For this competition, all of your annotations in the "detections" array should be line annotations (the "type" field is "line"). In this case the "w" and "h" fields actually correspond to the endpoint coordinates of the line. There should be one track per detection in the "tracks" array. Corresponding tracks and detections should share the same ID. The "global_state" field should be empty.

Note: for the majority of the videos there is a fish in the first frame. Also, for some videos there is a fish 4 frames before the end of the video. This is an artifact of the video processing.

row_id - A number for the row in the dataset.
frame - The number of the frame in the video. Note: not every video software/library encodes/decodes videos with the same assumptions about frames.
You should double check that the library you use renders frames where these annotations line up with the fish.

fish_number - For a given video_id, this is the Nth (e.g., 1st, 2nd, 3rd) fish that appears in the video.
length - The length of the fish in pixels.
x1 - The x coordinate of the first point on the fish.
y1 - The y coordinate of the first point on the fish.
x2 - The x coordinate of the second point of the fish.
y2 - The y coordinate of the second point of the fish.

Given that this is a competition, we've created an aggregate metric that will give a general sense of performance on all of these tasks. The metric is a simple weighted combination of an individual metric for each of the tasks. While the tasks have specific weights, we recommend you focus on a well-rounded algorithm that can contribute to each of these tasks!

1. Group by the fish_number column, taking the maximum species probability value that appears for the frames with this fish number. This gives us a sequence of predicted fish (e.g., Cod, Cod, Haddock, Plaice).
2. Compute the edit distance (Levenshtein distance) between this sequence and the true sequence of fish.
3. Normalize the edit distance to [0, 1] by dividing by the actual number of fish and then clipping values greater than 1.
4. Multiply this score by $\alpha_N = 0.6$ since this is the most important task.

1. Calculate the per-class AUC for each species.
2. Take the mean of the class AUCs for an overall AUC.
3. Multiply by $\alpha_S = 0.3$ since this is the second most important task.

Note: There are some annotated frames without any fish present. Algorithms will be expected to predict 0 for every species in these rows.
Note: For situations where AUC is undefined, MAE is calculated instead (e.g., for a video where all fish are the same species).

The length of a fish is given in pixels in the annotations. We evaluate an algorithm's success in this task using $R^2$ (also known as the coefficient of determination). This is a standard metric for regression tasks, with range $(-\infty, 1]$. For this task we clip negative values to zero. Finally, we multiply by $\alpha_L = 0.1$ since this is the least important task (but still worth 10% of the overall metric).

Overall, our metric is on [0, 1], with 0 being the worst possible score and 1 being the best possible score.

For each video in the test set, we ask competitors to make a submission that has predictions for every frame in the video file. These are the columns that are in the submission format:

row_id - A row ID for the test set file.
frame - The frame in the video.
video_id - The video ID for the video in the test set.
fish_number - For each video_id, this should be incremented each time a new fish appears clearly on the ruler.
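A minimal sketch of the normalized edit-distance piece of the metric (steps 2–3 above). This is my illustration, not the organizers' scoring code; the predicted and actual species sequences below are hypothetical.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Classic dynamic-programming Levenshtein distance between two sequences.
int editDistance(const std::vector<std::string>& a, const std::vector<std::string>& b) {
    const size_t m = a.size(), n = b.size();
    std::vector<std::vector<int>> d(m + 1, std::vector<int>(n + 1));
    for (size_t i = 0; i <= m; i++) d[i][0] = static_cast<int>(i);
    for (size_t j = 0; j <= n; j++) d[0][j] = static_cast<int>(j);
    for (size_t i = 1; i <= m; i++)
        for (size_t j = 1; j <= n; j++)
            d[i][j] = std::min({d[i - 1][j] + 1,                              // deletion
                                d[i][j - 1] + 1,                              // insertion
                                d[i - 1][j - 1] + (a[i - 1] != b[j - 1])});   // substitution
    return d[m][n];
}

int main() {
    // Hypothetical example: predicted vs. actual fish sequence for one video.
    std::vector<std::string> predicted = {"Cod", "Cod", "Haddock", "Plaice"};
    std::vector<std::string> actual    = {"Cod", "Haddock", "Haddock", "Plaice"};

    // Normalize by the true number of fish and clip to [0, 1] as described above.
    double norm = std::min(1.0, editDistance(predicted, actual) /
                                    static_cast<double>(actual.size()));
    std::cout << "normalized edit distance: " << norm << '\n';
    return 0;
}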
CommonCrawl
Abstract: We establish interior Schauder estimates for kinetic equations with integro-differential diffusion. We study equations of the form $f_t + v \cdot \nabla_x f = \mathcal L_v f + c$, where $\mathcal L_v$ is an integro-differential diffusion operator of order $2s$ acting in the $v$-variable. Under suitable ellipticity and Hölder continuity conditions on the kernel of $\mathcal L_v$, we obtain an a priori estimate for $f$ in a properly scaled Hölder space.
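For context (my addition; the paper's precise kernel class may differ from this model case), the "properly scaled" Hölder spaces refer to the kinetic scaling that leaves equations of this type invariant. For the fractional Kolmogorov equation $f_t + v\cdot\nabla_x f = (-\Delta_v)^s f$, the rescaled function

\[
f_r(t,x,v) \;=\; f\bigl(r^{2s} t,\; r^{1+2s} x,\; r v\bigr)
\]

solves an equation of the same class, since the transport operator $\partial_t + v\cdot\nabla_x$ and the diffusion $(-\Delta_v)^s$ both pick up the common factor $r^{2s}$ under this change of variables; Hölder seminorms adapted to this scaling weight increments in $t$, $x$, and $v$ accordingly.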
CommonCrawl