text (stringlengths 100–500k) | subset (stringclasses 4 values) |
---|---|
Substantial investments are being made in health research to support the conduct of large cohort studies with the objective of improving understanding of the relationships between diverse features (e.g. exposure to toxins, genetic biomarkers, demographic variables) and disease incidence, progression, and mortality. Longitudinal cohort studies are commonly used to study life history processes, that is, patterns of disease onset, progression, and death in a population. While primary interest often lies in estimating the effect of some factor on a simple time-to-event outcome, multistate modelling offers a convenient and powerful framework for the joint consideration of disease onset, progression, and mortality, as well as the effect of one or more covariates on these transitions. Longitudinal studies are typically very costly, and the complexity of the follow-up scheme is often not fully considered at the design stage, which may lead to inefficient allocation of study resources and/or underpowered studies. In this thesis, several aspects of study design are considered to guide the design of complex longitudinal studies, with the general aim being to obtain efficient estimates of parameters of interest subject to cost constraints. Attention is focused on a general $K$ state model where states $1, \ldots, K-1$ represent different stages of a chronic disease and state $K$ is an absorbing state representing death. In Chapter 2, we propose an approach to design efficient tracing studies to mitigate the loss of information stemming from attrition, a common feature of prospective cohort studies. Our approach exploits observed information on state occupancy prior to loss to follow-up, covariates, and the time of loss to follow-up to inform the selection of individuals to be traced, leading to more judicious allocation of resources. Two settings are considered. In the first there are only constraints on the expected number of individuals to be traced, and in the second the constraints are imposed on the expected cost of tracing. In the latter, we account for the fact that some types of data may be more costly to obtain via tracing than others. In Chapter 3, we focus on two key aspects of longitudinal cohort studies with intermittent assessments: sample size and the frequency of assessments. We derive the Fisher information as the basis for studying the interplay between these factors and for identifying features of minimum-cost designs that achieve desired power. Extensions which accommodate the possibility of misclassification of disease status at the intermittent assessment times are developed. These are useful to assess the impact of imperfect screening or diagnostic tests in the longitudinal setting. In Chapter 4, attention is turned to state-dependent sampling designs for prevalent cohort studies. While incident cohorts involve recruiting individuals before they experience some event of interest (e.g. onset of a particular disease) and prospectively following them to observe this event, prevalent cohorts are obtained by recruiting individuals who have already experienced this event at some point in the past. Prevalent cohort sampling yields length-biased data, which has been studied extensively in the survival setting; we demonstrate the impact of this in the multistate setting. We start with observation schemes in which data are subject to left- or right-truncation in the failure-time setting. We then generalize these findings to more complex multistate models. 
While the distribution of state occupancy at recruitment in a prevalent cohort sample may be driven by the prevalences in the population, we propose approaches for state-dependent sampling at the design stage to improve efficiency and/or minimize expected study cost. Finally, Chapter 5 features an overview of the key contributions of this research and outlines directions for future work. | CommonCrawl |
Why is selection sort faster than bubble sort?
All the complexities you provided are correct; however, they are given in big-O notation, so all additive lower-order terms and constant factors are omitted.
To answer your question we need to focus on a detailed analysis of those two algorithms. This analysis can be done by hand, or found in many books. I'll use results from Knuth's Art of Computer Programming.
As you can see, bubble sort is much worse as the number of elements increases, even though both sorting methods have the same asymptotic complexity.
This analysis is based on the assumption that the input is random - which might not be true all the time. However, before we start sorting we can randomly permute the input sequence (using any method) to obtain the average case.
I omitted time complexity analysis because it depends on implementation, but similar methods can be used.
The asymptotic cost, or $\mathcal O$-notation, describes the limiting behaviour of a function as its argument tends to infinity, i.e. its growth rate.
The function itself, e.g. the number of comparisons and/or swaps, may be different for two algorithms with the same asymptotic cost, provided they grow with the same rate.
More specifically, Bubble sort requires, on average, $n/4$ swaps per entry (each entry is moved element-wise from its initial position to its final position, and each swap involves two entries), while Selection sort requires only $1$ (once the minimum/maximum has been found, it is swapped once to the end of the array).
In terms of the number of comparisons, Bubble sort requires $k\times n$ comparisons, where $k$ is the maximum distance between an entry's initial position and its final position, which is usually larger than $n/2$ for uniformly distributed initial values. Selection sort, however, always requires $n\times(n-1)/2$ comparisons.
In summary, the asymptotic limit gives you a good feel for how the costs of an algorithm grow with respect to the input size, but says nothing about the relative performance of different algorithms within the same complexity class.
Bubble sort performs many more swaps, while selection sort avoids this.
Selection sort swaps at most $n$ times, but bubble sort swaps on the order of $n^2$ times (about $n(n-1)/4$ on average). And reading memory is obviously cheaper than writing it. The comparison time and other running time can be ignored, so the number of swaps is the critical bottleneck.
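To see this concretely, one can instrument both algorithms and count operations on the same random input. The following Python sketch is not from the original answers (the counting harness is my own); it typically reports on the order of $n^2/2$ comparisons for both methods, but at most $n-1$ swaps for selection sort versus roughly $n^2/4$ swaps for bubble sort.

```python
import random

def bubble_sort_counts(a):
    """Plain bubble sort; returns (comparisons, swaps)."""
    a = list(a)
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return comparisons, swaps

def selection_sort_counts(a):
    """Plain selection sort; returns (comparisons, swaps)."""
    a = list(a)
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return comparisons, swaps

data = [random.random() for _ in range(1000)]
print("bubble    (comparisons, swaps):", bubble_sort_counts(data))
print("selection (comparisons, swaps):", selection_sort_counts(data))
```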
| CommonCrawl |
A multi-dimensional array is an array of arrays. 2-dimensional arrays are the most commonly used. They are used to store data in a tabular manner.
Consider the following 2D array, which is of size $$3 \times 5$$. For an array of size $$N \times M$$, the rows are numbered from $$0$$ to $$N-1$$ and the columns from $$0$$ to $$M-1$$. Any element of the array can be accessed by $$arr[i][j]$$ where $$0 \le i \lt N$$ and $$0 \le j \lt M$$. For example, in the following array, the value stored at $$arr[ 1 ][ 3 ]$$ is $$14$$.
Initializing an array after declaration can be done by assigning values to each cell of 2D array, as follows.
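For example, a minimal Python sketch of this cell-by-cell initialization (the tutorial's own language is not shown in this excerpt, and the values below are just for illustration):

```python
N, M = 3, 5

# Declare an N x M array and assign a value to each cell, one by one.
arr = [[0] * M for _ in range(N)]
for i in range(N):
    for j in range(M):
        arr[i][j] = i * M + j   # example values; with this filling arr[1][3] is 8

print(arr[1][3])
```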
This is quite naive and not usually used. Instead, the array elements are read from STDIN.
These methods of declaration, initialization, and processing can be extended to 3D or higher dimensional arrays. | CommonCrawl |
You are given an array of $n$ integers. Your task is to calculate the median of each window of $k$ elements, from left to right.
The median is the middle element when the elements are sorted. If the number of elements is even, there are two possible medians and we assume that the median is the smaller of them.
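As an illustration of this definition only, here is a brute-force Python sketch: sorting each window costs $O(nk\log k)$, far too slow for the actual limits, and the input format assumed here is the one described below.

```python
import sys

def window_medians(x, k):
    # Middle element of the sorted window; for even k this index picks
    # the smaller of the two middle elements, as the statement requires.
    mid = (k - 1) // 2
    return [sorted(x[i:i + k])[mid] for i in range(len(x) - k + 1)]

data = sys.stdin.read().split()
n, k = int(data[0]), int(data[1])
x = list(map(int, data[2:2 + n]))
print(*window_medians(x, k))
```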
The first input line contains two integers $n$ and $k$: the number of elements and the size of the window.
Then there are $n$ integers $x_1,x_2,\ldots,x_n$: the contents of the array.
Print $n-k+1$ values: the medians. | CommonCrawl |
Every evening, Farmer John rings a giant bell that summons his cows to the barn for dinner. Eager to get to the barn as quickly as possible, they all follow the shortest possible route to get there.
The farm is described by a set of $N$ fields ($1 \leq N \leq 10,000$), conveniently numbered $1 \ldots N$, with the barn residing in field 1. The fields are connected by a set of $M$ bidirectional trails ($N-1 \leq M \leq 50,000$). Each trail has a travel time associated with it, and there is a path from every field to the barn using some set of trails.
Field $i$ contains $c_i$ cows. Upon hearing the dinner bell, these cows all walk to the barn along a route that takes the minimum amount of time. If there are several routes tied for the minimum time, the cows take whichever of these is "lexicographically" smallest (i.e., they break ties between two routes by favoring the one using the lower-indexed field at the first place where the routes differ, so for example a path that visits fields 7, 3, 6, 1 would be preferable to one that visits 7, 5, 1, assuming both had the same travel time).
Farmer John is worried about the barn being far away from some fields. He adds up the travel time experienced by each cow, summed over all the cows, calling this number the total travel time. He would like to reduce this number as much as possible by adding one extra "shortcut" trail which has a travel time of $T$ ($1 \leq T \leq 10,000$), from the barn (field 1) to some other field of his choosing. If a cow stumbles upon the shortcut trail while traveling along her usual path to the barn, she will take it if it gets her to the barn faster. Otherwise, a cow will follow her usual route, even if it might have been possible to use the shortcut to improve her travel time.
Please help Farmer John determine the greatest possible amount of decrease in total travel time he can achieve by adding his shortcut trail.
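One standard way to attack this, sketched below in Python (my own outline, not an official solution; the function and variable names are made up): run Dijkstra from the barn, realise the lexicographic tie-breaking by letting every field's next hop be its smallest-indexed neighbour lying on some shortest path, count how many cows pass through each field, and then try the shortcut at every field.

```python
import heapq

def best_saving(n, T, cows, edges):
    """cows[i] = number of cows in field i (1-indexed); edges = (a, b, t) trails."""
    adj = [[] for _ in range(n + 1)]
    for a, b, t in edges:
        adj[a].append((b, t))
        adj[b].append((a, t))

    # Shortest travel times to the barn (field 1) via Dijkstra.
    dist = [float("inf")] * (n + 1)
    dist[1] = 0
    pq = [(0, 1)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        for u, t in adj[v]:
            if d + t < dist[u]:
                dist[u] = d + t
                heapq.heappush(pq, (dist[u], u))

    # Lexicographic tie-breaking: the next field on a cow's route is the
    # smallest-indexed neighbour that lies on some shortest path.
    nxt = [0] * (n + 1)
    for v in range(2, n + 1):
        nxt[v] = min(u for u, t in adj[v] if dist[u] + t == dist[v])

    # Number of cows whose route passes through each field.
    through = [0] * (n + 1)
    for v in sorted(range(2, n + 1), key=lambda w: -dist[w]):
        through[v] += cows[v]
        through[nxt[v]] += through[v]

    # A shortcut of time T at field v saves (dist[v] - T) per cow passing v.
    return max((through[v] * (dist[v] - T) for v in range(2, n + 1) if dist[v] > T),
               default=0)
```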
The first line of input contains $N$, $M$, and $T$. The next line contains $N$ integers $c_1 \ldots c_N$, each in the range $0 \ldots 10,000$. The next $M$ lines each describe a trail using three integers $a$, $b$, and $t$, where the trail connects fields $a$ and $b$ and has travel time $t$. All travel times are in the range $1 \ldots 25,000$.
Please output the largest possible reduction in total travel time Farmer John can achieve. | CommonCrawl |
Are covariant derivatives of Killing vector fields symmetric?
A vector field $\zeta$ is conformal on a Riemannian manifold $(M,g)$ if $$\mathcal L_\zeta g=\rho g.$$ These vector fields have a well-known geometrical interpretation: the flow of a conformal vector field consists of conformal transformations.
I want to enlarge this class of conformal vector fields as follows. A vector field $\zeta$ is $2-$conformal on a Riemannian manifold $(M,g)$ if $$\mathcal L_\zeta \mathcal L_\zeta g=\rho g.$$ It is clear that any conformal vector field is $2-$conformal and the converse need not be true. T. Operea, B. Unal and I did the same thing for $2-$Killing vector fields (see references below).
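For reference (standard formulas, not taken from the papers cited): in local coordinates the Lie derivative of the metric along $\zeta$ is $$(\mathcal L_\zeta g)_{ij}=\nabla_i\zeta_j+\nabla_j\zeta_i,$$ so the conformal condition reads $\nabla_i\zeta_j+\nabla_j\zeta_i=\rho\,g_{ij}$, while the $2-$conformal condition applies the Lie derivative once more to the symmetric $(0,2)$-tensor $h:=\mathcal L_\zeta g$: $$(\mathcal L_\zeta h)_{ij}=\zeta^k\partial_k h_{ij}+h_{kj}\,\partial_i\zeta^k+h_{ik}\,\partial_j\zeta^k=\rho\,g_{ij}.$$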
What could be the physical and geometric interpretation of a $2-$conformal vector field?
In fact, I want to understand the left-hand side: is it a double, consecutive dragging of $g$, or something else? | CommonCrawl |
Combinatorial argument for the solution of a recursion behaving similarly to Pascal's triangle?
With initial conditions $F(0,d)=1,F(n,1)=1$ and $n\in\mathbb N_0, d\in\mathbb N$.
Can this solution be justified (proven) by a combinatorial argument?
I want to avoid proving it by solving the above recursion in two variables.
The recursion seems similar to the one for the diagonals in Pascal's triangle.
| CommonCrawl |
Riemann Integrability of Cts. Functions and Functions of Bounded Var.
Recall from the Riemann-Stieltjes Integrability of Continuous Functions with Integrators of Bounded Variation page that we proved that if $f$ is a continuous function on $[a, b]$ and $\alpha$ is a function of bounded variation on $[a, b]$ then $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$.
We have looked at a lot of Riemann-Stieltjes integrals thus far, but we should not forget the less general Riemann integral, which arises when we set $\alpha (x) = x$, since these integrals are fundamentally important in calculus.
The function $\alpha(x) = x$ is a monotonically increasing function, and we've already seen on the Monotonic Functions as Functions of Bounded Variation page that every monotonic function is of bounded variation.
Consequently, the following theorem follows rather naturally as a corollary for Riemann integrals from the theorem referenced at the top of this page.
Theorem 1: Let $f$ be a function defined on $[a, b]$.
a) If $f$ is continuous on $[a, b]$ then $\int_a^b f(x) \: dx$ exists.
b) If $f$ is of bounded variation on $[a, b]$ then $\int_a^b f(x) \: dx$ exists.
Proof of a): Suppose that $f$ is continuous on $[a, b]$. Note that $\alpha(x) = x$ is a function of bounded variation. Hence by the theorem referenced at the top of this page we have that $f$ is Riemann-Stieltjes integrable with respect to $\alpha(x) = x$ on $[a, b]$, that is, $\int_a^b f(x) \: dx$ exists. | CommonCrawl |
Assume $X_1, X_2, \ldots$ are independent exponential random variables with parameter $\lambda$. Fix a real number $t > 0$. Let $Y$ be the largest $n$ so that $X_1 + X_2 + \ldots + X_n \leqslant t$ ($Y = 0$ if $X_1 > t$). How can one show that the random variable $Y$ has the Poisson distribution with parameter $t\lambda$?
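One way to see why this should be true is that $Y$ is exactly the number of arrivals in $[0,t]$ of a Poisson process with rate $\lambda$, whose inter-arrival times are i.i.d. exponential($\lambda$). A quick Monte Carlo check in Python (a sanity check, not a proof; the parameter values are arbitrary):

```python
import math
import random

def simulate_Y(lam, t):
    """Largest n with X_1 + ... + X_n <= t for i.i.d. Exp(lam) variables X_i."""
    total, n = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > t:
            return n
        n += 1

lam, t, trials = 2.0, 3.0, 200_000
counts = {}
for _ in range(trials):
    y = simulate_Y(lam, t)
    counts[y] = counts.get(y, 0) + 1

for k in range(10):
    empirical = counts.get(k, 0) / trials
    poisson = math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
    print(f"P(Y={k}): simulated {empirical:.4f}, Poisson(lambda*t) {poisson:.4f}")
```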
| CommonCrawl |
Schaller, M., Bower, R. G. & Theuns, T. (2013), On the use of particle based methods for cosmological hydrodynamical simulations, 8th International SPHERIC Workshop. Trondheim, Norway, Trondheim.
Mitchell, N.L., Bower, R.G., Theuns, T. & Vorobyov, E.I. (2012), Towards Understanding Simulated Feedback in AMR and SPH Codes and the Multi-Phase Nature of the ISM, in Capuzzo-Dolcetta, R., Limongi, M. & Tornambè, A. eds, Astronomical Society of the Pacific Conference Series 453: Advances in Computational Astrophysics: Methods, Tools, and Outcome. 19.
Kodama, T., Smail, Ian, Nakata, F., Okamura, S. and Bower, R. G. (2004), History of Mass Assembly and Star Formation in Galaxy Cluster, Studies of Galaxies in the Young Universe with New Generation Telescope 23-31.
Gilbank, David G., Castander, F. J., Balogh, M. L. and Bower, R. G. (2004), Wide-field spectroscopy of a galaxy cluster pair at z=0.4, IAU Colloq. 195: Outskirts of Galaxy Clusters: Intense Life in the Suburbs 95-97.
Balogh, M.L. & Bower, R.G. (2003), Galaxy Evolution: Internally or Externally Driven?, Revista Mexicana de Astronomia y Astrofisica Conference Series 17: 220-221.
Castander, F. J., Balogh, M. L., Bernardi, M., Bower, R. G., Connolly, A. J., Gilbank, D. G., Gómez, P. L., Goto, T., Hopkins, A. M., Miller, C. J., Nichol, R. C., Schneider, D. P., Seth, R. and Zabludoff, A. I. (2003), Galaxy Star Formation as a function of Environment, Revista Mexicana de Astronomia y Astrofisica Conference Series 16: 229-232.
Kodama, T., Smail, I., Nakata, F., Okamura, S. and Bower, R. G. (2003), Large Scale Environmental Effects in Clusters of Galaxies, Astronomical Society of the Pacific Conference Series 301: 235-.
Couch, W. J., Balogh, M., Bower, R. and Lewis, I. (2003), The Cluster `Sphere of Influence': Tracking Star Formation with Environment via Hα Surveys, ASP Conf. Ser. 289: The Proceedings of the IAU 8th Asian-Pacific Regional Meeting, Volume I 289: 235-.
Quilis, V., Bower, R. & Balogh, M. (2002), Blowing Bubbles in the ICM, ASP Conf. Ser. 268: Tracing Cosmic Evolution with Galaxy Clusters 268: 253-.
Kodama, T., Smail, I., Nakata, F., Okamura, S. and Bower, R. G. (2002), History of Mass Assembly and Star Formation in Clusters, ASP Conf. Ser. 268: Tracing Cosmic Evolution with Galaxy Clusters 268: 301-.
Bower, R.G. (2002), Making the Connection: Feedback, X-rays and Galaxy Formation, ASP Conf. Ser. 268: Tracing Cosmic Evolution with Galaxy Clusters 268: 257-.
Ziegler, B. L., Fricke, K. J., Balogh, M. L., Bower, R. G., Gaztelu, A., Smail, I. and Davies, R. L. (2001), Galaxy Transformation in Poor Clusters at z≈0.25, ASP Conf. Ser. 240: Gas and Galaxy Evolution 240: 619-.
Bower, R.G. (2001), Galaxy Transformation in the Cluster Environment, ASP Conf. Ser. 240: Gas and Galaxy Evolution 240: 613-.
Gilbank, D. G., Bower, R. G. and Castander, F. Javier (2001), Optical vs. X-ray Selection for Finding Clusters of Galaxies, ASP Conf. Ser. 240: Gas and Galaxy Evolution 240: 644-.
Quilis, V., Moore, B. & Bower, R. (2001), The origin of SO galaxies in clusters, Highlights of Spanish astrophysics II 65-.
Moore, Ben, Quilis, Vicent and Bower, Richard (2000), Dynamical Effects on Galaxies in Clusters, ASP Conf. Ser. 197: Dynamics of Galaxies: from the Early Universe to the Present 197: 363-.
Terlevich, Alejandro, Kuntschner, Harald and Bower, Richard (2000), Stellar Populations and the Colour-Magnitude Relation in Coma, ASP Conf. Ser. 215: Cosmic Evolution and Galaxy Formation: Structure, Interactions, and Feedback 215: 222-.
Moore, S. A. W., Lucey, J. R., Colless, M., Kuntschner, H., Bower, R. & Davies, R. L. (2000), The fundamental properties of early-type galaxies in the Coma Cluster, IAU Symposium 201.
Bower, R.G. & Kay, S.T. (1999), Cosmological Parameters from the X-Ray Evolution of Clusters, IAU Symp. 183: Cosmological Parameters and the Evolution of the Universe 183: 243-.
Kodama, T., Bell, E. F. and Bower, R. G. (1999), Identification of High Redshift Clusters Using Photometric Redshifts, ASP Conf. Ser. 191: Photometric Redshifts and the Detection of High Redshift Galaxies 191: 160-.
Knapp, G. R., Binette, L., Bower, R. G., Brinks, E., Goudfrooij, P., Hau, G., Pogge, R. W. and Young, L. M. (1999), Panel Discussion: Star Formation in Early-Type Galaxies, ASP Conf. Ser. 163: Star Formation in Early Type Galaxies 163: 142-.
Bower, R. G., Terlevich, A., Kodama, T. and Caldwell, N. (1999), The Formation History of Early-Type Galaxies: an Observational Perspective, ASP Conf. Ser. 163: Star Formation in Early Type Galaxies 163: 211-.
Bell, E. F., Bower, R. G., de Jong, R. S., Rauscher, B. J., Barnaby, D., Harper, D. A., Hereld, M. and Loewenstein, R. F. (1999), The star formation histories of Low Surface Brightness galaxies, ASP Conf. Ser. 170: The Low Surface Brightness Universe 170: 245-.
Terlevich, A. I., Bower, R. G., Caldwell, N. and Rose, J. A. (1998), The star formation history of early type galaxies in the Coma cluster, Untangling Coma Berenices: A New Vision of an Old Cluster 111-.
Bower, R.G. (1997), The Influence of Environmental Effects on Galaxy Formation, The Evolution of the Universe: report of the Dahlem Workshop on the Evolution of the Universe 245-.
Terlevich, A. I., Bower, R. G., Smail, I., Barger, A. J. and Ellis, R. S. (1997), The X-Ray Structure of the Butcher-Oemler Clusters AC114 and AC118, The Hubble Space Telescope and the High Redshift Universe 227-.
Guzmán, R., Lucey, J.R. & Bower, R.G. (1993), The Fundamental Properties of Giant Ellipticals, Structure, Dynamics and Chemical Evolution of Elliptical Galaxies 19-.
Lorrimer, S.J. & Bower, R.G. (1991), Clustering from a fresh perspective: correlations in the Press-Schechter formalism, Clusters and Superclusters of Galaxies 125-.
Bower, R., Lucey, J. R. and Ellis, R. S. (1991), Cosmological implications of the colour-magnitude relation, Clusters and Superclusters of Galaxies 11-.
Bower, R. G., Ellis, R. S. & Efstathiou, G. F. (1988), Dynamic friction in the rich cluster A2029, NATO ASIC Proc. 229: Cooling Flows in Clusters and Galaxies 115-119.
Bahé, Yannick M, Schaye, Joop, Barnes, David J, Dalla Vecchia, Claudio, Kay, Scott T, Bower, Richard G, Hoekstra, Henk, McGee, Sean L & Theuns, Tom (2019). Disruption of satellite galaxies in simulated groups and clusters: the roles of accretion time, baryons, and pre-processing. Monthly Notices of the Royal Astronomical Society 485(2): 2287-2311.
Rosas-Guevara, Yetli M, Bower, Richard G, McAlpine, Stuart, Bonoli, Silvia & Tissera, Patricia B (2019). The abundances and properties of Dual AGN and their host galaxies in the EAGLE simulations. Monthly Notices of the Royal Astronomical Society 483(2): 2712-2720.
Thob, Adrien C.R., Crain, Robert A., McCarthy, Ian G., Schaller, Matthieu, Lagos, Claudia D.P., Schaye, Joop, Talens, Geert Jan J., James, Philip A., Theuns, Tom & Bower, Richard G. (2019). The relationship between the morphology and kinematics of galaxies and its dependence on dark matter halo structure in EAGLE. Monthly Notices of the Royal Astronomical Society 485(1): 972–987.
Salcido, J., Bower, R. G., Barnes, L. A., Lewis, G. F., Elahi, P. J., Theuns, T., Schaller, M., Crain, R. A. & Schaye, J. (2018). The impact of dark energy on galaxy formation. What does the future of our Universe hold?. Monthly Notices of the Royal Astronomical Society 477(3): 3744-3759.
Tescari, E., Cortese, L., Power, C., Wyithe, J. S. B., Ho, I.-T., Crain, R. A., Bland-Hawthorn, J., Croom, S. M., Kewley, L. J., Schaye, J., Bower, R. G., Theuns, T., Schaller, M., Barnes, L., Brough, S., Bryant, J. J., Goodwin, M., Gunawardhana, M. L. P., Lawrence, J. S., Leslie, S. K., López-Sánchez, Á. R., Lorente, N. P. F., Medling, A. M., Richards, S. N., Sweet, S. M. & Tonini, C. (2018). The SAMI Galaxy Survey: understanding observations of large-scale outflows at low redshift with EAGLE simulations. Monthly Notices of the Royal Astronomical Society 473(1): 380-397.
De Rossi, María Emilia, Bower, Richard G., Font, Andreea S., Schaye, Joop & Theuns, Tom (2017). Galaxy metallicity scaling relations in the EAGLE simulations. Monthly Notices of the Royal Astronomical Society 472(3): 3354-3377.
Pujol, A., Skibba, R. A., Gaztañaga, E., Benson, A., Blaizot, J., Bower, R., Carretero, J., Castander, F. J., Cattaneo, A., Cora, S. A., Croton, D. J., Cui, W., Cunnama, D., De Lucia, G., Devriendt, J. E., Elahi, P. J., Font, A., Fontanot, F., Garcia-Bellido, J., Gargiulo, I. D., Gonzalez-Perez, V., Helly, J., Henriques, B. M. B., Hirschmann, M., Knebe, A., Lee, J., Mamon, G. A., Monaco, P., Onions, J., Padilla, N. D., Pearce, F. R., Power, C., Somerville, R. S., Srisawat, C., Thomas, P. A., Tollet, E., Vega-Martínez, C. A. & Yi, S. K. (2017). nIFTy cosmology: the clustering consistency of galaxy formation models. Monthly Notices of the Royal Astronomical Society 469(1): 749-762.
Furlong, M., Bower, R. G., Crain, R. A., Schaye, J., Theuns, T., Trayford, J. W., Qu, Y., Schaller, M., Berthet, M. & Helly, J. C. (2017). Size evolution of normal and compact galaxies in the EAGLE simulation. Monthly Notices of the Royal Astronomical Society 465(1): 722-738.
Rahmati, A., Schaye, J., Bower, R. G., Crain, R. A., Furlong, M., Schaller, M. & Theuns, T. (2015). The distribution of neutral hydrogen around high-redshift galaxies and quasars in the EAGLE simulation. Monthly Notices of the Royal Astronomical Society 452(2): 2034-2056.
Rosas-Guevara, Y. M., Bower, R. G., Schaye, J., Furlong, M., Frenk, C. S., Booth, C. M., Crain, R. A., Dalla Vecchia, C., Schaller, M. & Theuns, T. (2015). The impact of angular momentum on black hole accretion rates in simulations of galaxy formation. Monthly Notices of the Royal Astronomical Society 454(1): 1038-1057.
Oman, K. A., Navarro, J. F., Fattahi, A., Frenk, C. S., Sawala, T., White, S. D. M., Bower, R., Crain, R. A., Furlong, M., Schaller, M., Schaye, J. & Theuns, T. (2015). The unexpected diversity of dwarf galaxy rotation curves. Monthly Notices of the Royal Astronomical Society 452(4): 3650-3665.
Stott, J.P., Sobral, D., Bower, R., Smail, I., Best, P.N., Matsuda, Y., Hayashi, M., Geach, J.E. & Kodama, T. (2013). A fundamental metallicity relation for galaxies at z = 0.84-1.47 from HiZELS. Monthly Notices of the Royal Astronomical Society 436(2): 1130-1141.
Hou, A., Parker, L.C., Balogh, M.L., McGee, S.L., Wilman, D.J., Connelly, J.L., Harris, W.E., Mok, A., Mulchaey, J.S., Bower, R.G. & Finoguenov, A. (2013). Do group dynamics play a role in the evolution of member galaxies?. Monthly Notices of Royal Astronomical Society 435(2): 1715-1726.
Matsuda, Y., Smail, I., Geach, J.E., Best, P.N., Sobral, D., Tanaka, I., Nakata, F., Ohta, K., Kurk, J., Iwata, I., Bielby, R., Wardlow, J.L., Bower, R.G., Ivison, R.J., Kodama, T., Yamada, T., Mawatari, K. & Casali, M. (2011). An H$\alpha$ search for overdense regions at z = 2.23. Monthly Notices of the Royal Astronomical Society 416: 2041-2059.
Gilbank, D.G., Baldry, I.K., Balogh, M.L., Glazebrook, K. & Bower, R.G. (2010). The local star formation rate density: assessing calibrations using [OII], H and UV luminosities. Monthly Notices of the Royal Astronomical Society 405: 2594-2614.
McGee, S.L., Balogh, M.L., Bower, R.G., Font, A.S. & McCarthy, I.G. (2009). The accretion of galaxies into groups and clusters. Monthly Notices of the Royal Astronomical Society 400: 937-950.
McGee, S.L., Balogh, M.L., Henderson, R.D.E., Wilman, D.J., Bower, R.G., Mulchaey, J.S. & Oemler Jr., A. (2008). Evolution in the discs and bulges of group galaxies since z = 0.4. Monthly Notices of the Royal Astronomical Society 387: 1605-1621.
Hau, G.K.T., Bower, R.G., Kilborn, V., Forbes, D.A., Balogh, M.L. & Oosterloo, T. (2008). Is NGC 3108 transforming itself from an early- to late-type galaxy - an astronomical hermaphrodite? Monthly Notices of the Royal Astronomical Society 385: 1965-1972.
Bower, R.G., McCarthy, I.G. & Benson, A.J. (2008). The flip side of galaxy formation: a combined model of galaxy formation and cluster heating. Monthly Notices of the Royal Astronomical Society 390(4): 1399-1410.
Okamoto, T., Nemmen, R.S. & Bower, R.G. (2008). The impact of radio feedback from active galactic nuclei in cosmological simulations: formation of disc galaxies. Monthly Notices of the Royal Astronomical Society 385: 161-180.
Poggianti, BM, Von der Linden, A, De Lucia, G, Desai, V, Simard, L, Halliday, C, Aragon-Salamanca, A, Bower, R, Varela, J, Best, P, Clowe, DI, Dalcanton, J, Jablonka, P, Milvang-Jensen, B, Pello, R, Rudnick, G, Saglia, R, White, SDM & Zaritsky, D (2006). The evolution of the star formation activity in galaxies and its dependence on environment. Astrophysical Journal 642(1): 188-215.
Kodama, T, Balogh, ML, Smail, I, Bower, RG & Nakata, F (2004). A panoramic H alpha imaging survey of the z=0.4 cluster Cl 0024.0+1652 with Subaru. Monthly Notices of the Royal Astronomical Society 354(4): 1103-1119.
Balogh, M.L., Baldry, I.K., Nichol, R., Miller, C., Bower, R. & Glazebrook, K. (2004). The Bimodal Galaxy Color Distribution: Dependence on Luminosity and Environment. The Astrophysical Journal 615(2): L101-L104.
Smith, Joanna, Bunker, Andrew & Bower, Richard (2003). 3D spectroscopy of z ~ 1galaxies with gemini. Astrophysics and Space Science 284: 973-976.
Kodama, Tadayuki & Bower, Richard (2003). The Ks-band luminosity and stellar mass functions of galaxies in z~ 1 clusters. Monthly Notices of the Royal Astronomical Society 346: 1-12.
Balogh, Michael L., Smail, Ian, Bower, R. G., Ziegler, B. L., Smith, G. P., Davies, Roger L., Gaztelu, A., Kneib, J.-P. & Ebeling, H. (2002). Distinguishing Local and Global Influences on Galaxy Morphology: A Hubble Space Telescope Comparison of High and Low X-Ray Luminosity Clusters. Astrophysical Journal 566(1): 123-136.
Couch, WJ, Balogh, ML, Bower, RG, Smail, I, Glazebrook, K & Taylor, M (2001). A low global star formation rate in the rich galaxy cluster AC 114 at z=0.32. Astrophysical Journal 549(2): 820-831.
Bell, EF, Barnaby, D, Bower, RG, de Jong, RS, Harper, DA, Hereld, M, Loewenstein, RF & Rauscher, BJ (2000). The star formation histories of low surface brightness galaxies. Monthly Notices of the Royal Astronomical Society 312(3): 470-496.
Castander, F. J., Bower, R. G., Ellis, R. S., Aragon-Salamanca, A., Mason, K. O., Hasinger, G., McMahon, R. G., Carrera, F. J., Mittaz, J. P. D., Perez-Fournon, I. and Lehto, H. J. (1995). Deficit of distant X-ray-emitting galaxy clusters and implications for cluster evolution. Nature 377: 39-41. | CommonCrawl |
This is a purely hypothetical example, but is provable ignorance useful in cryptography?
For example, let's say I have a trapdoor collision resistant function. I know the trapdoor and therefore some $x_0 \neq x_1$ such that $f(x_0) = f(x_1)$. This is however, hard to find. If someone proves they know $x_0$, I can conclude that they do not know $x_1$.
Is there any context where more complicated versions of such problems is useful?
In general, you cannot prove lack of knowledge, because even if you did know something you shouldn't, you can always pretend that you don't know it and carry out the proof as if you didn't know it.
For your specific example, consider how the prover would know $x_0$. Did you tell them what it is? If so, that proves nothing, since they would then know $x_0$ even if they had also learned $x_1$ from somewhere else.
Conversely, if your function $f$ is collision resistant without the trapdoor, but is not injective (i.e. potential collisions do exist), then it must also be preimage resistant. Thus, finding $x_0$ from $y = f(x_0)$ is (at least) as hard as finding $x_1$. Thus, paradoxically, the prover exhibiting $x_0$ would in fact be evidence that they can find preimages for $f$, and thus can probably find $x_1$ as well.
You can't prove unconditional lack of knowledge, but you can create proven, shared lack of knowledge of a number by all parties that contribute.
Suppose there are two parties. They each generate a 256-bit random number (call them $r_1$ and $r_2$), and publish and sign $H(r_1)$ and $H(r_2)$ as their choices. Both parties then sign $H(r_1)||H(r_2)$ as an agreement that they both agree to $H(r_1||r_2)$ as "the number" that they both do not know.
Now a number exists that two parties have agreed on that they both have no knowledge on (to a 256-bit security level), yet if the parties decide to reveal the number (by revealing $r_1, r_2$) it can be confirmed that this was indeed the number they agreed upon.
The above scheme can be used for example for trustless (in the sense that the random number truly was random) gambling.
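A minimal Python sketch of this commit-then-combine idea (the hash choice, byte encoding and omission of the signatures are my assumptions, not part of the answer above):

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Commit phase: each party publishes (and signs) the hash of its secret.
r1 = secrets.token_bytes(32)      # party 1's 256-bit random number
r2 = secrets.token_bytes(32)      # party 2's 256-bit random number
c1, c2 = H(r1), H(r2)             # the published commitments H(r1), H(r2)

# Both parties then sign c1 || c2, agreeing that the number neither of
# them knows is H(r1 || r2).  (Signature handling is omitted here.)

# Reveal phase: r1 and r2 are published and anyone can check them.
assert H(r1) == c1 and H(r2) == c2
shared_number = H(r1 + r2)
print(shared_number.hex())
```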
If you want to be sure that no one knows X, then one thing you can do is incentivize people to reveal the fact that they know X. You could offer a monetary bounty to anyone who can demonstrate that they know X. If no one takes the bounty, then you know that either no one knows X, they value the secrecy of that knowledge at higher than your bounty, they just don't yet know about your bounty, they don't trust you to pay up, or they value their privacy more than the bounty and don't believe they can claim the bounty anonymously.
Cryptocurrency systems provide a few ways for people to offer bounties in a way that solves the trust and anonymity issues, and can even allow third parties to contribute directly to the bounty too without them needing to trust the bounty-offerer.
In 2013, Peter Todd created several bitcoin addresses that were configured to automatically allow anyone to withdraw the stored bitcoin if they publicly demonstrated a hash collision in one of several algorithms. Many people pitched in and contributed bitcoin into the bounty wallets. No one, including Peter Todd, could withdraw the money back from the wallet unless they demonstrated a hash collision. The bounty for a SHA-1 collision (worth about $3000 at the time) was claimed in 2017 using the SHA-1 collision published by Google. The other bounties (for SHA-256 and RIPEMD160) are still unclaimed, implying that no one yet knows of any collisions for them.
If you computed $x_0$ and $x_1$ yourself out of nothing, and never leaked any information anyone could use to derive $x_1$, then it would be true that nobody who knows $x_0$ (other than you) would also know $x_1$; but it would be equally true that nobody who doesn't know $x_0$ would know $x_1$ either. Both statements hold simply because you are the only person in the universe who knows $x_1$. Further, if information about $x_1$ did leak out in ways you don't know about, the fact that somebody acquires $x_0$ would almost never imply anything about their knowledge of $x_1$.
There is one possible situation where proof of knowledge might prove ignorance of something else. If it is externally possible to prove that someone has only had the opportunity to receive a certain amount of information since a piece of information was created (e.g. an amount significantly smaller than the combined size of $x_0$ and $x_1$), then demonstrated possession of a sufficient quantity of information that didn't exist prior to that time window (e.g. $x_0$) could prove that the communications channel wasn't used to convey certain other information (e.g. $x_1$). Such a proof would not require any relationship between $x_0$ and $x_1$, however; if $x_0$ and $x_1$ had been entirely separate and unrelated random bit strings, the same situation would apply.
I: did you witness the coin toss?
I: Did you view the coin once it had landed?
I: Can you describe the picture on the coin?
W: It was a picture of an eagle.
I: What was depicted on the obverse side of the coin?
OBJECTION: the answer would be speculation. The witness has already disqualified his ability to answer.
| CommonCrawl |
Since Sonya has just learned the basics of matrices, she decided to play with them a little bit.
Sonya imagined a new type of matrices that she called rhombic matrices. These matrices have exactly one zero, while every other cell contains the Manhattan distance to the cell containing the zero. The cells with equal numbers form a rhombus, which is why Sonya chose this name.
The Manhattan distance between two cells ($$$x_1$$$, $$$y_1$$$) and ($$$x_2$$$, $$$y_2$$$) is defined as $$$|x_1 - x_2| + |y_1 - y_2|$$$. For example, the Manhattan distance between the cells $$$(5, 2)$$$ and $$$(7, 1)$$$ equals to $$$|5-7|+|2-1|=3$$$.
Example of a rhombic matrix.
Note that rhombic matrices are uniquely defined by $$$n$$$, $$$m$$$, and the coordinates of the cell containing the zero.
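For illustration (not part of the problem statement), a small Python sketch that builds the rhombic matrix determined by $$$n$$$, $$$m$$$ and the position of the zero:

```python
def rhombic_matrix(n, m, x, y):
    """n x m matrix whose (i, j) entry is the Manhattan distance to (x, y);
    rows and columns are numbered from 1 as in the statement."""
    return [[abs(i - x) + abs(j - y) for j in range(1, m + 1)]
            for i in range(1, n + 1)]

for row in rhombic_matrix(4, 5, 2, 2):
    print(*row)
```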
She drew a $$$n\times m$$$ rhombic matrix. She believes that you can not recreate the matrix if she gives you only the elements of this matrix in some arbitrary order (i.e., the sequence of $$$n\cdot m$$$ numbers). Note that Sonya will not give you $$$n$$$ and $$$m$$$, so only the sequence of numbers in this matrix will be at your disposal.
Write a program that finds such an $$$n\times m$$$ rhombic matrix whose elements are the same as the elements in the sequence in some order.
The first line contains a single integer $$$t$$$ ($$$1\leq t\leq 10^6$$$) — the number of cells in the matrix.
The second line contains $$$t$$$ integers $$$a_1, a_2, \ldots, a_t$$$ ($$$0\leq a_i< t$$$) — the values in the cells in arbitrary order.
In the first line, print two positive integers $$$n$$$ and $$$m$$$ ($$$n \times m = t$$$) — the size of the matrix.
In the second line, print two integers $$$x$$$ and $$$y$$$ ($$$1\leq x\leq n$$$, $$$1\leq y\leq m$$$) — the row number and the column number where the cell with $$$0$$$ is located.
If there are multiple possible answers, print any of them. If there is no solution, print the single integer $$$-1$$$.
You can see the solution to the first example in the legend. You also can choose the cell $$$(2, 2)$$$ for the cell where $$$0$$$ is located. You also can choose a $$$5\times 4$$$ matrix with zero at $$$(4, 2)$$$.
In the second example, there is a $$$3\times 6$$$ matrix, where the zero is located at $$$(2, 3)$$$.
In the third example, a solution does not exist.
| CommonCrawl |
Abstract: Often a "classification problem" can be regarded as an equivalence relation on a standard Borel space (i.e., a Polish space equipped just with its σ-algebra of Borel sets). For instance, the classification problem for countable linear orders (on $\omega$) corresponds to the isomorphism equivalence relation on a suitable subspace of $\mathcal P(\omega\times\omega)$. This allows for an analysis of the complexity of the isomorphism problem for many classes of countable structures using techniques from an area of descriptive set theory called Borel equivalence relations. In this talk we shall describe some recent results in Borel equivalence relations, as well as a couple of interactions with model theory. | CommonCrawl |
I have calculated a Kullback-Leibler divergence which is equal to $0.492820258$, and I want to know in general what this number shows me. Generally, the Kullback-Leibler divergence shows me how far one probability distribution is from another, right? It is similar to entropy terminology, but in terms of numbers, what does it mean? If I have a result of 0.49, can I say that approximately one distribution is far from the other by 50%?
The Kullback-Leibler Divergence is not a metric proper, since it is not symmetric and also, it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different, and it is important to distribute these roles according to the real-world phenomenon under study.
We consider the $P$ distribution to be the "target distribution" (usually considered to be the true distribution), which we approximate by using the $Q$ distribution. The divergence can be written as $$\mathbb K(P||Q) = -E_P(\ln(Q)) - H(P),$$ where $H(P)$ is the Shannon entropy of distribution $P$ and $-E_P(\ln(Q))$ is called the "cross-entropy of $P$ and $Q$", which is also non-symmetric.
This decomposition (here too, the order in which we write the distributions in the expression of the cross-entropy matters, since it too is not symmetric) permits us to see that KL-Divergence reflects an increase in entropy over the unavoidable entropy of distribution $P$.
So, no, KL-divergence is better not to be interpreted as a "distance measure" between distributions, but rather as a measure of entropy increase due to the use of an approximation to the true distribution rather than the true distribution itself.
So we are in Information Theory land. To hear it from the masters (Cover & Thomas):
...if we knew the true distribution $P$ of the random variable, we could construct a code with average description length $H(P)$. If, instead, we used the code for a distribution $Q$, we would need $H(P) + \mathbb K (P||Q)$ bits on the average to describe the random variable.
...it is not a true distance between distributions since it is not symmetric and does not satisfy the triangle inequality. Nonetheless, it is often useful to think of relative entropy as a "distance" between distributions.
But this latter approach is useful mainly when one attempts to minimize KL-divergence in order to optimize some estimation procedure. For the interpretation of its numerical value per se, it is not useful, and one should prefer the "entropy increase" approach.
In other words, you need 25% more bits to describe the situation if you are going to use $Q$ while the true distribution is $P$. This means longer code lines, more time to write them, more memory, more time to read them, higher probability of mistakes etc... it is no accident that Cover & Thomas say that KL-Divergence (or "relative entropy") "measures the inefficiency caused by the approximation."
KL Divergence measures the information loss required to represent a symbol from P using symbols from Q. If you got a value of 0.49 that means that on average you can encode two symbols from P with the two corresponding symbols from Q plus one bit of extra information.
Consider an information source with distribution $P$ that is encoded using the ideal code for an information source with distribution $Q$. The extra encoding cost above the minimum encoding cost that would have been attained by using the ideal code for $P$ is the KL divergence.
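To make the numbers concrete, here is a small Python computation of the discrete KL divergence and of its decomposition as cross-entropy minus entropy (in nats; divide by $\ln 2$ for bits). The two distributions are made up purely for illustration.

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * ln(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

P = [0.5, 0.3, 0.2]     # "true" distribution
Q = [0.2, 0.4, 0.4]     # approximating distribution

kl = kl_divergence(P, Q)
print("KL(P||Q)      =", kl)                                  # not equal to KL(Q||P)
print("H(P,Q) - H(P) =", cross_entropy(P, Q) - entropy(P))    # same value
print("in bits       =", kl / math.log(2))
```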
| CommonCrawl |
In this paper we determine new upper bounds for the maximal density of translative packings of superballs in three dimensions (unit balls for the $l_3^p$-norm) and of Platonic and Archimedean solids having tetrahedral symmetry. These bounds give strong indications that some of the lattice packings of superballs found in 2009 by Jiao, Stillinger, and Torquato are indeed optimal among all translative packings. We improve Zong's recent upper bound for the maximal density of translative packings of regular tetrahedra from $0.3840\ldots$ to $0.3745\ldots$, getting closer to the best known lower bound of $0.3673\ldots$. We apply the linear programming bound of Cohn and Elkies which originally was designed for the classical problem of packings of round spheres. The proofs of our new upper bounds are computational and rigorous. Our main technical contribution is the use of invariant theory of pseudo-reflection groups in polynomial optimization.
Dostert, M., Guzman Paredes, C.A., de Oliveira Filho, F.M., & Vallentin, F. (2015). New upper bounds for the density of translative packings of three-dimensional convex bodies with tetrahedral symmetry. arXiv.org e-Print archive, Cornell University Library. | CommonCrawl |
$$n! = 1\times \cdots\times n$$.
If $n$ is even, for example, this algorithm computes $[1\times n]\times[2\times(n-1)]\times[3\times(n-2)]\times\cdots$ where each square-bracketed multiplication is done in one loop iteration. So, it reduces the number of iterations by about half. Of course, all the multiplications are still done.
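Purely as an illustration of the two loop structures (the author's benchmarked programs, described next, were compiled with gcc and are not shown here), a Python sketch might look like this:

```python
def factorial_simple(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

def factorial_paired(n):
    """Multiply pairs (1*n), (2*(n-1)), ...; handles the middle term when n is odd."""
    result = 1
    lo, hi = 1, n
    while lo < hi:
        result *= lo * hi
        lo += 1
        hi -= 1
    if lo == hi:            # middle element for odd n
        result *= lo
    return result

assert all(factorial_simple(n) == factorial_paired(n) for n in range(2, 21))
```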
To test the two programs I wrote a main function that computed the sum $1! + 2! + \cdots + 20!$, by adding to the sum each factorial computed with one of the above functions. Of course, on today's machines, such a computation is extremely fast, so I made the program execute this summation a million times so that times can actually be observed. Two programs were compiled, one for each version of the factorial, and using the -O2 compilation switch in gcc.
Then, a Python program used the time program to execute the first and then the second, and this step of running one after the other ran five hundred times. To test whether the results were different, I decided on a Mann-Whitney-Wilcoxon two-sample test, a nonparametric statistical test to see whether two samples come from the same distribution. R reports the following 95% confidence interval (in seconds) for the difference between the first and the second algorithms: [0.02996523, 0.03003752], p-value essentially zero. So there seems to be a small but significant difference in these two methods — in fact the total time saved from the second method after five hundred iterations was about 14.5 seconds.
I don't know enough about compilers to explain what's going on here properly, but I'd guess it has something to do with loop optimization and vectorization that is easier for the compiler to handle with the simpler loop. | CommonCrawl |
This paper studies an infinite-server queue in a Markov environment, that is, an infinite-server queue with arrival rates and service times depending on the state of a Markovian background process. Scaling the arrival rates $\lambda_i$ by a factor $N$, tail probabilities are examined when letting $N$ tend to $\infty$; non-standard large deviations results are obtained. An importance-sampling based estimation algorithm is proposed, that is proven to be logarithmically efficient.
Blom, J.G., & Mandjes, M.R.H. (2013). A large-deviations analysis of Markov-modulated infinite-server queues. Operations Research Letters, 41(3), 220–225. | CommonCrawl |
Abstract: In 1964, Linnik and Skubenko established the equidistribution of the integral points on the determinantal surface $\det X=P$, where $X$ is a $(3\times 3)$ matrix with independent entries and $P$ is an increasing parameter. Their method involved reducing the problem by one dimension (that is, to the determinantal equations with a $(2\times 2)$ matrix). In this paper a more precise version of the Linnik-Skubenko reduction is proposed. It can be applied to a wider range of problems arising in the geometry of numbers and in the theory of three-dimensional Voronoi-Minkowski continued fractions.
The research was supported by the "Dynasty" Foundation and the Russian Foundation for Basic Research (grant no. 14-01-90002). | CommonCrawl |
Chomp is a 2-player game, usually played with chocolate bars.
The players take turns choosing one chocolate block and "eating it", together with all other blocks that are below it and to its right. There is a catch: the top left block contains poison, so the first player forced to eat it dies, that is, loses the game.
If you start with a rectangular bar, the first player has a winning strategy, though it may take you too long to actually find the correct first move. See this post for the strategy-stealing argument.
If you label the blocks of the rectangular bar by $(a,b)$ with $0 \leq a \leq k$ and $0 \leq b \leq l$, with the poisonous one being $(0,0)$, then this can be viewed as choosing a divisor $d$ of $N=p^k q^l$ and removing all multiples of $d$ from the set of divisors of $N$. The first person forced to name $1$ looses.
This allows for higher dimensional versions of Chomp.
If you start with the set of all divisors of a given natural number $N$, then the strategy-stealing argument shows that the first player has a winning move.
A general position of the game corresponds to a finite set of integers, closed under taking divisors. At each move the player has to choose an element of this set and remove it as well as all its multiples.
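To make the divisor picture concrete, here is a tiny Python sketch (naive divisor enumeration, small $N$ only; the function names are mine) of a position and a move in this divisor version of Chomp:

```python
def divisors(N):
    return {d for d in range(1, N + 1) if N % d == 0}

def make_move(position, d):
    """Remove d and all of its multiples from the current position."""
    assert d in position
    return {e for e in position if e % d != 0}

# Starting position: all divisors of N = p^k * q^l, e.g. N = 12 = 2^2 * 3.
pos = divisors(12)
print(sorted(pos))                 # [1, 2, 3, 4, 6, 12]
pos = make_move(pos, 4)            # "eat" 4 and its multiples: 4 and 12
print(sorted(pos))                 # [1, 2, 3, 6]
# The player forced to name 1 loses.
```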
The thread of $(N|1)$, relevant in understanding a moonshine group of the form $(n|m)+e,f,\dots$ with $N=n \times h$, consists of all divisors of $N$.
But then, the union of all threads for all 171 moonshine groups is a position in higher dimensional Chomp.
Who wins starting from this moonshine thread?
Perhaps not terribly important, but it forces one to imagine the subgraph of the monstrous moonshine picture on the $97$ number-lattices way better than by its Hasse diagram.
Here's how the Hasse diagram of the moonshine thread was produced. These are 'notes to self', because I tend to forget such things quickly.
3. Copy the output to a file, say chomp.dot, and remove all new-line breaks from it.
4. Install Graphviz on Mac OS X. | CommonCrawl |
2.2 When does it fail?
But we shouldn't forget about $\mathbf b$!
The first step of the elimination was to keep the first and last rows unchanged, and subtract 3 times the 1st row from the second.
Can we write it with Matrix Multiplication?
What should we put instead of $[? \ ? \ ?]$ so that the multiplication takes us from one matrix to the other?
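As an illustrative answer (the actual matrices on this page are not shown in this excerpt, so the system below is made up): the step "keep the first and last rows, subtract 3 times the first row from the second" is achieved by left-multiplying with the elementary matrix that has $-3$ in position $(2,1)$.

```python
import numpy as np

A = np.array([[1.,  2.,  1.],
              [3.,  8.,  1.],
              [0.,  4.,  1.]])   # an example coefficient matrix

E = np.array([[ 1., 0., 0.],
              [-3., 1., 0.],     # row 2 := row 2 - 3 * row 1
              [ 0., 0., 1.]])

print(E @ A)                     # the matrix after the first elimination step
```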
What if we need to exchange rows?
| CommonCrawl |
Abstract : The aim of the paper is to address the long time behavior of the Kuramoto model of mean-field coupled phase rotators, subject to white noise and quenched frequencies. We analyse the influence of the fluctuations of both thermal noise and frequencies (seen as a disorder) on a large but finite population of $N$ rotators, in the case where the law of the disorder is symmetric. On a finite time scale $[0,T]$, the system is known to be self-averaging: the empirical measure of the system converges as $N\to\infty$ to the deterministic solution of a nonlinear Fokker-Planck equation which exhibits a stable manifold of synchronized stationary profiles for large interaction. On longer time scales, competition between the finite-size effects of the noise and disorder makes the system deviate from this mean-field behavior. In the main result of the paper we show that on a time scale of order $\sqrt N$ the fluctuations of the disorder prevail over the fluctuations of the noise: we establish the existence of disorder-induced traveling waves for the empirical measure along the stationary manifold. This result is proved for fixed realizations of the disorder and emphasis is put on the influence of the asymmetry of these quenched frequencies on the direction and speed of rotation of the system. Asymptotics on the drift are provided in the limit of small disorder. | CommonCrawl |
We study the number of tilings of skew Young diagrams by ribbon tiles shaped like Dyck paths, in which the tiles are "vertically decreasing". We use these quantities to compute pairing probabilities in the double-dimer model: Given a planar bipartite graph $G$ with special vertices, called nodes, on the outer face, the double-dimer model is formed by the superposition of a uniformly random dimer configuration (perfect matching) of $G$ together with a random dimer configuration of the graph formed from $G$ by deleting the nodes. The double-dimer configuration consists of loops, doubled edges, and chains that start and end at the boundary nodes. We are interested in how the chains connect the nodes. An interesting special case is when the graph is $\varepsilon(\mathbb Z\times\mathbb N)$ and the nodes are at evenly spaced locations on the boundary $\mathbb R$ as the grid spacing $\varepsilon\to0$. | CommonCrawl |
Abstract: In a reasonable topological space, large deviation estimates essentially deal with probabilities of events that are asymptotically (exponentially) small, and in a certain sense, quantify the rate of these decaying probabilities. In such estimates, upper bounds for such small probabilities often require compactness of the ambient space, which is often absent in problems arising in statistical mechanics (for example, distributions of local times of Brownian motion in the full space $\mathbb R^d$).
Motivated by a problem in statistical mechanics, we present a robust theory of "translation-invariant compactification" of probability measures in $\mathbb R^d$. Thanks to an inherent shift-invariance of the underlying problem, we are able to apply this abstract theory painlessly and solve a long standing problem in statistical mechanics, the mean-field polaron problem.
This talk is based on joint works with S. R. S. Varadhan (New York), as well as with Erwin Bolthausen (Zurich) and Wolfgang Koenig (Berlin). | CommonCrawl |
Approximation of Sobolev functions is a topic with a long history and many applications in different branches of mathematics. The asymptotic order as $n\to\infty$ of the approximation numbers $a_n$ is well-known for embeddings of isotropic Sobolev spaces and also for Sobolev spaces of dominating mixed smoothness. However, if the dimension $d$ of the underlying domain is very high, one has to wait exponentially long until the asymptotic rate becomes visible. Hence, for computational issues this rate is useless; what really matters is the preasymptotic range, say $n\le 2^d$.
In the talk I will first give a short overview over this relatively new field. Then I will present some new preasymptotic estimates for $L_2$-approximation of periodic Sobolev functions, which improve the previously known results. I will discuss the cases of isotropic and dominating mixed smoothness, and also $C^\infty$-functions of Gevrey type. Clearly, on all these spaces there are many equivalent norms. It is an interesting effect that – in contrast to the asymptotic rates – the preasymptotic behaviour strongly depends on the chosen norm. | CommonCrawl |
Cellular Automata (CA) represent an interesting approach to design Substitution Boxes (S-boxes) having good cryptographic properties and low implementation costs. From the cryptographic perspective, up to now there have been only ad-hoc studies about specific kinds of CA, the best known example being the $\chi$ nonlinear transformation used in Keccak. In this paper, we undertake a systematic investigation of the cryptographic properties of S-boxes defined by CA, proving some upper bounds on their nonlinearity and differential uniformity. Next, we extend some previous published results about the construction of CA-based S-boxes by means of a heuristic technique, namely Genetic Programming (GP). In particular, we propose a ``reverse engineering" method based on De Bruijn graphs to determine whether a specific S-box is expressible through a single CA rule. Then, we use GP to assess if some CA-based S-box with optimal cryptographic properties can be described by a smaller CA. The results show that GP is able to find much smaller CA rules defining the same reference S-boxes up to the size $7\times 7$, suggesting that our method could be used to find more efficient representations of CA-based S-boxes for hardware implementations. Finally, we classify up to affine equivalence all $3\times 3$ and $4\times 4$ CA-based S-boxes. | CommonCrawl |
Proving $x^2+x$ uniformly continuous in (0,1) using $\epsilon, \delta$.
Note that $|x^2-y^2|=|(x-y)(x+y)|$ and $x,y\in(0,1)$, so $0<x+y<2$. Hence $|x^2-y^2|=|x-y|(x+y)<2\delta$.
Is this okay? How could I write it better? Am I wrong somewhere? Thanks.
You are correct. The same result can be obtained in an easier way by using the Mean Value Theorem: if $f(x)=x^2+x$ then for $x,y\in (0,1)$ there is $t\in (0,1)$ such that $$|f(x)-f(y)|=|f'(t)||x-y|=|2t+1||x-y|\leq 3|x-y|.$$ More generally a differentiable function whose derivative is bounded in an interval $I$ is also uniformly continuous in $I$.
Your proof is fine. A shortcut would be to note that $f$ is continuous on the compact set $[0,1]$, and so uniformly continuous there; hence, on $(0,1),$ too.
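For completeness, the asker's direct estimate can be finished explicitly (a sketch): for $x,y\in(0,1)$, $$|f(x)-f(y)|=|x^2+x-y^2-y|\leq |x^2-y^2|+|x-y|=(x+y)|x-y|+|x-y|<3|x-y|,$$ so given $\epsilon>0$ one may take $\delta=\epsilon/3$: whenever $|x-y|<\delta$ we get $|f(x)-f(y)|<3\delta=\epsilon$.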
| CommonCrawl |
Quantitative magnetic resonance imaging (qMRI) finds increasing application in neuroscience and clinical research due to its sensitivity to micro-structural properties of brain tissue, e.g. axon, myelin, iron and water concentration. We introduce the hMRI--toolbox, an easy-to-use open-source tool for handling and processing of qMRI data presented together with an example dataset. This toolbox allows the estimation of high-quality multi-parameter qMRI maps (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD and magnetisation transfer MT) that can be used for calculation of standard and novel MRI biomarkers of tissue microstructure as well as improved delineation of subcortical brain structures. Embedded in the Statistical Parametric Mapping (SPM) framework, it can be readily combined with existing SPM tools for estimating diffusion MRI parameter maps and benefits from the extensive range of available tools for high-accuracy spatial registration and statistical inference. As such the hMRI--toolbox provides an efficient, robust and simple framework for using qMRI data in neuroscience and clinical research.
Mathematical models are the foundation of numerical simulation of optoelectronic devices. We present a concept for a machine-actionable as well as human-understandable representation of the mathematical knowledge they contain and the domain-specific knowledge they are based on. We propose to use theory graphs to formalize mathematical models and model pathway diagrams to visualize them. We illustrate our approach by application to the stationary one-dimensional drift-diffusion equations (van Roosbroeck system).
Mathematical modeling and simulation (MMS) has now been established as an essential part of the scientific work in many disciplines. It is common to categorize the involved numerical data and to some extent the corresponding scientific software as research data. But both have their origin in mathematical models, therefore any holistic approach to research data in MMS should cover all three aspects: data, software, and models. While the problems of classifying, archiving and making accessible are largely solved for data and first frameworks and systems are emerging for software, the question of how to deal with mathematical models is completely open. In this paper we propose a solution -- to cover all aspects of mathematical models: the underlying mathematical knowledge, the equations, boundary conditions, numeric approximations, and documents in a flexiformal framework, which has enough structure to support the various uses of models in scientific and technology workflows. Concretely we propose to use the OMDoc/MMT framework to formalize mathematical models and show the adequacy of this approach by modeling a simple, but non-trivial model: van Roosbroeck's drift-diffusion model for one-dimensional devices. This formalization -- and future extensions -- allows us to support the modeler by e.g. flexibly composing models, visualizing Model Pathway Diagrams, and annotating model equations in documents as induced from the formalized documents by flattening. This directly solves some of the problems in treating MMS as "research data" and opens the way towards more MKM services for models.
Mathematical modeling and simulation (MMS) has now been established as an essential part of the scientific work in many disciplines and application areas. It is common to categorize the involved numerical data and to some extent the corresponding scientific software as research data. Both have their origin in mathematical models. In this contribution we propose a holistic approach to research data in MMS by including the mathematical models and discuss the initial requirements for a conceptual data model for this field.
K. Tabelow, Structural adaptation for noise reduction in magnetic resonance imaging, SIAM Conference on Imaging Science, Minisymposium MS5 ``Learning and Adaptive Approaches in Image Processing'', June 5 - 8, 2018, Bologna, Italy, June 5, 2018.
K. Tabelow, MRI data models at low SNR, 2nd Leibniz MMS Days 2017, February 22 - 24, 2017, Leibniz Informationszentrum Technik und Naturwissenschaften Technische Informationsbibliothek, Hannover, February 24, 2017, DOI 10.5446/21910.
K. Tabelow, Denoising brain images: A clinical need and a mathematical idea, Leibniz-Kolleg for Young Researchers: Challenges and Chances of Interdisciplinary Research, November 9 - 11, 2016, Leibniz-Gemeinschaft, Berlin, November 9, 2016.
K. Tabelow, Functional magnetic resonance imaging: Processing large dataset, AG DANK Autumn Meeting 2016, November 18 - 19, 2016, Gesellschaft für Klassifikation, Arbeitsgruppe ``Datenanalyse und Numerische Klassifikation'', WIAS Berlin, November 18, 2016.
K. Tabelow, Mathematical models: A research data category?, The 5th International Congress on Mathematical Software, July 11 - 14, 2016, Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), July 13, 2016.
K. Tabelow, Advanced statistical methods for noisy and high-dimensional medical (and non-medical) data, Innovation Days 2013, December 9 - 10, 2013, Helmholtz-Gemeinschaft, Geschäftsstelle Berlin, December 9, 2013.
K. Tabelow, Diffusion MRI - news on adaptive processing, PreMoLab Workshop on: Advances in predictive modeling and optimization, May 16 - 17, 2013, WIAS-Berlin, May 17, 2013.
K. Tabelow, Structure adaptive smoothing in statistical fMRI analysis, Workshop ``Highfield MRI and MRS-3T and Beyond'', Physikalisch-Technische Bundesanstalt Berlin, February 20 - 21, 2006.
C. BOROS, T. MENG, R. RITTEL, K. TABELOW, Y. ZHANG, Formation of Color-Singlet Gluon-Cluster and Inelastic Diffractive Scattering, Phys. Rev. D 61, 094010 (2000).
J. FU, T. MENG, R. RITTEL, K. TABELOW, Criticality in quark gluon systems far beyond thermal and chemical equilibrium, Phys. Rev. Lett. 86, 1961 (2001).
K. TABELOW, Gap function in the finite Bak-Sneppen model, Phys. Rev. E 63, 047101 (2001).
T. MENG, R. RITTEL, K. TABELOW, Y. ZHANG, Formation of color-singlet gluon-clusters and inelastic diffractive scattering. Part II: Derivation of the $t$- and $M_x^2/s$-dependence of cross-sections in the SOC-approach, hep-ph/9807314 (1998).
T. MENG, R. RITTEL, K. TABELOW, Gluons in small-$x_B$ deep-inelastic scattering, hep-ph/9905538 (1999).
T. MENG, R. RITTEL, K. TABELOW, Y. ZHANG, Inelastic diffraction and meson radii, hep-ph/9910331 (1999).
K. TABELOW, Formation of Color-Singlet Gluon-Clusters and Inelastic Diffractive Scattering, 7th International Workshop on Deep Inelastic Scattering and QCD, DIS99, Zeuthen, Germany, 19-23 April 1999, Nucl. Phys. B (Proc. Suppl.) 79, 393-395 (1999).
K. TABELOW, Self-organized criticality in gluon systems and its consequences, invited talk at the XXXth International Symposium on Multiparticle Dynamics, ISMD2000, Tihany, Hungary, 9-15 Oct 2000, published in Proceedings of ISMD2000 (World Scientific), 93-98 (2001). | CommonCrawl
The Center for Geometry and Physics is loosely organized into multiple research groups, each of which comprises a senior scholar who leads the group and several researchers whose areas of expertise and interest overlap synergistically. A brief description of each group's areas of focus, research goals, and members can be seen below.
The current status of symplectic topology resembles that of classical topology in the middle of the twentieth century. Over time, a systematic algebraic language was developed to describe problems in classical topology. Similarly, a language for symplectic topology is emerging, but has yet to be fully developed. The development of this language is much more challenging both algebraically and analytically than in the case of classical topology. The relevant homological algebra of $A_\infty$ structures is harder to implement in the geometric situation due to the analytical complications present in the study of pseudo-holomorphic curves or "instantons" in physical terms. Homological mirror symmetry concerns a certain duality between categories of symplectic manifolds and complex algebraic varieties. The symplectic side of the story involves an $A_\infty$ category, called the Fukaya category, which is the categorified version of Lagrangian Floer homology theory. In the meantime, recent developments in the area of dynamical systems have revealed that the symplectic aspect of area preserving dynamics in two dimensions has the potential to further understanding of these systems in deep and important ways.
The mathematical foundations of field and string theories remain poorly understood. As a consequence, many mathematical theories which are intimately related to the quantization of such systems are not yet subsumed into a unifying framework that could guide their further development. Particular challenges include developing a general theory of B-type Landau-Ginzburg models, as required by a deeper understanding of mirror symmetry, and developing a general global mathematical formulation of supergravity theories, which could afford a deeper understanding of the mathematics behind "supersymmetric geometry". Physical and mathematico-physical approaches to string theory and quantum field theory use homotopical methods in various ways, depending on the formalizations at play. Many basic tools used in algebraic geometry are refinements of constructions from algebraic topology. In order to ensure that these tools are employed within the proper framework, with attention to contemporary developments, to facilitate their use, and to help bridge the gaps between participants in different research programs, it is useful to have an active research group focused on the fundamentals of homotopy algebra itself.
Fano varieties are algebraic varieties whose anticanonical classes are ample. They are classical and fundamental varieties that play many significant roles in contemporary geometry. Verified or expected geometric and algebraic properties of Fano varieties have attracted attention from many geometers and physicists. In spite of extensive studies on Fano varieties for more than a century, numerous features of Fano varieties are still shrouded in a veil of mist. Contemporary geometry, however, requires a more comprehensive understanding of Fano varieties.
Derived categories of coherent sheaves on algebraic varieties are important and interesting invariants of algebraic varieties. It turns out that they contain geometric, birational geometric, and arithmetic information about algebraic varieties, and the investigation of derived categories of algebraic varieties is now one of the most important research areas in algebraic geometry. Moreover, understanding derived categories of algebraic varieties is also important in many other areas of mathematics and physics, such as symplectic geometry, number theory, mirror symmetry, representation theory, topology and string theory. The goal of the research group is to understand structures of derived categories of algebraic varieties and find their applications to algebraic, arithmetic and symplectic geometry, mathematical physics and string theory. | CommonCrawl
dc.identifier.citation Alvarez Montaner, Josep; Zarzuela, Santiago. "Linearization of local cohomology modules". In: Commutative algebra: interactions with algebraic geometry. Providence: American Mathematical Society, 2003, p. 1-13. (Contemporary mathematics; 331). ISBN 0821832336.
dc.description.abstract The aim of this work is to describe the linear structure of regular holonomic $\mathcal D$-modules with support a normal crossing with variation zero introduced in [Local cohomology, arrangements of subspaces and monomial ideals, to appear in Adv. in Math.] with special regard to the case of local cohomology modules supported on monomial ideals. | CommonCrawl |
The instructional materials for Dimensions Math Grade 6 do not meet expectations for alignment to the CCSSM. In Gateway 1, the instructional materials do not meet the expectations for focus as they assess above-grade-level standards but do devote at least 65% of instructional time to the major work of the grade. For coherence, the instructional materials are partially coherent and consistent with the Standards. The instructional materials contain supporting work that enhances focus and coherence simultaneously by engaging students in the major work of the grade and foster coherence through connections at a single grade. In Gateway 2, the instructional materials meet the expectations for rigor and balance, but they do not meet the expectations for practice-content connections. Since the materials do not meet the expectations for alignment to the CCSSM, they were not reviewed for usability in Gateway 3.
The instructional materials reviewed for Dimensions Math Grade 6 partially meet expectations for focus and coherence in Gateway 1. For focus, the instructional materials do not meet the expectations for assessing grade-level standards, but the amount of time devoted to the major work of the grade is at least 65 percent. For coherence, the instructional materials are partially coherent and consistent with the Standards. The instructional materials contain supporting work that enhances focus and coherence simultaneously by engaging students in the major work of the grade and foster coherence through connections at a single grade.
The instructional materials reviewed for Dimensions Math Grade 6 do not meet expectations for not assessing topics before the grade level in which the topic should be introduced. The instructional materials include assessment items that align to standards above this grade level.
The instructional materials reviewed for Dimensions Math Grade 6 meet expectations for devoting the large majority of class time to the major work of the grade. The instructional materials spend at least 65% of instructional time on the major work of the grade.
The instructional materials reviewed for Dimensions Math Grade 6 meet expectations for spending a majority of instructional time on major work of the grade.
The approximate number of chapters devoted to major work of the grade (including assessments and supporting work connected to the major work) is 11 out of 13, which is approximately 85 percent.
The number of lessons devoted to major work of the grade (including assessments and supporting work connected to the major work) is 28.5 out of 34, which is approximately 84 percent.
The number of days devoted to major work of the grade (including assessments and supporting work connected to the major work) is 107 out of 134, which is approximately 80 percent.
A lesson-level analysis (which includes lessons and sublessons) is most representative of the instructional materials because it addresses the amount of class time students are engaged in major work throughout the school year. As a result, approximately 84 percent of the instructional materials focus on major work of the grade.
The instructional materials reviewed for Dimensions Math Grade 6 partially meet expectations for being coherent and consistent with the Standards. The instructional materials contain supporting work that enhances focus and coherence simultaneously by engaging students in the major work of the grade and foster coherence through connections at a single grade. The instructional materials include an amount of content that is partially viable for one year, do not attend to the full intent of some standards, and do not give all students extensive work with grade-level problems.
The instructional materials reviewed for Dimensions Math Grade 6 meet expectations that supporting work enhances focus and coherence simultaneously by engaging students in the major work of the grade.
In Lesson 6.1, students calculate average weight, average height, and average distance (supporting standard 6.SP.5c), and these are connected to unit rates (major standard 6.RP.2).
In Lesson 10.1, students graph points to draw a polygon on the coordinate plane (supporting standard 6.G.3), and this is connected to graphing points in all four quadrants (major standard 6.NS.8).
In Lesson 1.4, students use division of multi-digit numbers (supporting standard 6.NS.2) when writing equivalent expressions and solving equations (major clusters 6.EE.A,B).
In Chapter 3, students evaluate expressions (major cluster 6.EE.A) that include multi-digit decimals (supporting standard 6.NS.3).
In Chapters 11 and 12, students evaluate expressions arising from area and volume formulas (supporting standards 6.G.1,2), and this connects to writing and solving equations for unknown lengths (major standards 6.EE.2,7).
In Lesson 13.1B, students evaluate expressions (major standard 6.EE.1) to find the mean of a data set (supporting standard 6.SP.5c).
In Lesson 13.2, students calculate percentages (major standard 6.RP.3) from analyzing histograms (supporting cluster 6.SP.B).
In Lesson 6.2, students solve problems involving unit rates (major standard 6.RP.3) by dividing multi-digit numbers (supporting standard 6.NS.2).
In Lesson 12.1, students use the ratio of length to width to height of a right rectangular prism (major standard 6.RP.3) to find the volume of the prism (supporting standard 6.G.2).
There is no connection between finding factors (6.NS.4) and generating equivalent expressions (6.EE.3).
In Lesson 11.1, problem 13 involves division of fractions (major standard 6.NS.1) within the context of area, surface area, and volume (supporting cluster 6.G.A). There are no other opportunities to connect 6.G.A and 6.NS.1.
The instructional materials for Dimensions Math Grade 6 partially meet expectations that the amount of content designated for one grade level is viable for one year.
Each lesson was counted as one day of instruction.
A "lesson" with subsections (i.e., 1a, 1b, 1c) counted as three lessons or three days.
A practice day was added for each chapter.
The total days were computed based on a pacing chart provided in the teacher guide. The suggested time frame for the materials and/or the expectations for teachers and students are not viable. Some significant modifications would be necessary for materials to be viable for one school year.
The instructional materials for Dimensions Math Grade 6 partially meet expectations for being consistent with the progressions in the standards. In general, materials follow the progression of grade-level standards, though they don't always meet the full intent of the standards. In addition, lessons utilize standards from prior grade levels, though these are not always explicitly identified in the materials.
In Lesson 1.1, students write numeric expressions for statements (5.OA.2). This material is not identified as content from a prior grade.
In Lesson 2.1, the materials reference Multiplication of a Proper Fraction by a Whole Number as learning from a previous grade, but the materials do not identify Multiplication of a Proper Fraction by a Fraction as previous learning. Instead, the materials treat this topic and Division of a Whole Number by a Fraction and Division of a Fraction by a Whole Number (Lesson 2.2) as grade-level topics, though they are prior learning (5.NF.4).
In Lesson 1.4, students examine division as sharing and division as grouping (3.OA.2 & 3.OA.6), but the materials do not reference this as prior learning.
Lesson 3.4 is not identified as work from a prior grade (5.MD.1).
The materials typically develop according to the grade-by-grade progressions in the standards, but one missed opportunity is in the unit on The Number System (Chapters 1-3). The standards include students representing numbers on a number line, but students are not given that opportunity. The model most commonly used in this unit is the bar model. There is some emphasis on a number line with the introduction of integers to help students compare values.
In Chapter 1, students find the GCF of two numbers but do not have an opportunity to "use the distributive property to express a sum of two whole numbers 1-100 with a common factor as a multiple of a sum of two whole numbers with no common factor," as stated in 6.NS.4.
In Lesson 3.4, students convert measurements to different units, but ratio reasoning is not used for these conversions (6.RP.3d).
In Chapter 5, rate language is not used to develop the concept of ratios (6.RP.2).
In Chapters 5 and 6, ratios are not represented with tables (6.RP.3a).
Unit rate is defined in Chapter 6 on page 172, but there is no opportunity for students to "understand the concept of a unit rate a/b associated with a ratio a:b with b ≠ 0" (6.RP.2).
In Chapter 10, students use coordinates and absolute value to find distances between points on a coordinate plane (6.NS.8) but do not apply this understanding to real-world problems.
In Chapter 2, students divide whole numbers, and in Chapter 3, students divide decimals by decimals using the standard algorithm. The materials do not provide opportunities for extensive work dividing multi-digit numbers using the standard algorithm (6.NS.3).
On pages 175 and 189, students examine the net of a triangular prism (6.G.4), but they do not "represent three-dimensional figures using nets made up of rectangles and triangles, and use nets to find the surface area of these figures," as stated in the standard.
The instructional materials for Dimensions Math Grade 6 meet expectations for fostering coherence through connections at a single grade, where appropriate and required by the standards.
In Chapters 11 and 12, students use formulas to calculate area, volume, and surface area (6.G.A) involving measures that are given in both decimal and fraction forms (6.NS.B).
In Chapter 13, students investigate appropriate use of measures of center in different contexts (6.SP.A) and make comparisons among the three measures (6.SP.B).
In Lesson 8.1, students evaluate expressions (6.EE.A) using multiplication and division of fractions (6.NS.A).
In Lesson 9.1, students solve equations (6.EE.B) with fractional coefficients (6.NS.A).
In Lesson 1.3, students model the distributive property (6.NS.B) to solve area problems (6.G.A).
In Lesson 10.3A, students write and solve equations (6.EE.B) to describe relationships between dependent and independent variables (6.EE.C).
Students do not compare rates of two or more quantities using graphs of quantities, missing a connection between 6.NS.C and 6.RP.A.
The instructional materials for Dimensions Math Grade 6 partially meet expectations for rigor and the mathematical practices. The instructional materials meet the expectations for rigor and balance by giving attention throughout the year to individual standards that set an expectation of procedural skill and fluency and spending sufficient time working with engaging applications of the mathematics. The instructional materials do not meet the expectations for practice-content connections because they do not identify the mathematical practices, use them to enrich the content, or carefully attend to the full meaning of each practice standard.
The instructional materials for Dimensions Math Grade 6 meet expectations for rigor and balance. The instructional materials give attention throughout the year to individual standards that set an expectation of procedural skill and fluency, spend sufficient time working with engaging applications of the mathematics, and do not always treat the three aspects of rigor together or always treat them separately.
The instructional materials for Dimensions Math Grade 6 partially meet expectations for developing conceptual understanding of key mathematical concepts, especially where called for in specific standards or cluster headings.
Conceptual understanding is developed by connecting models, verbal explanations, and symbolic representations of concepts, with an emphasis on the use of bar models. The following examples illustrate how the materials develop those standards addressing conceptual understanding.
Standard 6.RP.A: Understand ratio concepts and use ratio reasoning to solve problems.
In Lesson 5.1, page 138 Class activity, students use green and red cubes to explore equivalent ratios. Students are guided through the use of the manipulatives; the activity moves into the procedure of multiplying or dividing a number by the same amount.
In Lesson 5.2, pages 145-152, students use bar models to represent and solve real-world ratio problems in examples 11 - 17.
In Lesson 6.1, page 166 Class Activity, students use cubes to understand average. In page 168 example 3, students use a bar model to solve an average problem.
In Lesson 6.2, page 175, students use division with bar models to find unit rates.
Lessons 7.1 and 7.2 use bar models and 100s grids to develop the meaning of percent. In Class Activity 1, students model fractions, decimals, and percents on 100s grid. In Class Activity 2, students use bar models to represent percentages.
However, the materials do not include work with ratio tables (6.RP.3a).
Standard 6.EE.3: Apply the properties of operations to generate equivalent expressions.
In Lesson 8.1 page 2, a table connects expressions, verbal descriptions, and tiles as a model to help students understand an expression. In the examples that follow, students are not asked to create models to represent the expressions.
In Lesson 8.2, page 20 Class Activity 2, students play a game discussing why two terms are or are not like terms. The first two examples use a diagram to show why two terms are like terms.
In Lesson 9.1B, page 33 Class Activity 1, students are given a visual model of a scale to demonstrate adding and subtracting the same quantity to both sides of an equation. Examples 4-14 on pages 35-42 show bar models as a visual method of solving the equations.
6.EE.5 Understand solving an equation or inequality.
In Lesson 9.2, page 50 BrainWorks #13, students write an inequality to represent the number of days Rachel needs to clear orders for 30 cakes and graph the solution. They then determine how the inequality and graph will change if Rachel must clear the cake orders in eight days or less.
In Chapter 2, pages 56-62, examples of bar models are provided to model problems involving both measurement and partitive division, extending previous understandings of multiplication and division to divide fractions by fractions. Students see how dividing a fraction by a fraction is equivalent to multiplying the fraction by the reciprocal.
The instructional materials for Dimensions Math Grade 6 meet expectations for giving attention throughout the year to individual standards that set an expectation of procedural skill and fluency. The instructional materials develop procedural skills and fluencies and provide opportunities for students to independently demonstrate procedural skills and fluency throughout the grade level.
Chapter 3 addresses adding, subtracting, multiplying, and dividing multi-digit decimals using the standard algorithm for each operation (6.NS.3). Each lesson utilizes the standard algorithm and contains fluency practice in the lesson through Basic Practice and Further Practice in the Exercises. For example, in Lesson 3.1, #3 includes addition and subtraction problems written in a vertical format, #4 and #7 provide questions written in a horizontal format, and #9-12 students use the standard algorithm to solve problems.
Chapter 8 addresses writing and evaluating numerical and algebraic expressions (6.EE.1,2). In Lesson 8.1, there are several examples that demonstrate the procedural skills for writing and evaluating numerical and algebraic expressions, and there are several opportunities for students to practice these skills.
In Chapter 6, students divide multi-digit numbers to calculate rates and solve problems involving percentages (6.NS.2). Students also find the average of sets of numbers containing decimal values (6.NS.3) (pages 167-170) and compute with decimals as they calculate rates (pages 173-177).
In Chapter 9, students solve equations of the form px = q involving multi-digit numbers and decimals (6.NS.2,3; 6.EE.7).
In Chapters 11 and 12, students divide multi-digit numbers and perform operations with decimals as they find area, volume, and missing side lengths of two- and three-dimensional figures (6.NS.2,3; 6.EE.2c).
The instructional materials for Dimensions Math Grade 6 meet expectations for being designed so that teachers and students spend sufficient time working with engaging applications of the mathematics. Engaging applications include single- and multi-step problems, routine and non-routine, presented in a context in which the mathematics is applied.
Opportunities for students to engage in application and demonstrate the use of mathematics flexibility primarily occur in Math@Work and Brainworks included with each lesson. Additionally, the Problem Solving Corner included with some chapters engages students in application problems. Some of the Extend Your Learning Curve problems included in the chapter reviews are intended as non-routine problems (page 1 of Teacher Guide 6B and page 27 of Book B).
Basic Practice tasks tend to be single-step, routine application problems. For example, in Lesson 10.3 Basic Practice #2, students identify the independent variable and the dependent variable in given scenarios (6.EE.9).
Further Practice tasks tend to include language and/or types of numbers that increase the level of complexity, or the task may require more than one step. For example, in Lesson 2.2 Further Practice #3, students evaluate expressions involving multiple operations using the order of operations (6.EE.1).
In BrainWorks tasks, students often make decisions and explain the decisions that are made. Many of these problems are non-routine because they aren't similar to an example presented earlier in the materials, and they can usually be solved in a variety of ways. For example, in Lesson 7.2 BrainWorks #15, students choose between two options regarding pocket money and explain their choice (6.RP.3c).
In Lesson 2.2, students determine how many fractional lengths of ribbons could be cut from a whole- number length (Book A, page 52 example #9) and how to divide paint into jars in fractional amounts (Book A, page 62 example #16) (6.NS.1).
In Lesson 6.2, Class Activity 2 on page 171, students apply unit rates (6.RP.3) throughout the lesson and in the Problem Solving Corner on pages 183-187.
In Lesson 12.2, Math@Work, students find the surface area of rectangular and triangular prisms in the contexts of determining how much paint is needed to cover a cube and how much fabric is needed to create a tent, respectively (6.G.4).
The instructional materials for Dimensions Math Grade 6 meet expectations that the three aspects of rigor are not always treated together and are not always treated separately. There is a balance of the three aspects of rigor within the grade.
Conceptual understanding is generally developed through the Class Activities, examples, and corresponding Try It! tasks. For example, students use pictures and data tables to determine various ratios in Try It! tasks in Chapter 5 on pages 132-136.
Procedural skill and fluency is treated independently in the Basic Practice and Further Practice problems of every chapter. For example, in Chapter 2 page 63, students evaluate given expressions by dividing a fraction by a fraction. In Chapter 3 page 94, students divide decimals by either another decimal or a whole number.
Chapter 11 Area of Plane Figures contains the three aspects of rigor treated in a single lesson but taught on separate days. Lesson 11.1 starts with Class Activity 1 on page 112, where students are guided through an activity to develop conceptual understanding of the area of parallelograms and "derive the formula for the area of a parallelogram." This is followed by examples and Try It! problems where students find the area of parallelograms, developing procedural fluency. Finally, students "consolidate and extend the material covered thus far," by applying the mathematics to problems like the BrainWorks problem on page 122, finding the area of the yellow parallelogram in the Republic of Congo flag. This lesson is expected to take three days.
In Lesson 8.1B, Class Activity 1 page 7, students represent the number of toothpicks needed for a given amount of squares. After completing a table, they write an algebraic expression that would give the number of toothpicks for n squares (conceptual understanding). They use the algebraic expression to find the number of toothpicks needed to make four different types of squares (application and procedural skill).
In Lesson 8.1C, students create a table of values and use bar models to represent real-world problems algebraically, which integrates application, conceptual understanding, and procedural skills used together.
In Lesson 3.2, students integrate procedural skill with application by determining the total cost of 12.8 pounds of wire that cost $23.16 per pound.
The instructional materials for Dimensions Math Grade 6 do not meet expectations for practice-content connections. The instructional materials prompt students to construct viable arguments and analyze the arguments of others, and they partially assist teachers in engaging students to construct viable arguments and analyze the arguments of others and explicitly attend to the specialized language of mathematics.
The instructional materials reviewed for Dimensions Math Grade 6 do not meet expectations that the Standards for Mathematical Practice are identified and used to enrich mathematics content within and throughout the grade level.
Mathematical Practices are not identified in the materials. In the Syllabus, page 39, Mathematical Processes (Reasoning, Communication and Connections, Application and Modeling, and Thinking Skills and Heuristics) are identified, but these are not referenced in the remainder of the materials. Teachers are not provided with guidance or directions for how to carry out lessons to ensure students are developing the mathematical processes.
For MP4, students use physical models in problems. For example, in Chapter 6 Section 6.1, Class Activity 1, students use blue and yellow blocks to model averages for player A and player B. However, students do not represent the situation mathematically with an equation or a method that would help them generalize information to draw conclusions.
For MP5, students are directed which tools to use in problems, and students do not discuss which tools to select or use strategically. The instructional materials show different methods for solving problems, but students do not choose which method to use or which method would be most appropriate for problems.
For MP7, the materials do not identify looking for and making use of structure. For example, in Chapter 1 BrainWorks Exercise #8, Daniel makes use of structure to write $$3^5$$ as an equivalent expression for $$3^2\times3^3$$. However, the materials do not identify the use of structure or provide guidance for teachers as to how MP7 could be used.
The instructional materials reviewed for Dimensions Math Grade 6 do not meet expectations that the instructional materials carefully attend to the full meaning of each practice standard. The materials do not attend to the full meaning of three MPs.
In Student Workbook 6A, Lesson 1.4, students "draw a model and equation to match" for two real-world problems. On page 32 problem 3, students are prompted to "draw a model and solve" for a real-world problem. In these problems, there are no opportunities for students to revise initial assumptions or models once calculations have been made.
In Lesson 2.1C, examples 14-16 show students how to use bar models to solve real-world problems and write the solution mathematically from the models. Problems like this are also encountered in Lessons 5.2 and 7.2. In these problems, students do not make assumptions, define quantities, or choose what model to use, and there are no opportunities for students to revise initial assumptions or models once calculations have been made.
In Lesson 8.1C, students complete an example by using a given table to model the relationship in age between two children, creating an expression from the table, and using the expression to determine the age of one child. In Try It! on page 14, students complete a similar problem on their own. In this problem, students do not make assumptions, define quantities, or choose which model to use.
In Lesson 2.1, students are shown how to use a bar model as a mathematical tool for solving a problem involving multiplication of fractions. Other tools are not introduced or used, and students do not choose which tool to use.
In Lesson 10.2, students are shown a number line to help define absolute value in the introduction. Further examples in the lesson show distance on a coordinate plane through the use of absolute value, but students do not choose which tools to use to find distances.
In Lesson 12.2B, students are shown a net of a triangular prism, but students do not use nets as a tool for finding surface area in subsequent examples in the lesson.
Students do not use ratio tables or number line diagrams in problems with ratios, rate, or percentages. They are shown tape diagrams as a tool for working with ratios, but students do not choose the tool.
In Lesson 2.2, the materials demonstrate how to divide a whole number by a fraction, draw a model to represent "how many 1/3's and 2/3's are in 6," and complete a table by using the reciprocal of the divisors to write equivalent multiplication expressions. Students are asked to: "(a) Look at the patterns in the divisors and the quotients. What happens to the quotient as the divisor gets smaller? (b) What do you notice about the quotients of the division expressions and the products of the equivalent multiplication expressions?" However, students are given a summary of the activity showing the generalization, "Dividing a whole number by a fraction is the same as multiplying by its reciprocal," as an algebraic expression.
In Lesson 3.2, the materials demonstrate what happens to a rational number written in decimal form as it is multiplied by powers of 10. Students relate their results to place value. Students do not look for and express regularity in repeated reasoning. In example 5 on page 81, students determine which decimal factors produce products of a given size, but the remainder of the lesson includes teacher-led explanations of the repeated reasoning rather than students engaging in the mathematical practice.
On page 94, students divide rational numbers written in decimal form, but none of the divisions result in repeating decimals, which means students do not engage in MP8. In BrainWorks question 16 on page 94, students compare two different students' reasoning about the value of repeating decimals, but they do not look for and express regularity in repeated reasoning themselves.
The instructional materials reviewed for Dimensions Math Grade 6 meet expectations for prompting students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics.
In Chapter 7 page 198, students explain whether they would want 10 percent of $20 or 20 percent of $10.
In Chapter 3 page 101 Write in Your Journal, students determine if 6.5 x 10 = 6.50 is a student's correct application of the rule, "Add a zero when you multiply by 10," and explain their reasoning.
In Workbook 6A, Lesson 1.1B page 7, students explain Leo's error in evaluating $$6^3$$ as 18.
The instructional materials reviewed for Dimensions Math Grade 6 partially meet expectations for assisting teachers in engaging students to construct viable arguments and analyze the arguments of others concerning key grade-level mathematics. The instructional materials provide little assistance to teachers in engaging students in constructing viable arguments and analyzing the arguments of others, and the assistance that is provided is general in nature.
Making connections within mathematics and between mathematics and the real world.
The syllabus does not give specific direction to teachers about creating these opportunities for students, and this information is not found in any of the other materials besides the Syllabus.
Within the remainder of the instructional materials, there are no prompts, suggested questions, or frameworks for teachers suggesting ways to engage students in constructing viable arguments and/or analyzing the arguments of others. There is no guidance for teachers as to what constitutes a viable mathematical argument, such as the use of definitions, properties, counterexamples, cases, or if-then statements, and there is no guidance for analyzing the arguments of others, such as repeating or restating to check for understanding, asking clarifying questions, or building on a previous idea.
The instructional materials reviewed for Dimensions Math Grade 6 partially meet expectations for explicitly attending to the specialized language of mathematics.
In Lesson 1.1, "Remark" on page 2, the following information is given: "There will be a convention that when a difference between two numbers is asked for, it will be the larger minus the smaller unless otherwise specified. For division, the quotient of two numbers will be the larger divided by the smaller unless otherwise specified." This may reinforce a common misconception that division is always the larger number divided by the smaller number.
"Simplest form" is used in Chapter 2 with fractions and in Chapter 5 with ratios, but it is not used in the CCSSM.
In Student Workbook 6A problem 6 page 15, "Cora and Alyssa solved the expression independently and got different solutions: $$6 + 3\times 6 \div2 + 4$$. Cora says that the solution is 19. Using the Order of Operations convention, which girl is correct? What was the other girl thinking?" The term convention is used once in the Remark section on page 3, but a formal definition or explanation of convention is not provided.
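For reference (a worked evaluation, not part of the review): following the order-of-operations convention, $$6 + 3\times 6 \div 2 + 4 = 6 + 18 \div 2 + 4 = 6 + 9 + 4 = 19,$$ so Cora's value is the conventional one.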
Simplify is used throughout Chapter 8, but it is not used in CCSSM. | CommonCrawl |
We study a direct flux breaking scenario in all three heterotic string theories on general Calabi-Yau (CY) threefolds. We show background independent formulae to construct three-generation models without chiral exotics from $SO(32)$, $SO(16)\times SO(16)$ and $E_8\times E_8$ heterotic string theories. In this talk, we show the algorithms to search for the formulae and a concrete three-generation model on a specific CY threefold. | CommonCrawl
This post is for jogging my old memory about a bit of the theory behind it. I could go deeper into this subject, but for now I only consider the simple case, which is at least applicable to calculating a square root.
I follow the notation from the sketch above.
Given $a \ge 0$, we want to calculate $r$ such that $a = r^2$ and $r \ge 0$.
For the initial value $x_0$, you can start with any positive number. | CommonCrawl
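A minimal Python sketch of the iteration described above (the function name, tolerance and iteration cap are my own choices, not from the post):

# Newton's method for f(x) = x^2 - a; the update rule is x_{k+1} = (x_k + a/x_k) / 2.
def newton_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
    if a < 0:
        raise ValueError("a must be non-negative")
    x = x0  # any positive starting value works
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(newton_sqrt(2.0))  # ~1.4142135623730951

Each step roughly doubles the number of correct digits, so a handful of iterations already reaches machine precision.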
I also have a youtube channel where I upload programming contest solutions.
17 What do we do with so many similar combinatorics questions?
10 Completely normal comments being deleted by administrator?
32 can another topology be given to $\mathbb R$ so it has the same continuous maps $\mathbb R\rightarrow \mathbb R$?
24 What is the status on this conjecture on arithmetic progressions of primes?
18 What word does this photo of a mouse represent?
18 How does this addition work (13+7=1130)?
15 What is shiny and makes people sad when it falls? | CommonCrawl |
I'm trying to figure out how to calculate the number of whole pixels in a pixel circle using the diameter of the circle.
I understand how to find the area of a circle using the diameter. But I'm wondering what else I have to do to round this number correctly. I was looking into the Midpoint circle algorithm, but I don't think that fully answers how to figure this out.
The circle I am making is 17 px in diameter, which makes the area $\pi(8.5)^2 \approx 226.98$ square pixels. When I go to a pixel circle generator and make it 17x17, I have an outcome of 225 pixels. What else do I need to do to find the area in pixels?
Please consider this as a supplement to Ross's answer.
I figured out how the pixel generator counts the number of unit squares inside the inscribed circle for a $d \times d$ square. It basically counts the number of unit squares whose centers lie in the interior or on the boundary of the inscribed circle.
Geometrically, this falls into two cases.
Case I - $d = 2\ell + 1$ is odd.
The count in this case is as given in Ross's answer.
Case II - $d = 2\ell$ is even.
The counts match the corresponding numbers from the pixel generators and the entries of OEIS A124623.
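A brute-force Python sketch of this counting rule (my own illustration, not part of the original answer):

# Count the unit pixels of a d x d grid whose centers lie inside or on the
# circle inscribed in the grid (radius d/2, centered at the grid's center).
def pixels_in_inscribed_circle(d):
    r2 = (d / 2.0) ** 2
    count = 0
    for i in range(d):
        for j in range(d):
            x = i + 0.5 - d / 2.0  # pixel-center offset from the circle center
            y = j + 0.5 - d / 2.0
            if x * x + y * y <= r2:
                count += 1
    return count

print(pixels_in_inscribed_circle(17))  # 225

For $d = 17$ this prints 225, matching the generator count quoted in the question.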
What is the maximum number of $15\,\mathrm{cm}\times 15\,\mathrm{cm}$ squares I can cut from a circle of diameter $50\,\mathrm{cm}$?
Finding the measure of an angle given area and diameter. | CommonCrawl |
I am having trouble finding out how to do this, though. In the documentation I did not find any easy way to handle infinite series.
I am sure this would be very helpful to many who struggle with a math problem and need to see it to believe it. A conventional plotting program will not work here, hence I searched for a better tool and found Sage Math. Is it possible in Sage? Do you have any TLDR material that I could look into?
So do you want to plot a function which is 1 for x>=0 and +Infinity for x<0?
You can proceed as follows, where $N$ is given to serve as an approximation of $\infty$.
Note. For this approach to make sense, you need to show that the partial sums converge fast, which is the case since $g(x) \le 1/2$ for all $x$.
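A rough Sage sketch of this truncation idea (the series term below is only a placeholder, not the $g$ from the question, and the cutoff $N$ is arbitrary):

# Replace infinity by a finite cutoff N and plot the partial sum.
var('x, k')
N = 50                                    # finite stand-in for infinity
g_N = sum(x**k / 2**(k + 1), k, 1, N)     # symbolic N-th partial sum of a placeholder series
plot(g_N, (x, 0, 1))                      # plot the truncation on [0, 1]

Increasing N and checking that the picture stops changing gives a quick visual sanity check on the truncation.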
Can I use Sage as Octave creating a figure showing and holding it on while adding points? | CommonCrawl |
36 How far out from the Sun is visible light still sufficient to read a book?
30 If all motion is relative, how does light have a finite speed?
26 Couldn't we always redefine units so that inertial mass and gravitational mass are equal?
21 What is a wave function in simple language?
21 What do sine, tan, cos actually mean?
19 Can general relativity be explained by equations describing a fabric of space embedded in a flat 5-dimensional Minkowski space?
15 Fields finitely generated as $\mathbb Z$-algebras are finite? | CommonCrawl |
I'm a Ph.D. student from UChicago, studying with Prof. Emerton. Currently I'm trying to understand the cuspidal subgroup, especially the order of $(0)-(\infty)$, for $J_0(pq)$, where $p$ and $q$ are distinct primes.
I found the page of Prof. Stein, http://wstein.org/Tables/cuspgroup/index.html, where it is mentioned that Prof. Ogg has done the calculation of the cuspidal subgroup for this case, but I tried to search online and couldn't find any reference for it.
Compute the order of $(0)-(\infty)$ in $J_0(N)$.
- sign - 0 or 1; if 0 gives correct answer.
- if 1, then code may be 10x faster, but power of 2 that divides the order may be WRONG.
- an integer, the order of the class of 0-oo. | CommonCrawl |
Let $B$ be a finite dimensional C$^\ast$-algebra equipped with its canonical trace induced by the regular representation of $B$ on itself. In this paper, we study various properties of the trace-preserving quantum automorphism group $\mathbb{G}$ of $B$. We prove that the discrete dual quantum group $\widehat{\mathbb{G}}$ has the property of rapid decay, the reduced von Neumann algebra $L^\infty(\mathbb{G})$ has the Haagerup property and is solid, and that $L^\infty(\mathbb{G})$ is (in most cases) a prime type II$_1$-factor. As applications of these and other results, we deduce the metric approximation property, exactness, simplicity and uniqueness of trace for the reduced $C^\ast$-algebra $C_r(\mathbb{G})$, and the existence of a multiplier-bounded approximate identity for the convolution algebra $L^1(\mathbb{G})$.
2010 Mathematics Subject Classification: Primary 46L65, 20G42; Secondary 46L54.
Keywords and Phrases: Quantum automorphism groups, approximation properties, property of rapid decay, II_1-factor, solid von Neumann algebra, Temperley-Lieb algebra.
Full text: dvi.gz 105 k, dvi 285 k, ps.gz 441 k, pdf 455 k. | CommonCrawl |
The associative law of multiplication states that $a\times(b\times c)=(a\times b)\times c$ (where $a$, $b$, and $c$ are real numbers). That is, changing the grouping of numbers to be multiplied does not affect the answer. Therefore, $2(ab)=2\times(a\times b) =(2\times a)\times b=(2a)b$. | CommonCrawl |
$M$ is pseudo-coherent, $L \in D^+(R)$, and $K$ has tor amplitude in $[a, \infty ]$.
Proof. Proof in case $M$ is perfect. Note that both sides of the arrow transform distinguished triangles in $M$ into distinguished triangles and commute with direct sums. Hence it suffices to check it holds when $M = R[n]$, see Derived Categories, Remark 13.33.5 and Lemma 15.72.1. In this case the result is obvious.
Proof in case $K$ is perfect. Same argument as in the previous case. | CommonCrawl
Edit: "Please see the attached figure. The blue part is all I have. Also, you can neglect the noise term. Assume the signal is deterministic. Typical values of the unknown parameters $\alpha$ and $\omega_0$ are in the range [0.25, 5]. Note that all the parameters $A$, $\omega_0$, $\alpha$, $\phi_0$ are unknown. The setup hints at curve fitting, which I did but don't prefer."
The signal is a linear chirp with a very low start frequency $\omega_0$ and chirp rate $\alpha$.
The signal is sampled at high sampling frequency $f_s$. But the available time domain signal is short, i.e. barely a complete cycle. The amplitude of the signal and the phase shift are also unknown. Are there ways to figure out the chirp parameters other than curve fitting or spectrogram?
Amplitude $A$ is trivial: square the signal, low-pass filter/average, then take the square root (and scale by $\sqrt{2}$, since the average of $A^2\sin^2(\cdot)$ is $A^2/2$).
From the resulting linear function (the instantaneous frequency, i.e. the derivative of the unwrapped phase, which for a linear chirp is $\omega(t) = \omega_0 + \alpha t$), $\alpha$ can directly be extracted as the slope, and, knowing the slope, $\omega_0$ is just the constant summand.
With $A$, $\omega_0$ and $\alpha$ known, you can simply compare the model with the data at any fixed $t$ to find $\phi_0$.
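A numerical sketch of this recipe in Python (my own illustration: the model $s(t) = A\sin(\phi_0 + \omega_0 t + \tfrac{\alpha}{2}t^2)$ and all parameter values are assumptions, and the record is taken much longer than "barely a cycle" so the phase-based fit behaves well; with very short records one may still be forced back to curve fitting):

import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 20.0, 1.0 / fs)
A, w0, alpha, phi0 = 1.3, 0.5, 0.8, 0.7          # assumed "true" values
s = A * np.sin(phi0 + w0 * t + 0.5 * alpha * t**2)

# Amplitude: square, average (a crude low-pass), take the square root, scale by sqrt(2).
A_hat = np.sqrt(2.0 * np.mean(s**2))

# Instantaneous frequency = derivative of the unwrapped phase of the analytic signal;
# for a linear chirp it is the straight line w(t) = w0 + alpha * t.
phase = np.unwrap(np.angle(hilbert(s)))
w_inst = np.gradient(phase, t)                    # rad/s
m = slice(len(t) // 10, -len(t) // 10)            # drop edges, where the Hilbert estimate is poor
alpha_hat, w0_hat = np.polyfit(t[m], w_inst[m], 1)

# Phase offset: compare model and data at t = 0 (up to the usual arcsin ambiguity).
phi0_hat = np.arcsin(np.clip(s[0] / A_hat, -1.0, 1.0))
print(A_hat, w0_hat, alpha_hat, phi0_hat)         # close to 1.3, 0.5, 0.8, 0.7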
How can the FFT be used for estimating linear chirp parameters? | CommonCrawl |
The Sylvester equation is a matrix equation of the form $AX + XB = C$.
How to solve $AX = XB$ for the matrix $X$?
I have two symmetric $3\times 3$ matrices $A, B$. I am interested in solving the system $$AX = XB.$$ Is there a way this is usually done? The matrices are not necessarily nonsingular.
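One common way to attack this (a sketch, not necessarily how it is "usually" done): rewrite $AX = XB$ as a linear system in the stacked vector $\operatorname{vec}(X)$, namely $(I \otimes A - B^{T} \otimes I)\operatorname{vec}(X) = 0$, and compute the null space. For the inhomogeneous Sylvester equation $AX + XB = C$, SciPy also offers scipy.linalg.solve_sylvester.

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2                                   # a symmetric example matrix
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = Q.T @ A @ Q                                     # similar to A, so nontrivial solutions exist

M = np.kron(np.eye(3), A) - np.kron(B.T, np.eye(3)) # acts on column-stacked vec(X)
basis = null_space(M)                               # each column is a vec(X) solving A X = X B

solutions = [basis[:, j].reshape(3, 3, order="F") for j in range(basis.shape[1])]
print(len(solutions), np.allclose(A @ solutions[0], solutions[0] @ B))

If $A$ and $B$ share no eigenvalue, the null space is trivial and the only solution is $X = 0$; here $B$ was deliberately chosen similar to $A$ so the example returns a nontrivial solution space.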
When a solution of the Sylvester equation is not singular? | CommonCrawl |
The properties of internal consistency ($\alpha$), classical reliability ($\rho$), and congeneric reliability ($\omega$) for a composite test with correlated item error were analytically investigated. Possible sources of correlated item error are contextual effects, item bundles, and item models that ignore additional attributes or higher-order attributes. The relation between reliability and internal consistency is determined by the deviance from true-score equivalence. Reliability (classical or congeneric) is internal consistency plus the relative deviance from true-score equivalence. The influence of correlated item error on $\alpha$, $\rho$, and $\omega$ is conveyed strictly through the total item error covariance. As the total item error covariance increases, $\rho$ and $\omega$ decrease but $\alpha$ increases. The necessary and sufficient condition for $\alpha$ to be a lower bound to $\rho$ and to $\omega$ is that the total item error covariance not exceed the deviance from true-score equivalence. Coefficient $\alpha$ will uniformly exceed $\rho$ or $\omega$ in true-score equivalent tests with positively correlated item error. The factor analytic item model with specific factors alters some of these relationships. Correlated item error in this model can cause $\alpha$ to exceed $\omega$ but not $\rho$ or to exceed both $\omega$ and $\rho$. Positively correlated specific factors can further contribute to $\alpha$'s exceeding $\omega$. The compound symmetric item error covariance matrix is sufficient to study effects of correlated item error. Contrary to the standard practice in test development, $\alpha$ cannot be assumed to be a lower bound to $\rho$ or $\omega$. For congeneric tests, a better approach is to fit an item model and estimate $\omega$ directly. | CommonCrawl |
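As a small numerical illustration of the claim above that positively correlated item error pushes $\alpha$ above $\omega$ (the loadings, error variance and error covariance below are toy values of my own, for a true-score-equivalent test with compound-symmetric item error):

import numpy as np

k = 4
lam = np.full(k, 0.7)                 # equal loadings: a true-score-equivalent test
theta_var = 0.51                      # item error variance (each item has unit variance)
for err_cov in (0.0, 0.1):            # compound-symmetric item error covariance
    Theta = np.full((k, k), err_cov)
    np.fill_diagonal(Theta, theta_var)
    Sigma = np.outer(lam, lam) + Theta                     # item covariance matrix
    total = Sigma.sum()                                    # variance of the composite
    alpha = k / (k - 1) * (1 - np.trace(Sigma) / total)    # coefficient alpha
    omega = lam.sum() ** 2 / total                         # congeneric reliability
    print(err_cov, round(alpha, 3), round(omega, 3))

With uncorrelated errors the two coefficients coincide here (about 0.79); with err_cov = 0.1 the run gives $\alpha \approx 0.85$ against $\omega \approx 0.71$, exactly the pattern described in the abstract.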
Speaker. Mr. E.P. Unny, Chief Political Cartoonist, The Indian Express.
About the speaker. Mr Nakul Bhalla is a graduate in mechanical engineering from the Manipal Institute of Technology. He worked as a central planning engineer at Larsen and Toubro limited and then as a research assistant at IISc Bangalore. Starting a new chapter of his life, he joined the Dramanon Theatre Company as a creative partner and then later went on to start his own theatre company, The SparkPlug Theatre Company which is more than three and a half years old now.
Speaker. Prof. K. L. Sebastian.
Abstract. Many a time, long chain biological molecules have to pass through nano-sized pores. Entering a pore is a constraint, which results in a decrease of entropy or, in other words, an increase in free energy. Thus, the molecule has to cross a free energy barrier in space. Because of these biological examples, the passage of long chain molecules has been studied in vitro too. Motivated by this, we consider the process of passage of a long chain molecule over a free energy barrier. Interestingly, the simplest model for the process would be a chain of drunken walkers climbing over a hill. One can find analytical solutions for the simplest possible model for such systems, known as the Rouse model. Within this model, we find that calculating the activation energy for the process is (mathematically) equivalent to calculating the exponent in a quantum mechanical tunneling problem. Using this, it is easy to see that for a long enough molecule, the activation energy has to be independent of the length of the molecule. Further, a long enough molecule will cross the barrier with a steady velocity which is determined by the "steadily moving kink"-like solutions of the associated non-linear equation. As a result, the time that it takes for the molecule to cross over the barrier is proportional to its length. The relevance of the results to biology will be quickly outlined.
If time permits, and if there is sufficient interest, my experiences in publishing the results too will be outlined, and it will be stressed how useful I found the arXiv.org to be.
Speaker. Prof. B. V. R. Tata.
Abstract. Coulomb's law tells us that like charges repel and unlike charges attract. I will convince you, by showing my own experimental and simulation results on charged colloids, that like charges attract under certain conditions, viz., in the presence of a large number of counterions. Direct measurements of the pair-potential between like-charged colloids have shown the existence of a long-range attraction in addition to the usual screened Coulomb repulsion. Though counterions are known to mediate the attraction, the exact mechanism of attraction still needs to be modelled and understood.
The prime aim of this talk is to share with you all the excitement in establishing the "Like likes Like" result through experiments using a home-built set-up and our own simulation codes.
Abstract. The talk will trace the history of engines from inception to the present and bring out various challenges faced in the course of its development. The role of science and technology in sustaining mobility without further suffocation will be highlighted and future prospects discussed.
Crosstalk 5. Designs For Purpose: Nature is a Brilliant Designer!
Title: Designs For Purpose: Nature is a Brilliant Designer!
Abstract. The talk will introduce you to different molecular assemblies (specifically the macromolecules) in living systems, their organisation and functions, with emphasis on PROTEINS (polypeptides). To conclude, a small polypeptide is investigated to understand its structural preference.
Abstract. During World War II various countries attained varied levels of technological expertise. The technological expertise or readiness covered various fields of science and technology. Some of the fields with a lot of research activity, inventions, design and experiments were electronic warfare and navigational technology.
This talk will give an introduction to various electronic warfare techniques, related hardware inventions and their uses. It will also provide an insight into the war doctrines of countries and the resultant uses of electronic warfare in 1935-45. This talk will also introduce the basic details of the navigational technology used by guided bombers (for bombing and landing) and the first missiles (V1 and V2) of the World War 2 years.
Abstract. Dr. David Hu and his team recently showed that nearly all mammals take 21 s to empty their bladder irrespective of their body size. This discussion will reveal a conspiracy between isometry of the urinary system and hydrodynamics of urination to achieve this invariance.
Ref: Yang, P. J., Pham, J., Choo, J., & Hu, D. L. (2014). Duration of urination does not change with body size. Proceedings of the National Academy of Sciences, 111(33), 11932-11937.
Abstract. Droplets and sprays are ubiquitous in daily life and play very important roles in diverse fields of engineering. The first part of the talk, will give a brief introduction to the kind of work we are pursuing in this field.
The main portion of the talk will deal with droplet evaporation. Droplet evaporation is at the heart of all combustion systems, and also important in varied applications such as spray drying to form powders, spray painting, ink-jet printing, 3-D printing for additive manufacturing, etc. While several studies have been performed on single evaporating droplets in literature, the phenomenon has still not been completely explained. A fundamental study was performed to explore the reasons for deviations in the experimental and calculated (diffusion driven) evaporation rates of a pendant droplet in a 'quiescent' ambient. The results of the experiments, which show interesting insights into the common assumption of a quiescent environment in the presence of evaporation, will be presented.
The last few minutes of the talk will very briefly introduce some thoughts on the dehumidification and water extraction process which we plan to work on. Studies in this area could especially benefit from a multi-disciplinary approach from interested members of the audience.
Dr. Albert Sunny from Indian Institute of Science, Bangalore is going to present his work on wireless and social networks on 1-March-2017 at 11 am.
In sensor networks, the absence of infrastructure mandates the use of ad-hoc network architectures. In these architectures, nodes are required to route data to gateway nodes over a multi-hop network. In the first half of the talk, Dr. Sunny will present a unified framework that can be used to compare different deployment scenarios, and provide a means to design efficient large-scale energy harvesting multi-hop wireless sensor networks. In spite of the presence of voluminous reservoirs of information such as digital libraries and the Internet, asking around still remains a popular means of seeking information. In scenarios where the person is interested in communal or location-specific information, this kind of retrieval may yield better results than a global search. Hence, wireless networks should be designed, analyzed and controlled by taking into account the evolution of the underlying social networks. This alliance between social network analysis and ad-hoc network architectures can greatly advance the design of network protocols, especially in environments with opportunistic communications. Therefore, in the second half of the talk, I will present a model that captures the temporal evolution of information in social networks with memory.
Dr. Albert Sunny completed his B.Tech. in Electrical and Electronics Engineering from the National Institute of Technology, Calicut, India and his M.Sc.(Engg.) from the Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India. He went on to complete his Ph.D. at the Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India. His research interests are in the modelling, analysis and control of wireless and social networks.
Abstract. We all 'know' that among all closed plane figures with a fixed perimeter, the circle has the maximum area. What about proving this? An interesting idea is due to Jacob Steiner. But his proof had one gap: he could prove that the maximizer, if it exists, cannot be anything other than the circle, but he could not prove that such a maximizer actually exists.
We fill this gap using the infinite avatar of the Pigeonhole principle. We first use it to prove the famous Bolzano-Weierstrass theorem from analysis and its generalisations, which can then be used to fill the gap in Steiner's proof.
Abstract. A biological cell is a complex soft-matter system in which the various physical and chemical processes span multiple spatial and temporal scales. The various theoretical and computational tools developed in the context of soft matter physics and statistical physics may be utilized to build highly quantitative models to understand these processes. In this talk, I will first present a brief overview of multiscale modelling techniques and show how information can be transferred from one scale to the other in a self-consistent manner. In the next part, I will present a model spanning atomistic to thermodynamic scales to study how proteins interact with the cell membrane and how the collective interactions of these proteins lead to large-scale changes in the morphology of the cell membrane.
A talk on "First Principles Investigation of Quantum Materials" by Dr. Subhasish Mandal, Postdoctoral Research Associate, Dept. of Applied Physics, Yale University.
Computer simulations based on first-principles calculations play a central role in helping us understand, predict, and engineer the physical, chemical, and electronic properties of technologically relevant materials. This can help solve many problems on the way to building faster, smaller and cheaper devices for processing and storing information, as well as for saving energy. Many of these processes involve electron excitations and strong local magnetic fluctuations that the 'standard model' of electronic structure, Density Functional Theory (DFT), cannot capture properly. In this context, I will highlight two popular approaches that go beyond standard DFT. First, I will discuss how Dynamical Mean Field Theory (DMFT) in combination with DFT has recently been successful in the detailed modeling of the electronic structure of many complex materials with strong electron correlation. As an example, I will show the iron-based superconductors in both bulk and monolayer phases and their anomalous properties, which have their origin in strong Hund's coupling and give rise to the rich physics of Hund's metals. Next, I will discuss my collaborative effort toward developing a highly scalable, open-source GW software package to compute electronic excited states more efficiently on petascale architectures using the Charm++ parallel framework. At the end, I will briefly discuss topological crystalline insulators, which are a new class of topological materials whose electronic surface states are topologically protected along certain crystallographic directions by crystal symmetry. I will show that, without any external perturbation, both massless Dirac fermions protected by the crystal symmetry and massive Dirac fermions with crystal-symmetry breaking can coexist on a single surface.
Abstract: Dr. K.P. Naveen will introduce an infrastructure-based wireless network that comprises two types of nodes, namely relays and sinks. The relay nodes are used to extend the network coverage by providing multi-hop paths to the sink nodes (which are connected to a wireline infrastructure). Restricting to the one-dimensional case, we aim to characterize the fraction of covered region for given densities of sink and relay nodes. We first compare and contrast our infrastructure-based model with the traditional setting, where a point is said to be covered if it simply lies within the range of some node. Then, drawing an analogy between the connected components of the network and the busy periods of an $M/D/\infty$ queue, and using renewal-theoretic arguments, we obtain an explicit expression for the average vacancy (which is the complement of coverage). We also compute an upper bound for vacancy by introducing the notion of left-coverage (i.e., coverage by a node from the left). We prove a lower bound by coupling our model with an independent-disk model, where the sinks' coverage regions are independent and identically distributed. Through numerical work, we study the problem of minimizing network deployment cost subject to a constraint on the average vacancy. If time permits, I will discuss the generalization of the above model to a hop-count-constrained model; I will also mention our ongoing work on the two-dimensional setting.
Bio: Dr. K.P. Naveen received the B.E. degree in ECE from Visveswaraya Technological University (VTU), Belgaum (2005), and the Ph.D. degree from the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore (2013). Subsequently, he was a post-doctoral fellow with the INFINE team at INRIA Saclay, France. Since Jan. 2016 he has been with the Department of Electrical Engineering, Indian Institute of Technology Madras, as a DST-INSPIRE faculty member. His research interests include modeling and performance analysis of wireless networks, stochastic games and optimal control.
This talk will focus on distributed power control in a cellular network in the presence of strategic users, using game theory. Strategic users in a wireless network cannot be assumed to follow the network algorithms blindly. A pricing mechanism is proposed and the optimal prices are obtained to make the users comply with the network objective. Some of the users, which we call malicious users, aim to hurt the performance of other users. An example of such behaviour is jamming, where the jammer transmits with higher power in order to create interference for other users. A modified utility model is used to capture this malicious behaviour. The talk will consider a scenario in which the network and the regular users gather probabilistic information about the presence of jammers by observing the network over a long time period. The regular users modify their actions according to this Bayesian information. Bayesian pricing mechanisms, in which the network sends power-price signals to the users, are analyzed and the Bayesian Nash Equilibrium (BNE) points are obtained. The optimal prices are also obtained for the Bayesian case.
Anil Kumar obtained his B.Tech. from the Government College of Engineering Kannur, Kerala in 2006. In 2010, he obtained his MS by Research from the Department of Electrical Engineering at the Indian Institute of Technology Madras, India under Prof. Srikrishna Bhashyam. During his MS, he did his thesis work for 6 months at the Indian Institute of Science, Bangalore with Prof. Rajesh Sundaresan. Between May 2010 and December 2011, he worked as a research scientist with the Telekom Innovation Laboratories (T-Labs) of Deutsche Telekom in Berlin, which is associated with TU Berlin. He obtained his PhD in August 2015 from the Chair of Theoretical Information Technology at the Technical University of Munich, Germany, under Prof. Holger Boche. His PhD thesis was titled 'Resource Allocation and Pricing Mechanisms for Wireless Networks with Malicious Users'. He is now a postdoctoral researcher jointly at the Chair of Communication Networks at TU Dresden and 5G Labs Germany.
Title: Computer simulation of membrane elasticity and morphology.
About the speaker: Prof. John H. Ipsen is currently Associate Professor at the MEMPHYS - Center for Biomembrane Physics in University of Southern Denmark.
Ensuring physical and mental health is important not only for the students, but also for every one of us. One of the easily implementable measures to ensure good physical and mental health as well as high level of concentration is regular practice of yoga. It is, therefore, planned to start regular yoga classes for all@iitpkd.
Nov 2003 - Elected Fellow of the Indian National Academy of Engineering.
Topic-"My Experiences in the Indian Army – In the Service of the Nation"
Major General (retd) VN Prasad is a senior army veteran with corporate experience at top management level. He is also an exceptional mentor. He was a fellow at the Daniel K. Inouye Asia-Pacific Center for Security Studies in 2005. He served as the General Officer Commanding at the Sino-Indian border from 2011 to 2012. He currently heads the L&T Manufacturing Complex at Hazira and is also the Vice President, Central Management Committee, L&T Ltd, Coimbatore. He is also associated with 'Project Shine' and the 'Single Teacher School' Project, sponsored by L&T.
Dr Anil Prakash Joshi is an Indian green activist, social worker, botanist and the founder of the Himalayan Environmental Studies and Conservation Organization (HESCO), a Dehradun-based non-governmental organization involved in the development of environmentally sustainable technologies for the agricultural sector. He is a recipient of the Jamnalal Bajaj Award and is an Ashoka Fellow. The Government of India awarded him the fourth highest civilian honour, the Padma Shri, in 2006, for his contributions to Indian society.
Prof. Manu Santhanam completed his B.Tech. at IIT Madras in July 1994, his M.S. at Purdue University in May 1996, and his PhD at Purdue University in August 2001.
Prof. Santhanam worked as a Senior R&D Chemist at Sika Corporation, USA, from May 1996 to Nov 1998. His job involved the preparation of formulations for chemical admixtures for concrete, laboratory and field evaluations, assistance in production and quality control, and product implementation.
He was an instructor at Purdue University, USA, from May to Aug 2001, teaching a summer-term course. He was an Assistant Professor at IIT Madras from Oct 2001 to Mar 2009, engaged in teaching, research, and industrial consultancy. Currently, he is a Professor and placement advisor at IIT Madras.
He is a Life member of Indian Concrete Institute, Member of American Concrete Institute and Member of RILEM.
He has received several prestigious awards: the W.L. Dolch Award for Outstanding Graduate Student in Civil Engineering Materials, Purdue University (May 2001); the Young Scientist Grant of the Department of Science and Technology (DST), Government of India (2003-2006); the All India Council for Technical Education (AICTE) Career Award for Young Teachers (2006-2009); the Indian Concrete Institute Prof. V. Ramakrishnan Award for Outstanding Young Researcher in Concrete Technology (2006); and the Indian National Academy of Engineering (INAE) Young Engineer Award (2008).
A talk on "Cements for a sustainable future"
A talk on "Integrating Theories of Language Comprehension and Production"
Rajakrishnan Rajkumar's research interests lie at the intersection of language technology and scientific inquiry into language production and comprehension. After completing a PhD in computational linguistics from The Ohio State University, he now teaches linguistics at the Indian Institute of Technology (IIT), Delhi, India. Before that, he completed an undergraduate degree in mechanical engineering from the College of Engineering, Trivandrum and subsequently switched to linguistics at the master's level at Jawaharlal Nehru University (JNU). His recent research has looked at modelling choice in the grammar of languages using computational models and techniques. He has also conducted eye-tracking experiments to study the comprehension of synthetic speech.
A session on "Awareness of Cancer" will be conducted on Wednesday, 27 September, 4:30 to 5:30pm by Ahalya Women and Children's hospital. The focus of the talk will be on awareness about cancers which occur more among women. They might also touch upon rubella vaccination.
The intended audience is the women of the IIT Palakkad community, consisting of students, staff, faculty and their family members. However, anyone interested in the topic is welcome.
A session on Quantum Codes, led by Dr. Piyush.
Abstract. In this talk, Dr. Piyush will explain the theory of quantum codes, in particular the theory of stabiliser codes, starting from the theory of classical error-correcting codes. The goal of the talk is to show the following: constructing an $n$-length quantum stabilizer code is essentially the same as constructing a classical linear code of length $2n$ over a finite field $F_p$, but for a "minor" technical requirement of isotropy. What this means in practice is that one can study quantum codes purely as combinatorial objects with no mention of Hilbert spaces, measurements or the life of half-dead cats. For folks who *do* care about feline lives, he will also explain how the isotropy condition arises in this context (hint: it has to do with certain commutation relations of operators over a Hilbert space).
No background on error-correcting codes, whether quantum or classical, will be assumed.
Title: "Concrete Ways of Improving the Teaching of Experimental Physics"
every bit as (and possibly, even more) exciting than theoretical pursuits?
Ken Wilson, Steven Weinberg and Richard Feynman.
students have, by now, received the Bhatnagar, Infosys and other prizes.
several algorithms for simulating them.
Relativistic Astrophysics to the University of Illinois at Urbana-Champaign.
His PhD thesis earned him the Chu Award for Excellence in Graduate Research.
Research Division at Quazar Technologies, where he has just completed a year.
equations in fields as varied as astrophysics and condensed matter physics.
the package, which his group can carry out at Quazar Research.
Touchstone Foundation, an ISKCON Bangalore initiative, is a registered non-profit organization comprising various corporate employees. Just as Akshaya Patra, another ISKCON Bangalore initiative, is satisfying the hungry stomachs of 1.5 million children across the country, Touchstone Foundation has been formed to satisfy the hungry intellect of thousands of youth using the principles of timeless Vedic wisdom.
The newfound freedom of an urban lifestyle and the demanding pressure for constant performance have been taking a toll on young minds of late. Having observed this trend, Touchstone Foundation has come up with various specially designed interactive workshops for students. One such program is "ART OF MIND CONTROL", where students learn techniques for handling tough circumstances in their lives. The workshop may help in managing stress. There is no fee for the workshop. You are all invited to attend. The only investment is your precious time, and the return on investment is enormous.
About Bharatanatyam: Bharatanatyam is an Indian classical dance form and presumably the oldest classical dance heritage of India. The name of the dance form was derived by joining two words, 'Bharata' and 'Natyam', where 'Natyam' in Sanskrit means dance and 'Bharata' is a mnemonic comprising 'bha', 'ra' and 'ta', which respectively stand for 'bhava' (emotion and feeling), 'raga' (melody) and 'tala' (rhythm). Thus, traditionally the word refers to a dance form where bhava, raga and tala are expressed.
About the Artist: Narthaki Nataraj is a disciple of Tanjore Shri K P Kittappa Pillai, a direct descendant of the Tanjore Quartet Brothers (considered the fathers of Bharatanatyam). She learnt and practised under him in the Gurukul tradition for 14 years and specialized in the Tanjore-style Nayaki Bhava tradition. She has received the SNA Puraskar Award from the President of India and a Senior Fellowship from the Dept. of Culture, Govt. of India. She has performed at all leading festivals in India, the USA, the UK and Europe.
Prof. Srinvasa Moorthy from IIT Delhi is going to give a lecture session for first-year B.Tech students (anyone can attend).
Driverless cars are becoming a reality.
Abstract: The speaker will provide a walkthrough of the various techniques employed in combustion diagnostics, such as multi-parameter laser imaging, fuel spray imaging and droplet sizing, combustion imaging, laser-induced phosphorescence, 3D shadowgraphy, etc.
- Time: 15:00-16:00 hrs, Jan 2, 2018.
extending basic definitions and concepts.
overcome them. The four strategic pillars that hold the key to startup success will be shared. The talk will also touch upon how investors perceive startups and what they look for before funding a startup. For those students interested in an industry internship, the discussion will center around the process of internship itself and the maturity and professionalism required to work as an intern.
G Venkat is a serial entrepreneur, investor, speaker, and author who is a true technology enthusiast. He launched an AI-driven eLearning company called bitWise Academy with a vision to bring applied computer science education to every student in the world by leveraging cognitive sciences and artificial intelligence. Venkat's recent book on automating application modernization in enterprises was published by McGraw-Hill with the foreword written by Thomas Kurian, President, Oracle. His areas of focus include AI & machine learning, Internet of Things (IoT), distributed computing, big data & analytics, cloud and mobility. Venkat serves as an advisor to multiple startups. He holds a B.Tech. degree in Aerospace Engineering from The Indian Institute of Technology (IIT-M), Madras, India, MS and Ph.D. (ABD) in Interdisciplinary Studies (Computer Science and Aerospace) from The University of Alabama, Tuscaloosa.
Actuator selection and placement for active flow control is largely based on experience and trial-and-error because of the system's large dimensionality and complexity. We develop a novel method for selecting and placing a linear feedback control system suitable for affecting the dynamics of compressible, viscous flows, which utilizes the information contained in the global stability analysis of the baseflow (a time-averaged solution of the compressible Navier-Stokes equations) obtained from direct numerical/large-eddy simulations. A wavemaker, defined by a suitable inner product of the forward and adjoint global modes of the baseflow, identifies regions of the flow field with high dynamical sensitivity, and an optimization procedure determines effective actuator locations. The algorithm is flexible, and different types of control and feedback can be developed to obtain flow control. The efficacy of the method is demonstrated with two different flow control problems: flow stabilization in a Mach 0.65 diffuser, and the control of noise radiated by a turbulent Mach 0.9 jet.
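For concreteness, one common way such a wavemaker is written in the structural-sensitivity literature is sketched below; the exact inner product used in this particular work may differ, so treat this as an illustrative form rather than the authors' definition:

$$\zeta(\mathbf{x}) = \frac{\|\hat{\mathbf{q}}(\mathbf{x})\|\,\|\hat{\mathbf{q}}^\dagger(\mathbf{x})\|}{\left|\langle \hat{\mathbf{q}}^\dagger, \hat{\mathbf{q}} \rangle\right|},$$

where $\hat{\mathbf{q}}$ and $\hat{\mathbf{q}}^\dagger$ are the direct and adjoint global modes of the baseflow; regions where $\zeta$ is large are those where localized feedback shifts the eigenvalue most strongly, and hence are natural candidates for actuator placement.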
Mahesh Natarajan is currently a postdoctoral researcher in the Department of Mechanical and Aerospace Engineering at Cornell University. He received his Bachelor's degree in Mechanical Engineering from the University of Calicut in 2007, and Master's in Aerospace Engineering from the Indian Institute of Science, Bangalore, in 2009. He completed his Ph.D. from the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign in 2016. His research interests lie in the field of numerical methods, computational fluid dynamics and scientific computing.
On 16 October 2017, the LIGO-Virgo Scientific Collaboration announced its first detection of gravitational waves from a merging neutron-star binary.
The accompanying GRB detection triggered world-wide electromagnetic follow-up observations, involving around 70 ground- and space-based observatories.
About the Artist: Pandit Shubhendra Rao is a composer and sitar player who is ranked amongst the top soloists of India. He stayed with his guru Pandit Ravi Shankar in the Guru-Shishya Parampara for over 10 years. Shubhendra Rao has performed at major music festivals and concert halls like Broadway and Carnegie Hall in New York, WOMAD festival in Guernsey, UK, Sydney Opera House in Australia, National Arts Festival in South Africa, Theatre de le Ville in Paris, Edinburgh festival and Doverlane Music Conference in India.
P.S.: Apologies for the short notice.
Abstract: Conventional unsupervised data analytics techniques have largely focused on processing datasets of single-type data, e.g., one of text, ECG, sensor readings or image data. With increasing digitization, it has become common to have data objects with representations that encompass different "kinds" of information. For example, the same disease condition may be identified through EEG or fMRI data. Thus, a dataset of EEG-fMRI pairs would be considered a parallel two-view dataset. Datasets of text-image pairs (e.g., a description of a seashore, and an image of it) and text-text pairs (e.g., problem-solution text, or multi-language text from machine translation scenarios) are other common instances of multi-view data. The challenge in multi-view data analytics is to effectively leverage such parallel multi-view data to perform analytics tasks such as clustering, retrieval and anomaly detection. This talk will cover some emerging trends in processing multi-view parallel data with a focus on exploratory data analytics over them. In addition to providing a high-level view of the area, this talk will cover two recent research publications authored by the speaker, one on multi-view clustering, and another on multi-view dimensionality reduction.
Dr. Deepak Padmanabhan holds a faculty position in Computer Science at Queen's University Belfast, United Kingdom. He received his B.Tech from Cochin University and his M.Tech and PhD from Indian Institute of Technology Madras, all in Computer Science. His current research interests include data analytics, similarity search, information retrieval and natural language processing. Deepak has published over 40 research papers across major venues in Information and Knowledge Management. His work has led to seven patents from the USPTO. Recently, he authored a book titled "Operators for Similarity Search" which was published by Springer in 2015. A Senior Member of the IEEE and the ACM, he is also the recipient of the INAE Young Engineer Award 2015, an award recognizing scientific work by researchers across engineering disciplines in India. He may be reached at [email protected]. | CommonCrawl |
The naïve way of calculating this is to read $x_1$ and $p_1$ through $x_k$ and $p_k$ and then crunch the numbers. However, this necessitates a list to keep track of all the numbers and an extra loop at the end to crunch them. Since we don't actually need the numbers, there's an easier way.
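The snippet above does not show the final program, but assuming the $(x_i, p_i)$ pairs are value-probability pairs being combined into something like an expected value, the "easier way" is a single-pass running total. A minimal Python sketch of that idea (the function name and the expected-value target are assumptions, not taken from the original post):

```python
def expected_value(pairs):
    """Accumulate sum(x * p) in one pass, without storing the pairs.

    `pairs` can be any iterable of (x, p) tuples; only the running total is
    kept in memory, so no list and no extra "crunching" loop are needed.
    """
    total = 0.0
    for x, p in pairs:
        total += x * p
    return total

# Example: pairs streamed from a generator rather than a stored list.
print(expected_value((x, 1 / 6) for x in range(1, 7)))  # fair die -> 3.5
```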
Recently, someone challenged me to write a program to find the mode (or modes) of a list in TI-BASIC. I picked up my TI-83+ and whipped this program up in about half an hour. Somebody else asked me for a copy, so I figured I'd post it on my website. | CommonCrawl |
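The TI-BASIC program itself is not reproduced here; as a rough illustration of the same task in Python (a hypothetical reimplementation, not the author's code), a single counting pass suffices:

```python
from collections import Counter

def modes(values):
    """Return every value that occurs with the maximal frequency."""
    counts = Counter(values)       # one pass to tally occurrences
    top = max(counts.values())     # highest frequency seen
    return sorted(v for v, c in counts.items() if c == top)

print(modes([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]))  # -> [5]
print(modes([1, 1, 2, 2]))                       # two modes -> [1, 2]
```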
Consider a homogeneous space $M$, which for the sake of concreteness, let's take to be $M = \mathbb R^d$. Fix some space $A$, and consider the space of functions $X = C(M,A)$, along with its Borel $\sigma$-algebra $\mathcal B(X)$ and an appropriate $\sigma$-ideal of null sets $\mathcal N(X)$. A measure on $(X,\mathcal B, \mathcal N)$ is called a random field on $M$ (this is also called a stochastic process, but I prefer to reserve that language for one-dimensional parameter spaces).
Since $M$ is homogeneous, there are natural actions arising from its symmetry group $G$. In our case $M = \mathbb R^d$, this means rotations and translations. Measures push forward under actions, and we call a measure stationary (and isotropic) when it is invariant under these symmetries. A measure is ergodic if every invariant set is either null or co-null.
Let $\mathcal P^G$ denote the space of stationary, ergodic measures on $(X,\mathcal B, \mathcal N)$.
Side Question: Is there a nice characterization of this space $\mathcal P^G$?
More pertinently, I would like a numerical method to rapidly generate a stationary, ergodic random field. If the space $M$ were discrete, one plausible mechanism would be to use IID random variables: the independence is a strong form of ergodicity, and the identical distribution gives stationarity. On the other hand, this is no longer natural from the point of view of continuum random geometry.
One can take a dynamical approach, by starting with an arbitrary distribution on $X$, then transforming it by random transformations and taking averages. This is lengthy, though, and doesn't seem numerically efficient.
Main Question: Is there a nice class of stationary, ergodic distributions which one can easily sample from numerically?
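Not an answer from the thread, but one class that seems to fit and is easy to sample: stationary Gaussian random fields whose spectral measure is absolutely continuous (which gives mixing, hence ergodicity). A hedged NumPy sketch using random Fourier features, assuming a squared-exponential covariance on $\mathbb R^d$ (all names and parameters below are illustrative):

```python
import numpy as np

def sample_stationary_field(points, n_features=500, length_scale=1.0, seed=0):
    """Approximate one realization of a stationary, isotropic Gaussian field.

    Random Fourier features: f(x) = sqrt(2/N) * sum_j cos(<w_j, x> + b_j),
    with frequencies w_j drawn from the spectral density of a
    squared-exponential covariance and phases b_j uniform on [0, 2*pi).
    Stationarity is built in; an atom-free spectral measure gives mixing.
    """
    rng = np.random.default_rng(seed)
    points = np.atleast_2d(points)                    # (n_points, d)
    d = points.shape[1]
    w = rng.normal(scale=1.0 / length_scale, size=(n_features, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(points @ w.T + b).sum(axis=1)

# One realization on a 1-D grid (d = 1 just to keep the example small).
grid = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
field = sample_stationary_field(grid)
```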
Abstract: We study the one-loop new physics effects on the CP-even triple neutral gauge boson vertices $\gamma^\star \gamma Z$, $\gamma^\star Z Z$, $Z^\star Z \gamma$ and $Z^\star Z Z$ in the context of Little Higgs models. We compute the contribution of the additional fermions in the Little Higgs model in the framework of direct product groups, where the $[SU(2)\times U(1)]^2$ gauge symmetry is embedded in an $SU(5)$ global symmetry, and also in the framework of the simple group, where the $SU(N)\times U(1)$ gauge symmetry breaks down to $SU(2)_L\times U(1)$. We calculate the contribution of the fermions to these couplings when $T$-parity is invoked. In addition, we re-examine the MSSM contribution at the chosen point SPS1a' and compare it with the SM and Little Higgs models.
EDIT: I apparently need to rewrite this based on the replies, but I'll leave it for continuity's sake. Please read my replies for clarifications of my point/question.
The closest is this, though it's old and I am disallowed from commenting: What is the proof that the universal constants ($G$, $\hbar$, $\ldots$) are really constant in time and space? In that thread the OP has essentially the same (I think) question, though he slightly mis-worded it. The confusion in that thread was people responding to 'what is a constant / how is it determined' or 'do constants change in time/space', which I believe was NOT the question, NOR is my question.
Simply: how do we know / why are we assuming Newton's formula is universal, and not just a reasonably good description of gravity at our own scale? In other words, perhaps big G is itself a function, or even the whole formula needs tweaking. It stands to reason there may be a lot more going on with gravity than what we empirically measure on our scale. We cannot use the orbits/behaviors of bodies, since those masses are estimated by the Newton equation and that is recursive.
E.g./thought experiment: conduct an ideal Cavendish-type experiment with neutron stars instead of lead balls. Repeat with individual atoms, then mixed. Is G the same? If so, how is this proven?
The period of the Hulse-Taylor binary, a system 21,000 light-years away, is decreasing at just the rate predicted based on gravitational waves carrying away energy, using the standard value of $G$ that we measure in Earth laboratories.
Now, you might argue, "But we calculate the masses of the two stars in this system using the observed period and our standard value of $G$." That's true, but then we turn around and put these masses, and $G$, into the equation for the power radiated as gravitational waves, which depends on these parameters in a different way than the period does. So if the value of $G$ at this system were different from what it is on Earth, I don't think the predicted rate of period change would match the observed rate.
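To make the "different way" concrete, here is a hedged sketch using the standard circular-orbit expressions (textbook formulas, not quotes from the original answer): Kepler's third law and the quadrupole formula read

$$P_{\mathrm{orb}}^2 = \frac{4\pi^2 a^3}{G\,(m_1+m_2)}, \qquad P_{\mathrm{GW}} = \frac{32}{5}\,\frac{G^4}{c^5}\,\frac{m_1^2\, m_2^2\,(m_1+m_2)}{a^5},$$

so, roughly speaking, the total mass inferred from the timing scales like $1/G$ at a fixed orbit, while the predicted energy loss (and hence the period decay) involves a quite different combination of $G$ and the masses; a locally different $G$ would therefore spoil the agreement.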
This is good evidence that $G$ is the same throughout our galaxy.
I think similar arguments can be made that $G$ is the same a billion light years away (and therefore a billion years ago) based on LIGO's initial detection of a gravitational waveform from merging black holes that matches theoretical predictions from General Relativity. The $G$ used in these models is of course the standard Earth-measured $G$, and the nonlinear dynamics of the inspiral and ringdown depend on $G$ and the masses in a complicated way which I think precludes $G$ having a different value there and then compared with here and now.
This system is a substantial fraction of the way across the expanse and history of the observable universe, and therefore suggests that $G$ is constant over a large swath of spacetime.
There are a few different possibilities for the idea that "$G$ is a function": it could be a function of space and time, it could be a function of mass, or it could be a function of separation between the two masses.
It's also important to note that any measurement can only impose limits on how much that $G$ could change, and it's always technically possible that there is a subtle enough variation in $G$ that current measurements will not be able to detect it. This is why the concept of a "proof" that $G$ is constant everywhere is largely not something that can be reasonably expected. There is no evidence that it is not constant, and we have observed that it's constant to within certain limits, but asking for a proof that it's exactly constant under all circumstances is asking for a measurement with infinite precision. The constancy of $G$ is something that can only be disproved, if it is someday observed to vary. As of right now, we assume that it is constant because 1) predictions of the theories that assume this match observations to an extremely precise degree, 2) the symmetry that this uniformity provides is integral to the fundamental nature of how we think about reality (namely, translational symmetry of the laws of physics gives us conservation of energy via Noether's theorem), and 3) there isn't any evidence against it, despite there being a very large number of tests.
As such, this answer will mainly focus on which measurements would be different if there was a substantial change to $G$ under different circumstances.
Is $G$ a function of space and/or time?
If Newtonian gravity worked differently at other locations in our Solar System, then we would have seen it in the orbital trajectories of the objects of known mass that we have sent through the Solar System (like the New Horizons probe, for instance). This doesn't run into any recursive problems because we already know the mass of these objects, since we were the ones who made them.
If gravity worked substantially differently in other star systems within our galaxy, then stellar astrophysics wouldn't work in the same way that it does nearby ("nearby" meaning "within a few hundred light-years"). The life of a star is a constant battle between a few different forces: radiation pressure from nuclear fusion at the core and hydrostatic pressure from the fluid dynamics of the stellar interior push outward, and gravity pulls inward (in older stars, there is also electron degeneracy pressure, but we will consider only youngish, main-sequence stars here). If gravity was stronger, then stars of a given mass would be smaller and more dense; likewise, if gravity was weaker, then stars of a given mass would be bigger and less dense.
The luminosity of a star (the total radiation output) is directly related by the Stefan-Boltzmann law to two quantities: the temperature of the star and its size. We can measure the temperature of a star by observing its color, and we can measure the luminosity of a star by measuring its apparent brightness and distance. Both of these things would change for a star of a given mass. It turns out that we have plotted the luminosity vs. temperature of tens of thousands of stars on the Hertzsprung-Russell diagram, and this plot contains large empty regions. If the luminosity and temperature change in the right way due to this (stellar modeling is complicated, so I don't know precisely how they would change), you might see a bunch of stars from a particular location/time in an otherwise-empty region of the Hertzsprung-Russell diagram, which would indicate that there's something weird going on in that region of space/time. We don't currently see anything like this.
There is still the possibility that the luminosity and temperature would change in just the right way to keep this population of stars within the already-filled regions of the Hertzsprung-Russell diagram. In that case, we have another tool at our disposal: stellar spectra, namely, the width of the spectral lines in different populations of main-sequence stars. It turns out that for stars along the main sequence, we have a pretty good idea what their mass is, given their luminosity, from the appropriately-named mass-luminosity relation. This is important because, given the mass, temperature, and size of a star (where the size is derived from luminosity and temperature), you can determine the average pressure inside the star. A star with higher internal pressure has broader spectral lines - the atoms within it are perturbed more by their neighbors. So if gravity was stronger in a particular region of space/interval of time, we would see a population of stars that would have abnormally high pressures given their luminosity and temperature, and therefore would have abnormally broad spectral lines given their position in the Hertzsprung-Russell diagram. Once again, we have not seen any evidence of this.
If Newtonian gravity was substantially different in other galaxies, then there's a very important, quite visible event that might change: the type Ia supernova, which occurs when the electron degeneracy pressure in a white dwarf is insufficient to combat gravitational collapse. Due to the nature of electron degeneracy pressure, this basically always occurs once a white dwarf reaches a specific mass, called the Chandrasekhar limit. If gravity is stronger, this limit gets smaller, and white dwarfs explode with less energy. Importantly, we can see these supernovae from our galaxy, and we can monitor their apparent brightness with time; in fact, this is one of the ways we can calculate the distance to galaxies. If we can determine the distance by another means, like using redshift measurements, then we could easily see that the Type Ia supernovae from a particular region of space would seem to be abnormally dim, which would mean that the white dwarfs in that region could not get as massive before exploding, which means that the Chandrasekhar limit is different there and $G$ is higher (only making this conclusion after having accounted for other effects, of course). Even if we can't determine the distance by other means, it turns out that the more luminous the Type Ia supernova is, the slower its luminosity declines over time, so we would notice that supernovae from a particular region get dimmer abnormally quickly. Once again, we have not seen any evidence of this as of yet.
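As a rough, order-of-magnitude way to see the $G$-dependence invoked here (a standard estimate, not part of the original answer), the Chandrasekhar mass scales as

$$M_{\mathrm{Ch}} \sim \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_H)^2} \approx 1.4\, M_\odot \quad (\mu_e \approx 2),$$

so a larger local $G$ would lower the limiting mass, and the resulting Type Ia supernovae would be correspondingly dimmer, as argued above.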
Is $G$ a function of the masses?
We have tested the gravitational attraction between two large bodies by examining the orbital motion of the Earth and Moon. We know the mass of the Earth because seismology and geology tell us the density of various parts of it, and we know the volume of those parts. Using the mass of the Earth, we then calibrated scales that we took to the Moon, which means we can also measure the mass of the Moon independent of the orbital motion. We have also tested the gravitational attraction between a large object and a very small one; ultracold neutrons are often kept in open-topped magnetic bowls for experiments, and gravity prevents them from escaping. If we were wrong about the value of $G$ in that experiment, we would notice something odd about the distribution of neutrons in the bowl. We have not yet measured the gravitational interaction between two extremely small objects, but that likely requires a theory of quantum gravity anyway. So, in all cases that we're able to measure, we haven't seen any difference in $G$ as a function of mass.
Is $G$ a function of separation?
Measurements of $G$ have been done at quite close separations, in various iterations of the Cavendish experiment. Measurements have also been done at medium ranges, by again examining the orbital motion of the Earth-Moon system. Measurements have also been done at the interstellar scale, since we are able to discern the luminosity, temperature, spectra, and hence masses of several nearby binary-star systems. None of these measurements seem to be inconsistent with a constant $G$ as a function of separation. At the intergalactic scale, things get a bit muddled due to the presence of dark matter; there are almost surely still a few modified-Newtonian-dynamics theories out there that haven't been completely ruled out by experimental evidence. That said, Newtonian gravity with a constant $G$ and dark matter explains the current experimental evidence very well, especially the evidence in the Bullet Cluster of a direct observation of an invisible lump of mass that produced gravitational lensing, which was the final nail in the coffin for many modified-gravity theories.
In general, these are far from the only tests that have been done; I merely wanted to provide a relatively straightforward example in each case.
3) I think the fairly accurate validation/prediction of the planets' orbits (with Newton's law of gravity doing fairly well and deviations corrected by GR) is a validation of $G$ using large masses on large scales.
Abstract: Applying the machinery of random matrix theory and Toeplitz determinants, we study the level-$k$, $U(N)$ Chern-Simons theory coupled with fundamental matter on $S^2\times S^1$ at finite temperature $T$. This theory admits a discrete matrix integral representation, i.e. a unitary discrete matrix model of two-dimensional Yang-Mills theory. In this study, the effective partition function and phase structure of the Chern-Simons matter theory are investigated in a special case with an effective potential, namely the Gross-Witten-Wadia potential. We obtain an exact expression for the partition function of the Chern-Simons matter theory as a function of $k, N, T$, for finite values and in the asymptotic regime. In the Gross-Witten-Wadia case, we show that the ratio of the Chern-Simons matter partition function and the continuous two-dimensional Yang-Mills partition function, in the asymptotic regime, is the Tracy-Widom distribution. Consequently, using the explicit results for the free energy of the theory, new second-order and third-order phase transitions are observed. Depending on the phase, in the asymptotic regime, Chern-Simons matter theory is represented either by a continuous or a discrete two-dimensional Yang-Mills theory, separated by a third-order domain wall.
I'm confused about the terminology in the two contexts since I can't figure out if they have a similar motivation. Afaik, the definitions state that quantum processes should be very slow to be called adiabatic, while adiabatic thermodynamic processes are supposed to be those that don't exchange heat. Based on my current intuition, this would mean that the thermodynamic process is typically fast (not leaving enough time for heat transfer). What gives, why the apparent mismatch?
Any transformation the system can undergo in thermal isolation is said to take place adiabatically.
This gradual change in external conditions is what characterizes an adiabatic process.
I would say, from personal experience, that the more widely held convention for the term adiabatic is not the one used by Landau and Lifshitz. In particular, most physicists I know use the term adiabatic in the context of thermodynamics to mean thermally isolated, while they use the term adiabatic in the context of quantum mechanics to mean sufficiently slow that certain approximations can be made.
Addendum. In the context of thermodynamics, the free expansion of a thermally isolated ideal gas is often referred to as an "adiabatic free expansion of a gas," see, for example here. Such a process is not isentropic. Using Slavik's definition would deem invalid the characterization of such a free expansion as adiabatic. However, all you need to do is google "adiabatic free expansion" to see how widespread such use of the terminology is.
The etymology of adiabatic appears to be from the Greek meaning "not passable" (native Greek speakers should feel free to clarify and/or correct that). In the technical meanings, the "passing" refers to heat transfer. So in thermodynamics, adiabatic means there is no heat transfer between the system and the environment.
In practice, of course, that's an approximation. In practice, adiabatic means that the thermodynamic process is slow enough that the system is always very nearly in equilibrium, so the heat exchange with the environment is negligible. In quantum mechanics, the analog to equilibrium is an eigenstate. So adiabatic means that the change is so slow that the system is always very nearly in equilibrium, so the system is always in an eigenstate. Both of these are approximations.
In quantum mechanics, the opposite of the adiabatic approximation is the sudden approximation. Take a system with initial Hamiltonian $H_0$, and change the Hamiltonian to $H_1$ over some time $T$. Then the adiabatic approximation is $T \rightarrow \infty$, and the sudden approximation is $T \rightarrow 0$. In the sudden approximation, the state of the system doesn't change (it "doesn't have time to change"), and it finds itself suddenly not in an eigenstate. In the adiabatic approximation, the state follows the perturbation, and is always in an eigenstate of the Hamiltonian.
Adiabatic means quasi-static and isentropic: slow enough to create a negligible amount of irreversible excitation. This is the common rationale behind the technically different definitions. E.g., Landau & Lifshitz's definition has two components: thermally isolated (to prevent entropy change by heat exchange) and slow (to prevent irreversible excitation). For a gapped quantum system, adiabatic can be quite fast (just keep the Planck constant times the characteristic driving rate below the value of the energy gap).
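In symbols, the criterion alluded to above is usually written as follows (standard textbook form, offered here as a sketch rather than a quote): for a drive of characteristic timescale $T$ and instantaneous eigenstates $|n\rangle$,

$$\frac{\hbar\,\bigl|\langle m|\dot H|n\rangle\bigr|}{(E_n - E_m)^2} \ll 1 \quad (m \neq n), \qquad \text{which for a gapped system reduces roughly to } \frac{\hbar}{T} \ll \Delta,$$

where $\Delta$ is the minimum energy gap encountered during the evolution.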
To sum up: don't make "waves" (entropy) and you'll be adiabatic.
Abstract: This paper discusses what we should mean by "Heisenberg-picture quantum field theory." Atiyah--Segal-type axioms do a good job of capturing the "Schrödinger picture": these axioms define a "$d$-dimensional quantum field theory" to be a symmetric monoidal functor from an $(\infty,d)$-category of "spacetimes" to an $(\infty,d)$-category which at the second-from-top level consists of vector spaces, so at the top level consists of numbers. This paper argues that the appropriate parallel notion "Heisenberg picture" should also be defined in terms of symmetric monoidal functors from the category of spacetimes, but the target should be an $(\infty,d)$-category that in top dimension consists of pointed vector spaces instead of numbers; the second-from-top level can be taken to consist of associative algebras or of pointed categories. The paper ends by outlining two sources of such Heisenberg-picture field theories: factorization algebras and skein theory. | CommonCrawl |
Recent work on information extraction has suggested that fast, interactive tools can be highly effective; however, creating a usable system is challenging, and few publicly available tools exist. In this paper we present IKE, a new extraction tool that performs fast, interactive bootstrapping to develop high quality extraction patterns for targeted relations. Central to IKE is the notion that an extraction pattern can be treated as a search query over a corpus. To operationalize this, IKE uses a novel query language that is expressive, easy to understand, and fast to execute - essential requirements for a practical system. It is also the first interactive extraction tool to seamlessly integrate symbolic (boolean) and distributional (similarity-based) methods for search. An initial evaluation suggests that relation tables can be populated substantially faster than by manual pattern authoring while retaining accuracy, and more reliably than fully automated tools, an important step towards practical KB construction. We are making IKE publicly available.
The field of Artificial Intelligence has made great strides forward recently, for example AlphaGo's recent victory against the world champion Lee Sedol in the game of Go, leading to great optimism about the field. But are we really moving towards smarter machines, or are these successes restricted to certain classes of problems, leaving other challenges untouched? In 2016, the Allen Institute for Artificial Intelligence (AI2) ran the Allen AI Science Challenge, a competition to test machines on an ostensibly difficult task, namely answering 8th Grade science questions. Our motivations were to encourage the field to set its sights broader and higher by exploring a problem that appears to require modeling, reasoning, language understanding, and commonsense knowledge, to probe the state of the art on this task, and sow the seeds for possible future breakthroughs. The challenge received a strong response, with 780 teams from all over the world participating. What were the results? This article describes the competition and the interesting outcomes of the challenge.
It is generally believed that a metaphor tends to have a stronger emotional impact than a literal statement; however, there is no quantitative study establishing the extent to which this is true. Further, the mechanisms through which metaphors convey emotions are not well understood. We present the first data-driven study comparing the emotionality of metaphorical expressions with that of their literal counterparts. Our results indicate that metaphorical usages are, on average, significantly more emotional than literal usages. We also show that this emotional content is not simply transferred from the source domain into the target, but rather is a result of meaning composition and interaction of the two domains in the metaphor.
Diagrams are common tools for representing complex concepts, relationships and events, often when it would be difficult to portray the same information with natural images. Understanding natural images has been extensively studied in computer vision, while diagram understanding has received little attention. In this paper, we study the problem of diagram interpretation, the challenging task of identifying the structure of a diagram and the semantics of its constituents and their relationships. We introduce Diagram Parse Graphs (DPG) as our representation to model the structure of diagrams. We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering. We devise an LSTM-based method for syntactic parsing of diagrams and introduce a DPG-based attention model for diagram question answering. We compile a new dataset of diagrams with exhaustive annotations of constituents and relationships for about 5,000 diagrams and 15,000 questions and answers. Our results show the significance of our models for syntactic parsing and question answering in diagrams using DPGs.
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in $32\times$ memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations (in terms of number of the high precision operations) and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.
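As a hedged illustration of the binary-weight approximation described in the abstract (a minimal NumPy sketch of the scaling-factor idea, not the authors' released code):

```python
import numpy as np

def binarize_filters(W):
    """Approximate each real-valued filter W[i] by alpha[i] * B[i], with B in {-1, +1}.

    For a fixed B = sign(W), the L2-optimal scale is the mean absolute value of
    the filter's weights; storing B needs 1 bit per weight instead of 32, which
    is where the ~32x memory saving quoted in the abstract comes from.
    """
    B = np.where(W >= 0, 1.0, -1.0)                          # binary filters
    alpha = np.abs(W).reshape(W.shape[0], -1).mean(axis=1)   # one scale per filter
    return alpha, B

W = np.random.randn(64, 3, 3)            # hypothetical bank of 3x3 filters
alpha, B = binarize_filters(W)
W_approx = alpha[:, None, None] * B      # stand-in for W during convolution
```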
Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 seconds, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community.
Large-scale annotated datasets allow AI systems to learn from and build upon the knowledge of the crowd. Many crowdsourcing techniques have been developed for collecting image annotations. These techniques often implicitly rely on the fact that a new input image takes a negligible amount of time to perceive. In contrast, we investigate and determine the most cost-effective way of obtaining high-quality multi-label annotations for temporal data such as videos. Watching even a short 30-second video clip requires a significant time investment from a crowd worker; thus, requesting multiple annotations following a single viewing is an important cost-saving strategy. But how many questions should we ask per video? We conclude that the optimal strategy is to ask as many questions as possible in a HIT (up to 52 binary questions after watching a 30-second video clip in our experiments). We demonstrate that while workers may not correctly answer all questions, the cost-benefit analysis nevertheless favors consensus from multiple such cheap-yet-imperfect iterations over more complex alternatives. When compared with a one-question-per-video baseline, our method is able to achieve a 10% improvement in recall (76.7% ours versus 66.7% baseline) at comparable precision (83.8% ours versus 83.0% baseline) in about half the annotation time (3.8 minutes ours compared to 7.1 minutes baseline). We demonstrate the effectiveness of our method by collecting multi-label annotations of 157 human activities on 1,815 videos.
We propose Deep3D, a fully automatic 2D-to-3D conversion algorithm that takes 2D images or video frames as input and outputs stereo 3D image pairs. The stereo images can be viewed with 3D glasses or head-mounted VR displays. Deep3D is trained directly on stereo pairs from a dataset of 3D movies to minimize the pixel-wise reconstruction error of the right view when given the left view. Internally, the Deep3D network estimates a probabilistic disparity map that is used by a differentiable depth image-based rendering layer to produce the right view. Thus Deep3D does not require collecting depth sensor data for supervision.
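A hedged sketch of the kind of differentiable, depth-image-based rendering step described here (illustrative NumPy only; shapes, names and the shift convention are assumptions rather than the released model):

```python
import numpy as np

def render_right_view(left, disparity_probs):
    """Blend horizontally shifted copies of the left view, weighted by a
    per-pixel probability distribution over candidate disparities.

    left:            (H, W) grayscale image
    disparity_probs: (D, H, W) non-negative weights summing to 1 over axis 0
    """
    right = np.zeros_like(left, dtype=float)
    for d in range(disparity_probs.shape[0]):
        shifted = np.roll(left, shift=-d, axis=1)   # shift content by d pixels
        right += disparity_probs[d] * shifted       # soft, differentiable selection
    return right                                    # border wrap-around ignored in this sketch
```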
Random projections have played an important role in scaling up machine learning and data mining algorithms. Recently they have also been applied to probabilistic inference to estimate properties of high-dimensional distributions; however, they all rely on the same class of projections based on universal hashing. We provide a general framework to analyze random projections which relates their statistical properties to their Fourier spectrum, which is a well-studied area of theoretical computer science. Using this framework we introduce two new classes of hash functions for probabilistic inference and model counting that show promising performance on synthetic and real-world benchmarks.
For AI systems to reason about real-world situations, they need to recognize which processes are at play and which entities play key roles in them. Our goal is to extract this kind of role-based knowledge about processes from multiple sentence-level descriptions. This knowledge is hard to acquire; while semantic role labeling (SRL) systems can extract sentence-level role information about individual mentions of a process, their results are often noisy and they do not attempt to create a globally consistent characterization of a process. To overcome this, we extend standard within-sentence joint inference to inference across multiple sentences. This cross-sentence inference promotes role assignments that are compatible across different descriptions of the same process. When formulated as an Integer Linear Program, this leads to improvements over within-sentence inference by nearly 3% in F1. The resulting role-based knowledge is of high quality (with an F1 of nearly 82).
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we generate causal embeddings cost-effectively by bootstrapping cause-effect pairs extracted from free text using a small set of seed patterns. Second, we train dedicated embeddings over this data, by using task-specific contexts, i.e., the context of a cause is its effect. Finally, we extend a state-of-the-art reranking approach for QA to incorporate these causal embeddings. We evaluate the causal embedding models both directly with a causal implication task, and indirectly, in a downstream causal QA task using data from Yahoo! Answers. We show that explicitly modeling causality improves performance in both tasks. In the QA task our best model achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7% (relative).
Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P3), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines.
QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference.
Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely sufficient to represent the gist of the complexity. In order for users to construct better mental models and understand complex data distributions, we also need criticism to explain what is not captured by prototypes. Motivated by the Bayesian model criticism framework, we develop MMD-critic, which efficiently learns prototypes and criticism designed to aid human interpretability. A human subject pilot study shows that MMD-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning. We also evaluate the prototypes selected by MMD-critic via a nearest prototype classifier, showing competitive performance compared to baselines.
A key challenge in sequential decision problems is to determine how many samples are needed for an agent to make reliable decisions with good probabilistic guarantees. We introduce Hoeffding-like concentration inequalities that hold for a random, adaptively chosen number of samples. Our inequalities are tight under natural assumptions and can greatly simplify the analysis of common sequential decision problems. In particular, we apply them to sequential hypothesis testing, best arm identification, and sorting. The resulting algorithms rival or exceed the state of the art both theoretically and empirically. | CommonCrawl |
The Hitchin component for $SL(n,R)$ is the component in the space of surface group representations into $SL(n,R)$ which can be deformed to the Fuchsian locus. The Hitchin component is in correspondence with the moduli space of $SL(n,R)$-Higgs bundles. I will introduce recent work with Brian Collier on asymptotic behaviors of families in the Hitchin component in terms of certain families of Higgs bundles. Namely, given a family of Higgs bundles obtained by scaling the Higgs field by $t$, we analyze the asymptotic behavior of the corresponding representations as $t$ goes to $\infty$ in two special cases. | CommonCrawl |
Let $\mathscr P$ be a proof system for $\mathcal L$.
That is, some logical formula $\phi$ is not a theorem of $\mathscr P$.
Suppose that in $\mathscr P$, the Rule of Explosion (Variant 3) holds.
Consistency is obviously necessary for soundness in the context of a given semantics.
Therefore it is not surprising that some authors obfuscate the boundaries between a consistent proof system (in itself) and a sound proof system (in reference to the semantics under discussion). | CommonCrawl |
I have been working on the topic of camera pose estimation for augmented reality and visual tracking applications for a while, and I think that although there is a lot of detailed information on the task, there are still a lot of confusions and misunderstandings.
I think the following questions deserve a detailed, step-by-step answer.
How do I compute homography from a planar marker?
If I have the homography, how can I get the camera pose?
It is important to understand that the only problem here is to obtain the extrinsic parameters. Camera intrinsics can be measured off-line and there are lots of applications for that purpose.
$\alpha_u$ and $\alpha_v$ are the scale factors in the $u$ and $v$ coordinate directions, and are proportional to the focal length $f$ of the camera: $\alpha_u = k_u f$ and $\alpha_v = k_v f$. $k_u$ and $k_v$ are the number of pixels per unit distance in the $u$ and $v$ directions.
$c=[u_0,v_0]^T$ is called the principal point, usually the coordinates of the image center.
$s$ is the skew, only non-zero if $u$ and $v$ are non-perpendicular.
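Putting these parameters together, the intrinsic matrix $K$ is commonly written as (a standard form, not quoted from the original post):

$$K = \begin{bmatrix} \alpha_u & s & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$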
A camera is calibrated when its intrinsics are known. This can be done easily, so it is not considered a goal in computer vision, but a trivial off-line step.
Camera extrinsics or External Parameters $[R|t]$ is a $3\times4$ matrix that corresponds to the Euclidean transformation from a world coordinate system to the camera coordinate system. $R$ represents a $3\times3$ rotation matrix and $t$ a translation.
Computer-vision applications focus on estimating this matrix.
In order to compute the homography we need world-camera point pairs. If we have a planar marker, we can process an image of it to extract features and then detect those features in the scene to obtain matches.
We just need 4 pairs to compute the homography using the Direct Linear Transform (DLT).
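As a minimal sketch of this step (not part of the original answer; the point coordinates below are made-up placeholders), OpenCV's findHomography can solve the DLT system:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main() {
    // Correspondences between marker (world-plane) points and detected image points.
    // These coordinates are illustrative placeholders, not real measurements.
    std::vector<cv::Point2f> markerPts = {{0, 0}, {100, 0}, {100, 100}, {0, 100}};
    std::vector<cv::Point2f> imagePts  = {{152, 80}, {240, 95}, {235, 180}, {148, 170}};

    // With exactly 4 pairs this is the plain DLT solution;
    // with more (noisy) matches, pass cv::RANSAC to reject outliers.
    cv::Mat H = cv::findHomography(markerPts, imagePts, 0);
    return 0;
}
```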
Due to redundancy it is necessary to normalize $[R|t]$ by dividing by, for example, element [3,4] of the matrix.
While explaining the two-dimensional case very well, the answer proposed by Jav_Rock does not provide a valid solution for camera poses in three-dimensional space. Note that for this problem multiple possible solutions exist.
This paper provides closed formulas for decomposing the homography, but the formulas are somewhat complex.
OpenCV 3 already implements exactly this decomposition (decomposeHomographyMat). Given a homography and a correctly scaled intrinsics matrix, the function provides a set of four possible rotations and translations.
The intrinsics matrix in this case needs to be given in pixel units, that means your principal point is usually (imageWidth / 2, imageHeight / 2) and your focal length is usually focalLengthInMM / sensorWidthInMM * imageHeight.
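A hedged sketch of that call (selecting the physically valid solution is left to the caller):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// H: homography from the previous step; K: calibrated intrinsics in pixel units.
void recoverPose(const cv::Mat& H, const cv::Mat& K) {
    std::vector<cv::Mat> rotations, translations, normals;
    int n = cv::decomposeHomographyMat(H, K, rotations, translations, normals);
    // Up to four candidate (R, t, n) triples are returned; the valid one is usually
    // chosen by requiring that known reference points project in front of the camera.
    (void)n;
}
```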
FFT of color images incorporated into an Object Recognition method? | CommonCrawl |
If we have an $n \times n$ positive semidefinite matrix $A$ and we have two decompositions such that $A = B B^T = C C^T$ for some $n \times n$ matrices $B$ and $C$.
Is it true that $B$ and $C$ are related by a unitary matrix? Specifically, is $B U = C$ for some unitary matrix $U$?
$A$ is hermitian iff $A=M-N$ for some $M,N$ positive semidefinite matrix?
How to prove Cholesky decomposition for positive-semidefinite matrices? | CommonCrawl |
In row-major order the address of element $[I,J]$ is given by address $= B + W[n(I-L_r) + (J-L_c)]$, where $B$ denotes the base address, $W$ denotes the element size in bytes, $n$ is the number of columns, $L_r$ is the first row number, and $L_c$ is the first column number.
A similar formula is given for address in column major order.
I would like to know how this formula is obtained.
To determine the address of an element in this list, we need to know how many elements come before it. For element $[I,J]$, this is the number of complete rows above row $I$ times the length of a row, which is $(I-L_r)\times n$, plus the number of elements before it in the current row, which is $J-L_c$.
Since the first element is at address $B$ and each element takes $W$ bytes, the addresses of the elements are $B$, $B+W$, $B+2W$, ... and, in general, if there are $k$ elements before you, your address is $B+kW$. For element $[I,J]$, we have calculated that $k=n(I-L_r)+(J-L_c)$.
For column-major, the argument is basically the same. If you understand the above, it should be easy to adapt it.
B is the base address where the first element resides. For simplicity, let's assume that the array indexes start at zero, so $L_r$ and $L_c$ are both zero.
So for the first element $[0,0]$, we have I=0 and J=0, and you get address = B + W[n(0-0)+(0-0)] = B.
In row-major arrays, the elements of row i are contiguous in memory, followed immediately by the elements of row i+1, etc.
Each row i+1 contains n elements of W bytes, so row i+1 starts in memory nW bytes after the previous row. Element $[0,0]$ is at address B, and element $[1,0]$ is at address B+W*n = B+W[n(1-0)+(0-0)].
Combining those two concepts, you can find any element [I,J] using the formula. The first term in the enclosed sum computes where the row starts in memory relative to the start, and the second term computes where the jth element starts relative to that row.
The terms $L_r$ and $L_c$ just generalize things if your indexes do not start with zero. By subtracting these from the indexes, you are treating the indexes as if they did start with base zero.
Here is an example using C. Notice that we can index element in array a2D using indexes, or your formula. Both work and in fact the C compiler treats them the same.
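The C listing the answer refers to did not survive here; a minimal reconstruction of the idea (the array name a2D is kept from the text, the sizes are chosen arbitrarily) might look like this:

```c
#include <stdio.h>

int main(void) {
    int a2D[4][5];                  /* 4 rows, n = 5 columns, W = sizeof(int) */
    char *B = (char *)&a2D[0][0];   /* base address */
    int n = 5, W = (int)sizeof(int);
    int I = 2, J = 3;               /* element to locate; here Lr = Lc = 0 */

    /* Address by ordinary indexing versus the row-major formula. */
    printf("&a2D[I][J]      = %p\n", (void *)&a2D[I][J]);
    printf("B + W*(n*I + J) = %p\n", (void *)(B + W * (n * I + J)));
    return 0;
}
```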
Now the best thing is to get out a piece of paper, draw out this array as it is laid out in memory assuming 4 bytes per int, and see if you can reproduce the calculations to find the addresses of some array elements.
Try it again using different size elements, and with $L_r$ and $L_c$ set to values other than zero.
I noticed that you included the label Java but I provided an answer in C.
Because Java stores Arrays as objects. Each row is an Array object with an array of n Integers. To connect up the rows, there is an Array object that contains the object references to each of the row objects.
Each row object is stored contiguously internally, but the rows themselves may be placed non-contiguously on the heap, so the array as a whole is not contiguous and this formula will not work.
You still access an array in Java using the notation A[i][j], but the index i provides an object reference to the ith row object, and j provides an index into that array, so your formula will not work in that case.
| CommonCrawl |
If at [[Deligne completeness theorem]] we read > When in the early 70s the connection between topos theory and logic became manifest William Lawvere (1975) pointed out that the theorem may be viewed as a variant of the classical Gödel-Henkin completeness theorem for first-order logic, what could be said in a similar vein at its $(\infty, 1)$-generalization [[Deligne-Lurie completeness theorem]]? What would play the role of coherent logic? | CommonCrawl |
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1159-1167, 2017.
The stochastic composition optimization proposed recently by Wang et al. minimizes the objective with the composite expectation form: $\min_x (\mathbb{E}_i F_i \circ \mathbb{E}_j G_j)(x)$. It summarizes many important applications in machine learning, statistics, and finance. In this paper, we consider the finite-sum scenario for composition optimization: $\min_x f(x) := \frac{1}{n} \sum_{i=1}^{n} F_i \left( \frac{1}{m} \sum_{j=1}^{m} G_j(x) \right)$. Two algorithms are proposed to solve this problem by combining the stochastic compositional gradient descent (SCGD) and the stochastic variance reduced gradient (SVRG) technique. A constant linear convergence rate is proved for strongly convex optimization, which substantially improves the sublinear rate $O(K^{-0.8})$ of the best known algorithm.
| CommonCrawl |
This puzzle provides an interesting context which challenges students to apply their knowledge of the properties of numbers. Students need to work with various types of numbers at the same time and consider their relationships to each other (e.g. primes, squares and specific sets of multiples).
Show a $3 \times 3$ grid with six headings on the board and ask students to suggest numbers that could fit into each of the nine segments (an easy start, but useful revision of vocabulary).
The students (ideally working in twos or threes) can then be set the challenge of filling the $5 \times 5$ board with the available numbers. The challenge is to fit all 25 number cards onto the grid, so if they haven't been able to find a 'home' for all the number cards, suggest that they move some of the property cards around.
There is more than one possible solution so students could display their different arrangements. When a student/pair finishes allocating numbers to a grid, they should record the grid headings and how many numbers they placed.
A concluding plenary could ask students to share any insights and strategies that helped them succeed at this task.
This is definitely one that needs them to persevere. My class spent a full hour on this in groups and not one group found a solution.
Which numbers are hard to place?
Encourage students to pay attention to the order in which they allocate numbers to cells - recognising the key cells to fill, and the key numbers to place.
Teachers can adapt the task by changing the heading cards or by asking students to create a new set of heading cards and a set of numbers that make it possible to fill the board. Students could then swap their new puzzles.
Is it possible to create a puzzle that can be filled with $25$ consecutive numbers?
Some students could be given a larger range of numbers to choose from, or offered a smaller grid and appropriately restricted numbers - this could work with students choosing from the full set of $10$ categories, or with an adapted set.
Handouts for teachers are available here (word document, pdf document), with the problem on one side and the notes on the other.
Factors and multiples. Resilient. Working systematically. Divisibility. Triangle numbers. Square numbers. Multiplication & division. Resourceful. Prime numbers. Practical Activity. | CommonCrawl |
Does the charge of a body get affected by relativity, or is it constant in all frames of reference?
The Lorentz transformations affect position and time, not the charge. Therefore, charge is invariant but the current (i.e. $q\mathbf v$) is not.
That the charge of a body is constant in all frames of reference is proved in several textbooks.
Not the answer you're looking for? Browse other questions tagged special-relativity charge inertial-frames or ask your own question.
What axiomatizations exist for special relativity?
Why do charge on an object remains unaffected by the motion of the object?
In what ratio does the charge distribute if a charge and uncharged body touch each other?
Given the 1st Postulate of SR, doesn't the 2nd Postulate go without saying?
What happens if the moving frame in special relativity is non-inertial? | CommonCrawl |
$\alpha$ and $\beta$ are the angles of two right-angled triangles, and the compound angle formed from them is $\alpha + \beta$. The sine of the sum of two angles can be expanded in terms of sines and cosines of both angles.
On the basis of this trigonometric formula, the sum of two arcsine functions can be expressed in mathematical form.
Take $\sin \alpha = x$ and $\sin \beta = y$.
According to the Pythagorean identity of sine and cosine functions, express $\cos \alpha$ and $\cos \beta$ in square-root form as $\sqrt{1-\sin^2 \alpha}$ and $\sqrt{1-\sin^2 \beta}$ respectively.
As per our assumption, $\sin \alpha = x$ and $\sin \beta = y$. Now, transform the above two equations purely in terms of $x$ and $y$.
Transform the sine of the sum of angles rule purely in terms of $x$ and $y$ to obtain the formula for the sum of two inverse sine functions.
Express the sine function in its inverse form.
Replace $\alpha$ and $\beta$ in terms of $x$ and $y$.
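Carrying out these substitutions gives the identity (valid, for instance, when $x, y \in [-1, 1]$ and $x^2 + y^2 \le 1$, so that the left-hand side stays within the range of arcsine):

$$\arcsin x + \arcsin y = \arcsin\left(x\sqrt{1 - y^2} + y\sqrt{1 - x^2}\right)$$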
It is a property of inverse trigonometry that the sum of two inverse sine functions can be written in the form of a single inverse sine function. | CommonCrawl |
What is the time complexity of increasing the precision of finding matrix eigenvalues?
There are various algorithms that output the eigenvalues of an $n \times n$ matrix in time $O(n^3)$. However, I can't find anywhere that tells me about the precision of the output of the algorithm. That is, how does the time complexity grow with the number of bits of precision I require in the answer?
Furthermore, how does this depend on the number of bits we use to specify the initial matrix? For example, what if I specify the matrix elements to exponential precision in $n$?
Any starting point for this would be helpful.
the input matrix has coefficients that are algebraic numbers (*): then the eigenvalues are also algebraic numbers and can be determined effectively; one can also determine the multiplicities of the eigenvalues. There exist very good algorithms to get rational approximations of algebraic numbers, where the bit complexity is $O((b+n)n^2)$ (neglecting some log factors), where $b$ is the number of bits and $n$ is the degree of the polynomial that represents the algebraic number (which in turn depends polynomially on the height of the coefficients of the matrix).
the input matrix has coefficients that can be arbitrary real numbers but we can query arbitrary rational approximations of them: this fits into the theory of Computable Analysis and things get complicated. It is known that the eigenvalues are uniformly computable from the input coefficients (because they are roots of the characteristic polynomial); the algorithm runs in polynomial time (in the precision), and probably in linear time (if we ignore the cost of computing approximations of the input). However, the eigenvectors are not computable, nor are the multiplicities, unless you know them in advance.
you do not care about the representation of numbers and assume all arithmetic operations have a unit cost: this is the BSS model and gives a very precise arithmetic complexity that is logarithmic in the precision (i.e. $O(\log b)$ where $b$ is the number of bits). But of course this completely neglects the complexity of representing and operating on the numbers and assumes that equality testing is a unit operation (whereas it is uncomputable in general). I personally find this model to be very inadequate to express the complexity of this problem.
Based on your question, I suspect your matrix has rational inputs (which are algebraic numbers of degree 1), thus it would fit in the first case. In this case, I think (but have not checked) that the bit complexity will be linear (maybe with polylogarithmic factors) in the precision, but the constant will depend on the size and numbers in the input matrix (most probably in a polynomial fashion).
(*) algebraic numbers can be represented exactly with a finite number of bits, and all arithmetic operations (and more) on algebraic numbers can be performed effectively; however, the complexity depends on the height of the numbers, and thus on the field extension in which the coefficients of the input lie.
What is the complexity of (possibly succinct) Nurikabe?
Complexity of computing the discrete Fourier transform?
Is there a polynomial time algorithm to determine if the span of a set of matrices contains a permutation matrix?
Subset sum solver. Worth continue working on this method?
Consequences of the existence of the following algorithm: does it imply any complexity class separation / collapse?
What is the time complexity of base conversion on a multi-tape Turing machine? | CommonCrawl |
I am trying to find an $x$ and $y$ that solve the equation $15x - 16y = 10$. Usually in this type of question I would use the Euclidean algorithm to find $x$ and $y$, but it doesn't seem to work for this one. Computing the GCD just gives me $16 = 15 + 1$ and then $1 = 16 - 15$, which doesn't really help me. I can do this question with trial and error but was wondering if there was a method to it.
The pairs $a = -1 + 16t$, $b = 1 - 15t$ (for integer $t$) are all the solutions of $15a+16b=1$, and from here just multiply by $10$.
In this case you don't really need the full power of the Euclidean algorithm. Since you know $$ 16 - 15 = 1 $$ you can just multiply by $10$ to conclude that $$ 16 \times 10 + 15 \times(-10) = 10. $$ Now you have your $y$ and $x$.
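For larger coefficients, where a combination is not obvious by inspection, the extended Euclidean algorithm mechanizes the same idea. A rough C++ sketch (not part of the original answers) applied to this equation:

```cpp
#include <cstdio>
#include <tuple>

// Returns (g, a, b) with a*m + b*n == g == gcd(m, n).
std::tuple<long, long, long> extGcd(long m, long n) {
    if (n == 0) return {m, 1, 0};
    auto [g, a, b] = extGcd(n, m % n);
    return {g, b, a - (m / n) * b};
}

int main() {
    // Solve 15x - 16y = 10 by writing it as 15*x + 16*(-y) = 10.
    auto [g, a, b] = extGcd(15, 16);   // gives 15*a + 16*b == 1
    long x = a * 10;                   // scale the particular solution by 10
    long y = -(b * 10);
    std::printf("x = %ld, y = %ld\n", x, y);   // one valid pair, e.g. x = -10, y = -10
    return 0;
}
```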
Find the value of $x,y$ for the equation $12x+5y=7$ using number theory .
How to solve an equation involving the floor of a number?
Solving equation using Euclidean Algorithm? | CommonCrawl |
A circle with centre $(a,b)$ and radius $R$ consists of the points $(x,y)$ satisfying $(x-a)^2+(y-b)^2=R^2$, where $a$ and $b$ are the coordinates of the centre.
A straight line passing through two points of the circle is called a secant; the segment of it which lies within the circle is called a chord. Chords which are equidistant from the centre are equal. A chord passing through the centre of the circle is called its diameter. The diameter perpendicular to a chord divides it in half.
The two parts into which two points of the circle divide the circle are called arcs.
An angle formed by two radii of the circle joining its centre to the ends of an arc is called a central angle, and the corresponding arc is the arc on which it depends. The angle formed by two chords with a common end is called an inscribed angle. An inscribed angle is equal to half the central angle depending on the arc confined between the ends of the inscribed angle. The length of the circle equals $C=2\pi R$, while the length of an arc is $l=(\pi Ra^\circ)/180^\circ=R\alpha$, where $a^\circ$ is the size (in degrees) of the relevant central angle and $\alpha$ is its radial measure.
If through any point of a plane several secants are drawn towards the circle, the product of the distances from that point to both points of intersection of each secant with the circle is a constant number (for the given point); in particular, it is equal to the square of the length of the segment touching the circle from that point (the power of the point). The totality of all circles in the plane in relation to which a given point has an identical power is a bundle of circles. The totality of all common circles of two bundles in one plane is called a pencil of circles.
The part of the plane bounded by a circle and containing its centre is called a disc. The part of the disc bounded by an arc of the circle and by the radii leading to the ends of this arc is called a sector. The part of the disc between an arc and its chord is called a segment.
The area of the disc is $S=\pi R^2$, the area of a sector is $S_1=\pi R^2(a^\circ/360^\circ)$, where $a^\circ$ is the measure in degrees of the appropriate central angle, while the area of a segment is $S_2=\pi R^2(a^\circ/360^\circ)\pm S_\Delta$, where $S_\Delta$ is the area of the triangle with vertices at the centre of the circle and at the ends of the radii bounding the relevant sector. The sign "-" is used if $a^\circ<180^\circ$, and the sign "+" if $a^\circ>180^\circ$.
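As a quick worked example of these formulas (chosen here for illustration, not from the original article): for $R = 2$ and a central angle $a^\circ = 90^\circ$,

$$S_1 = \pi R^2\,\frac{90^\circ}{360^\circ} = \pi, \qquad S_\Delta = \tfrac{1}{2}\,R\cdot R = 2, \qquad S_2 = S_1 - S_\Delta = \pi - 2 \approx 1.14.$$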
A circle on a convex surface is locally almost isometric to the boundary of a cone of the convex surface (Zalgaller's theorem). A circle on a manifold of bounded curvature can have a highly complex structure (i.e. corner and multiple points may exist, the circle may have several components, etc.). Nevertheless, the points of a circle on a manifold of bounded curvature can be ordered naturally, turning the curve into a cyclically ordered set (see ).
For circles in more general spaces, such as Banach, Finsler and other spaces, see Sphere.
For circles in more general spaces, see also [a1], Chapt. 10.
| CommonCrawl |
Graphene in spintronics has so far meant a material with low spin-orbit coupling which could be used as high-performance spin current leads. If the spin-orbit interaction could be enhanced by an external effect, the material could serve also as an active element in a spintronics device such as the Das-Datta spin field effect transistors. We show that by intercalation of Au under graphene grown on Ni(111), a Rashba-type spin-orbit splitting of $\sim$ 100 meV can be created in a wide energy range while the Dirac cone is preserved and becomes slightly p-doped. We discuss different superstructures of Au under the graphene which are observed in the experiment. Ab initio calculations indicate that a sharp graphene-Au interface at the equilibrium distance accounts for only $\sim$ 10meV spin-orbit splitting and enhancement can occur due to Au atoms in the hollow position that get closer to graphene while preserving the sublattice symmetry. For the system graphene/Ir(111) we observe a large splitting of the Dirac cone as well. The large lattice mismatch of this system allows us to investigate properties of the pseudospin that are related to the structure of minigaps that occur at the zone boundary of the superstructure. We also report on the giant Rashba splitting of an Ir(111) surface state which persists underneath the graphene. Finally, we re-investigate with p(1 $\times$ 1) graphene/Ni(111) and Co(0001) typical examples where the sublattice symmetry breaking by the substrate is believed to lead to a large band gap at the Dirac point. We show that this is not the case and the Dirac point of graphene stays instead intact, and we discuss implications of this finding.
*Supported by SPP 1459 of the Deutsche Forschungsgemeinschaft. | CommonCrawl |
When you study a topic for the first time, it can be difficult to pick up the motivations and to understand where everything is going. Once you have some experience, however, you get that good high-level view (sometimes!) What I'm looking for are good one-sentence descriptions about a topic that deliver the (or one of the) main punchlines for that topic.
For example, when I look back at linear algebra, the punchline I take away is "Any nice function you can come up with is linear." After all, multilinear functions, symmetric functions, and alternating functions are essentially just linear functions on a different vector space. Another big punchline is "Avoid bases whenever possible."
What other punchlines can you deliver for various topics/fields?
Homological algebra - In an abelian category, the difference between what you wish was true and what IS true is measured by a homology group.
Functional analysis: Everything you know from linear algebra is true, under the right conditions; otherwise it's false.
Complex Analysis: Holomorphic functions are just rotations and dilations up to the first order.
Calculus: Differentiation is approximation by a linear map.
Real Analysis: Get your hypotheses right, or suffer the counter-examples!
Complex Analysis: Taylor series behave the way you want them to in real analysis.
One punchline in algebraic geometry is that all commutative rings are actually the ring of functions on some space.
Operator theory: all separable infinite-dimensional Hilbert spaces are isomorphic, but they aren't all the same and moving your problem between them works wonders.
Analytic combinatorics: generating functions are awesome.
"Algebraic topology is the "art" of Not doing the integral"
Homotopy theory is an attempt to do homological algebra in non-abelian categories.
Algebraic geometry: CommRing behaves a lot like Set$^{\mathrm{op}}$.
"Free" is just another word for nothing to do on the left.
Logic teaches us that (untrained) intuition is often wrong, but that when it's right, it's for the wrong reason.
Homological algebra: How badly do modules fail to behave like vector spaces?
Algebraic group theory: In order to differentiate a function on a Lie group, we just have to consider the group over $\mathbb R\left[\varepsilon\right]$ for an infinitesimal $\varepsilon$ ($\varepsilon^2=0$).
Semisimple algebras: The representations of a sufficiently nice algebra mirror a structure of the algebra itself, namely how it breaks into smaller algebras.
$n$-category theory: all the obvious isomorphisms, homotopies, congruences you have always been silently sweeping under the rug are coming back to have their revenge.
Modern algebraic geometry (schemes instead of varieties): let's have the beauty of geometry without its perversions.
How many of these did I get totally wrong?
Did I see that quote in Havil's book Gamma?
Noncommutative Ring Theory: If it is not modules, then it is idempotents.
Geometric group theory: the large-scale geometry of a group is invariant under quasi-isometry.
Topological Vector Spaces: You can make an infinite dimensional space have every nice property of finite dimensional spaces- but not all of them at once.
Configuration space integrals: Don't take limits- compactify!
Dror Bar-Natan explained this punchline to me when I was just starting grad school.
Terry Tao, in a post on Google Buzz, has given an overview of mathematics in the form of multiple "punch-lines" of the requested variety.
Algebra is the mathematics of the "equals" sign, of identity, and of the "main term"; analysis is the mathematics of the "less than" sign, of magnitude, and of the "error term".
Algebra prizes structure, symmetry, and exact formulae; analysis prizes smoothness, stability, and estimates.
Most of geometry would not be classified as either algebra or analysis, but simply as geometry.
Quantum mechanics is the algebraic geometry of $n$-particle Hamiltonian flows and Lindbladian compressions as pulled-back onto the natural $r$-indexed stratification of $r$'th secant varieties of $n$-factor Segre varieties whose $r\to\infty$ limit is … $n$-particle Hilbert space.
… and it turns out to be very useful (and great fun) to rewrite standard quantum physics texts like Charles Slichter's Principles of Magnetic Resonance based upon this one sentence definition.
Joseph Landsberg's recent Bull. AMS review "Geometry and the complexity of matrix multiplication" (2008), which has been praised in multiple MathOverflow posts, provides an overview of the broad utility—despite their unwieldy name—of stratifications of secant varieties of Segre varieties (which extends far beyond quantum physics).
I'll offer two punchlines for Galois Theory.
There's a one-to-one, order-reversing correspondence between intermediate fields of a finite, normal, separable extension $K$ of $F$, and subgroups of the group of automorphisms of $K$ fixing $F$.
A polynomial is solvable in radicals if and only if the Galois group of its splitting field is a solvable group.
Geometric representation theory: keep translating the problem until you run into Hard Lefschetz, then you are done.
Etale cohomology - you can apply fixed-point theorems from algebraic topology to Galois actions on varieties.
Not the answer you're looking for? Browse other questions tagged soft-question big-picture big-list or ask your own question.
How should the Math Subject Classification (MSC) be revised or improved?
Alternating forms as skew-symmetric tensors: some inconsistency?
Can one make high-level proofs about chess positions?
What definitions were crucial to further understanding? | CommonCrawl |
Will be using this site much less often until things slow down a bit... or not.
22 If a coin toss is observed to come up as heads many times, does that affect the probability of the next toss?
18 Why do parabolas' arms eventually become parallel?
15 Why is there no product/quotient rule for integration?
13 Derivative of $e^x:$ What's wrong with this proof?
13 Is $\infty \times 0$ just $\frac00$? | CommonCrawl |
For a gas of identical particles, when we cool it down to extremely low temperature we see one of two types of behaviour, fermionic or bosonic, depending on the symmetry of the wavefunction with respect to argument interchange. This happens because particles become indistinguishable at the quantum level and their wavefunction will be either symmetric or anti-symmetric.
But what if we consider a gas of non-identical particles? Let us say their masses are different. If we cool down this gas, then because of the different masses the particles will still be distinguishable.
What will happen then? Will quantum effects (indistinguishability, wave-particle duality) still appear? Because the masses make the particles distinguishable, could we use the Maxwell distribution at very low temperature too?
I would start by saying that Bose-Einstein and Fermi-Dirac distributions are all single-particle equilibrium distributions.
So in a sense, a generic many-body state is always "Boltzmann" distributed.
If you can write your $|\Psi_i\rangle$ as a single-particle state $|n\rangle$, then you can show that from the above you can get the Bose-Einstein distribution (by letting the sum go from $n=0$ to $\infty$ and taking the limit of the geometric series) or the Fermi-Dirac distribution (by restricting $n$ to $0,1$).
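A sketch of the computation this alludes to (standard textbook material, not quoted from the answer): for a single mode of energy $\epsilon$ with chemical potential $\mu$ and $\beta = 1/k_B T$, the mean occupation is

$$\langle n \rangle = \frac{\sum_n n\, e^{-\beta(\epsilon-\mu)n}}{\sum_n e^{-\beta(\epsilon-\mu)n}} =
\begin{cases}
\dfrac{1}{e^{\beta(\epsilon-\mu)} - 1}, & n = 0, 1, 2, \ldots \text{ (Bose-Einstein)},\\[2ex]
\dfrac{1}{e^{\beta(\epsilon-\mu)} + 1}, & n = 0, 1 \text{ (Fermi-Dirac)}.
\end{cases}$$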
Also, the irreducible representations of the Poincaré group are labelled by the mass $m$ and the spin $s$. So as soon as you have particles that have different mass, they are intrinsically different and obey their own Bose/Fermi/... statistical distribution.
So, if you have non-interacting distinct particles and reduce the temperature (/density), eventually they will independently display quantum effects. I.e. if you have two bosonic species, they will both Bose condense, not caring about what the other does.
If, however, the two species are interacting, then it is a completely different story and it is not trivial to answer. You would have to do numerics to know the answer.
At some point, confinement and quantization will become important, and this will mean that the Maxwell distribution no longer applies.
For example, He-4, which has six distinguishable particles: two electrons, two protons, two neutrons (two different spins of each), all in the ground state of a potential well.
Why does a quantum gas lose its "quantum nature" in the limit $(\epsilon-\mu)/k_BT\gg1$?
Can two bosons be called identical although their momenta are different?
In a gravitational field, will the temperature of an ideal gas will be lower at higher altitude? | CommonCrawl |
An amateur proof that the popular game is NP-hard.
Boulder Dash is a videogame created by Peter Liepa and Chris Gray in 1983 and released for many personal computers and console systems under license from First Star Software. Its concept is simple: the main character must dig through caves, collect diamonds, avoid falling stones and other nasties, and finally reach the exit within a time limit. In this report we show that the decision problem "Is an $N\times N$ Boulder Dash level solvable?" is NP-hard. The constructive proof is based on a simple gadget that allows us to transform the Hamiltonian cycle problem on a 3-connected cubic planar graph to a Boulder Dash level in polynomial time.
NOTE: the same result has been proved by G. Viglietta in the paper "Gaming Is a Hard Job, But Someone Has to Do It!"; his proof, which is embedded in a more general and powerful framework that can be used to prove the complexity of games, doesn't require the Dirt element.
| CommonCrawl |
Abstract: The model Hamiltonian of a two-dimensional Bose liquid (proposed earlier by Kane, Kivelson, Lee and Zhang as the Hamiltonian which has Jastrow-type wavefunctions as the ground-state solution), is shown to possess nonrelativistic supersymmetry. For the special value of the coupling constant $\alpha=1/2$ the quantum mechanics described by this Hamiltonian is shown to be equivalent to the dynamics of (complex) eigenvalues of random Gaussian ensemble of normal complex matrices. For general $\alpha$, an exact relation between the equal-time current-current and density-density correlation functions is obtained, and used to derive an asymptotically exact (at low wavevectors q) spectrum of single-particle excitations beyond the superfluid ground-state (realized at low $\alpha$'s). The ground-state at very large $\alpha$ is shown to be of ``Quantum Hexatic" type, possessing long-range orientational order and quasi-long-range translational order but with zero shear modulus. Possible scenaria of the ground-state phase transitions as function of $\alpha$ are discussed. | CommonCrawl |
Teach our Personalized Swim Instruction© curriculum to classes of only 4 students!
In 1987, Jeff Kelly began creating the Personalized Swim Instruction© curriculum. We offer this unique curriculum at our indoor heated facility from April through August, and at homes associations during June and July. As detailed in the "Who's Jeff" page, 2019 will be another "regrouping" year for our company. For the past eight years, Jeff has been dealing with health challenges and has decreased the volume and intensity of the corporation to focus on his health. As a result, we are not teaching at as many locations as we had in years past. However, we are in need of a few new instructors this season.
No previous swim instruction experience or certification is necessary. You will be paid to learn our curriculum and enhance your swimming ability. In addition to small classes, we offer our customers professional instruction. These are the two main aspects of our services that set us apart from other swim instruction providers. Therefore, we are seeking educated professionals.
While we hire almost exclusively elementary education professionals, we do occasionally hire exceptional university juniors and seniors pursuing degrees in elementary education, secondary education professionals, and professionals with degrees in other humanities-related fields, such as social work and psychology. However, candidates in the latter three categories must have significant experience with children age three to ten.
We believe that learning to swim is as important as any subject children learn in school. We require each instructor to gain an in-depth knowledge of our curriculum and techniques. Jeff Kelly and his team of "Instructor Mentors" will train you. These mentors will work with you to enhance your swimming and teaching skills throughout the summer.
Do I enjoy working with children age three to ten?
Do I enjoy being in the water? Would I enjoy being in the water at least 15-20 hours per week?
Do I enjoy learning new teaching techniques?
Do I enjoy being challenged?
Do I graciously accept and implement critique?
If you can not answer "yes" to all these questions, then this job is not for you. We make no apologies for our candor. We are seeking people who truly love to work with children, enjoy the aquatic environment, have a passion for learning new teaching techniques, thrive on challenges and are not sensitive to critique. It is also important that you understand and accept the administrative component of this position. Specifically, you will be required to complete detailed "Student Ability Reports" (grade cards). While this administrative component is certainly less exciting than working with the students, it is an extremely important responsibility. As previously stated, we offer our customers professional instruction. That includes providing them with a professional assessment of each student's ability and progress.
Regarding the acceptance and implementation of critique, Jeff elaborates, "Though I began creating the Personalized Swim Instruction© program in 1987, and I am extremely confident in my abilities as an educator; I accept that I can learn something from everyone. Through the years, I have learned new and creative ideas from my instructors. I feel that my main strength as a business owner, employer and educator is my ability to listen to others and to accept and implement critique and new ideas. Therefore, I expect the same from my employees. Each summer, my goal is to surround myself with passionate, motivated and like-minded people. I accept that I can learn from you and I expect the same attitude from you. If this sounds like the kind of employment relationship that you would enjoy, please continue reading."
Please thoroughly review the information below before submitting your online application. After reviewing your application, we will contact you to schedule an interview. During the interview, candidates sometimes inquire about how and why Jeff started this program, where and when they would teach and how the training program is conducted, etc. Hiring and training new instructors is only one of our many responsibilities. Therefore, we need to use our time efficiently. When we speak with you, it is important that we spend most of our time learning about you – not speaking about the corporation. Of course, we can dedicate some interview time for clarification of information you will read here on this website.
So that we can spend our time getting to know you, please thoroughly review this section of the website. In addition, please read Who's Jeff? to learn about the history of our company. To familiarize yourself with the curriculum, please see Curriculum. Below, you will find most of what you need to know regarding the training program and teaching schedule for this summer.
CPR Certification: To secure a position as a Swim Instructor, you must obtain CPR certification from a nationally recognized certification organization prior to your first day of autonomous teaching.
This training program is specifically designed to prepare you to teach our Abilities 1 through 3 (Basic Safety and Freestyle). The instructors who teach Abilities 4 through 6 (Backstroke, Breaststroke and Butterfly) receive additional training not itemized here. This additional training is very personalized. The amount of time required depends largely on one's ability to perform those strokes.
Student Teaching & Assessment: You will teach an actual session (Session 4) of classes under the guidance of your mentors. You will teach four classes: At least two classes of Ability 1s\2s and two classes of Ability 2s\3s. During this phase of the training, the mentors will guide, assist and then grade your swimming and teaching techniques. After "Shadowing" and an intense week of "Swimming & Teaching Techniques" training, this two-week period of hands-on time with the children will provide you with the skills and confidence needed for autonomous teaching. Monday, June 3 through Thursday, June 6.
1. Training in and testing of the administrative components of your role as a professional Swim Instructor. One such administrative component is the completion of detailed "Student Ability Reports" for each student whom you teach.
2. Continued fine-tuning of your swimming and teaching techniques.
We offer an ideal schedule for teachers who want to work part-time, but also want to have an enjoyable summer with time for vacation. Teaching requires only a Monday through Friday commitment. The classes are Monday through Thursday of each week. Fridays are reserved for weather related make-ups. We have teaching schedules available in the morning, early afternoon to late afternoon, late afternoon to evening, and evening only. We will work with you to create a teaching schedule that meets our mutual needs.
Paid Training: Observation of Experienced Instructors Between 04/22 and 05/16. Determined individually for each instructor. We will customize a "shadowing" schedule for you during our "Session 3" classes.
Paid Training: Swimming & Teaching Techniques 05/28 to 05/31 See the table above titled, "2019 Summer Swim Instructor Training Program".
Paid Training: Student Teaching ("Session 4" for the Customers), Administrative Responsibilities of the Position, Continued Swimming & Teaching Techniques, Final Assessment 06/03 to 06/14 See the table above titled, "2019 Summer Swim Instructor Training Program".
Session 5 (Sessions 1 - 3 are Spring Sessions. Session 4 is the Student Teaching, etc. portion of your paid training. Session 5 will be your first autonomous session). 06/17 to 06/28 We have teaching schedules available in the morning, early afternoon to late afternoon, late afternoon to evening, and evening only.
VACATION 06/29 to 07/07 Enjoy a guaranteed 9 days of vacation!
Session 6 07/08 to 07/19 We have teaching schedules available in the morning, early afternoon to late afternoon, late afternoon to evening, and evening only.
VACATION ... until you return to your school district meetings, etc.
Or you may request to teach one or both of the final two sessions: Enjoy more vacation before returning to the classroom!
Session 7 07/22 to 08/02 Late afternoon to evening and/or evening only.
Session 8 08/05 to 08/16 Late afternoon to evening and/or evening only.
The customer will score your performance based on a 0 to 100 point system. For each evaluation in which you receive a score of 95 to 100, you will receive $2.00. Receiving a $2.00 bonus per student each session can increase your actual hourly wage by as much as $1.60 per hour. Example: If you teach from 8:00 a.m. to 1:00 p.m. each day, you will work 20 hours per week, which is 40 hours for a two-week session. If you make $10.25 per hour, you will earn $410.00 per session (40 hours per session x $10.25/hour = $410.00). The average shift from 8:00 a.m. to 1:00 p.m. consists of 8:00 a.m. to 10:20 a.m. (4 classes) and 10:40 a.m. to 1:00 p.m. (4 classes). Thus, you will teach as many as 32 students per session (8 classes x 4 students/class = 32 students). If each parent gives you a score of 95-100, you will be paid an extra $64.00 for that two-week session (32 students x $2.00/student). $410.00 per session from your hourly earnings + $64.00 per session from your performance bonuses equal $474.00 of total earnings for the session. Thus, by dividing $474.00 by the actual hours you worked, you will have actually earned $11.85 per hour ($474.00 / 40 hours = $11.85/hour).
By applying online, you are confirming that you have answered "yes" to all of the questions in the first part of this employment section. In addition, you are confirming that you have thoroughly reviewed this entire section regarding the training program and the summer schedule. Upon receiving your online application, we know that you are serious enough about this opportunity to have invested your time in reading the entire section. With that in mind, thank you for your interest and your time. We look forward to receiving your online application and speaking with you soon.
We provide lifeguards as an integral part of our Aquatic Facility Management services. We are seeking professional educators to mentor and manage our lifeguards. This role with our company provides an excellent opportunity to hone one's leadership skills without the expectation of being the sole manager from Memorial Day through Labor Day; and without the expectation of being "on call". When your shift is done for the day, you are done for the day! You will not have to solve staffing or pool-related problems before or after your shift as is the case with most pool manager positions with the public pools, country clubs and other pool management companies. Furthermore, you will determine your commitment level (20, 25, 30, 35 or 40 hours per week) as well as your specific availability. Do you have one or more hard-earned trips planned? Take the time off and enjoy! Have a regular weekly social night planned with your friends? Enjoy it. Via our online scheduling software, you will input your "time-off requests" (the term used in the software) but these are really "time-off-notifications" because we will approve them and the software will then show you as unavailable for those dates and times. We will hire accordingly, to ensure that we have enough Leaders \ Managers to cover all of the hours; so that no one member of the leadership team is over-burdened with the responsibilities. This will also allow members of the leadership team to trade shifts as needed. This level of flexibility will further lessen the burden for each leader, thereby allowing you to enjoy your summer vacation!
1. You will earn supplemental income utilizing your experience as a professional educator and mentor.
2. You can accomplish benefit #1 without the "before and after hours" expectations usually associated with the role of a pool manager.
3. You get to determine your commitment level and availability!
4. You can trade shifts with other members of the leadership team and thus have flexibility that allows for spontaneity, as you enjoy your hard-earned summer vacation from the school district.
No previous lifeguard experience is necessary. To secure a position as a Lifeguard Leader \ Pool Manager, you must obtain a lifeguard certification from a nationally recognized certification organization. We are offering two American Red Cross Lifeguard Certification courses in late May. Please read the "Lifeguard" section below for the details about the lifeguard certification courses.
Lifeguard Leaders \ Pool Managers: Compensation ranges from $10.00 to $12.00 per hour; and will be determined based on a combination of experience, interview, availability and degree\level of commitment. Furthermore, our Lifeguard Course Tuition will be FREE (saving you $199.00!) contingent upon fulfillment of your commitment. You can also earn BONUSES. Please scroll to the bottom of this page for the details.
Summer Employment that Makes a Difference!
We provide lifeguards as an integral part of our Aquatic Facility Management services. Part-time to full-time positions are available at beautiful homes associations from Memorial Day Weekend through Labor Day Weekend.
Every season, our Lifeguards make rescues and save lives! We believe Lifeguarding is the most important job available to high school and college students. No other employment opportunity places young people in the role of "First Responder". Lifeguards are trained to save lives and must remain prepared to do so at any moment. Nothing is more important than protecting and saving lives. We are seeking young people who: are 15 years of age (by May 25) and older, desire to make a difference in the lives of others; pursue opportunities to demonstrate excellence; strive to remain physically fit; challenge themselves; consider hard work a personal standard; explore opportunities to sharpen their communication and problem solving skills; desire to work among like-minded paraprofessionals; and demonstrate an interest in maintaining the level of vigilance necessary to become a superior Lifeguard. Join us to protect and save lives!
To secure a position as a lifeguard, you must be at least 15 years of age (by May 25); and you must obtain a lifeguard certification from a nationally recognized certification organization. We are offering two American Red Cross Lifeguard Certification courses in late May. We may offer a July course as well. See below for details regarding the course dates\times.
Curriculum: Our Lifeguard Instructors teach the American Red Cross Lifeguard Certification curriculum. Successful completion of this course will result in a two-year American Red Cross Lifeguard \ First-Aid \ CPR Certification.
Requirements: These classes are available to students 15 years of age (by May 25) and older, who meet the swimming ability requirements below. Employment with Jeff Kelly inc. is a requirement to enroll in our Lifeguard Certification Courses.
Swimming Ability Requirement #1: Swim 25 yards (the length of a standard lap-swim pool) without stopping.
Swimming Ability Requirement #2: Hold breath for 1 minute as you swim \ surface-dive to the bottom of a pool with a deep-end of 10 feet; and return to the surface with sufficient energy to swim back to the edge of the pool.
2019 Course Schedules with Jeff Kelly inc.: Employment with Jeff Kelly inc. is a requirement to enroll in our Lifeguard Certification Courses.
May Course #1: Weekend. You can be working by May 25!
Sat, May 18 & Sun, May 19 10:00 a.m. to 6:00 p.m. TBD. The specific locations will be posted by April 30.
This course may be at two different locations. We are currently working out the details, which will be posted by or before April 30. We will enable online enrollment for this course at that time.
May Course #2: Weeknights. You can be working by May 25!
Thursday, May 24 & Friday, May 25 for the deep water rescues. 05:00 p.m. to 08:00 p.m. TBD. The specific locations will be posted by or before April 30.
The deep water portion of this course will be at the Red Bridge YMCA in South KC and/or Nottingham by the Green Area Homes Association; based on the weather forecast the week prior. We will enable online enrollment for this course by or before April 30.
This is not actually a separate course. Rather, we are listing it here as a possible option for those who are unable to attend the entire weekend course and/or may have conflicts on one or two of the weeknights. It is possible, depending on the exact dates\times of your conflicts, that we may be able to "customize" the course for you. The bottom line is that you learn, practice and successfully execute all of the rescues and skills. We know how challenging late May can be for high school students in particular. Therefore, we will be as creative and as flexible as possible in training you. We cannot promise that we can be as creative and as flexible as some may need, depending on the extent of conflicts; but we will do our best. So, if you are eager for the opportunity to work as a lifeguard but you are unable to attend the entire weekend course or the entire weeknight course, please specify that in the "Availability" section of the online application. Please detail exactly which dates\times you would miss. We can then discuss the possibility of customizing \ personalizing your training by having you participate in specific "modules" on each of the dates\times for which you are available, in order to cover all of the content, and thus fully prepare you for your responsibilities as a lifeguard.
Depending on need and applicant interest\availability, we may offer a July course, which would be conducted at Nottingham by the Green Area Homes Association pool. Therefore, if you will not be 15 years of age by May 25 and/or you are not available for one of the May courses; please submit your online application with a note that you would need a July course.
Tuition for courses with Jeff Kelly Inc.: $199.00. However, we are offering $100.00 off! If you successfully complete our American Red Cross Lifeguard Certification course, fulfill all obligations of your employment and score at least a 90% on your season-end performance evaluation, we will reimburse $100.00 of your tuition on the final paycheck of the year. Tuition is $199.00, so after the reimbursement your tuition will have cost you only $99.00! Tuition for those hired to be Lifeguard Leaders / Pool Managers will be FREE, contingent upon fulfillment of commitment.
Of course, we welcome those who have already obtained their lifeguard certification from any nationally recognized certification organization. If you fulfill all obligations of your employment and score at least a 90% on your season-end performance evaluation, we will provide you with a $100.00 "Signing Bonus" on the final paycheck of the year.
As detailed above in the "Tuition" section, successful completion of our American Red Cross Lifeguard Certification course, fulfillment of all obligations of your employment and a score of at least 90% on your season-end performance evaluation earn you a $100.00 tuition reimbursement on the final paycheck of the year, bringing your net tuition to only $99.00. Tuition for those hired to be Lifeguard Leaders / Pool Managers will be FREE, contingent upon fulfillment of commitment.
We will provide you with a $100.00 "Friend Referral" Bonus for EACH qualifying friend, to be paid on the final paycheck of the year. To earn these bonuses, you and each of the friends whom you refer to us must be certified lifeguards and/or successfully complete a lifeguard certification course offered by one of the nationally recognized certification organizations, fulfill all obligations of your employment and score at least a 90% on your season-end performance evaluation.
* Performance Bonus: $0.50 per HOUR for EVERY HOUR WORKED!
Your performance as a Lifeguard is a critical component of the superior quality that we offer to the communities we serve. Therefore, you will be rewarded for superior service. In addition to your base wage, you can earn up to $0.50 per hour for every hour that you work throughout the season! You will be evaluated by our leadership team on very specific criteria that will be provided to you as part of your employment manual. Therefore, you will know exactly what is expected of you to earn an hourly performance bonus; to be paid on the final paycheck of the year. | CommonCrawl |
Update: I've now started a website motivated by the idea outlined in this post. It's called Physics Travel Guide.com. For each topic, there are different layers such that everyone can find an explanation that speaks a language he/she understands.
Over the years I've had many discussions with fellow students about the question: when do you understand something?
In other words: you've only understood a given topic if you can explain it in simple terms.
Many disagree. Especially one friend, who studies math, liked to argue that some topics are simply too abstract and such "low-level" explanations may not be possible.
Of course, the quote is a bit exaggerated. Nevertheless, I think as a researcher you should be able to explain what you do to an interested beginner student.
I don't think that any topic is too abstract for this. When no "low-level" explanation is available so far, this does not mean that it doesn't exist, but merely that it hasn't been found yet.
In my first year as a student, I went on a camping trip to Norway. At that time, I knew little math and nothing about number theory or the Riemann zeta function. During the trip, I devoured "The Music of the Primes" by Marcus Du Sautoy. Du Sautoy managed to explain to a clueless beginner student why people care about prime numbers (they are like the atoms of numbers), why people find the Riemann zeta function interesting (there is a relationship between the complex zeros of the Riemann zeta function and the prime numbers) and what the Riemann hypothesis is all about. Of course, after reading the book I still didn't know anything substantial about number theory or the Riemann zeta function. However, the book gave me a valuable understanding of how people who work on these subjects think. In addition, even after several years I still understand why people get excited when someone proposes something new about the Riemann hypothesis.
I don't know any topic more abstract than number theory and if it is possible to explain something as abstract as the Riemann zeta function to a beginner student, it can be done for any topic, too.
My point is not that oversimplified PopSci explanations are what all scientists should do and think about. Instead, my point is that any topic can be explained in non-abstract terms.
Well maybe, but why should we care? An abstract explanation is certainly the most rigorous and error-free way to introduce the topic. It truly represents the state of the art and how experts think about the topic.
While this may be true, I don't think that this is where real understanding comes from.
Maybe you are able to follow some "explanation" that involves many abstract arguments or some abstract proof and maybe afterward you realize that the concept or theorem is correct. However, what is still missing is some understanding of why it is correct.
Take, for example, the formula $\sum_{k=1}^{n} (2k-1) = n^2$ for the sum of the first $n$ odd numbers. You can prove it, for example, by induction. After such a proof you are certainly convinced that the formula is correct. But still, you have no idea why it is correct.
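For completeness, the induction step that convinces without really explaining is just $\sum_{k=1}^{n+1} (2k-1) = \sum_{k=1}^{n} (2k-1) + (2n+1) = n^2 + 2n + 1 = (n+1)^2$.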
There is, however, a picture proof: the odd numbers fit together like puzzle pieces to build up an $n\times n$ square, so we can see directly that the sum is $n^2$. After seeing this proof, you'll never forget why the sum of the first $n$ odd numbers equals $n^2$.
Math books, especially, are guilty of relying solely on abstract explanations without any pictorial explanations or analogies. I personally find this oftentimes incredibly frustrating. No one gets a deep understanding by reading pages full of definitions and proofs. This type of exposition discourages beginner students and simply communicates the message "well, real math is complicated stuff".
For example, Needham manages to give you beautiful pictures for the series expansion of the complex exponential function, which otherwise is just another formula. Another example, which I've written about here, is what is really going on between a Lie algebra and a given Lie group. You can accept the relationship as some abstract voodoo, or you can draw some pictures and get a deep understanding that allows you to always remember the most important results.
This problem with overly abstract explanations is not confined to mathematics. Many physics books suffer from the same problem. A great example is how quantum field theory is usually explained by the standard textbooks (Peskin & Schroeder and co.). Most pages are full of complicated computations and comments about high-level stuff. After reading one of these books you cannot help but come away with the impression: "well, quantum field theory is complicated stuff". In contrast, when you read "Student Friendly Quantum Field Theory" by Robert Klauber, you will come to the conclusion that quantum field theory is at its core quite easy. Klauber carefully explains things with pictures and draws lots of analogies. Thanks to this, after reading his book, I was always able to remember the most important, fundamental features of quantum field theory.
Another example from physics is anomalies. Usually they are introduced in a highly complicated way, although there exists a simple pictorial way to understand them. Similarly, Noether's theorem is usually just proven. Students accept its correctness, but have no clue why it is correct. On the other hand, there is Feynman's picture proof of Noether's theorem.
The message here is similar to what I wrote in "One Thing You Must Understand About Studying Physics". Don't get discouraged by explanations that are too abstract for your current level of understanding. For any topic there exists some book or article that explains it in a language that you can understand and that brings you to the next level. Finding this book or article can be a long and difficult process, but it is always worth it. If there really isn't anything readable on the topic that you are interested in, write it yourself! | CommonCrawl |
The hyperparameter, $\alpha$, lets us control how much we penalize the coefficients, with higher values of $\alpha$ creating simpler models. The ideal value of $\alpha$ should be tuned like any other hyperparameter. In scikit-learn, $\alpha$ is set using the alpha parameter. | CommonCrawl |
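As a minimal sketch of that tuning step (assuming a ridge-style penalty, which is what the surrounding text appears to describe, and using an illustrative toy dataset and alpha grid):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Toy data; the dataset and the candidate alpha grid are illustrative choices.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Larger alpha -> stronger penalty on the coefficients -> simpler model.
search = GridSearchCV(Ridge(), param_grid={"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]}, cv=5)
search.fit(X, y)

print(search.best_params_)            # tuned value of alpha
print(search.best_estimator_.coef_)   # coefficients shrink as alpha grows
```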
In the first example Daisy and Akram were counting in twos.
Akram made the longer chain because he made $6$ sets of twos ($6\times2=12$).
Daisy made only $4$ sets of twos ($4\times2=8$).
In the second example they were counting in fives.
Daisy made two sets of fives ($5\times2=10$) and Akram made $3$ sets ($5\times3=15$).
In the third example Daisy was counting in threes and Akram was counting in fours.
Thank you, Ria, although I wonder whether your pictures for counting in sixes and sevens are rather similar? Jesse and Emma from Creston thought that Akram was making a pattern of blue, yellow, red and green in the final part. Well done also to Sue-Min and Anna from the Canadian Academy who also explained their answers clearly.
| CommonCrawl |
Abstract: Here shape space is either the manifold of simple closed smooth unparameterized curves in $\mathbb R^2$ or is the orbifold of immersions from $S^1$ to $\mathbb R^2$ modulo the group of diffeomorphisms of $S^1$. We investigate several Riemannian metrics on shape space: $L^2$-metrics weighted by expressions in length and curvature. These include a scale invariant metric and a Wasserstein type metric which is sandwiched between two length-weighted metrics. Sobolev metrics of order $n$ on curves are described. Here the horizontal projection of a tangent field is given by a pseudo-differential operator. Finally the metric induced from the Sobolev metric on the group of diffeomorphisms on $\mathbb R^2$ is treated. Although the quotient metrics are all given by pseudo-differential operators, their inverses are given by convolution with smooth kernels. We are able to prove local existence and uniqueness of solutions to the geodesic equation for both kinds of Sobolev metrics.
Journal reference: Applied and Computational Harmonic Analysis 23 (2007), 74-113. | CommonCrawl |
I still cannot understand why this statement is true.
The feasible region for $Gx \preceq h$ is a polyhedron, which is easy to understand and can easily be drawn on paper.
The feasible region for $Ax=b$ is the intersection of all the affine sets defined by $a_i^Tx = b_i$ for each $i$, where the $a_i$ are the rows of $A$. So there are three possibilities: 1. No solution; then the feasible set of the LP is empty. 2. One solution; then the feasible set of the LP is "at most" a point. 3. Infinitely many solutions; then the feasible set of the LP is "at most" a line.
Why is the feasible set a polyhedron?
The feasible region for $Ax=b$ is an intersection of affine sets, therefore also an affine set. However there are no further restrictions on the dimension of the solution space for $Ax=b$. It may very well be two- or higher-dimensional.
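One minimal way to see why the full feasible set is nevertheless a polyhedron (a standard argument, not tied to any particular textbook) is to rewrite each equality as a pair of inequalities:

$\{x : Gx \preceq h,\ Ax = b\} = \{x : Gx \preceq h,\ Ax \preceq b,\ -Ax \preceq -b\},$

which is a finite intersection of halfspaces, hence a polyhedron, whatever the dimension of the affine solution set of $Ax=b$ turns out to be.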
| CommonCrawl |
ESAIM: Control, Optimisation and Calculus of Variations articles are available online as soon as the production process is complete.
Designing metrics; the delta metric for curves.
On boundary stability of inhomogeneous $2\times2$ 1-D hyperbolic systems for the $C^1$ norm. | CommonCrawl |
This work considers the organization and performance of computations on parallel computers of tree algorithms for the N-body problem where the number of particles is on the order of a million. The N-body problem is formulated as a set of recursive equations based on a few elementary functions, which leads to a computational structure in the form of a pyramid-like graph, where each vertex is a process, and each arc a communication link. The pyramid is mapped to three different processor configurations: (1) A pyramid of processors corresponding to the processes pyramid graph; (2) A hypercube of processors, e.g., a connection-machine like architecture; (3) A rather small array, e.g., $2 \times 2 \times 2$, of processors faster than the ones considered in (1) and (2) above. Simulations of this size can be performed on any of the three architectures in reasonable time. | CommonCrawl |
Imagine I have a bottle with a random quantity of molecular hydrogen, and can apply exactly enough energy to break only one molecule of hydrogen. Is the reaction going to occur, or would the energy be shared among the other molecules in the bottle?
The questioner asks what happens if exactly enough energy is added to a molecule of hydrogen just to dissociate it, and whether the energy will be shared. The answer is both yes and no.
It is better to take a step back and consider what happens when a photon is absorbed. In the case of hydrogen, dissociation must occur from an electronically excited state, as it is a homonuclear diatomic and does not have a permanent or varying dipole. This means that the ground state cannot absorb a photon's energy from v = 0, 1, 2, etc. to any other vibrational level. (I'm assuming absorption into the continuum just above the highest vibrational level is zero.) So excitation has to be with a UV photon.
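To put a rough number on that last point, here is a back-of-the-envelope sketch. It assumes the standard H$_2$ bond dissociation energy of about 436 kJ/mol, which is only the thermodynamic minimum and not the energy of the actual electronic transition (that lies much deeper in the vacuum UV):

```python
AVOGADRO = 6.022e23   # molecules per mole
EV = 1.602e-19        # joules per electron volt
HC = 1239.84          # eV * nm, so wavelength_nm = HC / energy_eV

# Energy per molecule if only the bond energy had to be supplied.
bond_energy_eV = 436e3 / AVOGADRO / EV     # ~4.5 eV
wavelength_nm = HC / bond_energy_eV        # ~274 nm, i.e. already in the UV

print(round(bond_energy_eV, 2), "eV ->", round(wavelength_nm), "nm")
```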
If the photon is just at dissociation, the H atoms will have effectively zero kinetic energy and so not depart from one another. In fact at this point life becomes complicated and interesting. The potential energy between two atoms increases very slowly with increasing separation as one reaches dissociation. In fact it is possible to form Rydberg atoms which may be a micron in diameter when excited to within fractions of wavenumbers of dissociation. This size is huge, about ten thousand times larger than the minimum H$_2$ atom separation, bigger than a protein and approaching the size of a bacterium! Clearly the two atoms hardly influence one another and consequently a small perturbation, a minute magnetic field for example, and certainly a nearby molecule may cause the Rydberg molecule either to break apart or recombine depending on whether some energy is added or subtracted in the interaction.
If there is not enough energy, then the H$_2$ will remain in a highly excited vibrational level until it suffers a collision with another molecule and some vibrational energy is transferred to this molecule. Alternatively, the electronically excited H$_2$ could fluoresce, and any vibrational energy left in the ground state would be shared on collision with another H$_2$.
If there is far more energy than needed to dissociate, then the H atoms ballistically fly apart in opposite directions and transfer some of their energy at each collision with other H$_2$ molecules. Possibly reaction also occurs, but H$_3$ is almost certainly unstable.
| CommonCrawl |
A Remark on Gromov-Witten Invariants of Quintic Threefold (May 18 2017, last revised Jan 05 2018). The purpose of the article is to give a proof of a conjecture of Maulik and Pandharipande for genus 2 and 3. As a result, it gives a way to determine Gromov-Witten invariants of the quintic threefold for genus 2 and 3.
The Discrete AKNS-D Hierarchy (Jun 06 2006). In this paper, we consider the discrete AKNS-D hierarchy, find the construction of the hierarchy, prove the bilinear identity and give the construction of the $\tau$-functions of this hierarchy.
Mott made easy (Dec 10 2012). The realization of a Mott insulating state in a system of ultracold fermions comprising far more internal components than the electron provides an avenue for probing many-body physics that is difficult to access in solids.
Burgess-like subconvex bounds for $GL_2 \times GL_1$ (Sep 26 2012, last revised Apr 01 2014). We give a Burgess-like subconvex bound for $L(s, \pi \otimes \chi)$ in terms of the analytical conductor of $\chi$, where $\pi$ is a $GL_2$ cuspidal representation and $\chi$ is a Hecke character.
On the Slicing Genus of Legendrian Knots (May 12 2005, last revised Feb 06 2007). We apply Heegaard-Floer homology theory to establish generalized slicing Bennequin inequalities closely related to a recent result of T. Mrowka and Y. Rollin proved using Seiberg-Witten monopoles.
Supersymmetric U(1) Gauge Field Theory With Massive Gauge Field (Mar 06 1998). A supersymmetric model with U(1) gauge symmetry will be discussed in this paper. The model has strict U(1) gauge symmetry and supersymmetry simultaneously. Besides, there is a massive U(1) gauge field contained in the model. | CommonCrawl |
The Fibonacci word starts from $0$ subject to the rules $0 \mapsto 1, 1 \mapsto 01$ (or some variant thereof). They come from cutting sequences of the torus for a line of golden-ratio slope. It is a 1D version of the Penrose Tiling.
The Fibonacci word has minimal complexity above a periodic word -- there are $n+1$ subwords of length $n$, making it a Sturmian word.
5 subwords of length 4: "0101", "0110", "1010", "1011", "1101"
8 subwords of length 7: "0101101", "0110101", "0110110", "1010110", "1011010", "1011011", "1101011", "1101101"
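A quick sanity check of these counts (a small sketch that takes the substitution exactly as stated above, $0 \mapsto 1$, $1 \mapsto 01$, and counts distinct factors of a long iterate):

```python
def fib_word(iterations=20):
    """Iterate the substitution from the post: 0 -> 1, 1 -> 01, starting from "0"."""
    w = "0"
    for _ in range(iterations):
        w = "".join("1" if c == "0" else "01" for c in w)
    return w

def distinct_subwords(w, n):
    """All distinct length-n factors of w."""
    return {w[i:i + n] for i in range(len(w) - n + 1)}

w = fib_word(20)                      # a few thousand symbols, plenty for small n
for n in (4, 7):
    subs = sorted(distinct_subwords(w, n))
    print(n, len(subs), subs)         # 5 and 8 subwords, matching the lists above

for n in range(1, 11):
    assert len(distinct_subwords(w, n)) == n + 1   # Sturmian complexity n+1
```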
As an experiment, I sorted the subwords of length $n$ alphabetically. The words are arranged in an infinite tree, where each word is a descendant of its subword.
As a shorthand, I only placed the last letter of each word on each diagonal. Edges represent inclusion... drawn so any path from the top left corner "." appears in the Fibonacci word. Each Fibonacci subword corresponds to a path.
What is the structure of this tree? Is there any regularity to the location of the branches?
The complement of the tree becomes a tessellation of the Euclidean plane by "ribbon $\infty$-ominos", which is amusing.
The answer to the question "Is there any regularity to the location of the branches?" appears to be yes. Here is a diagram of the first 500 branches (a black pixel denotes a branch, and a white pixel denotes no branch, with the root in the upper-left-hand corner just like in your diagram). The regularity is apparent. You might need to zoom in if the pixels are too small.
Of course, I say "appears to be" because I haven't yet proved that the pattern continues, or even described precisely what the pattern is.
P.S.: I would have left this as a comment, since I doubt it qualifies as a full "answer", but apparently I need more reputation to do that. My apologies.
This is interesting. Do you have a reference to "thin ∞-ominos"?
| CommonCrawl |
This paper is concerned with an optimal shape control problem for the stationary Navier-Stokes system. A two-dimensional channel flow of an incompressible, viscous fluid is examined to determine the shape of a bump on a part of the boundary that minimizes the viscous drag. By introducing an artificial compressibility term to relax the incompressibility constraint, we adopt a penalty method. The existence of optimal solutions for the penalized problem is shown. Next, by employing the Lagrange multiplier method and material derivatives, we derive the shape gradient for the minimization problem of the shape functional which represents the viscous drag.
In this paper we consider an age-dependent branching process whose particles move according to a Markov process with continuous state space. The Markov process is assumed to be stationary with independent increments and positive recurrent. We find some sufficient conditions on the Markov motion process under which the empirical distribution of the positions converges to the limiting distribution of the motion process.
We present the general solution of the functional equation $f(x_1y_1, x_2y_2) + f(x_1y_1^{-1}, x_2) + f(x_1, x_2y_2^{-1}) = f(x_1y_1^{-1}, x_2y_2^{-1}) + f(x_1y_1, x_2) + f(x_1, x_2y_2)$. Furthermore, we also prove the Hyers-Ulam stability of the above functional equation.
In this paper we introduce the modified conditional Yeh-Wiener integral. To do so, we first treat the modified Yeh-Wiener integral. We then obtain a simple formula for the modified conditional Yeh-Wiener integral and evaluate the modified conditional Yeh-Wiener integral for certain functionals using the formula obtained. Here we consider functionals on a set of continuous functions which are defined on various regions, for example triangular, parabolic and circular regions.
In this paper we define the concept of a conditional Fourier-Feynman transform and a conditional convolution product and obtain several interesting relationships between them. In particular we show that the conditional transform of the conditional convolution product is the product of conditional transforms, and that the conditional convolution product of conditional transforms is the conditional transform of the product of the functionals.
If an Azumaya algebra $A$ is a homomorphic image of a finite group ring $RG$, where $G$ is a direct product of subgroups, then $A$ can be decomposed into subalgebras $A_i$ which are homomorphic images of subgroup rings of $RG$. This result is extended to projective Schur algebras, and in this case the behavior of 2-cocycles will play a major role. Moreover, considering the situation in which $A$ is represented by an Azumaya group ring $RG$, we study relationships between the representing groups for $A$ and the $A_i$.
We prove that the germ of a CR mapping f between real analytic real hypersurfaces has a holomorphic extension and satisfies a complete system of finite order if the source is of finite type in the sense of Bloom-Graham and the target is k-nondegenerate under certain generic assumptions on f.
For a linear operator $Q$ from $\mathbb{R}^d$ into $\mathbb{R}^d$ and $0 < b < 1$, the (Q,b)-semi-stability and the strict (Q,b)-semi-stability of probability measures on $\mathbb{R}^d$ are defined. The (Q,b)-semi-stability is an extension of operator stability with exponent $Q$ on one hand and of semi-stability with index $\alpha$ and parameter $b$ on the other. A characterization of strictly (Q,b)-semi-stable distributions among (Q,b)-semi-stable distributions is made. The existence of (Q,b)-semi-stable distributions which are not translations of strictly (Q,b)-semi-stable distributions is discussed.
Given independent $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$ on $X$, the spaces $L^2(X, \mathcal{A} \vee \mathcal{B})$, $L^2(X \times X, \mathcal{A} \times \mathcal{B})$, and the Hilbert space tensor product $L^2(X,\mathcal{A}) \otimes L^2(X,\mathcal{B})$ are isomorphic. In this note, we show that various Hilbert $C^*$-algebra tensor products play the analogous roles when independence is weakened to conditional independence.
Let $M^n$ be a space-like submanifold in a de Sitter space $M_p^{n+p}(c)$ with constant scalar curvature. We first extend Cheng-Yau's technique to higher codimensional cases. Then we study the rigidity problem for $M^n$ with parallel normalized mean curvature vector field.
This paper establishes exponential decay estimates for the solution of a stationary magnetohydrodynamics equations in a semi-infinite pipe flow when homogeneous lateral surface boundary conditions are applied.
We examine Orlicz-type integral inequalities for operators and obtain, as a corollary, a characterization of such inequalities for the Hardy-Littlewood maximal operator, extending the well-known $L^p$-norm inequalities.
We consider a linear differential game described by a delay-differential equation in a Hilbert space $H$ (※Equations, See Full-text); $U$ and $V$ are Hilbert spaces, and $B(t)$ and $C(t)$ are families of bounded operators on $U$ and $V$ to $H$, respectively. $A_0$ generates an analytic semigroup $T(t) = e^{tA_0}$ in $H$. The control variables $g$, $u$ and $v$ are supposed to be restricted to norm-bounded sets (※Equations, See Full-text). For given $x^0 \in H$ and a given time $t > 0$, we study $\xi$-approximate controllability to determine $x(\cdot)$ for a given $g$ and $v(\cdot)$ such that the corresponding solution $x(t)$ satisfies $\|x(t) - x^0\| \leq \xi$ ($\xi > 0$: a given error). | CommonCrawl |
How can I compute a matrix given the characteristic equation?
All I found are references and functions that do the exact opposite, but I know the characteristic equation and I need the corresponding matrix / linear system.
I have to find a matrix $A$ whose characteristic polynomial is this given polynomial.
However, it is probably preferable to choose mat such that it has only as many unknowns as the matrix dimension.
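A minimal sketch of that idea (one standard companion-matrix convention, with an illustrative cubic; this is a generic construction, not the `mat` referred to above):

```python
import numpy as np

# Illustrative characteristic polynomial p(t) = t^3 - 6 t^2 + 11 t - 6 (roots 1, 2, 3),
# written as t^3 + c2*t^2 + c1*t + c0.
c = [-6.0, 11.0, -6.0]            # c0, c1, c2

# Companion matrix: ones on the subdiagonal, -c0, ..., -c_{n-1} in the last column,
# so the only unknowns are the n coefficients of the polynomial.
n = len(c)
A = np.zeros((n, n))
A[1:, :-1] = np.eye(n - 1)
A[:, -1] = [-ci for ci in c]

print(np.poly(A))                 # [ 1. -6. 11. -6.]  -> the chosen polynomial
print(np.linalg.eigvals(A))       # approximately [1. 2. 3.]
```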
If there exists such a relation, $\bf A = T D T^{-1}$, it is said that $\bf A$ and $\bf D$ are similar to each other.
In this case it will be either a diagonalization (if the $\bf D$ matrix contains only the $\lambda_k$ and $0$) or a Jordan canonical form (if there are Jordan blocks with $1$ values on the first superdiagonal).
You are free to design $\bf T$ as you wish, as long as it has determinant $\neq 0$.
The values on the diagonal of $\bf D$ are fixed to the $\lambda_k$, but the block sizes and the contents of $\bf T$ are not.
There are infinitely many more, though.
| CommonCrawl |
Since the pioneering work of Heinrich Hertz, perfect-electric conductor (PEC) loop antennas for RF applications have been studied extensively. Meanwhile, nanoloops are promising in the optical regime for their applications in a wide range of emerging technologies. Unfortunately, analytical expressions for the radiation properties of conducting loops have not been extended to the optical regime. This paper presents closed-form expressions for the electric fields, total radiated power, directivity, and gain for thin-wire nanoloops operating in the terahertz, infrared and optical regimes. This is accomplished by extending the formulation for PEC loops to include the effects of dispersion and loss. The expressions derived for a gold nanoloop are implemented and the results agree well with full-wave computational simulations, but with a speed increase of more than $300\times $ . This allows the scientist or engineer to quickly prototype designs and gain a deeper understanding of the underlying physics. Moreover, through rapid numerical experimentation, these closed-form expressions made possible the discovery that broadband superdirectivity occurs naturally for nanoloops of a specific size and material composition. This is an unexpected and potentially transformative result that does not occur for PEC loops. Additionally, the Appendices give useful guidelines on how to efficiently compute the required integrals. | CommonCrawl |
Abstract: Recent years have witnessed the trend of leveraging cloud-based services for large scale content storage, processing, and distribution. Data security and privacy are among the top concerns for public cloud environments. Towards these security challenges, we propose and implement the CloudaSec framework for securely sharing outsourced data via the public cloud. CloudaSec ensures the confidentiality of content in public cloud environments with flexible access control policies for subscribers and efficient revocation mechanisms. CloudaSec proposes several cryptographic tools for data owners, based on a novel content hash keying system, by leveraging Elliptic Curve Cryptography (ECC). The separation of subscription-based key management and confidentiality-oriented asymmetric encryption policies uniquely enables flexible and scalable deployment of the solution as well as strong security for outsourced data in cloud servers. Through experimental evaluation, we demonstrate the efficiency and scalability of CloudaSec, built upon an OpenStack Swift testbed.
Abstract: It is well known that not all intrusions can be prevented and additional lines of defense are needed to deal with intruders. However, most current approaches use honeynets relying on the assumption that simply attracting intruders into honeypots would thwart the attack. In this paper, we propose a different and more realistic approach, which aims at delaying intrusions, so as to control the probability that an intruder will reach a certain goal within a specified amount of time. Our method relies on analyzing a graphical representation of the computer network's logical layout and an associated probabilistic model of the adversary's behavior. We then artificially modify this representation by adding "distraction clusters" – collections of interconnected virtual machines – at key points of the network in order to increase complexity for the intruders and delay the intrusion. We study this problem formally, showing it to be NP-hard and then provide an approximation algorithm that exhibits several useful properties. Finally, we present experimental results obtained on a prototypal implementation of the proposed framework.
Abstract: Non-interactive key exchange (NIKE) allows two parties to establish a shared key without communications. In ID-based non-interactive key exchange (ID-NIKE) protocols, private key generator (PKG) knows user's private key, so it can calculate the shared key between two participants, and most constructions of ID-NIKE need expensive pairing operation. To overcome these disadvantages, a security model of certificateless non-interactive key exchange (CL-NIKE) is proposed in this paper. And a scheme without pairings is also given. The proposed protocol is proved secure in the Random Oracle Model (ROM) based on the gap Diffie-Hellman (GDH) and computational Diffie-Hellman (CDH) problem. In the new protocol, key generation center (KGC) only knows user's partial key and is not able to calculate the shared key. Moreover, the new protocol is more efficient than the existing ID-NIKE schemes because it is pairing-free.
Abstract: We propose an efficient adaptive oblivious transfer protocol with hidden access policies. This scheme allows a receiver to anonymously recover a message from a database which is protected by hidden attribute based access policy if the receiver's attribute set satisfies the associated access policy implicitly. The proposed scheme is secure in the presence of malicious adversary under the q-Strong Diffie-Hellman (SDH), q-Power Decisional Diffie-Hellman (PDDH) and Decision Bilinear Diffie-Hellman (DBDH) assumption in full-simulation security model. The scheme covers disjunction of attributes. The proposed protocol outperforms the existing similar schemes in terms of both communication and computation.
Abstract: A proxy signature scheme enables a signer to delegate its signing rights to any other user, called the proxy signer, to produce a signature on its behalf. In a proxy multi-signature scheme, the proxy signer can produce one single signature on behalf of multiple original signers. We propose an efficient and provably secure threshold-anonymous identity-based proxy multi-signature (IBPMS) scheme which provides anonymity to the proxy signer while also providing a threshold mechanism to the original signers to expose the identity of the proxy signer in case of misuse. The proposed scheme is proved secure against adaptive chosen-message and adaptive chosen-ID attacks under the computational Diffie-Hellman assumption. We compare our scheme with the recently proposed anonymous proxy multi-signature scheme and other ID-based proxy multi-signature schemes, and show that our scheme requires significantly less operation time in the practical implementation and thus it is more efficient in computation than the existing schemes.
Abstract: Designing efficient key agreement protocols is a fundamental cryptographic problem. In this paper, we first define a security model for key agreement in certificateless cryptography that is an extension of earlier models. We note that the existing pairing free protocols are not secure in our model. We design an efficient pairing-free, single round protocol that is secure in our model based on the hardness assumption of the Computational Diffie Hellman (CDH) problem. We also observe that previously existing pairing-free protocols were secure based on much stronger assumptions such as the hardness of the Gap Diffie Hellman problem. We use a restriction of our scheme to design an efficient pairing-free single round identity based key agreement protocol that is secure in the id-CK+ model based on the hardness assumption of the CDH problem. Additionally, both our schemes satisfy several other security properties such as forward secrecy, resistance to reflection attacks etc.
Abstract: Mobile devices - especially smartphones - have gained widespread adoption in recent years, due to the plethora of features they offer. The use of such devices for web browsing and accessing email services is also getting continuously more popular. The same holds true with other more sensitive online activities, such as online shopping, contactless payments, and web banking. However, the security mechanisms that are available on smartphones and protect their users from threats on the web are not yet mature, as well as their effectiveness is still questionable. As a result, smartphone users face increased risks when performing sensitive online activities with their devices, compared to desktop/laptop users. In this paper, we present an evaluation of the phishing protection mechanisms that are available with the popular web browsers of Android and iOS. Then, we compare the protection they offer against their desktop counterparts, revealing and analyzing the significant gap between the two.
Abstract: Usage control extends access control by enabling the specification of requirements that should be satisfied before, while and after access. To ensure that the deployment of usage control policies in target domains achieves the required security goals, policy verification and analysis tools are needed. In this paper, we present an approach for the dynamic analysis of usage control policies using formal descriptions of target domains and their usage control policies. Our approach provides usage control management explicit labeled transition system semantics and enables the automated verification of usage control policies using model checking.
Abstract: Universities and other educational organizations are adopting computer and Internet-based assessment tools (herein called e-exams) to reach widespread audiences. While this makes examination tests more accessible, it exposes them to new threats. At present, there are very few strategies to check such systems for security, also there is a lack of formal security definitions in this domain. This paper fills this gap: in the formal framework of the applied pi-calculus, we define several fundamental authentication and privacy properties and establish the first theoretical framework for the security analysis of e-exam protocols. As proof of concept we analyze two of such protocols with ProVerif. The first "secure electronic exam system" proposed in the literature turns out to have several severe problems. The second protocol, called Remark!, is proved to satisfy all the security properties assuming access control on the bulletin board. We propose a simple protocol modification that removes the need of such assumption though guaranteeing all the security properties.
Abstract: Physically co-located virtual machines should be securely isolated from one another, as well as from the underlying layers in a virtualized environment. In particular the virtualized environment is supposed to guarantee the impossibility of an adversary to attack a virtual machine e.g., by exploiting a side-channel stemming from the usage of shared physical or software resources. However, this is often not the case and the lack of sufficient logical isolation is considered a key concern in virtualized environments. In the academic world this view has been reinforced during the last years by the demonstration of sophisticated side-channel attacks (SCAs). In this paper we argue that the feasibility of executing a SCA strongly depends on the actual context of the execution environment. To reflect on these observations, we propose a feasibility assessment framework for SCAs using cache based systems as an example scenario. As a proof of concept we show that the feasibility of cache-based side-channel attacks can be assessed following the proposed approach.
Abstract: Payment schemes based on mobile devices are expected to supersede traditional electronic payment approaches in the next few years. However, current solutions are limited in that protocols require at least one of the two parties to be on-line, i.e. connected either to a trusted third party or to a shared database. Indeed, in cases where customer and vendor are persistently or intermittently disconnected from the network, any on-line payment is not possible. This paper introduces FORCE, a novel mobile micro payment approach where all involved parties can be fully off-line. Our solution improves over state-of-the-art approaches in terms of payment flexibility and security. In fact, FORCE relies solely on local data to perform the requested operations. Present paper describes FORCE architecture, components and protocols. Further, a thorough analysis of its functional and security properties is provided showing its effectiveness and viability.
Abstract: In this paper, we address the problem of privacy preserving delegated word search in the cloud. We consider a scenario where a data owner outsources its data to a cloud server and delegates the search capabilities to a set of third party users. In the face of semi-honest cloud servers, the data owner does not want to disclose any information about the outsourced data; yet it still wants to benefit from the highly parallel cloud environment. In addition, the data owner wants to ensure that delegating the search functionality to third parties does not allow these third parties to jeopardize the confidentiality of the outsourced data, nor does it prevent the data owner from efficiently revoking the access of these authorized parties. To these ends, we propose a word search protocol that builds upon techniques of keyed hash functions, oblivious pseudo-random functions and Cuckoo hashing to construct a searchable index for the outsourced data, and uses private information retrieval of short information to guarantee that word search queries do not reveal any information about the data to the cloud server. Moreover, we combine attribute-based encryption and oblivious pseudo-random functions to achieve an efficient revocation of authorized third parties. The proposed scheme is suitable for the cloud as it can be easily parallelized.
Abstract: Mobile devices in corporate IT infrastructures are frequently used to process security-critical data. Over the past few years powerful security features have been added to mobile platforms. However, for legal and organisational reasons it is difficult to pervasively enforce using these features in consumer applications or Bring-Your-Own-Device (BYOD) scenarios. Thus application developers need to integrate custom implementations of security features such as encryption in security-critical applications. Our manual analysis of container applications and password managers has shown that custom implementations of cryptographic functionality often suffer from critical mistakes. During manual analysis, finding the custom cryptographic code was especially time consuming. Therefore, we present the Semdroid framework for simplifying application analysis of Android applications. Here, we use Semdroid to apply machine-learning techniques for detecting non-standard symmetric and asymmetric cryptography implementations. The identified code fragments can be used as starting points for subsequent manual analysis. Thus manual analysis time is greatly reduced. The capabilities of Semdroid have been evaluated on 98 password-safe applications downloaded from Google Play. Our evaluation shows the applicability of Semdroid and its potential to significantly improve future application analysis processes.
Abstract: In current society, reliable identification and verification of individuals are becoming more and more necessary tasks for many fields, not only in police environment, but also in civilian applications, such as access control or financial transactions. Biometric systems are used nowadays in these fields, offering greater convenience and several advantages over traditional security methods based on something that you know (password) or something that you have (keys). In this paper, we propose an efficient online personal identification system based on Multi-Spectral Palmprint (MSP) images using Contourlet Transform (CT) and Gabor Filter (GF) response. In this study, the spectrum image is characterized by the contourlet coefficients sub-bands. Then, we use the Hidden Markov Model (HMM) for modeling the observation vector. In addition, the same spectrum is filtered by the Gabor filter. The real and imaginary responses of the filtering image are used to create another observation vector. Subsequently, the two sub-systems are integrated in order to construct an efficient multi-modal identification system based on matching score level fusion. Our experimental results show the effectiveness and reliability of the proposed method, which brings both high identification and accuracy rate.
Abstract: More and more networks and services are reachable via IPv6 and the interest for security monitoring of these IPv6 networks is increasing. Honeypots are valuable tools to monitor and analyse network attacks. HoneydV6 is a low-interaction honeypot which is well suited to deal with the large IPv6 address space, since it is capable of simulating a large number of virtual hosts on a single machine. This paper presents an extension for HoneydV6 which allows the detection, extraction and analyses of shellcode contained in IPv6 network attacks. The shellcode detection is based on the open source library libemu and combined with the online malware analysis tool Anubis. We compared the shellcode detection rate of HoneydV6 and Dionaea. While HoneydV6 is able to detect about 25 % of the malicious samples, the Dionaea honeypot detects only about 6 %.
Abstract: Mobile telephony based on UMTS uses finite-state control schemes for wireless channels and for signaling across the network. These schemes are used systematically in various phases of the communication and are vulnerable to attacks that can bring down the network through unjustified bandwidth allocation and excessive signaling across the control plane. In this paper we identify those system parameters which are critical to the success of such attacks, and propose changes that can limit the effect of the attack. The approach is based on establishing a mathematical model of a UMTS system that is undergoing attacks, and on showing how parameters can be optimally modified to minimise the effect of the attack as experienced by the mobile device and the network.
Abstract: One-way hash chains have been used to secure many applications over the last three decades. To overcome the fixed length limitation of first generation designs, so-called infinite length hash chains have been introduced. Such designs typically employ methods of asynchronous cryptography or hash based message authentication codes. However, none of the proposed schemes offers perfect forward secrecy, keeping former outputs secret once the system got compromised. A novel algorithm for constructing infinite length hash chains with built-in support for perfect forward secrecy is presented in this work. Thereby, the scheme differs significantly from existing proposals by using a combination of two different hash functions. It avoids the computational complexity of public-key algorithms, utilises well studied standard hash functions and keeps the benefits of a hash chain without a length constraint.
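For readers unfamiliar with the underlying primitive, a plain fixed-length one-way hash chain can be sketched as follows (this illustrates only the classic construction the abstract builds on, not the paper's infinite-length, forward-secret scheme):

```python
import hashlib

def hash_chain(seed: bytes, length: int) -> list[bytes]:
    """Classic one-way hash chain: h_0 = seed, h_{i+1} = H(h_i)."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

# The anchor h_n is published first; values are then released in reverse order.
# Revealing h_i lets a verifier check H(h_i) == h_{i+1}, while the one-wayness
# of H prevents anyone from predicting the not-yet-released h_{i-1}.
chain = hash_chain(b"secret seed", 5)
assert hashlib.sha256(chain[3]).digest() == chain[4]
```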
Abstract: Spam has been infesting our emails and Web experience for decades; distributing phishing scams, adult/dating scams, rogue security software, ransomware, money laundering and banking scams... the list goes on. Fortunately, in the last few years, user awareness has increased and email spam filters have become more effective, catching over 99% of spam. The downside is that spammers are constantly changing their techniques as well as looking for new target platforms and means of delivery, and as the world is going mobile so too are the spammers. Indeed, mobile messaging spam has become a real problem and is steadily increasing year-over-year. We have been analyzing SMS spam data from a large US carrier for over six months, and we have observed all these threats, and more, indiscriminately targeting large numbers of subscribers. In this paper, we touch on such questions as what is driving SMS spam, how do the spammers operate, what are their activity patterns and how have they evolved over time. We also discuss what types of challenges SMS spam has created in terms of filtering, as well as security.
Abstract: In this paper we introduce a general framework for automatic construction of empirical tests of randomness. Our new framework generalises and improves a previous approach (Švenda et al., 2013) and it also provides a clear statistical interpretation of its results. This new approach was tested on selected stream ciphers from the eSTREAM competition. Results show that our approach can lay foundations to randomness testing and it is comparable to the Statistical Test Suite developed by NIST. Additionally, the proposed approach is able to perform randomness analysis even when presented with sequences shorter by several orders of magnitude than required by the NIST suite. Although the Dieharder battery still provides a slightly better randomness analysis, our framework is able to detect non-randomness for stream ciphers with limited number of rounds (Hermes, Fubuki) where both above-mentioned batteries fail.
Abstract: Content protection relies on several security mechanisms: (i) encryption to prevent access to the content during transport, (ii) trusted computation environment to prevent access during decoding, and we can also add (iii) forensic watermarking to deter content re-acquisition at rendering. With the advent of next generation video and the ever increasing popularity of embedded devices for content consumption, there is a need for new content protection solutions that rely less on hardware. In this context, we propose an architecture that combines the ARM TrustZone technology, an hypervised environment built on Genode and a bit stream watermarking algorithm that inserts serialization marks on the fly in an embedded device. As a result, an attacker cannot get access to video assets in clear form and not watermarked. Reported performances measurements indicate that the induced computational overhead is reasonable.
Abstract: Clocks have a small in-built error. As the error is unique, each clock can be identified. This paper explores remote computer identification based on the estimation of clock skew computed from network packets. The previous knowledge of the method is expanded in various ways: (1) we argue about the amount of data that is necessary to get accurate clock skew estimation, (2) the study of different time stamp sources unveils several irregularities that hinders the identification, and (3) the distribution of clock skew in real network makes the precise identification hard or even impossible.
Abstract: An encryption scheme is key-dependent message chosen plaintext attack (KDM-CPA) secure if it is secure even when an adversary obtains encryptions of messages that depend on the secret key. However, there are not many schemes that are KDM-CPA secure, let alone key-dependent message chosen ciphertext attack (KDM-CCA) secure. So far, only two general constructions, due to Camenisch, Chandran, and Shoup (Eurocrypt 2009), and Hofheinz (Eurocrypt 2013), are known to be KDM-CCA secure in the standard model. Another scheme, a concrete implementation, was recently proposed by Qin, Liu and Huang (ACISP 2013), where a KDM-CCA secure scheme was obtained from the classic Cramer-Shoup (CS) cryptosystem w.r.t. a new family of functions. In this paper, we revisit the KDM-CCA security of the CS scheme and prove that, in the two-user case, the CS scheme achieves KDM-CCA security w.r.t. richer ensembles, which covers the result of Qin et al. In addition, we present another proof of the result in (QLH13) by extending our approach used in the two-user case to the n-user case, which achieves a tighter reduction to the decisional Diffie-Hellman (DDH) assumption.
Abstract: A proof of Data Possession (PDP) allows a client to verify that a remote server is still in possession of a file entrusted to it. One way to design a PDP, is to compute a function depending on a secret and the file. Then, during the verification stage, the client reveals the secret input to the server who recomputes the function and sends the output back to the client. The client can then compare both values to determine if the server is still in possession of the file. The problem with this approach is that once the server knows the secret, it is not useful anymore. In this article, we present two PDP schemes inspired in Multiple-Server Private Information Retrieval (MSPIR) protocols. In a traditional MSPIR protocol, the goal is to retrieve a given block of the file from a group of servers storing identical copies of it, without telling the servers what block was retrieved. In contrast, our goal is to let servers evaluate a function using an input that is not revealed to them. We show that our constructions are secure, practical and that they can complement existing approaches in storage architectures using multiple cloud providers. The amount of transmitted information during the verification stage of the protocols is proportional to the square root of the length of the file.
Abstract: In this paper, we propose a new lightweight L'Ecuyer-based pseudo random number generator (PRNG). We show that our scheme, despite the very simple functions on which it relies on, is strongly secure in the sense that our number sequences pass the state-of-the-art randomness tests and, importantly, an accurate and deep security analysis shows that it is resistant to a number of attacks.
Abstract: In a community cloud, infrastructure is shared among several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.). In such a computing model, the security responsibilities rest mostly with the third-party infrastructure provider. Security violations may occur if local access policies from different organizations are not implemented correctly. Therefore, one of the major concerns for a cloud provider is to formally verify whether security implementation conforms to the local access policies, and ensure that shared resources (hosted in the multi-tenant infrastructure) are accessed by only authorized users from various organizations. In this paper, we propose an automated verification framework to address this issue of policy verification. The framework consists of two models: policy and implementation. An algorithm has been developed to reduce the models into Boolean clauses, and is given as input to zchaff SAT solver for formal verification. Experimental results show the efficacy of proposed approach.
Abstract: When a data holder wants to share databases that contain personal attributes, individual privacy needs to be considered. Existing anonymization techniques, such as l-diversity, remove identifiers and generalize quasi-identifiers (QIDs) from the database to ensure that adversaries cannot specify each individual's sensitive attributes. Usually, the database is anonymized based on one-size-fits-all measures. Therefore, it is possible that several QIDs that a data user focuses on are all generalized, and the anonymized database has no value for the user. Moreover, if a database does not satisfy the eligibility requirement, we cannot anonymize it by existing methods. In this paper, we propose a new technique for l-diversity, which keeps QIDs unchanged and randomizes sensitive attributes of each individual so that data users can analyze it based on QIDs they focus on and does not require the eligibility requirement. Through mathematical analysis and simulations, we will prove that our proposed method for l-diversity can result in a better tradeoff between privacy and utility of the anonymized database.
Abstract: Elliptic curve scalar multiplication ( [k]P where k is an integer and P is a point on the elliptic curve) is widely used in encryption and signature generation. In this paper, we explore a factorization-based approach called Near-Factorization that can be used in conjunction with existing optimization techniques such as Window NAF (Non Adjacent Form). We present a performance model of Near-Factorization and validate model results with those from a simulation. We compare Near-Factorization with wNAF for a range of scalar sizes, window sizes, divisor lengths and Hamming weights of divisor. The use of Near-Factorization with wNAF results in a considerable reduction in the effective Hamming weight of the scalar and a reduction in overall computation cost for Koblitz curves.
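As a point of reference for the baseline the paper compares against, a width-w NAF recoding can be sketched as follows (the standard textbook algorithm only; Near-Factorization itself is not reproduced here):

```python
def wnaf(k: int, w: int = 4) -> list[int]:
    """Width-w non-adjacent form of k (least significant digit first).
    Digits are zero or odd with |d| < 2^(w-1); scalar multiplication then needs
    one point addition per nonzero digit, so fewer nonzero digits means less work."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):
                d -= 1 << w
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

k = 0b1011011101111
naf = wnaf(k, 4)
assert sum(d << i for i, d in enumerate(naf)) == k          # recoding is exact
print("nonzero digits:", sum(d != 0 for d in naf), "of", len(naf))
```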
Abstract: Perceptual image hashing has received an increased attention as one of the most important components for content based image authentication in recent years. Content based image authentication using perceptual image hashing is mainly classified into four different categories according to the feature extraction scheme. However, all the recently published literature that belongs to the individual category has its own strengths and weaknesses related to the feature extraction scheme. In this regard, this paper proposes a hybrid approach to improve the performance by combining two different categories: low-level image representation and coarse image representation. The proposed method employs a well-known local feature descriptor, the so-called Histogram of Oriented Gradients (HOG), as the feature extraction scheme in conjunction with Image Intensity Random Transformation (IIRT), Successive Mean Quantization Transform (SMQT), and bit-level permutation to construct a secure and robust hash value. To enhance the proposed method, a Key Derivation Function (KDF) and Error Correction Code (ECC) are applied to generate a stable subkey based on the coarse image representation. The derived subkey is utilized as a random seed in IIRT and HOG feature computation. Additionally, the experimental results are presented and compared with two existing algorithms in terms of robustness, discriminability, and security.
Abstract: At CT-RSA 2014, Armknecht and Mikhalev presented a new technique for increasing the throughput of stream ciphers that are based on Feedback Shift Registers (FSRs) which requires practically no additional memory. The authors provided concise sufficient conditions for the applicability of this technique and demonstrated its usefulness on the stream cipher Grain-128. However, as these conditions are quite involved, the authors raised as an open question if and to what extent this technique can be applied to other ciphers as well. In this work, we revisit this technique and examine its applicability to other stream ciphers. On the one hand we show on the example of Grain-128a that the technique can be successfully applied to other ciphers as well. On the other hand we list several stream ciphers where the technique is not applicable for different structural reasons.
Abstract: Providing sound countermeasures against passive side channel attacks has received large interest in open literature. The scheme proposed in [Ishai et al., 2003] secures a computation against a d-probing adversary splitting it into d+1 shares, albeit with a significant performance overhead (5x to 20x). We maintain that it is possible to apply such countermeasures only to a portion of the cipher implementation, retaining the same computational security, backing a widespread intuition present among practitioners. We provide the sketch of a computationally bound attacker model, adapted as an extension of the one in [Ishai et al., 2003], and detail the resistance metric employed to estimate the computational effort of such an attacker, under sensible assumptions on the characteristic of the device leakage (which is, to the current state of the art, still lacking a complete formalization).
Abstract: This paper defines a model of a special type of digital forensics tools, known as digital media preparation forensic tools, using the formal refinement language Event-B. The complexity and criticality of many types of computer and Cyber crime nowadays combined with improper or incorrect use of digital forensic tools calls for the evidence produced by such tools to be able to meet the minimum admissibility standards the legal system requires, in general implying that it must be generated from reliable and robust tools. Despite the fact that some research and effort has been spent on the validation of digital media preparation forensic tools by means of testing (e.g. within NIST), the verification of such tools and the formal specification of their expected behaviour remains largely under-researched. The goal of this work is to provide a formal specification against which the implementations of such tools can be analysed and tested in the future.
Abstract: Recently, several encryption schemes have been proposed for Random Linear Network Coding (RLNC). The recently proposed lightweight security system for network coding is based on protecting the Global Encoding Vectors (GEV) and using another vector to ensure the encoding process of RLNC at intermediate nodes. However, the current lightweight security scheme poses several practical challenges for deployment in real applications. Furthermore, achieving a high security level results in high computational complexity and adds communication overhead. In this paper, a new scheme is defined that overcomes the drawbacks of the lightweight security scheme and can be used for RLNC real-time data exchange. First, the cryptographic primitive (AES in CTR mode) is replaced by an approach based on a new flexible key-dependent invertible matrix (dynamic diffusion layer). Then, we show that this approach reduces the communication overhead of the GEV from $2\times h$ to $h$ elements. In addition, we demonstrate that besides information confidentiality, both packet integrity and source authentication are attained with minimal computational complexity and memory overhead. Indeed, simulation tests and results lead to the conclusion that the proposed scheme has sufficient security strength and good performance characteristics, permitting an efficient and simple implementation and thus facilitating the integration of this system into many applications that treat security as a principal requirement.
Abstract: We present a steganographic protocol based on linear error-block codes. Recent works have shown that these codes make it possible to increase the number of information-carrier bits within a given cover by exploiting multiple bit planes (not only the LSB plane) of pixels that would not have a perceptible influence on the cover. We employ a parameter, called heterogeneity, to assess the ability of pixels to be modified without perturbing the cover. The quality of the modified cover is handled by tuning a vector of heterogeneity thresholds which determines the number of bit planes that we are allowed to use for each pixel in the cover.
Abstract: As new security intrusions arise, so does the demand for viable intrusion detection systems. These solutions must deal with huge data volumes and high-speed network traffic, and counter new and varied types of security threats. In this paper we combine existing technologies to construct an anomaly-based Intrusion Detection System. Our approach improves the Support Vector Machine classifier by exploiting the advantages of a new swarm intelligence algorithm inspired by the echolocation of microbats (the Bat Algorithm). The main contribution of our paper is a novel feature selection model based on the Binary Bat Algorithm with Lévy flights. To test our model we use the NSL-KDD data set and empirically show that Lévy flights can improve the exploration of the standard Binary Bat Algorithm. Furthermore, our approach enhances the default SVM classifier, and we obtain good performance measures in terms of accuracy (90.06%), attack detection rate (95.05%) and false alarm rate (4.4%) for unknown attacks.
Abstract: In a ubiquitous computing environment it is common for a user to own and use multiple computing devices, but managing cryptographic keys across those devices is a complicated matter. If certificate-based cryptography (PKI) is used such that each device has an independent certificate, then the user has to go through multiple certificate issuing processes with certification authorities (CA) and has to keep multiple private keys secure. If a single user certificate is copied and shared across multiple user devices, then a single exposure of the private key on any device destroys the secrecy of every device. Moreover, each device then has to provide private-key import and export functions, which is a major security weakness that attackers will focus on. In this paper we propose a user-controlled personal key management scheme using a hybrid approach, in which a certificate is used to authenticate the user and self-generated ID keys are used to authenticate the user's computing devices. In this scheme the user operates a personal key management server (PKMS) which plays the role of a personal key generation center (KGC). It is equipped with the user's certified private key as a master key and is used to issue ID private keys to the user's computing devices. Users normally use multiple computing devices equipped with different ID keys and enjoy secure communication with others using ID-based cryptography. We show that the proposed hybrid-style personal key management scheme is efficient in many respects and drastically reduces the user's key management load.
Abstract: Security issues arise permanently in different software products. Making software secure is a challenging endeavour. Static analysis of the source code can help eliminate various security bugs. The better a scanner is, the more bugs can be found and eliminated. The quality of security scanners can be determined by letting them scan code with known vulnerabilities. Thus, it is easy to see how much they have (not) found. We have used the Juliet Test Suite to test various scanners. This test suite contains test cases with a set of security bugs that should be found by security scanners. We have automated the process of scanning the test suite and of comparing the generated results. With one exception, we have only used freely available source code scanners. These scanners were not primarily targeted at security, yielding disappointing results at first sight. We will report on the findings, on the barriers for automatic scanning and comparing, as well as on the detailed results.
Abstract: Snowden's whistleblowing last year made people more aware of the fact that we are living in an era of Internet surveillance, and that the privacy of Internet communication has been disrupted. In this paper, an application for privacy protection in chat communication, named CryptoCloak, is presented. CryptoCloak provides privacy protection for chat communication: encrypted communication is masked with dynamically generated small-talk conversation, so that communication made this way is not a point of interest for mass-surveillance spying engines. For the implementation of CryptoCloak, the Facebook Messenger API is used. The Diffie-Hellman key exchange is done in a clandestine manner - instead of sending a uniform sequence of numbers, sentences are sent. The current version provides an encryption/decryption mechanism for chat communication using the strong symmetric algorithm AES in CBC mode; 256 bits of the Diffie-Hellman exchanged key are used for AES-CBC.
Abstract: Nowadays, users rely on cloud storage as it offers cheap and unlimited data storage that is available to multiple devices (e.g. smartphones, notebooks, etc.). Although these cloud storage services offer attractive features, many customers are not adopting them, since data stored in these services is under the control of the service providers, which makes it more susceptible to security risks. Therefore, in this paper, we address the problem of ensuring data confidentiality against the cloud provider and against accesses beyond authorized rights by designing a secure cloud storage system framework that simultaneously achieves data confidentiality and fine-grained access control on encrypted data. This framework is built on a trusted third party (TTP) service that can be employed either locally on users' machines or premises, or remotely on top of cloud storage services, for ensuring data confidentiality. Furthermore, this service combines multi-authority ciphertext-policy attribute-based encryption (MA-CP-ABE) and attribute-based signatures (ABS) to achieve many-read-many-write fine-grained data access control on storage services. Last but not least, we validate the effectiveness of our design by carrying out a security analysis.
Abstract: Partial fingerprints are likely to be fragmentary or of low quality, which mandates the development of accurate fingerprint verification algorithms. Two fingerprints must be aligned properly in order to measure the similarity between them. Moreover, the common minutiae-based fingerprint recognition methods use only the limited information that is available, which affects the reliability of the output of the fingerprint recognition system, especially when dealing with partial fingerprints. To overcome this drawback, in this research a region-based fingerprint recognition method is proposed in which the fingerprints are compared in a pixel-wise manner by computing their correlation coefficient, so that all the attributes of the fingerprint contribute to the matching decision. Such a technique is promising for accurately recognising a partial fingerprint as well as a full fingerprint, in contrast to minutiae-based fingerprint recognition methods which concentrate on only parts of the fingerprint. The proposed method is based on simple but effective metrics defined to compute local similarities, which are then combined into a global score used to make the match/non-match decision. Extensive experiments on the FVC2002 data set have demonstrated the superiority of our method compared to other well-known techniques reported in the literature.
Abstract: The Security for the Future Networks (SecFuNet) project proposes to integrate secure microcontrollers in order to introduce, among its many services, authentication and authorization functions for Cloud and virtual environments. One of the main goals of SecFuNet is to develop a secure infrastructure for virtualized environments and Clouds in order to provide strong isolation among virtual infrastructures and to guarantee that one virtual machine (VM) does not interfere with others. The goal of this paper is to describe the implementation and experimentation of the solution for identifying users and nodes in the SecFuNet architecture. In this implementation, we also employ low-cost smartcards. Only authorized users are allowed to create or instantiate virtual environments. Thus, users and hypervisors are equipped with secure elements, used to open TLS secure channels with strong mutual authentication.
Abstract: Designing an embedded system is a complex process that involves working on both hardware and software. The first step in the design process is defining functional and non-functional requirements; among them, it is fundamental to also consider security. We propose an effective way for designers to specify security requirements starting from User Security Requirements. User Security Requirements are high-level requirements related to security attacks that the system should be able to withstand. We also provide a mechanism to automatically translate these User Requirements into System Security Requirements, that include a detailed description of security solutions. For expressing requirements we use Unified Modeling Language (UML); specifically, we create a UML profile to describe user requirements and we use model-to-model transformation to automatically generate system requirements. We show the effectiveness of the modeling scheme and of the translation mechanism by applying our methodology to a case study based on wearable devices for e-health monitoring.
Abstract: Smartphones are taking over the IT world, emerging as a prerequisite for other technologies. Emerging technology paradigms such as cloud computing, web data services, online banking and many others are being revamped for compatibility with smartphones. Banking is a vital and critical need in daily life; it involves routine financial transactions among sellers, buyers and third parties. Several payment protocols have been designed for mobile platforms which involve hardware tokens, PINs, credit cards, ATMs, etc. for secure transactions. Many of them are not properly verified and have hidden flaws, and numerous vulnerabilities have been found in existing solutions, which raises a big question about the capability of smartphones to protect users' data. In this paper we propose a secure payment protocol for smartphones that does not use any hardware token. It involves the bank as a transparent entity, and users rely on a payment gateway to complete a successful transaction. The suggested protocol uses symmetric keys, X.509 digital certificates, and two-factor authentication to make financial transactions secure. To prove the secrecy and authentication properties of the protocol, we have formally verified it with AVISPA.
Abstract: Payments through cards have become very popular in today's world. All businesses now have options to receive payments through this instrument; moreover, most organizations store card information of their customers in some way to enable easy payments in the future. Credit card data is very sensitive information, and its theft is a serious threat to any company. Any organization that stores such data needs to achieve payment card industry (PCI) compliance, which is an intricate process. Recently a new paradigm called "tokenization" has been proposed to solve the problem of storing payment card information: instead of the real credit card data, a token is stored. To our knowledge, a formal cryptographic study of this new paradigm has not yet been done. In this paper we formally define the syntax of a tokenization system and several notions of security for such systems. Finally, we provide some constructions of tokenizers and analyze their security in the light of our definitions.
Abstract: Recently, two families of ultra-lightweight block ciphers were proposed, SIMON and SPECK, which come in a variety of block and key sizes (Beaulieu et al., 2013). They are designed to offer excellent performance for hardware and software implementations (Beaulieu et al., 2013; Aysu et al., 2014). In this paper, we study the resistance of SIMON-64/128 with respect to algebraic attacks. Its round function has very low Multiplicative Complexity (MC) (Boyar et al., 2000; Boyar and Peralta, 2010) and very low non-linearity (Boyar et al., 2013; Courtois et al., 2011), since the only non-linear component is the bitwise multiplication operation. Such ciphers are expected to be very good candidates to be broken by algebraic attacks and by combinations with truncated differentials (additional work by the same authors). We algebraically encode the cipher and then, using guess-then-determine techniques, we try to solve the underlying system using either a SAT solver (Bard et al., 2007) or the ElimLin algorithm (Courtois et al., 2012b). We consider several settings where P-C pairs that satisfy certain properties are available, such as low Hamming distance or pairs that follow a strong truncated differential property (Knudsen, 1995). We manage to break faster than brute force up to 10 (of 44) rounds for most cases we have tried. Surprisingly, no key guessing is required if pairs which satisfy a strong truncated differential property are available. This reflects the power of combining truncated differentials with algebraic attacks in ciphers of low non-linearity and shows that such ciphers require a large number of rounds to be secure.
Abstract: Certification has proven to be an essential mechanism for achieving different security properties in new systems. It has important advantages, among which we highlight the increase in user trust obtained by attesting security properties. However, in most cases the system that is the subject of certification is considered to be monolithic, which implies that existing certification schemes do not provide support for the dynamic changes of components required in running Cloud Computing systems. One particularly important issue with current certification schemes is that they refer to a particular version of the product or system, so changes in the system structure require a process of recertification. This paper presents a solution based on a combination of software certification and hardware-based certification techniques. As a key element of our model we make use of Trusted Computing functionalities as the secure element providing mechanisms for the hardware certification part. Our main goal is to bridge the gap between software certification and the means for hardware certification, in order to provide a solution for whole-system certification using Trusted Computing technology.
Abstract: The paper presents the results of experimental distribution of encryption keys based on the random carrier phase of a fading radio signal measured in a multipath environment. A random bit extraction scheme is proposed and tested in practice. The proposed scheme is universal and applicable to digitizing measurements of any observable random variable. An experimental study of the spatial correlation of the multipath signal phase in the case of transverse spatial diversity is carried out. Experimental estimation of the key generation rate and the probability of passive interception at different distances between the legal user and a potential eavesdropper is also performed. It is shown that the parameters of the bit extraction procedure significantly affect the performance and security of the key distribution process.
Abstract: The ongoing need to protect key nodes of network infrastructure has been a pressing issue since the outburst of modern Internet threats. This paper presents ideas on building a novel network-based intrusion prevention system combining the advantages of different types of latest intrusion detection systems. Special attention is also given to means of traffic data acquisition as well as security policy decision and enforcement possibilities. With regard to recent trends in PaaS and SaaS, common deployment specific for private and public cloud platforms is considered.
…financial industry, and even ordinary people have access to super-fast bank transfers and real-time credit card transactions. Bitcoin remains rather the horse carriage of money. In this paper we look at the question of fast transaction acceptance in bitcoin and other crypto currencies. We claim that bitcoin needs to change in order to be able to satisfy the most basic needs of modern users.
Abstract: Privacy of data stored at untrusted servers is an important problem today. A solution to this problem can be achieved by encrypting the outsourced data, but simple encryption does not allow efficient query processing. In this paper we propose a novel scheme for encrypting relational databases so that range queries can be efficiently executed on the encrypted data. We formally define the syntax and security of the problem and specify a scheme called ESRQ1. ESRQ1 uses a deterministic encryption scheme along with bitmap indices to encrypt a relational database. We provide details of the functionality of ESRQ1 and prove its security in the specified model.
Abstract: Most fault-based attacks against the Advanced Encryption Standard aim at altering the temporary value of the message or key during the computation. A few other attacks tamper with the instruction flow in order to reduce the number of round iterations to one or two. In this work, we extend this idea and present fault attacks against the AES algorithm that exploit misbehaviour of the instruction flow during the last round. In particular, we consider faults that cause the algorithm to skip, repeat or corrupt one of the four AES round functions. In principle, these attacks are applicable to both software and hardware implementations, by targeting the execution of instructions or the control logic. We conclude that countermeasures against fault attacks must also cover the instruction flow and not only the processed data.
Abstract: The paper discusses the possibility of secure encryption key distribution based on the stochastic properties of meteor burst radio propagation. Unlike wireless key distribution, this method provides a much greater channel length and key distribution distances of up to 2000 km. Another important advantage is the ability of meteor burst communications to operate in severe climates, under the conditions of polar and other remote areas. The paper also considers various physical factors ensuring stochastic variations in the characteristics of the received radio signal which are applicable to secret key generation. Simulation results revealing the most important randomizing factors within the meteor burst channel are presented.
Abstract: In this paper, we advocate the use of code polymorphism as an efficient means to improve security at several levels in electronic devices. We analyse the threats that polymorphism could help thwart, and present the solution that we plan to demonstrate in the scope of a collaborative research project called COGITO. We expect our solution to be effective to improve security, to comply with the computing and memory constraints of embedded devices, and to be easily generalisable to a large set of embedded computing platforms.
Abstract: Providing reliable explanations for the causes of an access response represents an important improvement in the usability and effectiveness of applications in which users are permitted or denied access to resources. I present an approach composed of two different procedures, both relying on OWL-DL and SWRL rules, to generate policy explanations. The first procedure makes use of OWL Explanation and abductive reasoning. The second uses an Association Rule Learning algorithm to identify, in an inductive way, attributes and states that arise together with policy privileges. The PosSecCo IT Policy language is used in the present paper for representing the policies, but the approach is general enough to be applied in other environments as well.
Abstract: Cloud computing is an emerging IT paradigm providing cost reduction and flexibility benefits. However, security and privacy are serious issues challenging its adoption and sustainability in both social and commercial areas. Public clouds, in particular, present a controversy brought up by the need to exchange critical and protected data (even sensitive data) between heterogeneous domains that are governed by multiple legislations. Access control is one of the essential and traditional security mechanisms for data protection. However, in the context of open and dynamic environments such as clouds, access control becomes more complicated, because the security policies, models and related mechanisms have to be defined across various security domains and enforced in an integrated manner as required. Thus, improving the current access control paradigms is crucial in order to ensure privacy compliance in open and heterogeneous environments. In this paper, we propose a legislation-driven framework which aims to ensure privacy-preserving access control for personal data hosted in public clouds. In addition, the proposed framework deals with the problem of interoperability between the heterogeneous policies governing the processing of personal data in a cloud environment. In this regard, the need for access control delegation is also presented and tackled.
Abstract: In this article, it is shown that a large class of truly chaotic pseudorandom number generators can be constructed. The generators are based on iterating Boolean maps, which are computed using balanced Gray codes; the number of such Gray codes gives the size of the class. The construction of such generators is automatic for small numbers of bits, but remains an open problem when this number becomes large. A running example is used throughout the paper. Finally, first statistical experiments with these generators are presented; they show how efficient and promising the proposed approach appears to be.
Abstract: Despite the immense effort made across computer science disciplines to provide more secure systems, compromising the security of a system has become a very common and stark reality for organizations of all sizes and from a variety of sectors. Laxness in technology has often been cited as the salient cause of systems insecurity. In this paper we advocate the need for a Security Assurance (SA) system to be embedded within current IT systems. Such a system has the potential to address one facet of cyber insecurity: the exploitation of laxness within the deployed security and its underlying policy. We discuss the challenges associated with such an SA assessment and present the flavour of its evaluation and monitoring through an initial prototype. By providing indicators on the status of a security matter that is more and more devolved to the provider, as is the case in the cloud, the SA tool can be used as a means of fostering better security transparency between a cloud provider and a client.
Abstract: Quick Response (QR) codes, used to store machine-readable information, have become very common nowadays and have found many applications in different scenarios. One such application is electronic voting systems. Indeed, some electronic voting systems are starting to take advantage of these codes, e.g. to hold the ballots used to vote, or even as a proof of the voting process. Nevertheless, QR codes are susceptible to steganographic techniques that hide information. This steganographic capability enables a covert channel that can pose an important threat in electronic voting systems. A misbehaving piece of equipment (e.g. one infected with malware) can introduce hidden information into the QR code with the aim of breaking voters' privacy or enabling coercion and vote-selling. This paper shows a method for hiding data inside QR codes and an implementation of a QR writer/reader application with steganographic capabilities. The paper analyses different possible attacks on electronic voting systems that leverage the steganographic properties of QR codes. Finally, it proposes some solutions to detect the mentioned attacks.
Abstract: Role-based access control (RBAC) is the de facto access control model used in current information systems. Cryptographic access control (CAC), on the other hand, is an implementation paradigm intended to enforce AC policies cryptographically. CAC methods are also attractive in cloud environments due to their distributed and offline nature of operation. Fully combining the capabilities of both RBAC and CAC seems elusive, though. This paper studies the feasibility of implementing RBAC with respect to write permissions using a recent type of cryptographic scheme called attribute-based signatures (ABS), which falls under the concept of functional cryptography. We map the functionalities and elements of RBAC to ABS elements and show, with a sample XACML-based architecture, how signature generation and verification conforming to RBAC-type processes could be implemented. | CommonCrawl
A Diophantine equation is a polynomial equation whose solutions are restricted to integers. These types of equations are named after the ancient Greek mathematician Diophantus. A linear Diophantine equation is a first-degree equation of this type. Diophantine equations are important when a problem requires a solution in whole amounts.
How many ways are there to make \(\$2.00\) from only nickels and quarters?
In this problem, the solutions are restricted by the fact that they must be non-negative integers.
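A quick sketch of the count (writing the amounts in cents, with \(x\) nickels and \(y\) quarters — an assumed but natural setup): the equation is
\[ 5x + 25y = 200, \quad \text{i.e.} \quad x + 5y = 40, \]
so \(y\) can be any integer from \(0\) to \(8\) and \(x = 40 - 5y\) is then determined, giving \(9\) ways.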
The study of problems that require integer solutions is often referred to as Diophantine analysis. Although the practical applications of Diophantine analysis have been somewhat limited in the past, this kind of analysis has become much more important in the digital age. Diophantine analysis is very important in the study of public-key cryptography, for example.
The solutions to a Diophantine equation aren't always simple multiples.
Travis is purchasing beverages for an upcoming party. He has $68 to spend. He can purchase packs of cans for $12, or smaller packs of bottles for $8. How many ways are there for him to purchase beverages if he spends all of his money?
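One way to organize the search (with \(x\) packs of cans and \(y\) packs of bottles, both non-negative integers, and assuming "spends all of his money" means the total is exactly \(\$68\)): the equation is
\[ 12x + 8y = 68, \quad \text{i.e.} \quad 3x + 2y = 17, \]
so \(x\) must be odd with \(3x \le 17\), which gives \((x,y) \in \{(1,7),\,(3,4),\,(5,1)\}\) — three ways.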
In many practical problems, solutions will be limited to non-negative integers. However, this is not necessarily true for all problems.
Jack, Charlie, and Andrew went on an egg hunt today, each of them carrying one basket. 300 eggs were hidden at the beginning of the day. At the end of the day, the numbers of eggs in each of the boys' baskets are three consecutive integers.
In how many ways could this happen?
Clarification: Order doesn't matter. For example, in the order of Charlie, Andrew, and Jack, \((3,2,1)\) and \((2,3,1) \) both count as one way.
Not all linear Diophantine equations have a solution.
You may have observed from the examples above that finding solutions to linear Diophantine equations involves finding an initial solution, and then altering that solution in some way to find the remaining solutions. The process of finding this initial solution isn't always as straightforward as the examples above. Fortunately, there is a formal process to finding an initial solution.
One can determine if solutions exist or not by calculating the GCD of the coefficients of the variables, and then determining if the constant term can be divided by that GCD.
If solutions do exist, then there is an efficient method to find an initial solution. The Euclidean algorithm gives both the GCD of the coefficients and an initial solution.
Use the Euclidean algorithm to compute \(\gcd(a,b)=d\), taking care to record all steps.
Determine if \(d\mid n.\) If not, then there are no solutions.
Reformat the equations from the Euclidean algorithm, back-substituting to express \(d\) as an integer combination of \(a\) and \(b\).
This gives \(x_i=7\) and \(y_i=-29\) as a solution to the equation \(141x_i+34y_i=1\).
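For readers who prefer to check such computations mechanically, here is a minimal sketch of the extended Euclidean algorithm in Python; the function name and structure are illustrative rather than taken from the text above.

```python
def extended_gcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) and a*x + b*y = d."""
    if b == 0:
        return a, 1, 0
    d, x1, y1 = extended_gcd(b, a % b)
    # gcd(a, b) = gcd(b, a mod b); back-substitute the coefficients.
    return d, y1, x1 - (a // b) * y1

d, x, y = extended_gcd(141, 34)
print(d, x, y)                      # 1 7 -29, matching the worked example
assert 141 * x + 34 * y == d
```

Scaling the returned pair by \(n/d\) then gives an initial solution of \(ax + by = n\) whenever \(d \mid n\).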
In the example above, an initial solution was found to a linear Diophantine equation. This is just one solution of the equation, however. When integer solutions exist to an equation \(ax+by=n,\) there exist infinitely many solutions.
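For reference, the standard description of the full solution set: if \(d = \gcd(a,b)\) divides \(n\) and \((x_0, y_0)\) is any particular solution, then every integer solution of \(ax + by = n\) has the form
\[ x = x_0 + \frac{b}{d}\,t, \qquad y = y_0 - \frac{a}{d}\,t, \qquad t \in \mathbb{Z}, \]
with each integer \(t\) giving a distinct solution.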
In many problems one then needs to determine the values of the parameter \(t\) for which \(x\) and \(y\) are both positive (or both nonnegative).
Find all integers \(c\) such that the linear Diophantine equation \(52x + 39y = c\) has integer solutions, and for any such \(c,\) find all integer solutions to the equation.
Find the positive integer solutions of the Diophantine equation \(4x + 7y = 97\).
What is the least possible positive integer \(n\) satisfying the congruence above?
An ice cream shop sells 3 flavours of scoops: lime, vanilla, and strawberry. Each customer may choose to buy a single, double, or triple scoop, and no one orders a repeated flavour on the same cone.
For the single scoop, the lime flavor costs 1 dollar each, vanilla 1.5 dollars each, and strawberry 2 dollars each. For double scoops, each order will get a discount of 31 cents off for any combination. For example, the double scoops of lime and strawberry flavors will cost \(1+2-0.31=2.69\) dollars. Finally, for the triple scoops of 3 flavors, it will be discounted to 3.79 dollars.
At the end of the day, 63 lime, 61 vanilla, and 56 strawberry scoops are sold, and the shopkeeper collects 249.75 dollars in total from customers for these sales.
How many customers bought the ice cream? Assume each ice cream is sold to a different person.
has an integer solution \( (x_1, x_2, \ldots, x_k) \) if and only if \(\gcd(a_1,a_2, \ldots, a_k) \) divides \(d\).
As seen above, a general solution to a linear Diophantine equation with two variables has one integral parameter. In general, if it exists, a solution to an equation with \(n\) variables has \(n - 1\) integral parameters.
Consider 3 positive integers \(x, y,\) and \(z\) satisfying the following equation: \(28x+30y+31z=365\).
First, looking closely at the equation, we can notice that the coefficients are the number of days in the months and the right-hand side of the equation is the number of days in a year.
February is the month with 28 days. There are 4 months in the year with 30 days and 7 months consisting of 31 days.
Hence we get a solution \(x=1, y=4\), and \(z=7\). This would give a solution \(x+y+z=12\).
But can we find other positive integer solutions? By using a trial-and-error method, we can check that the only positive integer solutions of the above equation are \((x,y,z)=(1,4,7)\) and \((x,y,z)=(2,1,9)\). Then \(x+y+z\) is always equal to \(12\).
Surprisingly, we can find the value of \(x+y+z\) without solving the above equation.
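One way to see this (a bounding argument reconstructing what is presumably the intended trick): since \(x, y, z \ge 1\),
\[ 28(x+y+z) < 28x + 30y + 31z = 365 \le 31(x+y+z), \]
so \(x+y+z \ge 365/31 > 11\) and \(x+y+z < 365/28 < 14\), i.e. \(x+y+z \in \{12, 13\}\). Writing \(365 = 28(x+y+z) + 2y + 3z\), the case \(x+y+z = 13\) would force \(2y + 3z = 1\), which is impossible for positive \(y, z\); hence \(x+y+z = 12\).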
If your aim is to solve linear congruences rather than equations, then you should check out the Chinese remainder theorem.
The problem of finding the number of ways a set of integers can sum to a certain integer can be solved with a stars and bars approach.
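For completeness, the count referred to here: the number of non-negative integer solutions of \(x_1 + x_2 + \cdots + x_k = n\) is
\[ \binom{n+k-1}{k-1}, \]
which is why the stars and bars approach pairs naturally with counting problems of this type.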
Learn more in our Number Theory course, built by experts for you. | CommonCrawl |
Abstract: We investigate the hard-thresholding method applied to optimal control problems with $L^0(\Omega)$ control cost, which penalizes the measure of the support of the control. As the underlying measure space is non-atomic, arguments from convergence proofs in $l^2$ or $\mathbb R^n$ cannot be applied. Nevertheless, we prove the surprising property that the values of the objective functional are lower semicontinuous along the iterates. That is, the function value at a weak limit point is less than or equal to the lim inf of the function values along the iterates. Under a compactness assumption, we can prove that weak limit points are strong limit points, which enables us to prove certain stationarity conditions for the limit points. Numerical experiments are carried out which show the performance of the method; these indicate that the method is robust with respect to discretization. In addition, we show that solutions obtained by the thresholding algorithm are superior to solutions of $L^1(\Omega)$-regularized problems. | CommonCrawl
For Greek symbols in R Markdown you can use inline formulas, indicated by $ $, i.e. two $ signs. Try this: "If the p-value is equal to or smaller than the significance level $\alpha$ ..."
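A minimal illustration of that inline-math syntax (the surrounding sentences are invented for the example):

```latex
% Inline math inside R Markdown prose uses single dollar signs:
If the p-value is smaller than the significance level $\alpha$, we reject $H_0$.
% The same syntax works for any Greek letter, e.g. an estimated slope $\hat{\beta}_1$
% or the CAPM beta $\beta_i = \operatorname{Cov}(r_i, r_m) / \operatorname{Var}(r_m)$.
```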
β — I'm not sure how to type the beta symbol, but I copy this character from a web browser and then paste it. Hope this helps.
In finance, the beta (β or beta coefficient) of an investment indicates whether the investment is more or less volatile than the market as a whole. Beta is a measure of the risk arising from exposure to general market movements as opposed to idiosyncratic factors.
A closed beta test has a limited number of spots open for testing, while an open beta has either an unlimited number of spots (i.e. anyone who wants to can participate) or a very large number of spots in cases where opening it up to everyone is impractical. | CommonCrawl |
I did Google it and found out that Quicksort is better than Mergesort, but my question is: which of the two is faster?
In the Merge Sort algorithm, when I took an input array of size 2 I got 4 function calls, including the original call with which I invoke the MS algorithm, i.e. MS(1,2), which in turn makes two recursive calls and a call to merge ... function calls. So, how can I analyze the total number of function calls when the input array size is n? Thank you!
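A sketch of how to count them in general (whether the merge procedure is counted is a convention, so adjust to match your course): for $n$ a power of two, the recursive MergeSort calls satisfy $C(n) = 2C(n/2) + 1$ with $C(1) = 1$, giving $2n - 1$ calls; the merge calls add another $n - 1$, for $3n - 2$ in total, which equals $4$ when $n = 2$. A small Python counter to experiment with:

```python
def count_calls(n):
    """Return (mergesort_calls, merge_calls) for sorting n elements."""
    if n <= 1:
        return 1, 0                      # one MS call on a trivial array, no merge
    ms_l, mg_l = count_calls(n // 2)     # left half
    ms_r, mg_r = count_calls(n - n // 2) # right half
    return 1 + ms_l + ms_r, 1 + mg_l + mg_r

for n in (2, 4, 8, 16):
    ms, mg = count_calls(n)
    print(n, ms, mg, ms + mg)            # n=2 -> 3 MS calls + 1 merge = 4 calls
```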
Why not merge sort? We don't swap in merge sort; we just create auxiliary arrays and merge them by changing elements in the original array. Should we consider that as a swap?
Where in a max-heap might the smallest element reside, assuming that all elements are distinct?
Can Merge Sort Time Complexity be O(n^2) in any condition?
Consider a new sorting algorithm similar to the BubbleSort algorithm, called RumbleSort. Given an array as input, RumbleSort attempts to sort the array and produces a sorted array as output. Here's the pseudo-code for RumbleSort. With regards to the above RumbleSort ... algorithm will work correctly for a given input is $\mathcal{O}(n^2)$. Which of the above statements is/are true?
Consider the following array with 7 elements for insertion sort: 25, 15, 30, 9, 99, 20, 26. In how many passes will the given sequence be sorted? (a) 4 passes (b) 5 passes (c) 6 passes (d) More than 6 passes. The answer is 6 passes. Can anyone explain it step by step?
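A short trace may help (a sketch, counting one pass per element inserted — the usual textbook convention):

```python
arr = [25, 15, 30, 9, 99, 20, 26]
for i in range(1, len(arr)):             # passes 1..6 for 7 elements
    key, j = arr[i], i - 1
    while j >= 0 and arr[j] > key:       # shift larger elements one place right
        arr[j + 1] = arr[j]
        j -= 1
    arr[j + 1] = key
    print(f"after pass {i}: {arr}")
# after pass 6: [9, 15, 20, 25, 26, 30, 99] -- hence 6 passes
```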
An array A of size n is known to be sorted except for the first k elements and the last k elements, where k is a constant. Which of the following algorithms will be the best choice for sorting the array A? A.) quick sort B.) insertion sort C.) selection sort D.) bubble sort. I can't understand how insertion sort can be better in this case.
Can anyone help me to understand this problem….??
Is there any standard partitioning scheme in Quicksort, or is all that matters that the pivot gets placed at its correct position? I mean, if only the pivot condition matters, then there are 3!*3! arrangements for the left and right elements, but if there is a standard scheme then each ... after the 1st pass the array will remain as it is and only those elements compared with the minimum will get swapped.
Highest best case implies worst case?
Which of the below-given sorting techniques has the highest best-case runtime complexity? (A) Quick sort (B) Selection sort (C) Insertion sort (D) Bubble sort. Answer: (B). Explanation: Quick sort's best-case time complexity is $O(n \log n)$; Selection sort ... I did not understand this — shouldn't the best sorting method have an $O(n)$ best case? What does "highest best case" mean?
Please correct me if any of the points is wrong. Quicksort: 1. Needs more random accesses. 2. Used when random access is fast (hence preferred on arrays and not on linked lists). 3. No extra space needed ==> in-place. 4. Not a stable sorting algorithm. ... Quicksort in particular exhibits good cache locality, and this makes it faster than merge sort in many cases, e.g. in a virtual memory environment.
Which of the following sorting algorithms is represented by the above code?
An array of size n is known to be sorted except for the 1st k elements and the last k elements, where k is a constant. Which of the following algorithms is the best choice for sorting the array A — quick sort or insertion sort? The given answer is insertion ... k), and it will take O(k log k) in the average case and O(k^2) in the worst case. What's wrong with that?
explain how to solve the above question !
The unusual $\Theta(n^2)$ implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already-sorted part of the array. If, instead, we use binary search to identify the ... case will be $O(n \log n)$, because no matter what, the binary search will be performed for every element. Can someone confirm?
Is Quicksort an adaptive sorting algorithm? I think not, because as per the definition given in Wikipedia, an adaptive sorting algorithm is one that takes advantage of the preorderedness of the input, whereas in the case of Quicksort preorderedness acts as a disadvantage.
What is the time complexity to find the Kth largest element in a Min-Heap? Or equivalently, What is the time complexity to find Kth smallest element in Max-Heap? | CommonCrawl |
According to Wikipedia (https://en.wikipedia.org/wiki/Conjugate_prior) the gamma distribution is a conjugate prior for the exponential distribution (with unknown rate-parameter, $\lambda$, and hyperparameters $\alpha$ and $\beta$). Moreover the posterior predictive is the Lomax (a.k.a. Pareto type II) distribution.
While I have no doubt that these results are correct I have not been able to find any proof leading to the Lomax distribution (the part concerning the gamma distribution is easy to find). I would appreciate if someone would share a reference.
It's easy to show if you have an integer value for $\alpha$ in the prior.
Which matches the density given here with $\lambda = \beta^*$ and $\alpha = \alpha^*$.
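For reference, here is the short computation with posterior parameters $\alpha^*$ and $\beta^*$ (it goes through for any $\alpha^* > 0$, not just integer values):

$$p(x \mid \text{data}) = \int_0^\infty \lambda e^{-\lambda x}\,\frac{(\beta^*)^{\alpha^*}}{\Gamma(\alpha^*)}\,\lambda^{\alpha^*-1} e^{-\beta^*\lambda}\,\mathrm{d}\lambda = \frac{(\beta^*)^{\alpha^*}}{\Gamma(\alpha^*)}\cdot\frac{\Gamma(\alpha^*+1)}{(\beta^*+x)^{\alpha^*+1}} = \frac{\alpha^*\,(\beta^*)^{\alpha^*}}{(\beta^*+x)^{\alpha^*+1}}, \qquad x > 0,$$

which is the Lomax (Pareto type II) density with shape $\alpha^*$ and scale $\beta^*$.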
Can the parameter of prior probability depends on data? | CommonCrawl |
Abstract: The paper is devoted to the applications of the theory of dynamical systems to the theory of transport phenomena in metals in the presence of strong magnetic fields. More precisely, we consider the connection between the geometry of the trajectories of dynamical systems arising at the Fermi surface in the presence of an external magnetic field and the behavior of the conductivity tensor in a metal in the limit $\omega _B\tau \to \infty $. We describe the history of the question and investigate special features of such behavior in the case of the appearance of trajectories of the most complex type on the Fermi surface of a metal. | CommonCrawl |
D. Stone, Sudakin, D. L., and Jenkins, J. J., "Longitudinal trends in organophosphate incidents reported to the National Pesticide Information Center, 1995–2007", Environmental Health, vol. 8, p. 18, 2009.
K. E. Warner and Jenkins, J. J., "Effects of 17$\alpha$-ethinylestradiol and bisphenol A on vertebral development in the fathead minnow (Pimephales promelas)", Environmental Toxicology and Chemistry, vol. 26, pp. 732–737, 2007.
J. F. Sandahl, Baldwin, D. H., Jenkins, J. J., and Scholz, N. L., "A sensory system at the interface between urban stormwater runoff and salmon survival", Environmental Science & Technology, vol. 41, pp. 2998–3004, 2007.
J. F. Sandahl, Baldwin, D. H., Jenkins, J. J., and Scholz, N. L., "Comparative thresholds for acetylcholinesterase inhibition and behavioral impairment in coho salmon exposed to chlorpyrifos.", Environ Toxicol Chem, vol. 24, no. 1, pp. 136-45, 2005.
J. F. Sandahl, Baldwin, D. H., Jenkins, J. J., and Scholz, N. L., "Odor-evoked field potentials as indicators of sublethal neurotoxicity in juvenile coho salmon (Oncorhynchus kisutch) exposed to copper, chlorpyrifos, or esfenvalerate", Canadian Journal of Fisheries and Aquatic Sciences, vol. 61, pp. 404–413, 2004.
D. B. Buchwalter, Jenkins, J. J., and Curtis, L. R., "Temperature influences on water permeability and chlorpyrifos uptake in aquatic insects with differing respiratory strategies.", Environ Toxicol Chem, vol. 22, no. 11, pp. 2806-12, 2003.
J. F. Sandahl and Jenkins, J. J., "Pacific steelhead (Oncorhynchus mykiss) exposed to chlorpyrifos: benchmark concentration estimates for acetylcholinesterase inhibition.", Environ Toxicol Chem, vol. 21, no. 11, pp. 2452-8, 2002.
B. J. Bailey and Jenkins, J. J., "Association of azinphos-methyl with rat erythrocytes and hemoglobin", Archives of toxicology, vol. 74, pp. 322–328, 2000. | CommonCrawl |
In the 2012 Olympics Usain Bolt won the 100 metres gold medal with a time of $9.63$ seconds.
Asafa Powell was participating in that same race.
He achieved a time of $11.99 \ \mathrm s = 1.245 \times 9.63 \ \mathrm s$.
So Bolt's average speed was $1.245$ times the average speed of Powell.
Hence, at some point, Bolt was actually running at a speed exactly $1.245$ times that of Powell's. | CommonCrawl |
which was found in this question.
In the Python code under consideration, Newton-Krylov iterations are applied and perform quite nicely for $\alpha=10$. However, the method fails when $\alpha$ is changed from 10 to 15 or more. What might be the reason for this behaviour, and is it possible to improve the situation?
Any references are appreciated, thank you.
This is just some random equation. There's no physical or other background for it.
This information makes further discussion of the problem much less interesting, so I'd rather close this question.
You can see that integrating it over any domain will never give you zero. If the diffusion of $P$ ($\nabla^2 P$) is not large enough to counteract this limit then you'll likely have problems.
Generally speaking, poor convergence (or no convergence at all) of an iterative method that is solving a system of linear algebraic equations is attributed to the fact that the corresponding matrix is ill-conditioned. In mechanical problems, that may be observed if the model is "loose" (no sufficient boundary conditions applied, or if the model consists of materials with very different elastic properties, like, say, steel and foam). For heat transfer problems, that would probably correspond to cases when the physical model consists of different materials with vastly different heat transfer coefficients.
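For concreteness, here is a minimal Newton-Krylov setup with `scipy.optimize.newton_krylov`. The residual is modelled on the well-known example from the SciPy documentation (a Poisson-type equation with a `cosh` source term scaled by `alpha`), which appears to be close to the problem being discussed — treat it as an illustrative reconstruction, not the asker's actual code.

```python
import numpy as np
from scipy.optimize import newton_krylov

nx, ny = 75, 75
hx, hy = 1.0 / (nx - 1), 1.0 / (ny - 1)
P_left, P_right, P_top, P_bottom = 0.0, 0.0, 1.0, 0.0
alpha = 10.0                                   # the parameter discussed above

def residual(P):
    """Discretised Laplacian of P minus alpha*cosh(P).mean()**2, Dirichlet data."""
    d2x = np.zeros_like(P)
    d2y = np.zeros_like(P)
    d2x[1:-1] = (P[2:] - 2 * P[1:-1] + P[:-2]) / hx / hx
    d2x[0]    = (P[1] - 2 * P[0] + P_left) / hx / hx
    d2x[-1]   = (P_right - 2 * P[-1] + P[-2]) / hx / hx
    d2y[:, 1:-1] = (P[:, 2:] - 2 * P[:, 1:-1] + P[:, :-2]) / hy / hy
    d2y[:, 0]    = (P[:, 1] - 2 * P[:, 0] + P_bottom) / hy / hy
    d2y[:, -1]   = (P_top - 2 * P[:, -1] + P[:, -2]) / hy / hy
    return d2x + d2y - alpha * np.cosh(P).mean() ** 2

guess = np.zeros((nx, ny))
sol = newton_krylov(residual, guess, method='lgmres', verbose=1)
print('max |residual| =', np.abs(residual(sol)).max())
```

Raising `alpha` in such a setup is a simple way to experiment with the breakdown described in the question and the conditioning issues described in the answer above.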
Volume 23, Number 11/12 (2010), 1151-1157.
We consider an estimate of the life span of solutions on a semilinear heat equation. In one dimension, we show that the life span may be estimated from above in terms of the average of two limits as $x\to\pm\infty$ of the initial data. In general dimensions, under some monotonicity conditions for initial data, an explicit representation of the uniform norm of the life span of solutions is obtained.
Differential Integral Equations, Volume 23, Number 11/12 (2010), 1151-1157. | CommonCrawl |
Abstract: We consider the Bayesian problem of estimating the success probability in a series of conditionally independent trials with binary outcomes. We study the asymptotic behaviour of the weighted differential entropy for the posterior probability density function conditional on $x$ successes after $n$ conditionally independent trials, as $n\to\infty$. Suppose that one is interested in knowing, with high precision, whether the coin is approximately fair, and for large $N$ is interested in the true frequency. In other words, the statistical decision is particularly sensitive in a small neighbourhood of the particular value $\gamma=1/2$. For this purpose the concept of weighted differential entropy introduced in … is used, where it is necessary to emphasize the frequency $\gamma$. It was found that a weight of the suggested form does not change the asymptotic form of the Shannon, Renyi, Tsallis and Fisher entropies, but changes the constants. The leading term in the weighted Fisher information is changed by a constant which depends on the distance between the true frequency and the value we want to emphasize.
Keywords and phrases: weighted differential entropy, Bernoulli random variable, Renyi entropy, Tsallis entropy, Fisher information. | CommonCrawl
How do you calculate cycles in a symmetry group of a 5x5 square?
I need to find all the different colorings of the 5x5 square in $n$ colors under a symmetry group using Burnside's lemma. I get that if $c$ is the number of colorings, then $c=\frac{1}{|G|}\sum_{g\in G}|\mathrm{St}(g)|$, and $|\mathrm{St}(g)|$ is $n^t$, where $t$ is the number of cycles that $g$ creates. Am I right that $|G|=8$? How do I calculate the number of cycles?
I'm not $100\%$ sure precisely what this question is asking regarding the stated problem so I'll just go through the problem itself.
Firstly, how you answer the question depends on what symmetries you want the $5\times 5$ square to have. If just rotations then $|G_1|= 4$, if rotations and reflections then $|G_2|=8$. The groups act on the $25$ cells of the square so it is appropriate to label cells in some way. I'll use the following labels but it doesn't really matter how you do it as long as cells are distinct.
meaning: "cell $1$ is replaced by cell $5$, cell $5$ by cell $25$, cell $25$ by cell $21$ and so forth". If we use $z_k$ to represent a cycle of length $k$ then this example permutation is of the form $z_4^6z_1^1$.
Is this graph coloring problem solved correctly?
How many of all cube's edges 3-colorings have exactly 4 edges for each color?
How many ways can you color the edges of a square with 4 colors? | CommonCrawl |
A second-year graduate student; can be contacted through zzymax1996[dot]gmail.com.
13 Is the unit group of any finitely generated reduced $\Bbb Z$ algebra finitely generated?
10 When does there exist a section of $GL_n(\mathbb Z_p) \rightarrow GL_n(\mathbb F_p)$?
9 Can Yoneda lemma for smooth projective varieties only use curves? | CommonCrawl |
It is a well-established heuristic that most orbits for Bianchi 8 and Bianchi 9 cosmologies are at late times well approximated by a sequence of Bianchi 2 orbits. I will use this heuristic to construct an approximate Poincare-map $\Phi_0$ and will show that this map approximates the actual return-map in the C^0-norm. Since the Bianchi ODE is polynomial, we can complexify it and will prove that (in a reasonable domain) the same estimates hold, thus yielding $C^\infty$-estimates via Cauchy's integral formula. I will present (and roughly sketch the proof of) two corollaries of these estimates, which can be summarised by the following: 1. Stable foliation. For almost every base-point, there is an analytic (in the interior) codimension 1 stable manifold attached. 2. Positive measure. The union of these attached stable manifolds has positive measure.
We present a notion of entropy for domains in Eulcidean space which modifies Perelman's entropy introduced for closed Riemannian manifolds. We'll discuss basic properties of this quantity and explain how it is related to control of local volume ratios for the domains under consideration and how it may prevent local volume collapse for families of evolving domains. If time permits we will also show a natural connection between the entropy and Harnack inequalities for the backward heat equation.
We introduce the MAP-kinase cascade, a pattern of chemical reactions which is an important element of many signaling pathways in cells. This process may be modelled in different ways. A common feature of models/numerical simulations/experiments is the existence of periodic orbits due to relaxation oscillation. We will present a rigorous proof of this phenomenon for a model with feedback control, due to Gedeon and Sontag; express some critique of their model and results; and finally discuss the model we propose to study.
I will report on an example by Ninomiya et al. where an ODE system with a global attractor, combined with a diffusion term, surprisingly gives rise to blow-up solutions. This phenomenon is reminiscent of Turing instability, where diffusion destabilizes a stable ODE equilibrium.
I will discuss the blow-up behaviour of the one-dimensional heat equation with quadratic nonlinearity in complex time. The first talk will give an introduction to the problems of: 1. Existence and uniqueness of solutions 2. Global behaviour 3. Analytic continuation beyond the blow-up time.
We define the Ricci flow on R^n and show that certain warped product solutions of Ricci flow are equivalent to solutions of a system of PDEs. Next we explain how one can get estimates for geometric quantities by means of a maximum principle. We sketch what collapsing of such a warped product solution means. Finally we indicate how one can obtain Gaussian estimates for Ricci flow by considering the example of the heat equation on R^n.
We consider elliptic differential-difference equations with several nonnegative difference operators. The interest in such operators is due to their fundamentally new properties compared with even strongly elliptic differential-difference operators as well as due to applications of the obtained results to certain nonlocal problems arising in the plasma theory. We obtain a priori estimates of solutions. In addition, using these estimates, we can show that the considered operator is sectorial, construct its Friedrichs extension, and prove a theorem on the smoothness of solutions.
I will give an introduction to the theory of Optimal Transport. This notion defines a natural metric - the Wasserstein distance - in the space of probability measures. I will show that certain types of PDEs can be viewed as gradient flows with respect to that metric and how this interpretation can be used to establish estimates for convergence rates in these equations.
We consider one of the possible generalizations of the notion of the shadowing property to actions of nonabelian groups, and we consider how the classical shadowing lemma can be generalized to this case. An important example will be the Baumslag-Solitar group.
The BKL conjecture states that the approach to the initial singularity is vacuum-dominated, local and oscillatory. The highly symmetric Bianchi cosmologies play an important role in this BKL picture, as they are believed to capture the essential dynamics of more general solutions. A detailed study of Takens' linearization theorem and the non-resonance conditions leads us to a new result in Bianchi class A: we are able to show, for the first time, that for admissible periodic heteroclinic chains in Bianchi IX there exist C1-stable manifolds of orbits that follow these chains towards the big bang. We also study Bianchi models of class B, where no rigorous results exist to date. We find an example of a periodic heteroclinic chain that allows Takens linearization at all base points and give some arguments why it qualifies as a candidate for proving the first rigorous convergence theorem in class B. We conclude with an outlook on future research on the chaotic dynamics of the Einstein equations towards the big bang - in order to shed a little more light on our "tumbling universe" at birth.
Chimera states are coherence-incoherence patterns observed in homogeneous discrete oscillatory media with non-local coupling. Despite their nontrivial dynamical nature, such patterns can be effectively analyzed in terms of the thermodynamic limit formalism. In particular, using statistical physics concept of local mean field and the Ott-Antonsen invariant manifold reduction, one can explain typical bifurcation scenarios leading to the appearance of chimera states and provide their reasonable classification.
In Bianchi 8 and 9 cosmologies, estimates for the transit near the Kasner circle of equilibria are essential for questions regarding long-time dynamics. In fall 2012, I presented new estimates on the transit in a complexified version of the Bianchi differential equations (thus allowing estimates on derivatives to be obtained by Cauchy integrals). This talk will focus on the technical details of the proof of these estimates, using a perturbative ansatz, elementary but lengthy estimates on various integral operators, and a variant of Schauder's fixed point theorem.
In this talk, I will briefly explain phenomena such as the Belousov-Zhabotinsky reaction to show how spiral patterns arise and why they are important and interesting. Then we may ask: what are the corresponding mathematical models for spiral patterns? The most intuitive approach is perhaps reaction-diffusion systems; I will show how to derive such systems from the special Euclidean symmetry SE(2). Another approach is the kinematic model, which regards a spiral as a curvature flow along the normal direction of a given planar curve. Finally, I will focus on the kinematic model and show how to prove the existence of rotating spirals.
Many processes in living cells are controlled by biochemical substances regulating active stresses. The cytoplasm is an active material with both viscoelastic and liquid properties. First, we incorporate the active stress into a two-phase model of the cytoplasm which accounts for the spatiotemporal dynamics of the cytoskeleton and the cytosol. The cytoskeleton is described as a solid matrix that together with the cytosol as interstitial fluid constitutes a poroelastic material. We find different forms of mechanochemical waves including traveling, standing and rotating waves by employing linear stability analysis and numerical simulations in one and two spatial dimensions. In a second step, we expand the chemo-mechanical model in order to model the manifold contraction patterns observed experimentally in protoplasmic droplets of Physarum polycephalum. To achieve this, we combine a biophysically realistic model of a calcium oscillator with the poroelastic model derived in the first part of the talk and assume that the active tension is regulated by calcium. With the help of two-dimensional simulations the model is shown to reproduce the contraction patterns observed in protoplasmic droplets as well as a number of other traveling and standing wave patterns. Joint work with Markus Radszuweit (PTB, TU Berlin), S. Alonso (PTB), H. Engel (TU Berlin). | CommonCrawl |