url
stringlengths 14
1.76k
| text
stringlengths 100
1.02M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://academy.vertabelo.com/course/postgresql-queries-basics/single-table/logic/between | Best April deals - hours only!Up to 80% off on all courses and bundles.-Close
See the whole table
Select some columns
Filtering rows
Logic
11. The BETWEEN operator
Text patterns
To be or NULL to be
A little bit of mathematics
Let's practice
Instruction
Good. Now, if you want to find users whose age is between 13 and 70, you can of course use the previous example:
SELECT id, name
FROM user
WHERE age <= 70
AND age >= 13;
But there is also another way of writing the example above. Take a look:
SELECT id, name
FROM user
WHERE age BETWEEN 13 AND 70;
We introduced a new keyword BETWEEN which simply means that we look for rows with the age column set to be anything between 13 and 70, including these values.
Exercise
Select the vin, brand, and model columns of all cars which were produced between 1995 and 2005.
Stuck? Here's a hint!
Type:
SELECT
vin,
brand,
model
FROM car
WHERE production_year BETWEEN 1995 AND 2005; | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1734258234500885, "perplexity": 3291.781695080702}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737019.4/warc/CC-MAIN-20200806180859-20200806210859-00207.warc.gz"} |
https://chem.libretexts.org/Courses/Eastern_Wyoming_College/EWC%3A_Introductory_Chemistry_(Budhi)/13%3A_Solutions/13.07%3A_Solution_Dilution | # 13.7: Solution Dilution
Skills to Develop
• Explain how concentrations can be changed in the lab
• Understand how stock solutions are used in the laboratory
We are often concerned with how much solute is dissolved in a given amount of solution. We will begin our discussion of solution concentration with two related and relative terms - dilute and concentrated.
• dilute solution is one in which there is a relatively small amount of solute dissolved in the solution.
• concentrated solution contains a relatively large amount of solute.
These two terms do not provide any quantitative information (actual numbers) - but they are often useful in comparing solutions in a more general sense. These terms also do not tell us whether or not the solution is saturated or unsaturated, or whether the solution is "strong" or "weak". These last two terms will have special meanings when we discuss acids and bases, so be careful not to confuse these terms.
### Stock Solutions
It is often necessary to have a solution whose concentration is very precisely known. Solutions containing a precise mass of solute in a precise volume of solution are called stock (or standard) solutions. To prepare a standard solution a piece of lab equipment called a volumetric flask should be used. These flasks range in size from 10 mL to 2000 mL are are carefully calibrated to a single volume. On the narrow stem is a calibration mark. The precise mass of solute is dissolved in a bit of the solvent and this is added to the flask. Then enough solvent is added to the flask until the level reaches the calibration mark.
Often it is convenient to prepare a series of solutions of known concentrations by first preparing a single stock solution as described in the previous section. Aliquots (carefully measured volumes) of the stock solution can then be diluted to any desired volume. In other cases it may be inconvenient to weigh accurately a small enough mass of sample to prepare a small volume of a dilute solution. Each of these situations requires that a solution be diluted to obtain the desired concentration.
### Dilutions of Stock (or Standard) Solutions
Imagine we have a salt water solution with a certain concentration. That means we have a certain amount of salt (a certain mass or a certain number of moles) dissolved in a certain volume of solution. Next we willl dilute this solution - we do that by adding more water, not more salt:
$$\rightarrow$$
Before Dilution After Dilution
The molarity of solution 1 is
$M_1 = \dfrac{\text{moles}_1}{\text{liter}_1}$
and the molarity of solution 2 is
$M_2 = \dfrac{\text{moles}_2}{\text{liter}_2}$
rearrange the equations to find moles:
$\text{moles}_1 = M_1 \text{liter}_1$
and
$\text{moles}_2 = M_2 \text{liter}_2$
What stayed the same and what changed between the two solutions? By adding more water, we changed the volume of the solution. Doing so also changed it's concentration. However, the number of moles of solute did not change. So,
$moles_1 = moles_2$
Therefore
$\boxed{M_1V_1= M_2V_2 } \label{diluteEq}$
where
• $$M_1$$ and $$M_2$$ are the concentrations of the original and diluted solutions and
• $$V_1$$ and $$V_2$$ are the volumes of the two solutions
Preparing dilutions is a common activity in the chemistry lab and elsewhere. Once you understand the above relationship, the calculations are easy to do.
Suppose that you have $$100. \: \text{mL}$$ of a $$2.0 \: \text{M}$$ solution of $$\ce{HCl}$$. You dilute the solution by adding enough water to make the solution volume $$500. \: \text{mL}$$. The new molarity can easily be calculated by using the above equation and solving for $$M_2$$.
$M_2 = \frac{M_1 \times V_1}{V_2} = \frac{2.0 \: \text{M} \times 100. \: \text{mL}}{500. \: \text{mL}} = 0.40 \: \text{M} \: \ce{HCl}$
The solution has been diluted by one-fifth since the new volume is five times as great as the original volume. Consequently, the molarity is one-fifth of its original value.
Another common dilution problem involves deciding how much of a highly concentrated solution is required to make a desired quantity of solution of lesser concentration. The highly concentrated solution is typically referred to as the stock solution.
Example $$\PageIndex{1}$$: Diluting NITRIC ACID
Nitric acid $$\left( \ce{HNO_3} \right)$$ is a powerful and corrosive acid. When ordered from a chemical supply company, its molarity is $$16 \: \text{M}$$. How much of the stock solution of nitric acid needs to be used to make $$8.00 \: \text{L}$$ of a $$0.50 \: \text{M}$$ solution?
SOLUTION
Steps for Problem Solving
Identify the "given"information and what the problem is asking you to "find."
Given:
M1, Stock $$\ce{HNO_3} = 16 \: \text{M}$$
$$V_2 = 8.00 \: \text{L}$$
$$M_2 = 0.50 \: \text{M}$$
Find: Volume stock $$\ce{HNO_3} \left( V_1 \right) = ? \: \text{L}$$
List other known quantities
none
Plan the problem
First, rearrange the equation algebraically to solve for $$V_1$$.
$V_1 = \frac{M_2 \times V_2}{M_1}$
Calculate and cancel units
Now substitute the known quantities into the equation and solve.
$V_1 = \frac{0.50 \: \text{M} \times 8.00 \: \text{L}}{16 \: \text{M}} = 0.25 \: \text{L} = 250 \: \text{mL}$
Think about your result. $$250 \: \text{mL}$$ of the stock $$\ce{HNO_3}$$ needs to be diluted with water to a final volume of $$8.00 \: \text{L}$$. The dilution is by a factor of 32 to go from $$16 \: \text{M}$$ to $$0.5 \: \text{M}$$.
Exercise $$\PageIndex{1}$$
A 0.885 M solution of KBr whose initial volume is 76.5 mL has more water added until its concentration is 0.500 M. What is the new volume of the solution? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8149338960647583, "perplexity": 745.1595362403518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578742415.81/warc/CC-MAIN-20190425213812-20190425235812-00226.warc.gz"} |
https://www.computer.org/csdl/trans/lt/2008/04/tlt2008040235.html | # Video Collaboratories for Research and Education: An Analysis of Collaboration Design Patterns
Roy Pea, IEEE
Robb Lindgren, IEEE
Pages: pp. 235-247
Abstract—Web-based video collaboration environments have transformative potentials for video-enhanced education and for video-based research studies. We first describe DIVER, a platform designed to solve a set of core challenges we have identified in supporting video collaboratories. We then characterize five Collaboration Design Patterns (CDPs) that emerged from numerous collaborative groups who appropriated DIVER for their video-based practices. Collaboration Design Patterns (CDPs) are ways of characterizing interaction patterns in the uses of collaboration technology. Finally, we propose a three-dimensional design matrix for incorporating these observed patterns. This representation can serve heuristically in making design suggestions for expanding video collaboratory functionalities and for supporting a broader constellation of user groups than those spanned by our observed CDPs.
Index Terms—Video, collaboration, CSCL, collaboratory, design patterns, Web 2.0, metadata, video analysis.
## Introduction
We argue in this paper that the proliferation of digital video recording, computing, and Internet communications in the contexts of social sciences research and learning technologies has opened up dramatic new possibilities for creating "video collaboratories." Before describing our vision for video collaboratories and our experiences in designing and implementing the DIVER platform for enabling video collaboration, we briefly sketch the historical developments in video use leading to the opportunities at hand.
Throughout the 20th century, film (and, later, video) technology was an influential medium for capturing rich multimedia records of the physical and social worlds for educational purposes [ 1] and for research uses in the social sciences [ 2]. While K-20 education is still largely dominated by textual representations of information and uses of static graphics and diagrams, during the 21st century the education and research communities have more regularly exploited video technologies, and in increasingly innovative ways. For example, with the development of more learner-centered pedagogy, uses of video are expanding from teachers simply showing videos to students to approaches where learners interact with, create, or comment on video resources as part of their knowledge-building activities [ 3], [ 4], [ 5], [ 6]. In research, with the proliferation of inexpensive digital consumer video cameras and software for video editing and analysis, individual researchers and research teams are capturing more data for studying the contextual details of learning and teaching processes, and the learning sciences community has begun experimenting with collaborative research infrastructures surrounding video data sets [ 7], [ 8], [ 9], [ 10].
We view the Web 2.0 participatory media culture illustrated by media-sharing community sites [ 11] as exemplifying how new forms of collaboration and communication have important transformative potentials for more deeply engaging the learner in authentic forms of learning and assessment that get closer to the experiences of worldly participation rather than more traditional decontextualized classroom practices. Video representations provide a medium of great importance in these transformations in capturing the everyday interactions between people in their physical environments during their engagements in cultural practices and using the technologies that normally accompany them in their movement across different contexts. For these reasons, video is important both as a medium for studies of learning and human interaction and for educational interventions. In research, the benefits of video have been well evidenced. Learning scientists have paid increasing attention over the past two decades to examining human activities in naturalistic sociocultural contexts, with an expansion of focus from viewing learning principally as an internal cognitive process toward a view of learning that is also constituted as a complex social phenomenon involving multiple agents, symbolic representations, and environmental features and tools to make sense of the world and one another [ 12], [ 13].
This expansion in the central focus for studies of learning, thinking, and human practices was deeply influenced by contributions involving close analyses of video and audio recordings from conversation analysis, sociolinguistic studies of classroom discourse, anthropological and ethnographic inquiries of learning in formal and informal settings, and studies of socially significant nonverbal behaviors such as "body language" or kinesics, gesture, and gaze patterns. This orientation to understanding learning in situ has led researchers to search for tools allowing for the capture of the complexity of real life learning situations, where multiple simultaneous "channels" of interaction are potentially relevant to achieving a deeper understanding of learning behavior [ 2]. Uses of film and audio-video recordings have been essential in allowing for the repeated and detailed analyses that researchers have used to develop new insights about learning and cultural practices [ 14], [ 15]. In research labs throughout departments of psychology, education, sociology, linguistics, communication, (cultural) anthropology, and human-computer in-teraction, researchers work individually or in small collaborative teams—often across disciplines—for the distinctive insights that can be brought to the interpretation and explanation of human activities using video analysis. Yet, there has been relatively little study of how distributed groups make digital video analysis into a collaborative enterprise, nor have there been tools available that effectively structure and harvest collective insights.
We are inspired by the remarkable possibilities for establishing "video collaboratories" for research and for educational purposes [ 7], [ 16]. In research-oriented video collaboratories, scientists will work together to share video data sets, metadata schemes, analysis tools, coding systems, advice and other resources, and build video analyses together, in order to advance the collective understanding of the behaviors represented in digital video data. Virtual repositories with video files and associated metadata will be stored and accessed across many thousands of federated computer servers. A large variety of types of interactions are increasingly captured in video data, with important contexts including K-20 learning—as in ratio and proportion in middle school mathematics or college reasoning about mechanics, parent-child or peer-peer situations in informal learning, surgery and hospital emergency rooms and medical education, aircraft cockpits or other life-critical control centers, focus group meetings or corporate workgroups, deaf sign language communications, and uses of various products in their everyday environments to help guide new design (including cars, computers, cellphones, household appliances, medical devices), and so on. Corresponding opportunities exist for developing education-centered video collaboratories for the purposes of technology-enhanced learning and teaching activities that build knowledge exploiting the fertile properties we have mentioned of audio-video media. It is our belief that enabling scientific and educational communities to develop flexible and sustained interactions around video analysis and interpretation will help accelerate advances across a range of disciplines, as the development of their collective intelligence is facilitated.
We recognize how this vision of widespread digital video collaboratories used throughout communities for research and for education presents numerous challenges. The process of elucidating and addressing these challenges can be aided considerably by exploring emerging efforts to support collaborative video practices. In this paper, we describe the features of a particular digital video collaboratory known as DIVER. Using the large volume of data that we have collected from DIVER users, we are able to describe the substantial challenges associated with establishing collaboration around digital video using examples from real-world research and educational practices. This data set also permits us to extrapolate future collaboration possibilities and the new challenges they create. We end by presenting dimensions for organizing our vision of digital video collaboraties that we hope will provide entry points for researchers and designers to engage in its further realization.
## DIVER: Digital Interactive Video Explorationand Reflection
### 2.1 The Need for Supporting Video Conversations
We can distinguish three genres of video in collaboration. The first is videoconferencing, which establishes synchronous virtual presence (ranging from Skype/iChat video on personal computers to dedicated room-based videoconferencing systems such as HP's Halo)—where video is the medium, as collaboration occurs via video. The second is video cocreation (e.g., Kaltura, Mediasilo)—where video is the objective, and the collaboration is about making video. The third is video conversations—where video is the content, and the collaboration is about the video. We feel that video conversations are a vital video genre for learning and education because conversational contributions about videos often carry content as or more important than the videos themselves—the range of interpretations and connections made by different people, which provides new points of view [ 8], [ 9] and generates important conceptual diversity [ 17]. For decades, video has been broadcast-centric—consider TV, K-12 education or corporate training films, and e-learning video. But with the growth of virtual teams, we need a multimediated collaboration infrastructure for sharing meaning and iterative knowledge building across multiple cultures and perspectives. We need a video infrastructure that is more interaction-centric—for people to communicate deeply, precisely, and cumulatively about the video content.
In our vision of video collaboratories, effectively supporting video conversations requires more than the capabilities of videoconferencing and net meetings. One requirement concerns a method for pointing to and annotating parts of videos—analogous to footnoting for text—where the scope of what one is referring to can be made readily apparent. We are beginning to see this capability emerge with interactive digital video applications online. In June 2008, YouTube enabled users to mark a spotlight region and create a pop-up annotation with a start and end time in the video stream. Flash note overlays on top of video streams are also provided in the popular Japanese site Nico Nico Douga, launched in December 2006, where users can post a chat message at a specific moment in the video and other chat messages other users have entered at that time point in the video stream together across the video as it plays. Similar capabilities of "deep tagging" of video were illustrated in the past few years by BubblePly, Click.tv, Eyespot, Gotoit, Jumpcut, Motionbox, Mojiti (acquired by News Corporation/CBS joint venture Hulu), Veotag, and Viddler.
While the virtual pointing requirement is necessary for supporting online video conversations, it is not sufficient. It is important to distinguish between simple annotation and conversation—the former can be accomplished with tools for coupling text and visual referents, while the latter requires additional mechanisms for managing conversational turn-taking and ensuring multiparty engagement with the target content. While there are numerous software environments currently supporting video annotation, the number of platforms that support video conversations is much smaller. 1 We focus here on a software platform called DIVER that was developed in our Stanford lab with the objective of supporting video conversations. In addition to providing a unique method for pointing and annotating, DIVER also possesses functionality for facilitating and integrating multiuser contributions.
### 2.2 DIVER as an Example of a Video Conversations Platform
DIVER is a software environment first developed as a desktop software system for exploring research uses of panoramic video records that encompass 360-degree imagery from a dynamic visual environment such as a classroom or a professional meeting [ 19]. The Web version of the DIVER platform in development and use since 2004 allows a user to control a "virtual camera window" overlaid on a standard video record streamed through a Web browser such that the user can "point" to the parts of the video they wish to highlight ( Fig. 1). The user can then associate text annotations with the segments of the video being referenced and publish these annotations online so that others can experience the user's perspective and respond with comments of their own. In this way, DIVER enables creating an infinite number of new digital video clips and remix compilations from a single source video recording. As we have modified DIVER to allow distributed access for viewing, annotation, commentary, and remixing, our focus has shifted to supporting collaborative video analysis and the emerging prospects for digital video collaboratories. DIVER and its evolving capabilities have been put to work in support of collaborative video analysis for a diverse range of research and educational activities, which we characterize in the next section.
Figure Fig. 1. The DIVER user interface. The rectangle inside the video window represents a virtual viewfinder that is controlled by the user's mouse. Users essentially make a movie inside the source movie by recording these mouse movements.
We refer to the central work product in DIVER as a "dive" (as in "dive into the video"). A dive consists of a set of XML metadata pointers to segments of digital video stored in a database and their associated text annotations. In authoring dives on streaming videos via any Web browser, a user is directing the attention of others who view the dive to see what the author sees; it is a process we call "guided noticing" [ 16], [ 19]. To author a dive with DIVER, a user logs in and chooses any video record in the searchable database that they have permission to access (according to the groups to which they belong). A dive can be constructed by a single user or by multiple users, each of whom contributes their own interpretations that may build on the interpretations of the other users.
By clicking the "Mark" button in the DIVER interface (see Fig. 1), the user saves a reference to a specific point in space and time in the video. The mark is represented by a thumbnail image of the video record within the DIVER "worksheet" on the right side of the interface. Once the mark is added to the worksheet, the user can annotate that mark by entering text in the associated panel. A panel is also created by clicking on the "Record" button, an action that creates a pointer to a temporal video segment and the video content encompassed within the virtual viewfinder path during that segment. Like a mark, a recorded clip can be annotated by adding text within its associated DIVER worksheet panel. The DIVER user can replay the recorded video path movie or access the point in the video refer-enced in a mark by clicking on its thumbnail image. DIVER is unique among deep video tagging technologies in enabling users to create and annotate panned and zoomed path movies within streaming Web video.
In addition to asynchronous collaborative video analysis using DIVER, multiple users can simultaneously access a dive, with each user able to add new panels or make comments on a panel that another user has created. Users are notified in real time when another user has made a contribution. Thus users may be either face-to-face in a meeting room or connected to the same Webpage remotely via networking as they build a collaborative video analysis. There is no need for the users to be watching the same portions of the video at the same time. As the video is streamed through their browsers, users may mark and record and comment at their own pace and according to their own interests.
The DIVER user may also create a compilation or "remix" of the contents of the dive as a stand-alone presentation and share this with collaborators by sending email with a live URL link that will open a DIVER server page (see Fig. 2) and display a DIVER player for viewing the dive including its annotations.
Figure Fig. 2. DIVER remix player for viewing dive content outside of the authoring environment (on a Webpage or blog).
The current architecture of WebDIVER has three parts: 1) a video transcoding server (Windows XP and FFMPEG), 2) application servers that are WAMP XP-based (Windows XP, Apache Web Server, MySQL database for FLV-format videos, thumbnail images, XML files and logs, and PHP), and 3) the streaming media server (Flash Media Server 2, Windows Server 2003). Progress is underway in moving WebDIVER into Amazon Elastic Compute Cloud (Amazon EC2), using Amazon's S3 storage service, converting to Linux, and replacing Adobe Flash Media Server with open-source Red5 so that we can have a 100-percent open source DIVER.
### 2.3 Sociotechnical Design Challenges for Video Collaboratories
The Web version of DIVER was designed specifically to address a set of core problems in supporting the activities of participants involved in distributed video collaborations [ 20]. These problems have surfaced in various multi-institutional workshops convening video researchers [ 21], [ 22], [ 23], [ 15], and many of the concerns relate to the fundamental problem of coordination of attention, interpretation, and action between multiple persons. Clark [ 24], in characterizing non-technology-mediated communication, described as "common ground" what people seek to achieve as they coordinate what they are attending to and/or referring to, so that when comments are made, what these comments refer to can be appropriately inferred or elaborated. We expand the common ground aspect of communicative coordination to the need to refer to specific states of information display in using computer tools, including digital video in collaborative systems.
Of course, a video conversation system such as DIVER does not fully resolve the common ground problem anymore than pointing does in the physical world. While pointing is a useful way of calling attention to something, as Clark and others, such as Quine [ 25], have pointed out, there can still be referential ambiguity present in a pointing act even for face-to-face discourse (e.g., is she referring to the car as an object or the color of the car?). However, it is possible to design a software platform with functionality that makes it possible for participants to negotiate the identity of a referent and its meaning over progressive conversational turns. Marking points of interest in a video using DIVER's virtual viewfinder and creating persistent references to video events are important steps forward in addressing these coordination challenges. A user who offers an interpretation on a movie moment in a DIVER worksheet can be confident that the referent of their comments—at least, in terms of spatial and temporal position—will be unambiguous to others who view the dive. This "guided noticing" feature provides the basis for subsequent interpretive work by group members to align their loci of interpretive attention (e.g., What is this? What is happening here?). In the next section, we examine the effects of introducing this capability to users with varying interests and goals for working with video.
We summarize here the core sociotechnical design challenges we addressed in the design of DIVER [ 20]:
1. The problem of reference. When analyzing video, how does one create a lasting reference to a point in space and time in the dynamic, time-based medium of a video record (a "video footnote")?
2. The problem of attentional alignment, or coreference. How does one know that what one is referring to and annotating in a video record is recognized by one's audience? Such coordination is important because dialog about a video referent can lead to conversational troubles if one's audience has a different referent in mind. This process of deixis requires a shared context.
3. The problem of effective search, retrieval, and experiencing of collaborative video work. If we can solve the problems of allowing users to point to video and express an interpretation and to establish attentional alignment, new digital objects proliferate beyond the video files themselves ("dives") that need to be searchable and readily yield the users' experience of those video moments that matter to them.
4. The problem of permissions. How does one make the sharing of video and its associated interpretations more available while maintaining control over sensitive data that should have protection for human subjects or digital rights management?
5. Integrating the insights of a collective group. How can we support harvesting and synthesizing the collec-tive intelligence of a group collaboratively analyzing video?
6. The problem of establishing coherent multiparty video-anchored discourse. Consider a face-to-face conversational interaction, as in a seminar, where the rules of discourse that sustain turn-taking and sense-making as people converse are familiar. In an academic setting, video, film, and audio recordings, as well as paper, may be used. Traditionally, these are used asymmetrically, as the facilitator/instructor prepares these records to make a point or to serve as an anchor for discussion and controls their play during the discourse. Computer-facilitated meetings for doing video analysis—where each participant has a computer and is network-connected with other participants and to external networks for information access—bring new challenges in terms of managing a coherent group discourse.
## COLLABORATION PATTERNSin DIVER
We now investigate what collaboration design patterns are used by groups of educators and researchers when they have access to DIVER's Web-based video collaboration platform. The DIVER software platform allows users to flexibly exchange analyses and interpretations of a video record with relatively few constraints on their collaboration style or purpose. Once the DIVER software was Web-enabled and made publicly accessible in 2004, we took the approach of supporting any user with a need for engaging in video exploration as part of their professional, educational, or recreational practices. We sought to accommodate a range of video activities, but rarely did we actively recruit users or make customizations to support specific applications. The result has been a large and diverse international user base. There have been approximately 3,000 unique users who have registered with DIVER and approximately 200 private user groups that have been created to support the activities of a particular project, event, or organization. Since DIVER was created in an academic setting, the majority of users are affiliated with schools and universities; however, there have also been users from the private sector, and even within a single setting, we have found DIVER to be used for a number of different purposes.
To conduct this analysis, we examined every dive that was created by each of the user groups and characterized the way that each group had organized their discourse: what interface features were used most frequently, what annotation conventions the group adopted, how conversational turn-taking was managed, etc. These characteriza-tions were agnostic as to who the particular participants were in the collaboration (e.g., high school students or re-searchers), and we regularly observed behaviors that transcended such user categories. In reviewing these characterizations, we were able to extract a small set of distinct patterns that describe the ways groups collaborated to achieve their particular video learning objectives. We think of these patterns as akin to Collaborative Design Pat-terns (CDPs) [ 26]—a conceptual tool for thinking about typical learning situations. In particular, CDPs have been employed in the literature on computer-supported collaborative learning to characterize interaction patterns around the use of learning technologies such as handheld computers [ 27]. Collaboration patterns we observed with DIVER were emergent, and not behaviors that we prescribed or that were dictated by the software, yet they were still comprised of the standard CDP elements (e.g., a problem, a context, a solution approach, etc.). Con-ceptualizing our observed patterns as CDPs is useful because it will allow us to make design suggestions for supporting specific kinds of user groups and to elicit patterns of behavior that have shown themselves to be particularly effective for such groups.
Here, we briefly describe the five most notable collaboration patterns we observed in our analyses of group activity in DIVER. Our data set encompasses numerous instances of each of these patterns, but we will describe each with a single example to elucidate its key features.
### 3.1 Collective Interpretation
Making interpretations of human behavior—particularly of learning—is a complex enterprise that benefits from multiple perspectives [ 28], [ 29]. The importance of collecting multiple interpretations is known instinctively by most, and so, when an individual sets out to conduct an analysis of a video event, he or she will often recruit the input of others who may bring novel insights based on their differences in knowledge and experience. The coordination of multiple perspectives, however, has long been a challenge for video analysts, especially in the days of VHS tapes, where a single individual had playback control and had to somehow "harvest" the insights of a room full of contributors [ 14]. The DIVER platform has affordances for sharing and comparing insights in a more coordinated manner, and we have observed that numerous groups use DIVER in an attempt to achieve interpretive consensus or to refine points of contention.
An example of this pattern of activity occurred within the context of a teacher credential program at a large east coast university. Students in a course on teaching and learning were provided with sample video of a science teacher working with her students to help them understand animal classification systems. The student-teachers who watched the video were tasked with describing how the instructional approach of the teacher in the video interacted with her students' learning processes. By giving each student-teacher independent access to the focal event in the DIVER system, they were able to anchor their interpretations directly with video data rather than having to rely on their memory of what they had seen. A public record of these interpretations is maintained so that another contributor can offer a response hours or days later.
At one point in the science classroom video, the teacher asks one of her students about what they have learned from an animal classification activity. In the video, the student says: "It helped me realize what things have in common and what they don't have in common." One of the student-teachers (ST1) selected this clip from the video and moved it to the DIVER worksheet, which led to this exchange with another student-teacher (ST2):
ST1: This student found the activity helpful in terms of understanding what an evolutionary biologist does. He seems to have gotten the big picture.
ST2: I don't know if I agree that he has gotten the big picture. Though I admit that I could be thrown off by his affect, which is pretty flat, it seems like he is just saying the most basic things—we were putting out species in different categories, they [biologists] categorize certain species or animals, I found it helpful because it helped me realize what things have in common and what don't. Is this the depth you were hoping for? How could we move him to the next level?
Importantly, the contribution of ST2 likely changed the interpretation of this student's reflection, not just in the mind of ST1, but for all group participants who are now thinking about how the instructional intervention had failed to achieve deep student understanding. Had ST2 simply stated: "I disagree that the student has gotten the big pic-ture" in an unanchored group discussion of the event, there would have been a greater chance that interpretive dis-agreement would have persisted. On the contrary, ST2 grounded her assertions in the actual classroom events, effectively shifting the analytical focus for subsequent interpretations.
We observe this pattern of collaboration in DIVER frequently. It typically emerges from groups who have the open-ended task of determining "what is going on" in a video event, ranging from field data collected by a research team to a funny video that someone posts on YouTube. The pattern often starts with one person taking a pass at mak-ing their own interpretation, and then others chime in with support or criticism. DIVER gives this discourse structure, such that an outside observer can fairly easily view a dive and ascertain the general consensus.
### 3.2 Distributed Design
There can be considerable value in collecting and considering video of the user experience in the design process [ 30], but the logistics of incorporating this video into a design team's workflow can be tricky. We have encountered a number of well-intentioned design groups who collect hundreds of hours of video of users, only for this video to sit unwatched on a shelf of DVDs. Video can capture notable affordances or systematic failures in the user experience, but identifying these trends and communicating them to the rest of the team at critical points in the design process is an arduous, time-consuming task. It is not typically feasible for an entire design team to convene and spend hours watching videos of user testing. We have, however, observed that the members of a number of different design initiatives use DIVER as a tool for efficiently integrating video insights into their collaborative thinking and design reviews.
The distributed design pattern was observed in a small research team at a west coast university working on a prototype touch-screen interface for preschool children. The researchers were trying to create a tool that allowed children to construct original stories using stock video footage (e.g., baby animals in the wild). Access by the design team to children of this age was limited, but there was a real need for the team to understand how the children would react to this kind of interface and whether it was possible to scaffold their developing storytelling abilities with a novel technology. The design team arranged for a few pilot user sessions to be conducted at a local nursery school, but neither the lead researcher on the project nor the software engineer who was responsible for implementing interface changes were able to attend. The solution was for two graduate students to run the pilot sessions and immediately upload the session videos to DIVER so that the entire team could quickly formulate design modifications that could be implemented for the next iteration ( Fig. 3).
Figure Fig. 3. DIVER worksheet showing an instance of distributed design.
In this instance, one team member took a first pass at segmenting and highlighting the important events in the video. This allowed the rest of the team to focus in on the moments that had potential design implications. In several instances, a team member would make a design recommendation based on something that they observed in the video, and one of the team members present at the session would respond with a clarification of what had occurred and possible alternative design schemes. The discourse in DIVER quickly transitioned from an investigation to a design plan that was adopted by the software engineer and implemented on a short timeline.
As a pragmatic matter, design teams in all areas of development will typically rely on data summaries and aggregations of user-test findings to inform the design process. While understandable, this practice can distance designers from the valuable insights to be derived from observing key moments in video recordings of users. A system for supporting asynchronous references of specific space-time segments of video—such as the one found in DIVER—seems to alleviate some of the coordinative barriers and permit the integration of user data with the design process. Recognizing this potential, a number of DIVER user groups found success generating new designs for learning and education through their discussions of video-recorded human interactions.
### 3.3 Performance Feedback
A challenge of giving people feedback on a performance, whether a class presentation or some display of artistic skill, is that the feedback offered is typically separated from the performance itself, such as a verbal evaluation given a week later or a review that has been written up in a newspaper. The effectiveness of the feedback in these cases relies on the memory of the performance both for the persons giving the review, as well as the person receiving the feedback. If, for example, a student does not recall misquoting a philosopher in their presentation for a law school course, they are not likely to be receptive to having this pointed out by their professor or classmates. Associating performance feedback with actual segments or images from the performance has an intuitive utility, but the approach to implementing a feedback system that effectively makes these associations and integrates input from multiple individuals is less clear. Using DIVER as a means to deliver feedback on a video-recorded performance was one of the most common collaboration activities that we observed among our users. We were particularly impressed by the range of performance types (e.g., films produced by undergraduates, K-12 student-teaching) for which users attempted to use DIVER for conveying suggestions and making specific criticisms.
A good example of the performance feedback pattern is its use in a prominent US medical school where an effort was made to improve the manner by which students communicated important health information to patients. Medical students interning at a hospital were asked to record their consultation sessions with patients and submit these recordings to their mentors using DIVER. Experienced medical professionals offered these students constructive feedback on how they were communicating with their patients and provided suggestions for improvement. In some cases, there was also the opportunity for the students to respond to the feedback with questions or points of clarification. In one instance, a student uploaded her interview with a patient who came in with various ailments including abdominal pain. She asked numerous questions in her attempt to narrow down the underlying problem. In this dive, she did some self-evaluation—marking segments and making comments where she believes that she could have done something better. Additionally, her mentor (M) watched the video and marked his own segments upon which he based his evaluation. He marked one moment in the video in particular and made the following comment:
M: Good check here on the timing of her GI symptoms in the bigger picture—you are now making sure you know where these symptoms fit into the time course of this complicated history.
Note that the evaluator used the word "here" rather than having to describe the referent event in detail, as one would likely have to do if they were delivering an entirely written or verbal assessment. Targeted feedback of this kind should help to minimize misunderstandings or generalizations. People will sometimes overreact to negative feedback on their performance, concluding hastily that "she hated it" or "I can't do anything right," but with feedback that is linked to specific behaviors, there is a better chance that the evaluations will be received constructively. In a related vein, a recent paper described productive uses of DIVER for changing the paradigm of communication skills teaching in oncology to a more precise performance feedback system, rather than one principally based on observation [ 31].
Additionally, we observed users administering feedback as a group, such as when an entire class was asked to respond to a student presentation. In this case, the students were aware of and were able to coordinate their feedback with that offered by their classmates, leading to less redundancy. Users in such a design pattern also have the opportunity to mediate each other's feedback, perhaps by giving support for another's comments with additional ideas for improvement or by softening criticism that may be seen by some as overly harsh.
### 3.4 Distributed Data Coding
Some attempts at making interpretations of video recordings of human activity are more structured than those that we observed in the distributed interpretation pattern. In research settings in particular, the categories of activity that are of interest are often clear, while it is the identification of those categories and formal labeling of events in the data that must be negotiated. In their experience, coding video for the TIMSS study—a cross-cultural study of math and science classrooms—Stigler et al. [ 32] reflect on two lessons they learned about coding video data: 1) the videos must be segmented in meaningful ways to simplify and promote consistency in coding and 2) the construction and application of these codes requires input from multiple individuals with differing expertise. Both of these issues can be facilitated by a system like DIVER that structures video discourse around segments selected by any number of participants. While only a few user groups implemented a formal coding scheme in DIVER, their interaction style took on a distinct pattern that is useful to consider for thinking about possible applications and design needs.
One group of users that utilized DIVER to manage their distributed coding work was a team of seven researchers working as part of a large center dedicated to the scientific study of learning in both formal and informal environments. This team conducted over 20 interviews with families in the San Francisco Bay area on how uses of mathematics arise in their home life. The research group was interested in how families of diverse backgrounds organize their mathematical practices and to analyze how differences in social conditions and resources in the home support these practices. The videos from all of the interviews were uploaded into DIVER, but for all of the two-hour videos to have been logged and coded by the entire group would have been overly burdensome and ultimately counter-productive. Instead, the group reached consensus on a coding scheme by reviewing a subset of the interviews as a group, and then assigned two different team members to code each of the remaining videos. The codes that they agreed upon were labels for specific instances that they were interested in studying as part of their analysis, such as "gesture"—which denoted a family member had made some physical gesture to illustrate a mathematical concept, or "values"—which was used when someone made an expression of their family's values when discussing a mathematical problem that arose in their home life. The consistent use of these codes in DIVER was particularly useful because the software allows users to search for keywords in the annotations of the video segments. The results of a search are frames from multiple dives where the word or code was used, meaning that this research team was able to quickly assemble all the instances of a particular code that had occurred across all the interviews, regardless of who had assigned the code. This facilitated the process of making generalizations across families and drawing conclusions that addressed their hypotheses.
While other tools for video analysis and coding exist (such as commercial systems ATLAS.ti, NVivo, and Studio-Code), most are not available online nor do they support multiple users in collaborative analyses. In its current state, DIVER is probably not sufficient for largescale video coding—the research group described above, for exam-ple, supplemented their analysis with a FileMaker Pro database that held multiple data fields and detailed code specifications—but more sophisticated coding capabilities are not difficult to add and are currently in development. What is notable about the distributed coding pattern that we observed in existing DIVER users is that the key coding needs of segmenting and supporting multiple perspectives [ 32] were both supported and utilized.
### 3.5 Video-Based Prompting
Most of the collaboration patterns discussed so far have consisted of fairly open-ended interpretive work. Even the distributed coding pattern, though using a set of predefined categories, still required video segmentation and the reconciliation of coded events where there was disagreement. Some of the activity that we observed in DIVER, how-ever, was far more constrained. We observed such a pattern most frequently in formal educational contexts such as classrooms where instructors had specific questions about a video event that they wanted their students to try and answer. For example, a film studies professor had a set of questions that he wished to pose to the students in his class about classic films like Godard's Breathless. He used DIVER to distribute these questions because he wanted his questions and his students' answers to be anchored by clips from the actual film. In some of the DIVER user groups, there was an instructor or facilitator that initiated a dive by capturing a clip from a larger video source and using the worksheet to pose a question about some aspect of the clip.
An especially interesting application of DIVER and a good example of the video-based prompting pattern was an undergraduate Japanese language course at a west coast university. The instructor of this course had collected a number of videos of informal spoken Japanese from various sources such as interviews and television programs. These videos demonstrated certain styles and forms of the language that the instructor wished her students to ex-perience and reflect upon. These videos were uploaded into DIVER and the instructor created homework assignments where students would have to respond to four or five questions, each associated with a video segment selected by the instructor. In this class, the students were all able to see each other's responses, which turned out to be a benefit in the eyes of the instructor because it stimulated the students to think deeply and try and make an original contribution to the analysis. Fig. 4 is a screenshot of the worksheet used for one of these homework assignments.
Figure Fig. 4. A DIVER worksheet used in a Japanese language course. The instructor has posted questions for her students about informal language conventions used in the video.
Instructors and facilitators of various kinds will frequently use visual aids such as video for encouraging critical thinking by their students. A Web services platform like DIVER allows students and participants to take these visual aids home with them, make independent interpretations, and then contribute their input in a structured forum. The important feature of this pattern is that any interpretation is necessarily associated with video evidence of the target phenomena. This practice has potential long-term benefits in that it teaches people to adequately support their arguments by making direct links to available data.
## MAPPINGTHE SPACEOF COLLABORATIVE VIDEO ANALYSIS TASKS
There are numerous learning situations that can be enhanced with video analysis, resulting in different patterns of collaboration, each with different needs in terms of interface supports and structure. DIVER's open-ended platform allowed many of these patterns to emerge organically, but it was clear that several of these collaborative practices could be enhanced with additional features and capabilities. While DIVER has taken modest steps forward on this front—adding coding, transcription, and clip trimming functionality, for example—we recognize that there is still much progress to be made to fully exploit the learning potential of digital video for researchers and for education.
In order for the community of learning technology researchers and developers to adequately address this design problem, we have attempted to map out the space of collaborative video practices. In extracting the five collaboration patterns from our data set, we recognized a few salient dimensions that we can abstract from these practices for defining this space. These dimensions capture all of the practices we observed, but more importantly, they suggest a number of other practices that we did not observe. Perhaps due to limitations in DIVER or video technologies in general, or simply incidental to the needs of the DIVER user community to date, there were several possible and promising collaboration activities that have not yet manifested themselves in mainstream video practices. By specifying the features of this space in its entirety, our hope is that we can more fully address the design needs of existing practices as well as cultivate fledgling practices that could have a significant impact on the culture of learning technology. The three dimensions that define this space are the style of discourse, relationship to the source material, and target outcome.
### 4.1 Discourse Style
In discussing and interpreting the events within a video, the needs of some groups are best met by employing a more informal structure with fewer constraints on participation. Exploratory analyses or discourse around video for purely social purposes are likely to adopt this more conversational style. Other groups, however, such as in courses, have specific aims for their collaboration and participatory roles that must be maintained during the course of discussing the video. These groups will adopt a more formal discourse style that may involve prompting participants for desired input or limiting contributions to specific times or locations within the video.
### 4.2 Relationship to Source Material
Digital video technologies have made tremendous advances over the last decade, particularly in the capacity for capturing, uploading, and sharing video at rapid speeds and with relatively little effort. The ubiquity and ease of transmission of digital video has changed the traditional relationships one has with the video they watch for recrea-tion or utilize in a professional capacity. On one end of this dimension is an insider relationship, meaning that the video used for the collaborative activity is video with which the group has strong familiarity. It could be video that someone in the group recorded or video in which the group members themselves are featured. With insider videos, the group often has some degree of control over how the video was recorded and for what purpose. If a group has an outsider relationship with a source video, they typically have less control over factors such as editing style and camera orientation. These are videos that may have been obtained from a secondary source or were selected from an archive collection. It is less likely that groups discussing outsider videos will possess background information or be able to fill in comprehension gaps stemming from contextual features and events that were not recorded.
### 4.3 Target Outcome
Any group of individuals that sets out to explore a video record asks themselves: "What do we want to get out of this?" As should be apparent from our review of collaborative video activities in DIVER, there are numerous possible answers to this question. We have identified four general types of outcomes that, while not exhaustive or mutually exclusive, describe the bulk of possible collaboration objectives: design, synthesis/pattern finding, evaluation, and analysis/interpretation. Groups are working toward design outcomes when they discuss video with the aim of conceiving or improving upon some product, process, or organizational scheme. Synthesis or pattern-finding out-comes come from attempts to reach consensus on "the big picture" and recognize important trends and commonal-ities. Evaluation outcomes are critiques of video products and recorded events with the corresponding aim of either delivering feedback to a specific group or for demonstrating critical competence (as in a K-12 teacher performance assessment). Unlike synthesis outcomes, groups with an analysis or interpretation objective are trying to "break things down" and typically are attempting to understand what is happening behind the scenes or in the minds of the actors that is causing the events seen in the video.
Explicating these dimensions results in a matrix of collaboration activities represented in Fig. 5. We caution that the states of each dimension that we have identified here do not have "hard" boundaries—it is perfectly feasi-ble that a collaboration activity around video could straddle the line between a formal and informal discourse style, for example. However, we believe that this representation provides a useful depiction of the range of possibilities and the various needs of groups that fall within this space. To clarify these possibilities and needs even further, we have offered examples of activities that embody the defining characteristics of each cell. Some of these examples are actual activities that we have observed in DIVER and some are hypothetical activities that share the same character-istics of those that we have observed. Other examples in this representation (in gray typeface) are hypothetical activities with characteristics that we have not yet observed directly in DIVER but are possible presuming that the right configuration of situational variables exists. All the examples offered in Fig. 5 are situated within a particular user group (teenagers, industry professionals, etc.), but again, we reiterate our conviction that these activity patterns could be implemented in any number of educational and research settings.
Figure Fig. 5. A representation of possible design patterns based on three dimensions of collaborative activity: target outcome, discourse style, and relationship to the source material. Cells with black print indicate collaboration patterns that we have observed on the DIVER platform. Cells with grey print describe hypothetical scenarios that have not yet been observed but are plausible given the appropriate user group and software environment.
These activities should be of particular interest to designers and educators because they present powerful learning opportunities with digital video that simply have not been realized with current toolsets. In the meantime, there is still a great deal of work that can be done to address the activity patterns that we have observed, whether they are minor modifications to a software platform like DIVER or the development of an entirely new technology that targets a specific subset of the space we have defined here.
## DESIGNINGFOR VIDEO-BASED COLLABORATIONS
When a group or a team is planning to embark on a project using video as an "anchor" [ 33] for their collaborative activity, they need to have in mind the target outcomes of their work, and as we have seen, such envisioned outcomes may involve design, synthesis/pattern finding, evaluation, and analysis/interpretation. Now that we have a handle on not only actual design patterns from uses of the DIVER video platform, but a three-dimensional heuristic matrix for generating possible design patterns which vary in terms of target outcome, relationship to video sources (insider/outsider), and discourse style (formal/informal), we can visit the question of what forms of supportive structures might serve as useful design scaffolding for the activities of video-based collaborative groups.
### 5.1 Designs for Target Outcomes
We can envision a number of support structures for groups whose objective is to produce one of the four outcomes we have identified in the matrix. For groups that are aiming at design, it would be useful to have platform capabilities that support the processes of iteration and revision. Videos that capture product development at different stages, for example, could be tagged as such in a video database. Support for design argumentation [ 34] could be integrated, linking to video evidence. A collaboration platform for supporting design could also include features for tracking progress, such as milestone completion markers that are anchored by video of a successful user-test. Additionally, video could be accompanied by a "design canvas" that allows users to make free-form sketches or models that are shared with the group. Rather than the worksheet in DIVER, for example, users could have a sketch space where they could manipulate images captured on the video to convey new ideas.
For synthesis and pattern-finding objectives, there is a need for tools that aid in collecting and building relationships between instances of a focal interest. With rapid advances underway in computer vision technology [ 35], it is becoming feasible that recognition algorithms could be used to automatically identify key objects, faces, head poses, and events in video (e.g., hand-raising in a classroom discourse, uses of math manipulatives, etc.) and generate tagging metadata for videorecords. Modules for a video research platform that could reliably flag such moments of analytic content automatically and make them readily available for subsequent analyses would save immense time and effort.
Evaluation outcomes could be aided with tools that minimize redundancy and make the provision of feedback more efficient. Rather than free-form text input, it may be appropriate to provide a template for a rating system or checklist schemas. To avoid the recurrence of the same comment, users could be enabled with a way to show support for an existing comment or poll functionality—a button that says something to the effect of "I agree with that." Another need that we have observed in groups doing evaluation is the integration of documents and other materials associated with a performance. Someone being evaluated as part of the teacher credentialing process, for example, may be asked to submit lesson plans and examples of student work in addition to video of their lesson. A platform that allows evaluators to dive into and annotate these supplemental documents and connect them to the video event could provide for a more comprehensive workflow for assessment processes.
In analysis activities, participants are typically looking for ways to contribute the most insightful information in the most efficient manner possible. While text can be a good way of sharing interpretations, there are some scenarios where voice annotations would be more effective at conveying a nuanced perspective [ 18]. There are also some situations where an analysis is best supported with an existing document or representation. In this case, having the ability to simply link a video event to a URL or to embed an object in the analysis would be advantageous. Finally, some analyses require one to look at an event from multiple vantage points. Having the ability to easily synchronize multiple-video sources and view simultaneous playback is a feature requested by a number of our user groups. Researchers of human-computer interaction, for example, may want to study how someone uses a new car prototype using video streams of the front-view, the rear-view, and the driver's face and posture.
### 5.2 Designs for Relationships to Video Sources
Both insider and outsider relationships with source video would likely benefit from different metadata capabilities. For insiders who have shot their own video and wish to share it with a small group or broadcast it to the whole world, it is important to be able to control the information, or metadata, that people have about that video. Where was the video recorded? Who is featured in the video? For what purpose was it recorded? This is also a potential opportunity to specify how the video can and should be used (e.g., granting Creative Commons licensure). This is true not only for the video itself, but for the analysis that one performs on the video. These analyses can themselves be works of intellect and artistic expression, and so attribution of this work is important.
For outsiders, it would be highly valuable to have access to any metadata that was provided for video a group is using. Geographic data about the video, for example, could facilitate mashups of video and mapping tools such as Google Earth. If a group was working with a large archive of video, it could be useful to have a platform that uses timestamp data to organize and create visualizations based on when the video was recorded.
### 5.3 Designs for Discourse Style
If the desired type of formal discourse for a certain group is known in advance, it would be possible for a video collaboration platform to structure this type of discourse using templates or other interface constraints to "script" activities [ 36]. Some forms of collaboration have explicit rules for how these activities should be conducted, and there is potential for the interface to assist in regulating this activity as a form of distributed intelligence [ 37], [ 38]. Besides templates, this effect can be achieved by assigning participants different roles with associated permissions and capabilities, or there can be features that support divisions of labor by directing participants to work on different parts of the video task.
Groups that desire a more informal or conversational discourse style will likely desire fewer constraints and more flexibility for communicating and constructing new insights. This may include features that allow for more social interaction such as chat or networking capabilities (e.g., you may want to connect with someone if you know that they have extensive experience doing a certain type of video analysis). It may also include tools for doing more free-form interpretive work, such as "build-as-you-go" coding schemes. These types of capabilities may allow for the emergence of novel designs and interpretations that would not develop in a more constrained setting.
## CONCLUSION
In this paper, we have articulated a vision for the generative promise of video research and education platforms for supporting the work practices of collaborative groups. In particular, the DIVER video platform embodies a new kind of communication infrastructure for video conversations by providing persistent and searchable records of video pointing activities by participants to specific time and space moments in video so as to develop "common ground" in technology-mediated conversations among distributed teams. Rather than speculate about the opportunities for using the DIVER platform, we empirically examined how roughly 200 globally distributed groups appropriated the DIVER technology to serve their needs. We found five dominant collaborative design patterns: collective interpretation, distributed design, performance feedback, distributed data coding, and video-based prompting. Abstracting from the features of these collaboration patterns, we were able to identity a set of three dimensions that we argue provide a tripartite design space for video collaboration groups: discourse style (formal/informal), relationship to video source (insider/outsider), and target outcome (design, synthesis, evaluation, and analysis). These dimensions were used heuristically to articulate a three-dimensional matrix for conceptualizing video collaborative groups, and then used to spawn concepts for dimension couplings unrealized in DIVER collaborative groups to date, and to recommend new socio-technical designs for better serving the design space of these groups. We invite the learning technologies community to refine and advance our conceptualizations of collaboration design patterns for video platform uses in research and education and the creation of systems that support the important needs for video conversations in the work practices of educators and researchers.
## ACKNOWLEDGMENTS
The authors give special thanks to DIVER software engineer Joe Rosen for his exceptional contributions to every aspect of the DIVER Project since 2002, and Michael Mills, Kenneth Dauber, and Eric Hoffert for early DIVER design innovations. DIVER, WebDIVER, and Guided Noticing are trademarks of Stanford University for DIVER software and services with patents awarded and pending. The authors are grateful for support for The DIVER Project in grants from the US National Science Foundation (#0216334, #0234456, #0326497, #0354453) and the Hewlett Foundation.
## REFERENCES
• 1. P. Saettler, The Evolution of American Educational Technology. Information Age Publishing, 2004.
• 2. Video Research in the Learning Sciences, R. Goldman, R.D. Pea, B. Barron, and S. Derry, eds. Lawrence Erlbaum Assoc., 2007.
• 3. T. Chambel, C. Zahn, and M. Finke, "Hypervideo and Cognition: Designing Video-Based Hypermedia for Individual Learning and Collaborative Knowledge Building," Cognitively Informed Systems: Utilizing Practical Approaches to Enrich Information Presentation and Transfer, E.M. Alkhalifa, ed., pp. 26-49, Idea Group, 2006.
• 4. R. Goldman, "Video Perspectivity Meets Wild and Crazy Teens: A Design Ethnography," Cambridge J. Education, vol. 34, no. 2, pp. 157-178, June 2004.
• 5. D. Schwartz, and K. Hartman, "It's Not Television Anymore: Designing Digital Video for Learning and Assessment," Video Research in the Learning Sciences, R. Goldman, R. Pea, B. Barron, and S.J. Derry, eds., pp. 335-348, Erlbaum, 2007.
• 6. C. Zahn, R. Pea, F.W. Hesse, M. Mills, M. Finke, and J. Rosen, "Advanced Digital Video Technologies to Support Collaborative Learning in School Education and Beyond," Computer Supported Collaborative Learning 2005: The Next 10 Years, T. Koschmann, D. Suthers, and T-W. Chan, eds., pp. 737-742, Erlbaum, June 2005.
• 7. R. Pea, and E. Hoffert, "Video Workflow in the Learning Sciences: Prospects of Emerging Technologies for Augmenting Work Practices," Video Research in the Learning Sciences, R. Goldman, R. Pea, B. Barron, and S.J. Derry, eds., pp. 427-460, Erlbaum, 2007.
• 8. B. MacWhinney, "A Transcript-Video Database for Collaborative Commentary in the Learning Sciences," Video Research in the Learning Sciences, R. Goldman, R. Pea, B. Barron, and S.J. Derry, eds., pp. 537-546, Erlbaum, 2007.
• 9. R. Goldman, "Orion, an Online Collaborative Digital Video Data Analysis Tool: Changing Our Perspectives as an Interpretive Community," Video Research in the Learning Sciences, R. Goldman, R. Pea, B. Barron, and S.J. Derry, eds., pp. 507-520, Erlbaum, 2007.
• 10. R.M. Baecker, D. Fono, and P. Wolf, "Towards a Video Collaboratory," Video Research in the Learning Sciences, R. Goldman, R. Pea, B. Barron, and S.J. Derry, eds., pp. 461-478, Erlbaum, 2007.
• 11. H. Jenkins, K. Clinton, R. Purushotma, A.J. Robinson, and M. Weigel, Confronting the Challenges of Participatory Culture: Media Education for the 21st Century, http://tinyurl.com/y7w3c9. The MacArthur Foundation, 2006.
• 12. How People Learn: Brain, Mind, Experience, and School (Expanded Edition), J.D. Bransford, A.L. Brown, and R.R. Cocking, eds. National Academy Press, 2000.
• 13. J. Bransford, N. Vye, R. Stevens, P. Kuhl, D. Schwartz, P. Bell, A. Meltzoff, B. Barron, R. Pea, B. Reeves, J. Roschelle, and N. Sabelli, "Learning Theories and Education: Toward a Decade of Synergy," Handbook of Educational Psychology, second ed., P. Alexander and P. Winne, eds., pp. 209-244, Erlbaum, 2006.
• 14. B. Jordan, and A. Henderson, "Interaction Analysis: Foundations and Practice," J. Learning Sciences, vol. 4, no. 1, pp. 39-103, Jan. 1995.
• 15. S. Derry, R. Pea, B. Barron, R.A. Engle, F. Erickson, R. Goldman, R. Hall, J. Lemke, M.G. Sherin, B.L. Sherin, and T. Koschmann, "Guidelines for Conducting Video Research in the Learning Sciences," J. Learning Sciences, 2009 (in press).
• 16. R. Pea, "Video-as-Data and Digital Video Manipulation Techniques for Transforming Learning Sciences Research, Education, and Other Cultural Practices," The Int'l Handbook of Virtual Learning Environments, J. Weiss, J. Nolan, J. Hunsinger, and P. Trifonas, eds., pp. 1321-1393, Springer, 2006.
• 17. S.E. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton Univ. Press, 2007.
• 18. R. Stevens, "Capturing Ideas in Digital Things: A New Twist on the Old Problem of Inert Knowledge," Video Research in the Learning Sciences, R. Goldman, R. Pea, B. Barron, and S.J. Derry, eds., pp. 547-563, Erlbaum, 2007.
• 19. R. Pea, M. Mills, J. Rosen, K. Dauber, W. Effelsberg, and E. Hoffert, "The DIVER $^{\rm TM}$ Project: Interactive Digital Video Repurposing," IEEE Multimedia, vol. 11, pp. 54-61, 2004.
• 20. R. Pea, R. Lindgren, and J. Rosen, "Cognitive Technologies for Establishing, Sharing and Comparing Perspectives on Video over Computer Networks," Social Science Information, vol. 47, no. 3, pp. 355-372, Sept. 2008.
• 21. M. Lampert, and J. Hawkins, "New Technologies for the Study of Teaching," Report to the US National Science Foundation from a workshop held 9-11 June 1998, NSF Grant #REC-9802685, June 1998.
• 22. B. MacWhinney, and C. Snow, "Multimodal Studies of Instructional Discourse," Report to the US Nat'l Science Foundation, http://www.nsf.gov/sbe/tcw/events991203w, 1999
• 23. R.D. Pea, and K. Hay, "CILT Workshop on Digital Video Inquiry in Learning and Education," Report to the US Nat'l Science Foundation based on NSF #0124012, Stanford Center for Innovations in Learning, 2003.
• 24. H.H. Clark, Using Language. Cambridge Univ. Press, 1996.
• 25. W.V.O. Quine, "The Inscrutability of Reference," Semantics: An Interdisciplinary Reader, D. Steinberg and L. Jacobovits, eds., pp. 142-154, Cambridge Univ. Press, 1971.
• 26. C. DiGiano, L. Yarnall, C. Patton, J. Roschelle, D. Tatar, and M. Manley, "Conceptual Tools for Planning for the Wireless Classroom," J. Computer Assisted Learning, vol. 19, no. 3, pp. 284-297, Sept. 2003.
• 27. T. White, "Code Talk: Student Discourse and Participation with Networked Handhelds," Computer-Supported Collaborative Learning, vol. 1, no. 3, pp. 359-382, Aug. 2006.
• 28. S. Wilson, "The Use of Ethnographic Techniques in Educational Research," Rev. Educational Research, vol. 47, no. 2, pp. 245-265, Spring 1977.
• 29. R. Goldman-Segall, Points of Viewing Children's Thinking: A Digital Ethnographer's Journey. Erlbaum, 1998.
• 30. J. Buur, and K. Bagger, "Replacing Usability Testing with User Dialogue," Comm. ACM, vol. 42, no. 5, pp. 63-66, May 1999.
• 31. A.L. Back, R.M. Arnold, W.F. Baile, J.A. Tulsky, G. Barley, R.D. Pea, and K.A. Fryer-Edwards, "Faculty Development to Change the Paradigm of Communication Skills Teaching in Oncology," J. Clinical Oncology, The Art of Oncology, Mar. 2009.
• 32. J.W. Stigler, R. Gallimore, and J. Hiebert, "Using Video Surveys to Compare Classrooms and Teaching Across Cultures: Examples and Lessons from the TIMSS Video Studies," Educational Psychologist, vol. 35, no. 2, pp. 87-100, June 2000.
• 33. J.D. Bransford, R.D. Sherwood, T.S. Hasselbring, C.K. Kinzer, and S.M. Williams, "Anchored Instruction: Why We Need It and How Technology Can Help," Cognition, Education and Multimedia, D. Nix and R. Sprio, eds., pp. 163-205, Erlbaum, 1990.
• 34. S. Buckingham Shum, "Design Argumentation as Design Rationale," The Encyclopedia of Computer Science and Technology, vol. 35, supp. 20, pp. 95-128, Marcel Dekker Inc., 1996.
• 35. D.G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int'l J. Computer Vision, vol. 60, no. 2, pp. 1573-1405, 2004.
• 36. P. Dillenbourg, and P. Tchounikine, "Flexibility in Macro-Scripts for Computer-Supported Collaborative Learning," J. Computer Assisted Learning, vol. 23, no. 1, pp. 1-13, 2007.
• 37. R.D. Pea, "Practices of Distributed Intelligence and Designs for Education," Distributed Cognitions, G. Salomon, ed., pp. 47-87, Cambridge Univ. Press, 1993.
• 38. J. Hollan, E. Hutchins, and D. Kirsch, "Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research," ACM Trans. Computer-Human Interaction, vol. 7, no. 2, pp. 174-196, 2000. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16528038680553436, "perplexity": 2676.398706528208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864657.58/warc/CC-MAIN-20180522092655-20180522112655-00081.warc.gz"} |
https://www.physicsforums.com/threads/number-theory-find-two-smallest-integers-with-same-remainders.613244/ | # Homework Help: Number theory find two smallest integers with same remainders
1. Jun 11, 2012
### Wildcat
1. The problem statement, all variables and given/known data
Find the two smallest positive integers(different) having the remainders 2,3, and 2 when divided by 3,5, and 7 respectively.
2. Relevant equations
3. The attempt at a solution I got 23 and 128 as my answer. I tried using number theory where I started with 7x +2 as the number, then if divided by 5 the remainder would be 3 so subtract 3 from 7x+2 to get 7x-1 when I use this method and stop there by having x=0,1,2,3,... 3 works 7(3)-1 is divisible by 5 so put 3 in the original 7(3) +2 =23 Then I just used a trial and error method to find 128. Is my answer correct?? I know my procedure is sketchy because I have never been exposed to number theory.
2. Jun 11, 2012
### Fightfish
Your answers are correct. There are of course, more formal methods of solving it.
In number theory, we usually use the method of taking modulos. Let me illustrate this for the question below:
From the remainders, we have:
a == 2 (mod 3) - (1)
a == 3 (mod 5) - (2)
a == 2 (mod 7) - (3)
From (3), the numbers must have the form a = 7k+2, where k is any positive integer.
Using (1): 7k + 2 == 2 (mod 3)
This implies that 7k == 0 (mod 3), quite a useful result! Thus k = 3n, where n is any positive integer, and so our numbers a = 21n + 2.
Using (2): 21n + 2 == 3 (mod 5)
This implies that 21 n == 1 (mod 5). Since 21 == 1 (mod 5), n == 1 (mod 5) as well for the equation to hold.
Thus the numbers a that satisfy the conditions are of the form 21n + 2, n = 1,6,11,16,21...
The first two numbers are thus 21(1) + 2 = 23 and 21(6) + 2 = 128
3. Jun 18, 2012
### Wildcat
Thanks!! I need to take a class on number theory! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.896325409412384, "perplexity": 575.2112646520516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00384.warc.gz"} |
http://mathhelpforum.com/discrete-math/189538-statistical-recurrence-relation.html | ## statistical recurrence relation
I've a simple recurrence relation of functions:
$a(x)_{n+1} = a(x)_{n} f(y_{n})$
where the distribution of y is known, and given by, say, p(y).
the function f is also given.
i'm looking for a (statistical) solution for the relation | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9829529523849487, "perplexity": 1898.0661179911172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988650.6/warc/CC-MAIN-20150728002308-00255-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.arxiv-vanity.com/papers/1703.01159/ | # Manipulation of entanglement sudden death in an all-optical setup
Ashutosh Singh Light and Matter Physics Group, Raman Research Institute, Sadashivanagar, Bangalore 560080, India. Siva Pradyumna A.R.P. Rau Department of Physics and Astronomy, Louisiana State University - Baton Rouge, LA 70803, USA. Urbasi Sinha
###### Abstract
The unavoidable and irreversible interaction between an entangled quantum system and its environment causes decoherence of the individual qubits as well as degradation of the entanglement between them. Entanglement sudden death (ESD) is the phenomenon wherein disentanglement happens in finite time even when individual qubits decohere only asymptotically in time due to noise. Prolonging the entanglement is essential for the practical realization of entanglement-based quantum information and computation protocols. For this purpose, the local NOT operation in the computational basis on one or both qubits has been proposed. Here, we formulate an all-optical experimental set-up involving such NOT operations that can hasten, delay, or completely avert ESD, all depending on when it is applied during the process of decoherence. Analytical expressions for these are derived in terms of parameters of the initial state’s density matrix, whether for pure or mixed entangled states. After a discussion of the schematics of the experiment, the problem is theoretically analyzed, and simulation results of such manipulations of ESD are presented.
## 1 Introduction
Quantum entanglement [1,2] is a non-classical correlation shared among quantum systems which could be non-local [3,4] in some cases. It is a fundamental trait of quantum mechanics. Like classical correlations, entanglement also decays with time in the presence of noise in the ambient environment. The decay of entanglement depends on the initial state and the type and amount of noise (Amplitude damping, Phase damping, etc.) acting on the system [5-7]. The entangled states: and (maximally entangled “Bell states“ for ) being the simplest and most useful entangled states in quantum information processing receive special attention. The maximally entangled states and undergo asymptotic decay of entanglement in the presence of an amplitude damping channel (ADC). The non maximally entangled states always undergo asymptotic decay of entanglement, whereas undergo asymptotic decay for and a finite time end called entanglement sudden death (ESD) for in the presence of ADC. On the other hand, a pure phase damping channel (PDC) causes entanglement to always decay asymptotically. Two different initial states (, where ), which share the same amount of initial entanglement (measured through Negativity) being affected by the same type of noise may follow very different trajectories of entanglement decay. In the presence of multiple stochastic noises, although the decoherence of individual qubits follows the additive law of relaxation rates, the decay of entanglement, does not. In fact, entanglement may not decay asymptotically at all, and disentanglement can happen in finite time (ESD). ESD has been experimentally demonstrated in atomic [8] and photonic systems [9,10].
Since entanglement is a resource in quantum information processing [11-13], manipulation that prolongs entanglement will help realize protocols that would otherwise suffer due to short entanglement times. Also, entanglement purification [14] and distillation [15] schemes could possibly recover the initial correlation from the ensemble of noise-degraded correlation so long as the system has not completely disentangled. Therefore, the delay or avoidance of ESD is important. Several proposals exist to suppress the decoherence; for example, decoherence-free subspaces [16-19], quantum error correction [20,21], dynamical decoupling [22-24], quantum Zeno effect [25-27], quantum measurement reversal [28-33], and delayed-choice decoherence suppression [34]. Protecting entanglement using weak measurement and quantum measurement reversal [32,33], and delayed choice decoherence suppression [34] have been experimentally demonstrated. Both of these schemes, however, have the limitation that the success probability of decoherence suppression decreases as the strength of the weak interaction increases.
The practical question we want to address here is; whether, given a two-qubit entangled state in the presence of amplitude damping channel which causes disentanglement in finite time, can we alter the time of disentanglement by a suitable operation during the process of decoherence? A theoretical proposal exists in the literature for such manipulation of ESD [35] through a local unitary operation (NOT operation in computational basis: ) performed on the individual qubits which swaps their population of ground and excited states. Depending on the time of application of this NOT operation, it can avoid, delay, or hasten the ESD. Based on this proposal [35], we have extended the experimental set up [9] for ESD and propose here an all-optical experimental set up for manipulating ESD involving the NOT operation on one or both the qubits of a bipartite entangled state in a photonic system.
The system consists of polarization-entangled photons () produced in the sandwich configuration Type-I SPDC (spontaneous parametric down conversion)[38]. These photons are sent to two displaced identical Sagnac interferometers, where ADC is simulated using rotating HWPs (half wave plates) placed in the path of incoming photons (See Fig.2). The HWP selectively causes a polarized photon to “decay“ to ( and serve as ground and excited states of the system, the two states of a qubit)[9]. The NOT operation is implemented by a HWP with fast axis at relative to , placed right after the ADC. This HWP is followed by PBS (polarizing beam splitter) to segregate the and polarizations and, with subsequent ADC after the NOT operation implemented by a set of secondary HWPs acting on only. Such a set of secondary HWPs simulating the ADC (or evolution of qubits in noisy environment) is essential to our study as the ADC (for example, spontaneous emission in case of a two-level atomic system) continues to act even after the NOT operation is applied [35] and these secondary HWPs simulate it in our proposed experiment. The orientation of the HWP () plays the role of time () with () analogous to () for a two-level atomic system decaying to the ground state due to spontaneous emission.
We use Negativity as a measure of entanglement. It is defined as the sum of absolute values of negative eigen values of the partially transposed density matrix [36,37]. We find that our simulation results for the manipulation of ESD involving NOT operations on one or both the qubits of a polarization entangled photonic system in presence of ADC are completely consistent with the theoretical predictions of the reference [35] which has analyzed an atomic system. The merit of our scheme is that it can delay or avoid ESD (provided the NOT operation is performed sufficiently early) unlike previous experiments [32-34] where success probabilities scaled with the strength of the weak interaction. Since the photonic system is time independent and noise is simulated using HWPs, it gives experimentalists complete freedom to study and manipulate the disentanglement dynamics in a controlled manner. In this, our photonic system through a controllable HWP offers an advantage over others such as atomic states where the decay occurs through noise sources lying outside experimental control. The NOT operation that we apply through a HWP is the analogue of flipping spin in a nuclear magnetic system, achieved through what is referred to as a pulse.
The paper is organized as follows: In section (2), we discuss the all-optical implementation of the proposed ESD experiment and analyze it theoretically using the Kraus operator formalism. In section (3) and (4), we discuss and theoretically analyze the proposed ESD-manipulation experiment involving NOT operation on both or on only one of the qubits, respectively. In section (5), we give analytical expressions for probabilities and also for ESD and its manipulation curves in terms of the parameters of the initial state (density matrix). The first of these is the setting (“time“) for ESD, the next setting for the NOT marking the border between hastening and delay; that is, if the NOT is applied after (and of course before ), it actually hastens, ESD happening before , whereas application before delays ESD to stretch past to larger but still finite value less than one. The third, , marks the border between delaying or completely avoiding ESD. Applying the NOT after delays to a larger value whereas applying before avoids ESD altogether. In section (6), we summarize the results of manipulation of ESD for different pure and mixed initial entangled states, giving numerical values of , and . Section (7) concludes with pros and cons of the proposed scheme for the manipulation of ESD and the future scope of this work.
## 2 Proposed experimental set up for ESD and its analysis
The proposed experimental setup for ESD is shown in figure (1), which is a generalization of the scheme used in reference [9]. The type-1 polarization entangled photons () can be prepared by standard methods [38]; the amplitudes and and relative phase are controlled by the HWP and QWP (quarter wave plate). These entangled photons are sent to two displaced Sagnac interferometers with HWPs simulating the ADC, where decoherence takes place, and finally these photons are sent for tomographic reconstruction of the quantum state [39]. The and polarizations of the photon serve as the ground and excited states of the analogous atomic system, while output spatial modes of the reservoir serve as the ground and excited states of the reservoir. Asymmetry between degenerate polarization states of a photon ( ) is introduced by the HWP rotation such that it selectively causes an incident polarization to “decay“ to , while leaving polarization intact. Thus the and polarization states are analogous to the ground and excited states, respectively, of a two-level atomic system.
The ADC is implemented using two HWPs: oriented at respectively, such that incident polarization amplitude “decays“ to . For different fixed orientations of , evolution in ADC is completed by rotating . The PBS is used to segregate the and polarization amplitudes such that the HWP is applied only to polarization for it to serve as excited state of the system and leaving polarization (ground state of the system) undisturbed.
The single qubit Kraus operators for ADC are given by,
M1=(100√1−p) , M2=(0√p00), (1)
where for ADC mimicked by HWP in a photonic system [9].
These operators satisfy the completeness condition,
M†1M1+M†2M2=I, (2)
where is the identity matrix.
The Kraus operators for the two qubits are obtained by taking appropriate tensor products of single qubit Kraus operators as follows,
Mij=Mi⊗Mj ; i,j=1,2. (3)
Label another set of Kraus operators by , with variable replaced by () to distinguish it from the former, with the form of Kraus operators remaining similar to that in (1). Such a splitting into two angles or two values of probability will prove convenient for later applications in the section of manipulation using optical elements in between.
Let the initial state of the system be
|ψ⟩=|α||HH⟩+|β|exp(ιδ)|VV⟩, (4)
with a corresponding density matrix given by,
ρ(0,0)=⎛⎜ ⎜ ⎜⎝u00v00000000v∗00x⎞⎟ ⎟ ⎟⎠, (5)
where and . In general, if , this represents a pure entangled state, otherwise a mixed entangled state. A more general mixed state with non-zero entries in the other two diagonal positions is considered in appendix [B].
The initial state of the system (5) in the presence of ADC (due to at ) evolves as follows,
ρ(1)(p,0)=∑i,jMij ρ(0,0) M†ij ; i,j=1,2. (6)
Apply the Kraus operators to complete the evolution in the presence of ADC (due to at ) as follows,
ρ(1)(p,p′)=∑i,jM′ij ρ(p,0) M′†ij ; i,j=1,2 , (7) =⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝ρ(1)11(p,p′)00ρ(1)14(p,p′)0ρ(1)22(p,p′)0000ρ(1)33(p,p′)0ρ(1)41(p,p′)00ρ(1)44(p,p′)⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠,
where,
ρ(1)11(p,p′) =u+p2x+p′2(1−p)2x+2p′(1−p)px, (8) ρ(1)22(p,p′) =(1−p′)p′(1−p)2x+(1−p′)(1−p)px, ρ(1)33(p,p′) =(1−p′)p′(1−p)2x+(1−p′)(1−p)px, ρ(1)44(p,p′) =(1−p′)2(1−p)2x, ρ(1)14(p,p′) =(1−p′)(1−p)v, ρ(1)41(p,p′) =(1−p′)(1−p)v∗.
## 3 Proposed experimental set up for manipulation of ESD using the NOT operation on both qubits of a bipartite entangled state
The proposed experimental set up for manipulation of ESD based on the local NOT operation performed on both the qubits of a bipartite entangled state (5) is shown in figure (2). The HWP acts as ADC for incident polarized photon and then NOT operation is performed by at , which swaps the and amplitudes, which are then segregated by PBS . The ADC is continued by synchronous rotation of oriented at , which causes the swapped amplitude to “decay“ to . The photons from the output spatial modes of the interferometer are sent for tomographic reconstruction of the quantum state [39].
The initial state of the system (5) in the presence of ADC (due to at ) evolves as follows,
ρ(2)(p,0)=∑i,jMij ρ(0,0) M†ij ; i,j=1,2. (9)
Apply the NOT operation on both the qubits at as follows,
ρ(2)(pn,0)=(^σx⊗^σx)ρ(2)(p,0) (^σx⊗^σx)†, (10)
where is the Pauli matrix. This amounts to switching the elements and and and and interchanging (complex conjugation) the off-diagonal elements.
Apply the Kraus operators to complete the evolution of the system in the presence of ADC (due to and at ) after the NOT operation as follows,
ρ(2)(pn,p′)=∑i,jM′ij ρ(2)(pn,0) M′†ij ; i,j=1,2, (11)
with entries now in the form of (7) given by,
ρ(2)11(pn,p′) =(1−pn)2x+2p′(1−pn)pnx+p′2(u+p2nx), (12) ρ(2)22(pn,p′) =(1−p′)(1−pn)pnx+(1−p′)p′(u+p2nx), ρ(2)33(pn,p′) =(1−p′)(1−pn)pnx+(1−p′)p′(u+p2nx), ρ(2)44(pn,p′) =(1−p′)2(u+p2nx), ρ(2)14(pn,p′) =(1−p′)(1−pn)v∗, ρ(2)41(pn,p′) =(1−p′)(1−pn)v.
## 4 Effect of the NOT operation applied on only one of the qubits
The proposed experimental set up for studying the effect of a NOT operation applied on only one of the qubits of a bipartite entangled state in the presence of ADC on the dynamics of entanglement is to retain one half, say the lower, as in figure (1) and have only the upper half as in figure (2), the optical elements and occurring only in the upper arm.
The initial state of the system (5) in the presence of ADC (due to at ) evolves as follows,
ρ(3)(p,0)=∑i,jMij ρ(0,0) M†ij ; i,j=1,2. (13)
Apply the NOT operation on only one of the qubits by at , let us say first qubit, at as follows,
ρ(3)(pn,0)=(^σx⊗^I) ρ(3)(p,0) (^σx⊗^I)†. (14)
Apply next the Kraus operators to complete the evolution of the system in the presence of ADC (due to at ) after the NOT operation to give,
ρ(3)(pn,p′)=∑i,jM′ij ρ3(pn,0) M′†ij ; i,j=1,2, (15)
with entries now in the form of (7) given by,
ρ(3)11(pn,p′) =p′(1−pn)2x+(1−pn)pnx+p′2(1−pn)pnx+p′(u+p2nx), (16) ρ(3)22(pn,p′) =(1−p′)(1−pn)2x+(1−p′)p′(1−pn)pnx, ρ(3)33(pn,p′) =(1−p′)p′(1−pn)pnx+(1−p′)(u+p2nx), ρ(3)44(pn,p′) =(1−p′)2(1−pn)pnx, ρ(3)23(pn,p′) =(1−p′)(1−pn)v∗, ρ(3)32(pn,p′) =(1−p′)(1−pn)v.
## 5 Some analytical expressions
Let the two polarization entangled qubits constitute the system, as given by eqn (5), and the action of the rotating HWPs simulate the ADC. This causes a polarized photon to probabilistically “decay“ to with probability (), where () is the angle between the fast axis of the HWP and . The ADC probability at which ESD happens, depends on the initial state parameters of the entangled system and the ADC setting of first HWP . The criterion for ESD as indicated by a switch in sign of the eigenvalues of the partial transpose of Eq(7) is given by . For the initial state (5), the condition for ESD is obtained by computing the Negativity of the state (7) and equating it to zero. The condition for ESD is given by,
p′0=|v|−xpx(1−p). (17)
Let us denote the effective end of entanglement due to combined evolution through two HWPs by . The involves a multiplication of survival probabilities to give,
1−pend=(1−p)(1−p′0) with pend=|v|/x. (18)
depending only on the initial state parameters in (5).
For the manipulation of ESD using NOT operation on both the qubits, this operation switches and , and , and interchanges the off-diagonal elements and in Eq. (7). With subsequent evolution, the criterion for when ESD now happens, can be used to determine the value of that marks the boundary between hastening or not relative to , and similarly the value that is the boundary between delaying past or averting ESD completely. We get,
pA=1−2u2(1−u) , pB=|v|−u1+|v|−u. (19)
For the manipulation of ESD using NOT operation on only one of the qubits, this operation now switches and , and , and moves into the position. Following subsequent evolution, the ESD criterion through the partial transpose matrix now becomes . We now get,
pA=|v|u+2|v| , pB=|v|2|v|2−u+1. (20)
These simple expressions defining the time for ESD, and the times for NOT operation that define the delay/hasten and avert/delay boundaries may also be given for a more general mixed state density matrix with also non-zero entries in the two other diagonal position in (5) and are recorded in Appendix [B]. The Appendix [C] records similar expressions for a density matrix with non-zero values in the other off-diagonal position as in [35].
The NOT operation applied on both the qubits at of a bipartite entangled state leads to the end of entanglement given by,
(21)
When NOT operation is applied on only one of the qubits at , the end of entanglement is given by,
pend=[4x(pn−1)pn+4(pn−1)2|v|2+1]1/2+2xpn−12xpn. (22)
## 6 Results and Discussion
As an example, we choose the initial state with and and report the results for ESD, and ESD-manipulation using the NOT operation on one or both the qubits of the bipartite entangled state.
For ESD using two HWPs, the disentanglement happens for at and for any other combination of and , follows the non-linear eqn (17) in , and the effective end due to two HWPs is given by non-linear eqn (18) with . The plot of Negativity N vs. probability of decay of qubits () for the state (7) is shown in figure (3). The plot of Purity (defined as ) vs. probability of decay of qubits () for ESD is shown in figure (4). The two qubit entangled state (5), initially in a pure state, gets mixed at intermediate stages of amplitude damping and finally becomes pure again when both the qubits have decohered down to the ground state () at or . However, at ESD for , it ends as a mixed disentangled state.
For the manipulation of ESD using NOT operation on both the qubits: we get , and . The corresponding plot of Negativity vs. ADC probability () for the state (11) is shown in figure (5). For the manipulation of ESD using NOT operation on only one of the qubits: we get . The corresponding plot of Negativity vs. ADC probability () for the state (15) is shown in figure (6).
The plot of vs. for eqn (21) and (22) such that the NOT operations applied on both (only one of) the qubits at leads to disentanglement at is shown by dashed (solid) blue curve in figure (7). In the avoidance range , the vs. curves are cut off at to signify the asymptotic decay with probabilities remaining in the physical domain. For comparison, we have also included the results of ESD; eqn (17) and a rendering of (18), for every , giving the value of , the compounding of them giving the flat line at as shown by dotted red curve and dot-dashed red line in figure (7). The role of NOT operation on manipulation of ESD is evident as for (i) , we get avoidance of ESD with in this range for NOT operation on only one or both the qubits (ii) ( ), we get delay of ESD as the dashed (solid) blue curve lies above 0.5 but less than 1, and (iii) (), we get hastening of ESD as the dashed (solid) blue curve dips below 0.5.
The discussion so far, and figures , pertain to the choice , and result in for NOT applied to both whereas when applied to just one qubit. This is an example when and both lie in the physically relevant interval (). All three phenomena, of hastening (), delaying (), and averting () ESD then occur. The appearance of the various manipulation regimes (hastening, delay, and avoidance of ESD) critically depends on the choice of the parameters of the initial state (density matrix) of the system as expressed in eqn (17-20).
Consider a general initial state (5), with , which captures pure as well as mixed entangled states. The condition for the existence of hastening regime is for manipulation of ESD using single or double NOT operation. The condition for existence of avoidance regime is () for manipulation of ESD using single (double) NOT operation. For the pure entangled state (4), the condition for a physically relevant is . Thus, pure entangled states (4) with give rise to hastening, delay as well as avoidance of ESD, whereas states with give rise to delay and avoidance of ESD only. For all values of initial parameters, the analytical expressions in (17-20) provide for ESD, and and . When these lie within the domain , all three regimes are realized. Otherwise, one may have only two or one of the three regimes of avoidance, delay or hastening of ESD. More general expressions for a wider class of density matrices than (5) are given in Appendix [B,C].
Consider another example of pure entangled state of the form (5) with and . For this state, , () and does not exist in the physical domain for single (double) NOT operation. Therefore, we get only delay and avoidance of ESD. The corresponding plot of vs. is shown by solid (dashed) blue curve in figure(8). Next, consider an example of mixed entangled state of the form (5) with and . For this state, , ( does not exist), and does not exist in the physical domain for single (double) NOT operation. Therefore, NOT operation applied on only one (both) of the qubits delays as well avoids (only delays) the ESD. The corresponding plot of vs. is shown by solid (dashed) blue curve in figure (9).
## 7 Conclusions and future work
We have proposed an all-optical experimental setup for the demonstration of hastening, delay, and avoidance of ESD in the presence of ADC in a photonic system. The simulation results of the manipulation of ESD considering a photonic system, when NOT operations are applied on one or both the qubits, are completely consistent with the theoretical predictions of reference [35] for the two-level atomic system where spontaneous emission is the ADC. We give analytical expressions for which depend on the parameters of the density matrix of the system for both the forms considered here in (5) and that in [35].
Our proposal also has an advantage over decoherence suppression using weak measurement and quantum measurement reversal, and delayed choice decoherence suppression. There, as the strength of weak interaction increases, the success probability of decoherence suppression decreases. In our scheme, however, we can manipulate the ESD, in principle, with unit success probability as long as we perform the NOT operation at the appropriate wave plate angle which is analogous to time in the atomic system. Delay and avoidance of ESD, in particular, will find application in the practical realization of quantum information and computation protocols which might otherwise suffer a short lifetime of entanglement. Also, it will have implications towards such control over other physical systems. The advantage of the manipulation of ESD in a photonic system is that one has complete control over the damping parameters, unlike in most atomic systems. An experimental realization of our proposal will be important for practical noise engineering in quantum information processing, and is under way. Further work in the future could study the dynamics of entanglement in the presence of the generalized ADC [40-43] and the squeezed generalized ADC [44] and the possible schemes for manipulation of entanglement sudden death in the presence of such damping channels.
## 8 Acknowledgments
US and AS acknowledge Prof. Ujjwal Sen for his comments on the manuscript. AS acknowledges Subhajit Bhar for his assistance in literature review as well as verification of some calculation steps. ARPR thanks the Raman Research Institute for its hospitality during visits.
## 9 References
1. E. Schrdinger, ”Naturwissenschaften” 23, 807 (1935).
2. A. Einstein, B. Podolsky, N. Rosen, ”Can the Quantum-mechanical description of a physical reality be considered complete?” Phys. Rev. 47, 777 (1935).
3. J. S. Bell, ”On the Einstein Podolsky Rosen paradox”, Physics (Long Island City, N.Y.) 1, 195-200 (1964).
4. S. J. Freedman and J. F. Clauser, ”Experimental Test of Local Hidden-Variable Theories”, Phys. Rev. Lett. 28, 938 (1972).
5. T. Yu and J. H. Eberly , ”Finite-Time Disentanglement Via Spontaneous Emission”, Phys. Rev. Lett. 93, 140404 (2004).
6. T. Yu and J. H. Eberly , ”Quantum Open System Theory: Bipartite Aspects”, Phys.Rev. Lett. 97, 140403 (2006).
7. T. Yu and J. H. Eberly , ”Sudden Death of Entanglement”, Science 323 , 598 (2009).
8. J. Laurat, K. S. Choi, H. Deng, C. W. Chou, H. J. Kimble, ”Heralded Entanglement between Atomic Ensembles: Preparation”, Decoherence, and Scaling, Phys. Rev. Lett. 99, 180504 (2007).
9. M. P. Almeida, F. de Melo, M. Hor-Meyll, A. Salles, S. P. Walborn, P. H. Souto Ribeiro, L. Davidovich, ”Environment-Induced Sudden Death of Entanglement”, Science 316, 555 (2007).
10. Jin-Shi Xu, Chuan-Feng Li, Ming Gong, Xu-Bo Zou, Cheng-Hao Shi, Geng Chen, and Guang-Can Guo, ”Experimental demonstration of photonic entanglement collapse and revival”, Phys. Rev. Lett. 104, 100502 (2010).
11. R. Horodecki, P. Horodecki, M. Horodecki, K. Horodecki, ”Quantum entanglement”, Rev. Mod. Phys. 81, 865 (2009).
12. C. H. Bennett and D. P. DiVincenzo, “Quantum information and computation,” Nature 404, 247 (2000).
13. M. A. Nielsen and I. L. Chuang, ”Quantum Computation and Quantum Information” (Cambridge University Press, Cambridge, 2000).
14. J.W. Pan, S. Gasparoni, R. Ursin, G. Weihs, & A. Zeilinger, ”Experimental entanglement purification of arbitrary unknown states”, Nature 423, 417 (2003).
15. P. G. Kwiat, S. Barraza-Lopez, A. Stefanov, and N. Gisin, “Experimental entanglement distillation and ‘hidden’ non-locality,” Nature 409, 1014 (2001).
16. D. A. Lidar, I. L. Chuang, and K. B. Whaley, “Decoherence-free subspaces for quantum computation,” Phys. Rev. Lett. 81, 2594 (1998).
17. P. G. Kwiat, A. J. Berglund, J. B. Altepeter, and A. G. White, “Experimental verification of decoherence-free subspaces,” Science 290, 498 (2000).
18. D. Kielpinski, V. Meyer, M. A. Rowe, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, ”A decoherence-free quantum memory using trapped ions”, Science 291, 1013 (2001).
19. L. Viola, E. M. Fortunato, M. A. Pravia, E. Knill, R. Laflamme, and D. G. Cory, ”Experimental realization of noiseless subsystems for quantum information processing”, Science 293, 2059 (2001).
20. P.W. Shor, ”Scheme for reducing decoherence in quantum computer memory”, Phys. Rev. A 52, R2493 (1995) .
21. A.M. Steane,” Error Correcting Codes in Quantum Theory”, Phys. Rev. Lett. 77 , 793 (1996) .
22. Lorenza Viola, Emanuel Knill and Seth Lloyd, ”Dynamical Decoupling of Open Quantum Systems”, Phys. Rev. Lett. 82, 2417 (1999).
23. Michael J. Biercuk, Hermann Uys, Aaron P. VanDevender, Nobuyasu Shiga, Wayne M. Itano & John J. Bollinger, ”Optimized dynamical decoupling in a model quantum memory”, Nature 458, 996 (2009).
24. Jiangfeng Du, Xing Rong, Nan Zhao, Ya Wang, Jiahui Yang & R. B. Liu, ”Preserving electron spin coherence in solids by optimal dynamical decoupling” , Nature 461, 1265 (2009).
25. P. Facchi, D. A. Lidar, and S. Pascazio, ”Unification of dynamical decoupling and the quantum Zeno effect”, Phys. Rev. A 69, 032314 (2004).
26. S. Maniscalco, F. Francica, R. L. Zaffino, N. L. Gullo, and F. Plastina, ”Protecting entanglement via the quantum Zeno effect”, Phys. Rev. Lett. 100, 090503 (2008).
27. J. G. Oliveira, Jr., R. Rossi, Jr., and M. C. Nemes, “Protecting, enhancing, and reviving entanglement,” Phys. Rev. A 78, 044301 (2008).
28. Y.S. Kim, Y.W. Cho, Y.-S. Ra, and Y.-H. Kim, ”Reversing the weak quantum measurement for a photonic qubit”, Opt. Express 17, 11978 (2009).
29. A. N. Korotkov and K. Keane, “Decoherence suppression by quantum measurement reversal,” Phys. Rev. A 81,040103(R) (2010).
30. J.C. Lee, Y.C. Jeong, Y.S. Kim, and Y.H. Kim, “ Experimental demonstration of decoherence suppression via quantum measurement reversal,” Opt. Express 19, 16309 (2011).
31. Q. Sun, M. Al-Amri, L. Davidovich, and M. S. Zubairy, “Reversing entanglement change by a weak measurement,” Phys. Rev. A 82, 052323 (2010).
32. Y.S. Kim, J.C. Lee, O. Kwon, and Y.-H. Kim, “ Protecting entanglement from decoherence using weak measurement and quantum measurement reversal,” Nature Phys. 8, 117 (2012).
33. H.T. Lim, J.C. Lee, K.H. Hong, and Y.H. Kim, “ Avoiding entanglement sudden death using single-qubit quantum measurement reversal,” Opt. Express 22, 19055 (2014).
34. Jong-Chan Lee, Hyang-Tag Lim, Kang-Hee Hong, Youn-Chang Jeong, M.S. Kim & Yoon-Ho Kim, ”Experimental demonstration of delayed-choice decoherence suppression”, Nature Communications 5, 4522 (2014).
35. A.R.P. Rau, M. Ali, and G. Alber, ”Hastening, delaying or averting sudden death of quantum entanglement”, EPL 82, 40002 (2008) .
36. A. Peres , ”Separability Criterion for Density Matrices”, Phys. Rev. Lett. 77, 1413 (1996).
37. M.Horodecki, P. Horodecki and R. Horodecki , ”Separability of mixed states: necessary and sufficient conditions”, Phys. Lett. A 223, 1 (1996).
38. P.G. Kwiat, Edo Waks, Andrew G. White, Ian Appelbaum, and Philippe H. Eberhard, ”Ultra bright source of polarization entangled photons”, Phys. Rev. A 60, 773-776(R) (1999).
39. D. F. V. James, P. G. Kwiat, W. J. Munro, A. G. White, ”Measurement of qubits”, Phys. Rev. A 64, 052312 (2001).
40. Akio Fujiwara, ”Estimation of a generalized amplitude-damping channel”, Phys. Rev. A 70, 012317 (2004).
41. Asma Al-Qasimi and Daniel F. V. James, ”Sudden death of entanglement at finite temperature”, Phys. Rev. A 77,012117 (2008).
42. M. Ali, A. R. P. Rau, and G. Alber, “Manipulating entanglement sudden death of two-qubit X-states in zero- and finite-temperature reservoirs,” J. Phys. B: At. Mol. Opt. Phys. 42, 025501(8)(2009).
43. Mahmood Irtiza Hussain, Rabia Tahira and Manzoor Ikram, ”Manipulating the Sudden Death of Entanglement in Two-qubit Atomic Systems”, Journal of the Korean Physical Society, 59, 2901-2904(2011).
44. R. Srikanth and Subhashish Banerjee, ”Squeezed generalized amplitude damping channel”, Phys. Rev. A 77, 012318 (2008).
## Appendix A An alternative approach to analyze the ESD and manipulation experiments
We provide here an alternative and intuitive approach to analyze the ESD and its manipulation set up by tagging the photon polarization states with the spatial modes of the interferometer upon the action of each of optical component encountered in the photon’s path. The evolution of system plus reservoir is represented by a unitary operator . The degrees of freedom of the reservoir can be traced out from to get the Kraus operators which govern the evolution of the system by itself.
Consider the experimental set up for ESD as shown in fig(1). An incident polarized photon is transmitted though the PBS and traverses the interferometer in a counter-clockwise direction, returns to and is transmitted into spatial mode of the reservoir. The corresponding quantum map is given by,
USR|H⟩S|a⟩R→|H⟩S|a⟩R. (23)
An incident polarized photon is reflected by PBS and traverses the interferometer in a clockwise direction. The action of HWPs and and PBS and are represented by the quantum map,
USR|V⟩S|a⟩R H1@θ−−−→P2√1−p|V⟩S|a′⟩R+√p|H⟩S|b⟩R, (24) H2@θ′−−−−−−−→in |V⟩ arm√1−p[√1−p′ |V⟩S|a′⟩R +√p′ |H⟩S|b′⟩R]+√p|H⟩S|b⟩R],
where
Consider the experimental setup for ESD manipulation using double NOT operation as shown in figure (2). An incident polarized photon is transmitted through PBS and traverses the interferometer in a counter-clockwise direction where the NOT operation is applied by HWP and ADC afterwards is simulated by . The corresponding quantum map is given by,
USR|H⟩S|a⟩R H5@45−−−−→|V⟩S|b⟩R , (25) H6@θ′−−−→√1−p′|V⟩S|b⟩R+√p′|H⟩S|a⟩R .
An incident polarized photon is reflected by PBS and traverses the interferometer in a clockwise direction where ADC is introduced by HWP followed by NOT operation by HWP and then ADC is continued by HWP . The corresponding quantum map is given by,
USR|V⟩S|a⟩R H1@θ−−−→√1−p|V⟩S|a⟩R+√p|H⟩S|b⟩R , (26) P2−−−−→H5@45√1−p|H⟩S|b⟩R+√p|V⟩S|a′⟩R , H2@θ′−−−−−−−→in |V⟩ arm√1−p|H⟩S|b⟩R +√p[√1−p′|V⟩S|a′⟩R+√p′|H⟩S|b′⟩R] .
## Appendix B Analytical expressions for X-state with non-zero entries in the other diagonal terms
As discussed in Section 5, we again use the negativity criterion for the partial transposed density matrix to determine the occurrence of ESD. By following, as in [35], the evolution of the parameters in Eq.(2), and double NOT switching and , and , and swapping the off-diagonal terms and , whereas a single NOT at one end switches and , and , and moves and inward along the anti-diagonal, we again obtain analytical expressions for the various (or equivalently, and ) of interest. Consider the initial mixed entangled state with the density matrix given in a form more general than (5):
ρ2=⎛⎜ ⎜ ⎜⎝a00z0b0000c0z∗00d⎞⎟ ⎟ ⎟⎠, (27)
where . As per the convention for ground and excited states in the reference [35], we choose as the population of both qubits being in excited states and being the population of both qubits in the ground states unlike the (reversed) convention in the main body of this paper.
In the presence of ADC, the condition for ESD is given by,
p0=−b−c+[(b−c)2+4|z|2]1/22a. (28)
For manipulation of ESD using NOT operation on both the qubits, the condition for hastening ESD is given by,
pA=a−d1+a−d. (29)
The condition for avoidance of ESD is given by,
pB=1−2a+b+c−[(b−c)2+4|z|2)]1/22[(a+b)(a+c)−|z|2]. (30)
For manipulation of ESD using NOT operation on only one of the qubits, the condition for hastening ESD is given by,
pA=1−(c+a)[(c+a)(1−p0)−1](a+b){(a+b)−[(b−c)2+4|z|2]1/2}−a. (31)
and the condition for avoidance of ESD is given by,
pB=|z|2−c|z|2+a. (32)
## Appendix C Analytical expressions for the X-state in Reference [35]
Consider the initial mixed entangled state with the form of density matrix as in reference [35],
ρ1=⎛⎜ ⎜ ⎜⎝a0000bz00z∗c0000d⎞⎟ ⎟ ⎟⎠. (33)
For simplicity, the 1/3 factor in [35] has been absorbed into the density matrix elements.
In the presence of ADC, the condition for ESD is given by,
p0=−b−c+[(b+c+2a)2−4(a−|z|2)]1/22a, (34)
where , is the spontaneous decay rate of the two level atomic qubit as introduced in [35].
For manipulation of ESD using NOT operation on both the qubits, the condition for hastening ESD is given by,
pA=a−d1+a−d. (35)
The condition for avoidance of ESD is given by,
pB=2(a−|z|2)−(2a+b+c)+[(b+c+2a)2−4(a−|z|2)]1/22(a−|z|2). (36)
For manipulation of ESD using NOT operation on only one of the qubits, the condition for hastening ESD is given by,
pA=(c+a)[2a(1−p0)−(c+a)(1−p0)+c+d]−a(c+a)[2a(1−p0)−(b+a)]−a. (37)
The condition for avoidance of ESD is given by,
pB=1−a+c(a+b)(a+c)+|z|2. (38)
For hastening, delay and avoidance to exist in a physical region, the corresponding parameters must satisfy the condition . As an example, the choice gives , , and an unphysical negative value of . This means that neither hastening nor averting ESD is possible, only delaying it by applying NOT between and . | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8814756870269775, "perplexity": 1887.7924512363054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00393.warc.gz"} |
https://www.ias.ac.in/listing/bibliography/joaa/L._Lahlou | • L. Lahlou
Articles written in Journal of Astrophysics and Astronomy
• Tree Level Potential on Brane after Planck and BICEP2
The recent detection of degree scale B-mode polarization in the Cosmic Microwave Background (CMB) by the BICEP2 experiment implies that the inflationary ratio of tensor-to-scalar fluctuations is 𝑟 = 0.2$^{+0.07}_{-0.05}$, which has opened a new window in the cosmological investigation. In this regard, we propose a study of the tree level potential inflation in the framework of the Randall–Sundrum type-2 braneworld model. We focus on three branches of the potential, where we evaluate some values of brane tension 𝜆. We discuss how the various inflationary perturbation parameters can be compatible with recent Planck and BICEP2 observations.
• # Journal of Astrophysics and Astronomy
Volume 41, 2020
All articles
Continuous Article Publishing mode
• # Continuous Article Publication
Posted on January 27, 2016
Since January 2016, the Journal of Astrophysics and Astronomy has moved to Continuous Article Publishing (CAP) mode. This means that each accepted article is being published immediately online with DOI and article citation ID with starting page number 1. Articles are also visible in Web of Science immediately. All these have helped shorten the publication time and have improved the visibility of the articles.
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4807433485984802, "perplexity": 2693.275820590334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419639.53/warc/CC-MAIN-20200601211310-20200602001310-00548.warc.gz"} |
https://testbook.com/learn/maths-derivatives-of-logarithmic-functions/ | # Derivatives of Logarithmic Functions with Formulas, Proof and Solved Examples
0
Save
We are going to learn the key concepts of derivatives of logarithmic functions with definitions, important formulas with proof, properties and graphical representation of derivatives. We have also added a few solved examples for the derivatives of logarithmic functions that candidates will find beneficial in their exam preparation.
## What are Derivatives?
Derivatives of a function is a concepts in mathematics of real variables that measure the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). They are a part of differential calculus. There are various methods of log differentiation.
Derivative of a function of a real variable measures the sensitivity to change the function value (output value) with respect to a change in its argument (input value).
Derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. You should also learn about the application of derivatives after differentiating log function.
Derivative of a function is represented by $${dy\over{dx}}$$ or f′(x), represents the limit of the secant’s slope as h approaches zero.
Example of derivative: The derivative of a displacement function is velocity.
## What are Derivatives of Logarithmic Functions?
Derivatives of Logarithmic functions of the variable with respect to itself are equal to its reciprocal.
Derivatives of Logarithmic Functions are a series of formulae that can be used to differentiate logarithmic functions quickly.
$${d\over{dx}}{logx}={1\over{x}}$$
Derivatives of logarithmic functions are used to find solutions to differential equations.
## Derivatives of Logarithmic Functions Formula
The Derivatives of Logarithmic Functions Formula by using the normal method is as follows:
If x > 0 and y= lnx, then
$${dy\over{dx}} = {1\over{x}}$$
If x≠0 and y=ln|x|, then
$${dy\over{dx}} = {1\over{x}}$$
If the natural log is not just x but instead is g(x), a differentiable function. Now, using the chain rule, we get a more general derivative: for all values of x for which g(x) > 0, the derivative of f(x) = ln(g(x)) is given by
$$f'(x) = {1\over{g(x)}}g’(x)$$
### Derivatives of Logarithmic Functions using Graphical Representation
To learn the derivatives of logarithmic functions using graphical representation, let’s look at a graph of the log function with base e:
f(x) = log_e(x).
The tangent at x = 2 is included on the graph.
The slope of the tangent of y = ln x at $$\displaystyle{x}={2}$$ is $$\displaystyle\frac{1}{{2}}.$$ We can observe this from the graph, by looking at the ratio rise/run.
If y = ln x,
x 1 2 3 4 5 The slope of the graph 1 1/2 1/3 1/4 1/5 1/x 1 1/2 1/3 1/4 1/5
Learn about Derivative of Cos3x and Differentiation and Integration
## Derivatives of Logarithmic Functions Proof
According to the definition of the derivatives of logarithmic functions proof by first principle of derivative, the logarithmic differentiation of f(x)=x with respect to x can be written in limited operation form as follows:
$$\begin{matrix} f’(x)={dy\over{dx}}=\lim _{h{\rightarrow}0}{f(x+h)–f(x)\over{h}} f(x)=logx\\ f(x+h)=log(x+h)\\ f(x+h)–f(x)=log(x+h)-logx=log({x+h\over{x}})=log(1+{h\over{x}})\\ {f(x+h)–f(x)\over{h}}={log(1+{h\over{x}})\over{h}}\\ \text{Let’s put (h\x) as (1\n)}\\ {f(x+h)–f(x)\over{h}}=({1\over{h}}){log(1+{1\over{n}})}=({n\over{x}}){log(1+{1\over{n}})}=({1\over{x}})({log(1+{1\over{n}})})^n\\ \text{As h tends to zero (1\n) tends to}\infty\\ \lim _{h{\rightarrow}\infty}{f(x+h)–f(x)\over{h}}=\lim _{h{\rightarrow}\infty}{(1\over{x}})({log(1+{1\over{n}})})^n={(1\over{x}})\lim _{h{\rightarrow}\infty}({log(1+{1\over{n}})})^n\\ \text{The value of the limit is equal to}\\ \lim\limits_{n \to \infty } {\left( {1 + \frac{1}{n}} \right)^n} = e \approx 2.718281828459 \ldots\\ \left( {{{\log }_a}x} \right)^\prime = \frac{1}{x}{\log _a}e\\ {\log _a}e = \frac{{\ln e}}{{\ln a}} = \frac{1}{{\ln a}}\\ f’(x)={d\over{dx}}{logx}={1\over{x}} \end{matrix}$$
Learn about First Principles of Derivatives and Derivative of Root x
## Solved Examples of Derivatives of Logarithmic Functions
Example: Find the derivative of y = ln 2x
We use the log law:
log ab = log a + log b
We can write our question as:
y = ln 2x = ln 2 + ln x
Now, the derivative of a constant is 0, so
$${d\over{dx}} ln 2 = 0$$
So we are left with (from our formula above)
$$y’ = {d\over{dx}}lnx = {1\over{x}}$$
Example: Find the derivative of $$y = lnx^2$$
We use the log law:
$$log a^n = n log a$$
So we can write the question as $$y = ln x^2 = 2 ln x$$
The derivative will be simply 2 times the derivative of ln x.
$$y’ = 2\times{d\over{dx}}lnx = {2\over{x}}$$
Example: Find the derivative of $$f(x) = ln (x^3+3x−4)$$
$$f(x) = ln (x^3+3x−4)$$
We use the formula $$f'(x) = {1\over{g(x)}}g’(x)$$ where g(x) is the composite function of x.
Therefore we get,
$$f'(x) = {1\over{(x^3+3x−4)}}[3x^2 + 3]$$
Example: Find the derivative of $$f(x) = ln({x^2sinx\over{2x+1}})$$
$$f(x)=ln({x^2sinx\over{2x+1}}) = 2lnx + ln(sinx) − ln(2x+1)$$
We use the formula $$f'(x) = {1\over{g(x)}}g’(x)$$ where g(x) is the composite function of x.
Therefore we get,
$$\begin{matrix} f'(x) = {2\over{x}} + {1\over{sinx}}.cosx – {1\over{2x+1}}.2\\ f'(x) = {2\over{x}} + cotx – {2\over{2x+1}} \end{matrix}$$
Example: Differentiate: $$f(x)=ln(3x+2)^5$$
$$f(x)=ln(3x+2)^5$$
We use the log law:
$$log a^n = n log a$$
f(x) = 5ln(3x+2)
We use the formula $$f'(x) = {1\over{g(x)}}g’(x)$$ where g(x) is the composite function of x.
Therefore we get,
$$\begin{matrix} f'(x) = 5\times {1\over{(3x + 2}} \times3\\ f'(x) = {15\over{(3x + 2}} \end{matrix}$$
$$f(x) = {3^x\over{3^x + 2}}$$
$$\begin{matrix} f(x) = {3^x\over{3^x + 2}}\\ f’(x) = {3^x ln 3(3^x + 2) – 3^x ln 3(3^x)\over{(3^x + 2)^2}}\\ f’(x) = {2⋅3^xln3\over{(3^x + 2)^2}} \end{matrix}$$
Hope this article on the Derivatives of Logarithmic Functions was informative. Get some practice of the same on our free Testbook App. Download Now!
If you are checking Derivatives of Logarithmic Functions article, also check the related maths articles: Logarithmic Functions Derivative of log x Applications of Derivatives Derivative Rules Logarithmic Function MCQs Derivatives of Algebraic Functions
## Derivatives of Logarithmic Functions FAQs
Q.1 What are Derivatives?
Ans.1 The derivative of a function of real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The derivative of a function, represented by $${dy\over{dx}}$$ or f′(x), represents the limit of the secant’s slope as h approaches zero. Example: The derivative of a displacement function is velocity.
Q.2 What is the derivative of a logarithmic function?
Ans.2 The derivative of a logarithmic function of the variable with respect to itself is equal to its reciprocal. $${d\over{dx}}{logx}={1\over{x}}$$
Q.3 What is derivative of log 2x?
Ans.3 f(x) = log 2x We use the log law: log ab = log a + log b We can write our question as: y = log 2x = log 2 + ln x Now, the derivative of a constant is 0, so {d\over{dx}} log 2 = 0 So we are left with (from our formula above) y’ = {d\over{dx}}logx = {1\over{x}}
Q.4 What is the derivative of exponential functions?
Ans.4 The derivative of an exponential term, which contains a variable as a base and a constant as power, is called the constant power derivative rule. Suppose a and x represent a constant and a variable respectively then the exponential function is written as a^x in mathematics. The derivative of a raised to the power of x with respect to x is written in the following form in calculus. $${d\over{dx}}a^x=a^x.\ln{a}$$
Q.5 What is the derivative of f(x) = loga?
Ans.5 a is constant. Derivative of any function of constant is 0. Therefore, f’(x) = o.
Q.5 What is the derivative of log a?
Ans.5 The derivative of loga(x):
y=loga(x)x=ay1=ddx(ay)1=ayln(a)dydxdydx=1ayln(a)dydx=1xln(a).
Q.6 What is the differentiation of log 2?
Ans.6 The derivative of log 2(x) log 2 ( x ) with respect to x is 1xln(2) 1 x ln (2). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9464746713638306, "perplexity": 430.22502133748213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00121.warc.gz"} |
https://stdlib.io/docs/api/latest/@stdlib/stats/incr/pcorrdistmat | # incrpcorrdistmat
Compute a sample Pearson product-moment correlation distance matrix incrementally.
A sample Pearson product-moment correlation distance matrix is an M-by-M matrix whose elements specified by indices j and k are the sample Pearson product-moment correlation distances between the jth and kth data variables. The sample Pearson product-moment correlation distance is defined as
where r is the sample Pearson product-moment correlation coefficient, cov(x,y) is the sample covariance, and σ corresponds to the sample standard deviation. As r resides on the interval [-1,1], d resides on the interval [0,2].
## Usage
var incrpcorrdistmat = require( '@stdlib/stats/incr/pcorrdistmat' );
#### incrpcorrdistmat( out[, means] )
Returns an accumulator function which incrementally computes a sample Pearson product-moment correlation distance matrix.
// Create an accumulator for computing a 2-dimensional correlation distance matrix:
var accumulator = incrpcorrdistmat( 2 );
The out argument may be either the order of the correlation distance matrix or a square 2-dimensional ndarray for storing the correlation distance matrix.
var Float64Array = require( '@stdlib/array/float64' );
var ndarray = require( '@stdlib/ndarray/ctor' );
var buffer = new Float64Array( 4 );
var shape = [ 2, 2 ];
var strides = [ 2, 1 ];
// Create a 2-dimensional output correlation distance matrix:
var dist = ndarray( 'float64', buffer, shape, strides, 0, 'row-major' );
var accumulator = incrpcorrdistmat( dist );
When means are known, the function supports providing a 1-dimensional ndarray containing mean values.
var Float64Array = require( '@stdlib/array/float64' );
var ndarray = require( '@stdlib/ndarray/ctor' );
var buffer = new Float64Array( 2 );
var shape = [ 2 ];
var strides = [ 1 ];
var means = ndarray( 'float64', buffer, shape, strides, 0, 'row-major' );
means.set( 0, 3.0 );
means.set( 1, -5.5 );
var accumulator = incrpcorrdistmat( 2, means );
#### accumulator( [vector] )
If provided a data vector, the accumulator function returns an updated sample Pearson product-moment distance correlation matrix. If not provided a data vector, the accumulator function returns the current sample Pearson product-moment correlation distance matrix.
var Float64Array = require( '@stdlib/array/float64' );
var ndarray = require( '@stdlib/ndarray/ctor' );
var buffer = new Float64Array( 4 );
var shape = [ 2, 2 ];
var strides = [ 2, 1 ];
var dist = ndarray( 'float64', buffer, shape, strides, 0, 'row-major' );
buffer = new Float64Array( 2 );
shape = [ 2 ];
strides = [ 1 ];
var vec = ndarray( 'float64', buffer, shape, strides, 0, 'row-major' );
var accumulator = incrpcorrdistmat( dist );
vec.set( 0, 2.0 );
vec.set( 1, 1.0 );
var out = accumulator( vec );
// returns <ndarray>
var bool = ( out === dist );
// returns true
vec.set( 0, 1.0 );
vec.set( 1, -5.0 );
out = accumulator( vec );
// returns <ndarray>
vec.set( 0, 3.0 );
vec.set( 1, 3.14 );
out = accumulator( vec );
// returns <ndarray>
out = accumulator();
// returns <ndarray>
## Notes
• Due to limitations inherent in representing numeric values using floating-point format (i.e., the inability to represent numeric values with infinite precision), the correlation distance between perfectly correlated random variables may not be 0 or 2. In fact, the correlation distance is not guaranteed to be strictly on the interval [0,2]. Any computed distance should, however, be within floating-point roundoff error.
## Examples
var randu = require( '@stdlib/random/base/randu' );
var ndarray = require( '@stdlib/ndarray/ctor' );
var Float64Array = require( '@stdlib/array/float64' );
var incrpcorrdistmat = require( '@stdlib/stats/incr/pcorrdistmat' );
var dist;
var dxy;
var dyx;
var dx;
var dy;
var i;
// Initialize an accumulator for a 2-dimensional correlation distance matrix:
var accumulator = incrpcorrdistmat( 2 );
// Create a 1-dimensional data vector:
var buffer = new Float64Array( 2 );
var shape = [ 2 ];
var strides = [ 1 ];
var vec = ndarray( 'float64', buffer, shape, strides, 0, 'row-major' );
// For each simulated data vector, update the sample correlation distance matrix...
for ( i = 0; i < 100; i++ ) {
vec.set( 0, randu()*100.0 );
vec.set( 1, randu()*100.0 );
dist = accumulator( vec );
dx = dist.get( 0, 0 ).toFixed( 4 );
dy = dist.get( 1, 1 ).toFixed( 4 );
dxy = dist.get( 0, 1 ).toFixed( 4 );
dyx = dist.get( 1, 0 ).toFixed( 4 );
console.log( '[ %d, %d\n %d, %d ]', dx, dxy, dyx, dy );
} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18967507779598236, "perplexity": 26244.079169878274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00464.warc.gz"} |
http://www.eurotrib.com/story/2007/6/3/82538/92559 | Welcome to the new version of European Tribune. It's just a new layout, so everything should work as before - please report bugs here.
## Wind power: some lessons from 2006
by Jerome a Paris Mon Jun 4th, 2007 at 04:46:47 AM EST
The Department of Energy's Energy Efficiency and Renewable Energy (EERE) center has published its Annual Report on U.S. Wind Power Installation, Cost, and Performance Trends: 2006 (pdf - the graphs below come from the accompanying powerpoint presentation - also pdf).
I've cherry-picked a few tidbits of information that underline what are in my view interesting lessons from last year for the wind power sector.
Disclaimer: I finance wind farms. While that means in practice that I make sure that the projects I work on have as few vulnerabilities (technical, economic, legal, or political) as possible, I am naturally interested in the growth of the industry that underpins my job. So take this diary with the grain of salt you think it accordingly deserves.
From the diaries - afew
2006 was another good year for wind power, with a (via GWEC) 32% growth in capacity installed over the year:
For the second year running, the USA was the first country by MW installed in that year, although not the first by cumulative capacity, with Germany still far ahead, and Spain still ahead of it.
The strong position of China, and even more of India (home of manufacturer Suzlon, which has just won the battle to buy German manufacturer Repower, and purchased Belgian gearbox subcontractor Hansen last year) should be noted.
However, in terms of capacity relative to domestic electricity markets, the European pioneers (Denmark, then Germany and Spain) are still far ahead:
We're beginning to see countries where wind penetration is large enough to provide a visible portion of total electricity (and note - this is the fraction of actual kWh consumed, after taking into account the lower availability of wind power generation capacity) - and these numbers are set to keep on growing significantly in the coming years, as more capacity comes online. Even though most of market growth now comes from newcomers, like France or Canada, countries like Spain and Germany are still adding 10-15% new capacity to their existing stock each year. As I wrote in an earlier diary (No technical limitation to wind power penetration), there's still a lot to go before integration of wind into the grid becomes an issue. The EERE has a table which confirms this, with the additional cost of dealing with wind power between 0.2 and 0.5 c/kWh:
by Oui - Mar 28
1 comment
by Oui - Apr 9
by Oui - Apr 12
by Oui - Apr 8
by Oui - Apr 2
# Recent Diaries
## Support for abortion in Ireland slips
by Frank Schnittger - Apr 20
by Oui - Apr 17
1 comment
by Oui - Apr 17
by Cat - Apr 14
by Cat - Apr 14
by Oui - Apr 12
by Oui - Apr 10
by Oui - Apr 9
by Oui - Apr 8
by Cat - Apr 6
by Oui - Apr 6
by Oui - Apr 4
by Oui - Apr 4
by Oui - Apr 3
by Oui - Apr 2
by Oui - Apr 1
1 comment
by Oui - Mar 31
by Oui - Mar 30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18895508348941803, "perplexity": 3960.0735680811927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946120.31/warc/CC-MAIN-20180423164921-20180423184921-00476.warc.gz"} |
https://hal.telecom-paristech.fr/hal-02288594v1 | # A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes
Abstract : The optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a part of the nodes can be utilized to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that restricted to linear functions. Specifically, in this work, we propose a new information-theoretical converse and a new matching coded computing scheme, that we call coded computing for straggling systems (CCS).
Document type :
Preprints, Working Papers, ...
Domain :
Cited literature [13 references]
https://hal.telecom-paristech.fr/hal-02288594
Contributor : Qifa Yan <>
Submitted on : Sunday, September 15, 2019 - 5:39:54 AM
Last modification on : Thursday, October 17, 2019 - 12:36:55 PM
### File
CCS_ISIT.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : hal-02288594, version 1
### Citation
Qifa Yan, Michèle Wigger, Sheng Yang, Xiaohu Tang. A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes. 2019. ⟨hal-02288594⟩
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330921530723572, "perplexity": 7733.836628026313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541317967.94/warc/CC-MAIN-20191216041840-20191216065840-00538.warc.gz"} |
http://www.stat.cmu.edu/~cshalizi/statcomp/14/lectures/08/lecture-08.html | 22 September 2014
## In Previous Episodes
• Seen functions to load data in passing
• Learned about string manipulation and regexp
## Agenda
• Getting data into and out of the system when it's already in R format
• Import and export when the data is already very structured and machine-readable
• Dealing with less structured data
• Web scraping
• You can load and save R objects
• R has its own format for this, which is shared across operating systems
• It's an open, documented format if you really want to pry into it
• save(thing, file="name") saves thing in a file called name (conventional extension: rda or Rda)
• load("name") loads the object or objects stored in the file called name, with their old names
gmp <- read.table("http://www.stat.cmu.edu/~cshalizi/statcomp/14/lectures/06/gmp.dat")
gmp$pop <- round(gmp$gmp/gmp$pcgmp) save(gmp,file="gmp.Rda") rm(gmp) exists("gmp") ## [1] FALSE not_gmp <- load(file="gmp.Rda") colnames(gmp) ## [1] "MSA" "gmp" "pcgmp" "pop" not_gmp ## [1] "gmp" • We can load or save more than one object at once; this is how RStudio will load your whole workspace when you're starting, and offer to save it when you're done • Many packages come with saved data objects; there's the convenience function data() to load them data(cats,package="MASS") summary(cats) ## Sex Bwt Hwt ## F:47 Min. :2.00 Min. : 6.30 ## M:97 1st Qu.:2.30 1st Qu.: 8.95 ## Median :2.70 Median :10.10 ## Mean :2.72 Mean :10.63 ## 3rd Qu.:3.02 3rd Qu.:12.12 ## Max. :3.90 Max. :20.50 Note: data() returns the name of the loaded data file! ## Non-R Data Tables • Tables full of data, just not in the R file format • Main function: read.table() • Presumes space-separated fields, one line per row • Main argument is the file name or URL • Returns a dataframe • Lots of options for things like field separator, column names, forcing or guessing column types, skipping lines at the start of the file… • read.csv() is a short-cut to set the options for reading comma-separated value (CSV) files • Spreadsheets will usually read and write CSV ## Writing Dataframes • Counterpart functions write.table(), write.csv() write a dataframe into a file • Drawback: takes a lot more disk space than what you get from load or save • Advantage: can communicate with other programs, or even edit manually ## Less Friendly Data Formats • The foreign package on CRAN has tools for reading data files from lots of non-R statistical software • Spreadsheets are special ## Spreadsheets Considered Harmful • Spreadsheets look like they should be dataframes • Real spreadsheets are full of ugly irregularities • Values or formulas? • Headers, footers, side-comments, notes • Columns change meaning half-way down • Whole separate programming languages apparently intended to mostly to spread malware • Ought-to-be-notorious source of errors in both industry (1, 2) and science (e.g., Reinhart and Rogoff) ## Spreadsheets, If You Have To • Save the spreadsheet as a CSV; read.csv() • Save the spreadsheet as a CSV; edit in a text editor; read.csv() • Use read.xls() from the gdata package • Tries very hard to work like read.csv(), can take a URL or filename • Can skip down to the first line that matches some pattern, select different sheets, etc. • You may still need to do a lot of tidying up after require(gdata, quietly=TRUE) ## gdata: read.xls support for 'XLS' (Excel 97-2004) files ENABLED. ## ## gdata: read.xls support for 'XLSX' (Excel 2007+) files ENABLED. ## ## Attaching package: 'gdata' ## ## The following object is masked from 'package:stats': ## ## nobs ## ## The following object is masked from 'package:utils': ## ## object.size setwd("~/Downloads/") gmp_2008_2013 <- read.xls("gdp_metro0914.xls",pattern="U.S.") head(gmp_2008_2013) ## U.S..metropolitan.areas X13.269.057 X12.994.636 X13.461.662 ## 1 Abilene, TX 5,725 5,239 5,429 ## 2 Akron, OH 28,663 27,761 28,616 ## 3 Albany, GA 4,795 4,957 4,928 ## 4 Albany, OR 3,235 3,064 3,050 ## 5 Albany-Schenectady-Troy, NY 40,365 42,454 42,969 ## 6 Albuquerque, NM 37,359 38,110 38,801 ## X13.953.082 X14.606.938 X15.079.920 ....... ## 1 5,761 6,143 6,452 252 ## 2 29,425 31,012 31,485 80 ## 3 4,938 5,122 5,307 290 ## 4 3,170 3,294 3,375 363 ## 5 43,663 45,330 46,537 58 ## 6 39,967 41,301 41,970 64 ## Semi-Structured Files, Odd Formats • Files with metadata (e.g., earthquake catalog) • Non-tabular arrangement • Generally, write function to read in one (or a few) lines and split it into some nicer format • Generally involves a lot of regexps • Functions are easier to get right than code blocks in loops ## In Praise of Capture Groups • Parentheses don't just group for quantifiers; they also create capture groups, which the regexp engine remembers • Can be referred to later (\1, \2, etc.) • Can also be used to simplify getting stuff out • Examples in the handout on regexps, but let's reinforce the point ## Scraping the Rich • Remember that the lines giving net worth looked like <td class="worth">$72 B</td>
or
<td class="worth">$5,3 B</td> One regexp which catches this: richhtml <- readLines("http://www.stat.cmu.edu/~cshalizi/statcomp/14/labs/03/rich.html") worth_pattern <- "\\$[0-9,]+ B"
worth_lines <- grep(worth_pattern, richhtml)
length(worth_lines)
## [1] 100
(that last to check we have the right number of matches)
Just using this gives us strings, including the markers we used to pin down where the information was:
worth_matches <- regexpr(worth_pattern, richhtml)
worths <- regmatches(richhtml, worth_matches)
head(worths)
## [1] "$72 B" "$58,5 B" "$41 B" "$36 B" "$36 B" "$35,4 B"
Now we'd need to get rid of the anchoring $ and B; we could use substr, but… Adding a capture group doesn't change what we match: worth_capture <- worth_pattern <- "\\$([0-9,]+) B"
capture_lines <- grep(worth_capture, richhtml)
identical(worth_lines, capture_lines)
## [1] TRUE
but it does have an advantage
## Using regexec
worth_matches <- regmatches(richhtml[capture_lines],
regexec(worth_capture, richhtml[capture_lines]))
worth_matches[1:2]
## [[1]]
## [1] "$72 B" "72" ## ## [[2]] ## [1] "$58,5 B" "58,5"
List with 1 element per matching line, giving the whole match and then each paranethesized matching sub-expression
Functions make the remaining manipulation easier:
second_element <- function(x) { return(x[2]) }
worth_strings <- sapply(worth_matches, second_element)
comma_to_dot <- function(x) {
return(gsub(pattern=",",replacement=".",x))
}
worths <- as.numeric(sapply(worth_strings, comma_to_dot))
head(worths)
## [1] 72.0 58.5 41.0 36.0 36.0 35.4
Exercise: Write one function which takes a single line, gets the capture group, and converts it to a number
## Web Scraping
1. Take a webpage designed for humans to read
2. Have the computer extract the information we actually want
3. Iterate as appropriate
Take in unstructured pages, return rigidly formatted data
## Being More Explicit in Step 2
• The information we want is somewhere in the page, possibly in the HTML
• There are usually markers surrounding it, probably in the HTML
• We now know how to pick apart HTML using regular expressions
• Figure out exactly what we want from the page
• Understand how the information is organized on the page
• What does a human use to find it?
• Where do those cues appear in the HTML source?
• Write a function to automate information extraction
• Generally, this means regexps
• Parenthesized capture groups are helpful
• The function may need to iterate
• You may need more than one function
• Once you've got it working for one page, iterate over relevant pages
## Example: Book Networks
• Two books are linked if they're bought together at Amazon
• Amazon gives this information away (to try to drive sales)
• How would we replicate this?
• Do we want "frequently bought together", or "customers who bought this also bought that"? Or even "what else do customers buy after viewing this"?
• Let's say "customers who bought this also bought that"
• Now look carefully at the HTML
• There are over 14,000 lines in the HTML file for this page; you'll need a text editor
• Fortunately most of it's irrelevant
<div class="shoveler" id="purchaseShvl">
<h2>Customers Who Bought This Item Also Bought</h2>
</div>
<div class="shoveler-pagination" style="display:none">
<span> </span>
<span>
Page <span class="page-number"></span> of <span class="num-pages"></span>
<span class="start-over"><span class="a-text-separator"></span><a href="#" onclick="return false;" class="start-over-link">Start over</a></span>
</span>
</div>
<div class="shoveler-button-wrapper" id="purchaseButtonWrapper">
<a class="back-button" href="#Back" style="display:none" onclick="return false;"><span class="auiTestSprite s_shvlBack"><span>Back</span></span></a>
<div class="shoveler-content">
<ul tabindex="-1">
Here's the first of the also-bought books:
<li>
<div class="new-faceout p13nimp" id="purchase_0387981403" data-asin="0387981403" data-ref="pd_sim_b_1">
<a href="/ggplot2-Elegant-Graphics-Data-Analysis/dp/0387981403/ref=pd_sim_b_1?ie=UTF8&refRID=1HZ0VDHEFFX3EM2WNWRH" class="sim-img-title" > <div class="product-image">
<img src="http://ecx.images-amazon.com/images/I/31I22xsT%2BXL._SL500_PIsitb-sticker-arrow-big,TopRight,35,-73_OU01_SS100_.jpg" width="100" alt="" height="100" border="0" />
</div>
<span title="ggplot2: Elegant Graphics for Data Analysis (Use R!)">ggplot2: Elegant Graphics for Data …</span> </a>
<div class="byline">
<span class="carat">›</span>
We could extract the ISBN from this, and then go on to the next book, and so forth…
<div id="purchaseSimsData" class="sims-data"
style="display:none" data-baseAsin="0387747303"
data-deviceType="desktop" data-featureId="pd_sim" data-isAUI="1" data-pageId="0387747303" data-pageRequestId="1HZ0VDHEFFX3EM2WNWRH" data-reftag="pd_sim_b" data-vt="0387747303"
data-wdg="book_display_on_website"
data-widgetName="purchase">0387981403,0596809158,1593273843,1449316956,
0387938362,144931208X,0387790535,0387886974,0470973927,0387759689,
1439810184,1461413648,1461471370,1782162143,1441998896,1429224622,
1612903436,1441996494,1461468485,1617291560,1439831769,0321888030,1449319793,
1119962846,0521762936,1446200469,1449358659,1935182390,0123814855,1599941651,
0387759352,1461476178,0387773169,0387922970,0073523321,141297514X,1439840954,
1612900275,1449339735,052168689X,0387781706,1584884509,0387848576,1420068725,
1441915753,1466572841,1107422221,111844714X,0716762196,0133412938,1482203537,
0963488406,1466586966,0470463635,1493909827,1420079336,0321898656,1461422981,
158488424X,1441926127,1466570229,1590475348,1430266406,0071794565,0071623663,
111866146X,1441977864,1782160604,1449340377,1449309038,0963488414,0137444265,
1461406846,0073014664,1449370780,144197864X,3642201911,0534243126,1461443423,
158488651X,1449357105,1118208781,1420099604,1107057132,1449355730,1118356853,
1449361323,0470890819,0387245448,0521518148,0521169828,1584888490,1461464455,
0387781889,0387759581,0387717617,0123748569,188652923X,0155061399,0201076160</div>
In this case there's a big block which gives us the ISBNs of all the also-bought books
Strategy:
• Load the page as text
• Search for the regexp which begins this block, contains at least one ISBN, and then ends
• Extract the sequence of ISBNs as a string, split on comma
• Record in a dataframe that Data Manipulation's ISBN is also bought with each of those ISBNs
• Snowball sampling: Go to the webpage of each of those books and repeat
• Stop when we get tired…
• Or when Amazon gets annoyed with us
## More considerations on web-scraping
• You should really look at the site's robots.txt file and respect it
• See [https://github.com/hadley/rvest] for a prototype of a package to automate a lot of the work of scraping webpages | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22725269198417664, "perplexity": 11417.660496842222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00444-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://github.com/4teamwork/ftw.pdfgenerator | # 4teamwork/ftw.pdfgenerator
A library for generating PDF representations of Plone objects with LaTeX.
Python TeX
Fetching latest commit…
Cannot retrieve the latest commit at this time.
Failed to load latest commit information. docs ftw .gitignore MANIFEST.in README.rst bootstrap.py development.cfg setup.py sources.cfg test-plone-4.1.x.cfg test-plone-4.2.x.cfg test-plone-4.3.x.cfg
## Introduction
ftw.pdfgenerator is meant to be used for generating PDFs from structured data using predefined LaTeX views. It is not useful for converting full HTML pages into LaTeX / PDFs, although it is able to convert small HTML chunks into LaTeX.
Certified: 01/2013
## Requirements
ftw.pdfgenerator requires a TeX distribution with a pdflatex executable to be installed.
These TeX distributions are recommended:
The package is compatible with Plone 4.x.
## Installing
Add ftw.pdfgenerator to your buildout configuration:
[instance]
eggs =
ftw.pdfgenerator
## Usage
The pdfgenerator uses LaTeX for generating the PDF. You need to provide a layout and a view for your context for being able to create a PDF.
### Real world examples
Some packages using ftw.pdfgenerator:
### Defining a layout
A layout is a multi adapter addapting context, request, builder. You can easily define a new layout using the mako templating engine (example: layout.py):
>>> from example.conference.session import ISession
>>> from ftw.pdfgenerator.interfaces import IBuilder
>>> from ftw.pdfgenerator.interfaces import ICustomizableLayout
>>> from ftw.pdfgenerator.layout.customizable import CustomizableLayout
>>> from zope.interface import Interface
>>> from zope.interface import implements
>>> class SessionLayout(MakoLayoutBase):
... implements(ICustomizableLayout)
...
... template_directories = ['session_templates']
... template_name = 'layout.tex'
...
... def before_render_hook(self):
... self.use_babel()
... self.use_package('inputenc', options='utf8')
... self.use_package('fontenc', options='T1')
Register the layout with zcml (example: configure.zcml):
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:browser="http://namespaces.zope.org/browser">
provides="ftw.pdfgenerator.interfaces.ILaTeXLayout" />
</configure>
Create a template as defined in SessionLayout.template_name (example: session_templates/layout.tex):
<%block name="documentclass">
\documentclass[a4paper,10pt]{article}
</%block>
<%block name="usePackages">
${packages} </%block> <%block name="beneathPackages"> </%block> <%block name="aboveDocument"> </%block> \begin{document} <%block name="documentTop"> % if logo:${logo}
% endif
</%block>
${content} <%block name="documentBottom"> </%block> \end{document} There are more methods on the layout, see the definition in ftw.pdfgenerator.interfaces.ILaTeXLayout. ### Defining a LaTeX view For every context for which a PDF is generated a LaTeX view (ILaTeXView) is rendered. The view is a multi adapter adapting context, request, layout. There is a view based on the mako templating engine which can be extended (example: views.py): >>> from example.conference.session import ISession >>> from ftw.pdfgenerator.interfaces import ILaTeXLayout >>> from ftw.pdfgenerator.interfaces import ILaTeXView >>> from ftw.pdfgenerator.view import MakoLaTeXView >>> from zope.component import adapts >>> from zope.interface import Interface >>> from zope.interface import implements >>> class SessionLaTeXView(MakoLaTeXView): ... adapts(ISession, Interface, ILaTeXLayout) ... implements(ILaTeXView) ... ... template_directories = ['session_templates'] ... template_name = 'view.tex' ... ... def get_render_arguments(self): ... return {'title': self.convert(self.context.Title()), ... 'description': self.convert(self.context.description), ... 'details': self.convert(self.context.details)} Register the view with zcml (example: configure.zcml): <configure xmlns="http://namespaces.zope.org/zope" xmlns:browser="http://namespaces.zope.org/browser"> <adapter factory=".views.SessionLaTeXView" provides="ftw.pdfgenerator.interfaces.ILaTeXView" /> </configure> Create a template with the name defined in the class (example: session_templates/view.tex): \section*{${title}}
% if description:
\small ${description} % endif \normalsize${details}
### Generating a PDF
When a layout and a view for the context are registered the PDF can be generated by simply calling the view @@export_pdf on the context.
### Recursive views
When extending from ftw.pdfgenerator.view.RecursiveLaTeXView and inserting the variable latex_content in your template, the view automatically renders all children for which a ILaTeXView is found.
### HTML to LaTeX conversion
ftw.pdfgenerator comes with a simple but powerful HTML to LaTeX converter which is optimized for the common WYSIWYG-Editors used in Plone.
The converter can be used:
• in views, using self.convert(html)
• in layouts, using self.get_converter().convert(html)
It uses regular expressions for the simple conversions and python subconverters for the more complicated conversions. The converter is heavily customizable.
#### Footnote
Generate a footnote by wrapping any text in a span with the class footnote. Specify the footnote text in the data-footnote attribute. Example:
<span class="footnote" data-footnote="text in footnote">text on the page</span>
### Customizable layouts
When using multiple, independent addon packages using ftw.pdfgenerator, every package may implement a new, specific layout. This can be painful if there is a need to customize all layouts and add a logo image for example.
For making this easier all customizable layouts can be customized with one single adapter. This only works for layouts subclassing ftw.pdfgenerator.layout.customizable.CustomizableLayout. Those layouts need to follow certain concepts and provide inheritable blocks in the mako template. Ensure you follow the standards by subclassing and running the tests from ftw.pdfgenerator.tests.test_customizable_layout.TestCustomizableLayout.
Implementing customization adapter is very simple when customizable layouts are used. For example we change the logo image (assume the logo is at custom/mylogo.png):
>>> from ftw.pdfgenerator.customization import LayoutCustomization
>>> from ftw.pdfgenerator.interfaces import ILayoutCustomization
>>> from zope.interface import implements
>>>
>>> class MyCustomization(LayoutCustomization):
... implements(ILayoutCustomization)
...
... template_directories = ['custom']
... template_name = 'layout_customization.tex'
...
... def before_render_hook(self):
... self.layout.use_package('graphicx')
...
... def get_render_arguments(self, args):
... args['logo'] = r'\includegraphics{mylogo.png}'
... return args
It is also possible to change the template and fill predefined slots (example: custom/layout_customization.tex):
<%inherit file="original_layout" />
<%block name="documentTop">
my branding
</%block>
The layout customization adapter adapts context, request and the original layout.
## Tables
ftw.pdfgenerator is able to convert HTML-Tables to LaTeX. Since HTML and LaTeX have completely different presentation concepts the convertion is limitted.
For getting the best results theese rules should be followed:
• Define the width of every column. The table will be streched to the text width in the defined proportions. Without defining the widths LaTeX is unable to insert newlines dynamically.
• Use relative widths (%).
• Define table headings using <thead> for long tables which may be splitted over multiple pages.
CSS classes:
page-break (<table>)
Force the longtable environment, allowing LaTeX to split up the table over multiple pages.
no-page-break (<table>)
Force the tabular environment, prohibiting LaTeX from splitting the table up over multiple pages. If the table is longer than the page it is truncated - content may be missing in this case.
border-grid / listing (<table>)
Display the table in a grid: every cell has a border on every side.
notListed (<table>)
When using a <caption>, do not list the table in the list of tables.
border-left (<td>, <th>)
Display a border on the left side of the cell.
border-right (<td>, <th>)
Display a border on the right side of the cell.
border-top (<td>, <th>)
Display a border on the top side of the cell.
border-bottom (<td>, <th>)
Display a border on the bottom side of the cell.
right (<td>, <th>)
Right align the content of the cell.
left (<td>, <th>)
Left align the content of the cell.
center (<td>, <th>)
Center the content of the cell.
indent2 (<td>, <th>)
Indent the content by 0.2 cm.
indent10 (<td>, <th>)
Indent the content by 1 cm.
bold (<td>, <th>)
Display cell contents in bold font.
grey (<td>, <th>)
Display cell content with grey text color.
footnotesize (<td>, <th>)
Display cell content with smaller font size (\footnotesize).
scriptsize (<td>, <th>)
Display cell content with smaller font size (\scriptsize).
ftw.pdfgenerator is licensed under GNU General Public License, version 2. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5653470754623413, "perplexity": 15295.902458024419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112228.39/warc/CC-MAIN-20160428161512-00100-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://math.stackexchange.com/users/33789/probabilitnator?tab=summary | # Probabilitnator
less info
reputation
7
bio website location age member for 1 year, 10 months seen Mar 28 at 14:06 profile views 15
# 4 Questions
2 A certain family of continuous functions on $[0,1]^2$ the closure of which linear span is $\tilde{\mathcal{C}}([0,1]^2,\mathbb{R}))$ 2 Approximation of bounded measurable functions with continuous functions 1 Applicability of Itô's Lemma for $g\in \mathcal{C}^2((0,1)^2)\cap \mathcal{C}_0([0,1]^2)$ 0 The $\alpha$-Potential-Operator (Definition and resolvent Equation)
# 38 Reputation
+10 A certain family of continuous functions on $[0,1]^2$ the closure of which linear span is $\tilde{\mathcal{C}}([0,1]^2,\mathbb{R}))$ +16 Approximation of bounded measurable functions with continuous functions +5 Applicability of Itô's Lemma for $g\in \mathcal{C}^2((0,1)^2)\cap \mathcal{C}_0([0,1]^2)$
This user has not answered any questions
# 10 Tags
0 probability × 2 0 special-functions 0 stochastic-processes × 2 0 stochastic-analysis 0 analysis × 2 0 potential-theory 0 measure-theory 0 operator-theory 0 functions 0 stochastic-integrals
# 7 Accounts
Quantitative Finance 1,294 rep 13 MathOverflow 179 rep 9 Stack Overflow 118 rep 8 Area 51 101 rep 1 Mathematics 38 rep 7 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211734294891357, "perplexity": 2804.3622521854454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://puzzling.stackexchange.com/questions/85679/how-do-you-make-prime-computers | # How do you make Prime “COMPUTERS”?
$$Given$$:
$$COMPUTERS$$ is the smallest Pan Digital containing all the digits 1 to 9 occurring only once.
$$COMPUTERSV$$ is a Prime only when one of the correct digit ($$V$$)is added at the end.
Also,
$$COMPUTERSE$$ = $$CE$$ * $$EOTOCTPC$$
What is the digit $$V$$ that has to be added to make $$COMPUTERS$$ A Prime?
• Enough info given... – Uvc Jun 29 '19 at 10:39
$$1$$
$$COMPUTERS=123456789$$, $$V\ne\{2,4,6,8\}$$ (even), $$V\ne\{3,6,9\}$$ (multiple of $$3$$), $$V\ne\{5\}$$ (multiple of $$5$$). So $$V=1$$ or $$V=7$$. But $$E=7$$ and this is not prime, as given later. Therefore $$V=1$$. And indeed $$1234567891$$ is prime. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.910769522190094, "perplexity": 480.89126690514837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00244.warc.gz"} |
http://www.eurotrib.com/story/2008/1/19/84357/9824 | Welcome to the new version of European Tribune. It's just a new layout, so everything should work as before - please report bugs here.
## European HSR expansion in 2007
by DoDo Sun Jan 20th, 2008 at 03:30:26 AM EST
Gauge-changing semi-high-speed train 130 003 at Madrid Chamartín, start of a new line in service since 23 December. (In the background: a class 252 "Eurosprinter" loco.) Photo by Mariano Alvaro from Flickr.com
2007 saw the opening of several high-speed lines in Europe, and it could have seen even more save for delays. This is my overdue intro and review of them.
Channel Tunnel Rail Link (CTRL) 2
When then British PM Margaret Thatcher reluctantly agreed to the construction of the Channel Tunnel ("Chunnel", Eurotunnel) in 1986, her condition was that not a single penny of public funds should be spent on it. One result was chaotic organisation, including a messy financial structure with hundreds of private banks. Another was that while France managed to finish its Paris–Chunnel high-speed line mostly on time and budget, and Belgium its part from Brussels with some delays, no one volunteered to build anything from the Chunnel to London.
Thus, high-speed trains had not only to slog along on old lines at low speed, but
• instead of standard TGV trains, ones with new chassis had to be designed and purchased, so that they fit into narrower British loading gauges (=cross sections);
• the trains had to be fitted with extra third-rail electric equipment as used in South East England;
• ditto for train control and safety equipment;
• once running, getting struck in South London's and South East England's busy traffic meant lots of delays.
An Eurostar crosses the Medway Viaduct on the not yet opened first leg of the Channel Tunnel Rail Link, during the 30 July 2003 record run that achieved the British rail speed record of 334.7 km/h. Photo from Erik's Rail News
So, a decade after Thatcher's decision, her successor John Major's government decided to get construction on track in the form of a public–private partnership (PPP): private companies shall build and run the line, with heavy financial involvement of the state at the construction stage.
On the surface, the result seems to validate the concept: construction finished largely on time and budget. The only significant construction accident was when a tunnel boring machine (TBM) hit a buried 19th-century well that was missing from the maps, resulting in a hole on the surface. The only major over-budget work was a station construction that was not in full control of the project (it's only in part for the high-speed line).
However, on one hand, even though the builders included an American company infamous from Iraq, Bechtel, quality technology was ensured by the inclusion of Systra, one of the companies building the TGV lines in France. In fact, the entire line has been built according to French high-speed line standards.
On the other hand, the agreed contract price (altogether £5.2 billion) and deadlines were soooo generous that abiding by them is less surprising.
CTRL 1 was opened in autumn 2003. It was the easier part: only one major bridge, one station (Ashford International), and one major tunnel (the North Downs Tunnel, across the water divide to the Thames valley, which became the UK's longest at a mere 3.2 km). The 74 km section carried trains to the edge of Greater London, cutting some 20 minutes from scheduled times (and even more from actual travel times, as delays were reduced). The result: double-digit percentage growth in passenger numbers, and that for two years! This would seem less impressive considering that absolute numbers only climbed back to the c. 8 million/year pre-dotcom-crash record, but it was achieved on a smaller market with low-budget airlines as competitors.
CTRL 2 continued by diving under the Thames, surfacing onto a viaduct, then some level track, followed by 19 km of tunnels under outer London, right until the track complex before King's Cross and St. Pancras stations. The 39 km line terminates in the latter, which received a major overhaul. In the middle of the London tunnels, there is an one-kilometre open trench, harbouring Stratford International station, which is just next to the site of the 2012 Olympics.
I should note that CTRL 1+2 has been renamed, into the entirely uninspired/ing High Speed 1, but now that I linked to it I'm going to stick to the old name. Since CTRL 2 opened on 14 November, cutting a further 20 minutes from travel time (also because of shorter route, to another station) further growth was impressive: in its first one-and-half months, +11% on the same period last year (contributing to 2007's overall annual record of 8.26 million).
In the renewed London terminal St Pancras International, Eurostar train 3005 stands ready for the 08:05 departure to Brussels Midi, 28 December 2007. Photo by kpmarek from Flickr.com
I close this section with a new train. From 2009, 29 class 395 "Javelin" trainsets will run semi-high-speed services branching off from the CTRL to South East England cities. The class is from Hitachi's A-train platform. (This break into the EU market is a great success for Japanese rail technology.)
The first Hitachi Class 395 high speed domestic train sits outside the new depot at Ashford as the 10.43 London Waterloo to Brussels Eurostar races over the viaduct at 160 mph (257.5 km/h) on the left, 9 November 2007. Photo by Brian Stephenson from RailPictures.Net
Spain
The boldest high-speed expansion plans in Europe are Spain's. High-speed rail meant modernity for both the centre-left PSOE and the right-wing PP parties. But the recent history is that while the Aznar government pushed too many projects with too tight deadlines and not enough oversight, the Zapatero government seems stressed even with just trying to manage to completion of projects begun during Aznar's time.
Three lines were about to open just before Christmas last year.
True high-speed lines in Spain (own drawing).
Black: in 250+km/h service previously
Red: in 300 km/h service since last year, dark red: soon
Blue: lines currently in construction for 300 km/h or higher
Not shown: lines in planning stage, 200+km/h conventional line upgrades
One was the long overdue final section of the Madrid–Barcelona high-speed line, delayed due to long disputes with Barcelona about the layout, and a rather messy contracting process. Then earlier last year, when seeing that construction companies aren't on track to meet their deadlines, the transport minister decided to push the companies to work full-throttle. Of course, the result was irresponsible and shoddy work, which led to accidents, further delays, and a collapse of Barcelona's commuter traffic (see kcurie's account: 1, 2, 3). Opening is now planned for 28 February.
An AVE/RENFE series 120 variable-gauge train (CAF-Alstom, sometimes named "BRAVA" after its bogie type) on the renewed broad-gauge line into Barcelona, 17 December 2007. To the left, the then end of the standard-gauge high-speed track-laying. Photo by Sanlucar-Playa from SkyscraperCity
On 22 December 2007, the Madrid–Segovia–Valladolid line opened [pdf, Spanish!]. 179.5 km built from €4.205 billion, this line will serve as trunk line for Spain's entire North, to be fed by at least five future lines. Beyond several "shorter" tunnels (up to 9.5 km), it runs through the 28,418.66 m long Guadarrama Tunnel (crossing the Sierra de Guadarrama mountain chain north of Madrid), which is currently the fourth-longest tunnel in the world.
The line is currently served by two types of trains. One is the Talgo 350 type (AVE/RENFE series 102), whose top speed (despite the type name) would be 330 km/h, but is now limited to 'only' 300 km/h, due to ongoing lack of fitness for service of the ERTMS Level 2 train control system. The duckbill nose shape (hence the Spanish nickname: Pato) is meant to better deal with side winds and reduce aerodynamic noise. The trains cut travel time by between one and one and a half hours.
An AVE/RENFE series 102 near Tres Cantos, during final test runs on the Madrid-Valladolid line, 11 December 2007. Photo by Luis Miguel R.S. from Flickr.com
The other train type on the line, which got a similar "duckbill-nosed" styling (hence Spanish nickname: Patito), is the series version of Talgo's variable-gauge train (AVE/RENFE series 130). (The (diesel) prototype, when testing this line, was covered in a comment thread on ET.) The S-130 has a top speed of only 250 km/h, but after simply driving through a special transitional track at 15 km/h, it can continue to destinations on broad-gauge conventional lines. (It's rival, the S-120, does the same, but Talgo deserves credit for being first.)
An AVE/RENFE series 130 runs through the gauge-changing facility in Roda de Bará (current end of the Madrid-Barcelona line). Video by peettheengineer
On 23 December 2007, the Córdoba–Málaga high-speed line was fully opened [pdf, in Spanish!].
Short TV reportage on the opening, with some aerial shots. Video from YouTube
This is a branch off Spain's first high-speed line (Madrid–Sevilla). It first climbs on a plateau, then crosses the 7297 m Abdalajís Tunnel, and descends across several tunnels and bridges along a valley to the Mediterranean coast. It was put in service until Antequera (first two-thirds on the plateau) one year ago, but didn't see full high-speed service until now. Altogether 168.8 km was built from €2.539 billion.
Again cutting more than one and a half hour, the line is served by AVE/RENFE series S-102 and S-103 "Velaro-E" trains (see one of both behind Zapatero in the video). The latter is Siemens's up-powered version of the German railways' flagship ICE-3. But again, though the S-103 could do 350 km/h, the present train control system limits it to 300 km/h.
The inaugural run with an AVE/RENFE series 103 "Velaro-E" just left the 837 m viaduct of Jévar, near Ávila/Andalusia, 22 December 2007. Photo by Fuen446 from trensim.com
Until 7 January, 416 of the 438 train runs on the two new lines were on-time (in the last few days, 100%) – note that AVE/RENFE pays back 50% of your ticket when 15 minutes late, and all when 30 minutes late. The Valladolid line saw 18,378 passengers in 130 trains, the Málaga line 58,510 passengers in 308 trains – that's seat utilisations around 50%, low for high-speed rail, but expect improvement as passengers discover the new offer.
Elsewhere
Two more lines that were diaried earlier, so here only in brief:
1. The first two-thirds of the LGV Est Européenne in France went into service on 10 June 2007. We covered the VIP opening trains, the new record run on it, praised competent state management, and I wrote a trip report one way as well as the other way.
In the first three months, SNCF's overall traffic on all relations using the new line was 2.9 million passengers (a growth of 65%!), and 7 million by the end of 2007. the plan was an annual 11.5 million by 2010, now it looks like that will be surpassed in the first year. The (over-budget) €5.515 billion seems to have been worth it.
2. The 34,576.6 m long (world's third longest) Lötschberg Base Tunnel (and its short connections to existing lines) in Switzerland, commissioned for 250 km/h, was officially opened on 15 June 2007, I covered it. But while regular freight trains used it soon, for passenger shuttle trains, one had to wait until 15 September, and full-scale through service started on 9 December last year.
Honorary mention should go to two line doublings in Italy: the now four-tracked (Milan–)Pioltello–Treviglio and Padova–Mestre(–Venice) sections, which went into service on 2 July resp. 1 March 2007. Only about a third of the former is suitable for 250 km/h, but the higher-speed tracks of both will form part of a future Milan–Trieste line.
Another honorary mention should go to Turkey: although not in the European part, two long lines are in construction for 250 km/h. First to open is the 226 km Ankara–Eskişehir section of the line to Istanbul. The Italian State Railways' ETR 500 Y2 high-speed test train was used for commissioning. During a test run on 12 September 2007, it set a new rail speed record for Turkey: 303 km/h.
However, Hızlı Tren revenue service starting this year will first be shouldered by Spanish exports, 10 trainsets equivalent to the AVE/RENFE S-120, and later by South Korean ROTEM's HSR-350x.
ETR 500 Y2 during an early low-speed trial near Beylikköprü on 26 April 2007, posted on YouTube by skyzes
What's next?
First a string of delayed projects:
• Barcelona city access (28 February 2008?);
• Antwerp–Rotterdam–Amsterdam (HSL 4/HSL Zuid) in Belgium and the Netherlands, delays with rolling stock delivery (October 2008?);
• Liège to (near) Aachen (HSL/LGV 3) in Belgium, again rolling stock equipment delivery delays (December 2008?);
• Naples city access in Italy, was held up by archaeological works (June 2008?);
• (most of) Bologna–Milan, delays with city accesses (15 December 2008?);
• Florence–Bologna (a line almost exclusively in tunnels), delays with city access sections and tunnel fitting (October 2009?).
For all but the first, the on-going problems with ERTMS Level 2 contributes to the delay, too.
One more this year: in northern Sweden, the Örnsköldsvik–Husum section of the Botniabanan will open in October 2008. This will be a single-track mixed-traffic mainline, but in theory for 250 km/h.
In Italy, finishing the great question-mark-shaped line from Torino to Naples, the Novara–Milan section may open in December 2009 (if it is not delayed).
Then there are a whole load of in-construction lines in Spain, which I drew into the map above:
• Barcelona–Figueres–Perpignan/France (ready maybe in 2010);
• Vitoria–Bilbao/San Sebastián/Irún ("Y Vasca"='Basque Y');
• Pajares (first-built section of line to Gijón, with a 24,667 m tunnel);
• Ourense–Santiago de Compostela (the final section of a future line to Galicia; construction of the other end, where it branches off the Valladolid line, will begin soon);
• a whole tree of lines from Madrid to the south-east (Valencia, Albacete, Alicante, Murcia);
• Mérida–Badajoz (a first section of the Madrid–Lisbon line) near the Portugal border;
• Sevilla–Antequera–Granada ("Transversal Andaluz"), crosses the just opened Málaga branch (2013/2010).
In addition, there are 200+km/h conventional line upgrades (with preparation for re-gauging): from Zaragoza south to Teruel, Vigo–A Coruña (along the western shore) and Monforte–Lugo (near the eastern border) in Galicia, the northern third of the Valencia–Tarragona(–Barcelona) Mediterranean line, Alcázar de San Juan–Linares–Jaén (south of Madrid); and Sevilla–Cádiz, which shall be further upgraded to a high-speed line (hence also on the map).
Of the curently more or less in-construction lines in France, the one with a fixed schedule is the eastern leg (Branche Est) of the LGV Rhin–Rhône, to be opened in December 2011 (earlier mentions on ET: 1, 2).
Above: Boring of the Chavanne Tunnel starts. Just 1970 m, but the only significant tunnel along Branche Est of the LGV Rhin–Rhône.
Below: Viaduc de la Linotte - Ormenans, the bridge deck starts progress from one side.
Photos made on 17 July 2007 from the official construction photo album
Update [2008-1-20 17:28:40 by DoDo]: Forgot: in Germany, the sole project to mention is the 9,385 m Katzenbergtunnel, and the adjoining sections of a line doubling north of Basel, in the south-eastern corner of Germany. The two tunnel boring machines (TBMs) holed through on 20 September resp. 1 October, but 250 km/h traffic will only roll through from 2011.
:: :: :: :: ::
Check the Train Blogging index page for a (hopefully) complete list of ET diaries and stories related to railways and trains.
Enjoy -- I spent entirely too much time travling for pictures & data on Spanish, Italian, Turkish sites without speaking the languages... *Lunatic*, n. One whose delusions are out of fashion.
I see in the end I forgot just the sole German project warranting a mention, now added at the end. *Lunatic*, n. One whose delusions are out of fashion.
It's a bit annoying to see that France, the High Speed pioneer in France, hasn't built more high-speed lines. Big infrastructure projects are not hip right now, compared to Spain... compare the city rail construction between Madrid and Paris, too. Un roi sans divertissement est un homme plein de misères
See this page for all of RFF's projects (the French network operator) In the long run, we're all dead. John Maynard Keynes
One major line entered service, on 5 January: Taiwan's THSR (Taipei-Kaohsiung, all along the Eastern shore). It is served by a modified, 300 km/h version of the Series 700 Shinkansen from Japan (700T). In Japan itself, JR Central and JR West put the N700 in service: an improved Series 700 that can both tilt and travel at 300 km/h. Also in Japan, the tests with JR East's two Fastech 360 prototypes weren't as positive as hoped. These trains were built for 405 km/h, and research with them was meant to result in trains doing 360 km/h in revenue service. The problem wasn't speed: 398 km/h was achieved. Rather, the very strict limits on noise emissions, overhead wire wear, and braking distances weren't met as planned. Thus, JR East's future E5 units will only do 320 km/h (same as the TGV POS and ICE-3 units on the LGV Est Européenne). South Korea, however, was undaunted by the Spanish/European and Japanese failures to significantly raise top speed (well, to wit, there's still the French AGV): the government approved $100 million for the HEMU-400x project, for a 400 km/h prototype that shall result in a regular-service 350 km/h successor to the not-even-in-service HSR-350x. The prototype shall start testing in 2013. China wants to finish at least one line until the Olympics, but that may be too rushed even for that country. Meanwhile, China Railways put in service more than five hundred trains of four imported/technology-transfer semi-high-speed types (CRH-1 to -5, with -4 reserved for a domestic production). The trains now run 'only ' at 200 km/h, to be (or already?) raised to 250 km/h. CRH-2 (based on JR East's series E2-1000) was publicized to the media without any mention of its Japanese origin... * Morocco moved ahead with serious plans to build a high-speed network with French TGV technology. *Lunatic*, n. One whose delusions are out of fashion. It seems the Sun never shines for South Korean railfans -- the best I found: (This is the HSR-350x prototype) *Lunatic*, n. One whose delusions are out of fashion. One major line entered service, on 5 January: Taiwan's THSR Check ridership and revenue data in the Wikipedia article (most of it entered by yours truly...). But while 15.55 million passengers in a first year would be a dream for any European line, East Asia has different standards: daily ridership, though growing rapidly, is still about a third of the first, half of the later projections. The construction of the line & trains ate up NT$489 billion (at current exchange rates €10.35 billion), with outstanding liability now totalling NT$377 (€8 billion), against a first year revenue of NT$14 billion (€0.3 billion), though this year they can safely expect the latter to double. Airlines already operate parallel flights at a loss, even with reduced fares. So, behind expectations, but it still will turn profitable. *Lunatic*, n. One whose delusions are out of fashion.
Brilliant... teh overall data... brilliant. teh description of spain network brilliant.... Notice that in Spain there is no TGV following teh coast from barcelona to valencia.. there a 200 km/h train will be all... the delays in PSpain will be hueg... I doubt the AVe will eb ready in border by 2010 as the officials say. well.. ina ny case... great . great. Europe is getting ready for fast travel without planes and one day without cars if encessary. the problem in Spain is that there are no conventional lines for transport of goods.. adn no right-wing nor left-wing is doing anything to solve it. A pleassure I therefore claim to show, not how men think in myths, but how myths operate in men's minds without their being aware of the fact. Levi-Strauss, Claude
Notice that in Spain there is no TGV following teh coast from barcelona to valencia.. there a 200 km/h train will be all... At the level of plans, that should improve: a full high-speed line from Valencia to Castellón shall be part of the Levante network, and the two major bypasses further North (Benicassim-Oropresa, and the AFAIK still not in service Vandellos 1+2+the entry to Tarragona) could be up-rated for 250 or 300 km/h after re-gauging. But with delays, delays, maybe in 2020... *Lunatic*, n. One whose delusions are out of fashion.
Yeah upgrades hsould took it to 250 km/h int he slowest part.. but as you say.. theya re still far away.. Ina n case a network of 200km/h trains in network is highly probable int he next 15 years... enough to dispalce the car if necessary. Plans for lines for long and heavy "mercaderias"... never heard of that.. only today after 10 years they opened a train line to bring the cars foromt he Seat factory in barceloan to the docks int he port.. (less than 50 km).. until now hundreds of truckes delievered the cars to the ports... so if this basic line took so mcuh being so profitable adn necessary.. imagient he rest. It is really ashaming. A pleasure I therefore claim to show, not how men think in myths, but how myths operate in men's minds without their being aware of the fact. Levi-Strauss, Claude
By the way, let me test your track knowledge as a trainrider! Where in Barcelona was that photo of an S-120 shot? (local.google.com link?) ;-) *Lunatic*, n. One whose delusions are out of fashion.
Of course it was in the Southern side of the city. the landscape it is pretty clear the same... I would say that it is not Barceloan city, it is already outside.. probably Bellvitge... though a full AVE can not take that far...maybe it is an S!"0 for Altaria travels, it can certainly be El Prat del Llobregat getting close to Bellvitge. That would be my bet :) A pleasure I therefore claim to show, not how men think in myths, but how myths operate in men's minds without their being aware of the fact. Levi-Strauss, Claude
Close, but not quite! Hint: the photo was actually made at the border of Barcelona. *Lunatic*, n. One whose delusions are out of fashion.
So it was Bellvitge-Hospitalet-Barcelona corner... great... a lit bit closer to barcelona than expected :) A pleasure I therefore claim to show, not how men think in myths, but how myths operate in men's minds without their being aware of the fact. Levi-Strauss, Claude
It's the middle of this map. *Lunatic*, n. One whose delusions are out of fashion.
Precisely.. what I called the corner exactly...it is the only place where Hospitalet, Bellvitge and barcelona meet.... Plaza Europa is the biggest square close to it which is urbanized... it is very close to the exit of barcelona (Gran Via South) closest to my house inside Barcelona.. but the shouthern part is strictly speaking Barcelona..the Zona france... Well.. I just got the point exactly...it was this or either a little to the south when it is clsoer to El Parat del Llobreagat.. it little bit to the south in your map. A pleasure I therefore claim to show, not how men think in myths, but how myths operate in men's minds without their being aware of the fact. Levi-Strauss, Claude
The media is reporting that our "high-speed" (200 km/h) trains are travelling slower now than 10 years ago, due to increased congestion. We really need some true HSR. We could use some of the money from privatising some of the state owned companies for getting HSR, instead of reducing the national debt. Peak oil is not an energy crisis. It is a liquid fuel crisis.
Any bets on a hi-speeed trans-siberian in our lifetimes ? Any news on the possibility of freight on hi-speed networks ? woulnd't that have serious track implications ? keep to the Fen Causeway
Do you mean high-speed or low-speed freight? *Lunatic*, n. One whose delusions are out of fashion.
I thought it was long-distance fast freight to be carried on hi-speed lines overnight keep to the Fen Causeway
Oh, so the second. That idea was championed by the German Railways, and the first two lines were built accordingly. But the experience was negative, and entirely predictably so... Most freight cars are brake shoe braked, and are built robust and simple anyway, so wheel blocking and thus wheeltread flattenings happen, which 'beat up' the rail -- roughing up its surface for high-speed trains. There is only a short gap between the last and first high-speed services, not to mention the occasional depot run, so in practice high-speed trains and railfreight has to be allowed to run in parallel, in opposite directions. Now what happens if an open or light freight car is passed, at 400 km/h relative speed, especially in a tunnel? Either the cargo loaded could move or the car itself could derail (it happened). What if a freight train is late, or worse, there is some technical problem (say a wheel/axle that ran hot) in the morning? Delays for the high-speed train. Nothing bad happened in practice, but there is also the safety concern if say a gasoline-loaded tank car derails and burns in one of the long tunnels. Now all of the above doesn't hold the bosses of many other railways back from believing mixed traffic makes a high-speed line more economically viable. (Holding back is the wrong word: they probably never heard of the precedent.) Now maybe for speeds until 250 km/h, that may be (barely) tolerable. But 350 km/h lines?... *Lunatic*, n. One whose delusions are out of fashion.
... existing freight rolling stock. True HSR freight will require dedicated stock ... say, with mini-containers accepting Euro-pallets that lock into a HSFR car, and also into a frame to form a standard intermodal container. And also, substantially higher aviation fuel prices to be in a position to start stealing the lower margins of the air freight market. Which suggests that the main game at the moment should be 110kph double / 160 kph single container Express freight. Its perfectly fine if that infrastructure is shared with regular Express passenger stopping services. I've been accused of being a Marxist, yet while Harpo's my favourite, it's Groucho I'm always quoting. Odd, that.
Yes, that is the second thing people may think of when they ask about freight on high-speed lines. Further issues to consider: High-speed lines have an axle load of 17 tons (for the non-technical: that means that if a railcar has four axles, that is eight wheels with four on each side, then the entire car -- car + cargo - can't be heavier than 17x4=68 tons). Now freight transport is more economical if we go for higher axle-loads. Today in Europe, 20-22.5 tons is the norm, and there are pilot lines for 25 tons, while US railways even do 35 tons. Higher-speed transport is also higher energy use and thus higher transport costs. Even if for some types of time-sensitive cargo where the extra transport cost vs. trucks or low-speed rail may make it worth, and air cargo would be the rival, there is the issue of an as yet not wide network: you can't offer many destinations, the customer won't count on you. *Lunatic*, n. One whose delusions are out of fashion.
(1) This is why regular intermodal containers are not a likely HSFR technology as such ... however, given greater flexibility to design the HSFR freight car to accomodate something that could fit into an intermodel container than air transport, something that can be rolled out of the HSFR and locked into an intermodal container (or visa versa) would give substantial logistical advantages over air freight (2) And this, of course, is why the main target at the current point in time is getting Express freight out of trucks and onto Express freight rail, since the gain from HSFR is only if it shoots freight planes out of the sky. (3) Goes back to (1) ... it has to integrate into the existing intermodal container system, and from my experience in working in the warehouse, a small enclosed mini-container is going to be the only serious option if the process is going to be largely automated. But no hurry sorting out the details ... Express freight 110kph, 25 ton axle load / 160kph 21 ton axle load, that's the target currently in the frame, and that's just not at a speed that it can be seamlessly inserted into the HSR network. I've been accused of being a Marxist, yet while Harpo's my favourite, it's Groucho I'm always quoting. Odd, that.
Why not using Eurostar-shuttle like trains, that allows for trucks to be loaded on trains at (relatively) high speed? Trucks do not have a 35 ton/axle cargo! They have 35 tons cargo, and 5-6 axles... In logistics, you could perhaps imagine a HSR line between two important nodes (like Paris/Lyon/Marseille in France), taking trucks on a no booking, shuttle basis so as to lower the community costs of road repairing - which is actually a government indirect subvention, at least in Europe-. Thus, road and train transport would compete on a same cost basis... which is conform to economic doxa.
Trucks do not have a 35 ton/axle cargo! They have 35 tons cargo, and 5-6 axles... The axle loading is for the rail car ... and, yes, if there is a Freight Express rail clearway, with those axle loading Designing a High Speed Freight Rail set that takes whole trucks is, for one thing, hauling weight around unnecessarily, magnifying the extra energy cost of HSFR over Express Freight Rail, and, for another thing, the job of trucks should be to haul a container the last mile to from the railhead to the final street address or warehouse ... the extension of the HSFR should be the load racking into a standard container to go to that closest railhead. I've been accused of being a Marxist, yet while Harpo's my favourite, it's Groucho I'm always quoting. Odd, that.
I do not get the point of using HSR for fret, except express fret that is going to be light enough to use normal HSR trains (La Poste has its own TGV). Isn't it a bit counter productive to increase the energy use of cargo for transporting it at higher speed ? The speed problem, for cargo trains, at least in France, is lousy logistics, and there are much more gains to be made by improving those. I might see that a private developer might want to increase the potential use of an High Speed Line, but that is a problem with having private developers doing rail infrastructure. Un roi sans divertissement est un homme plein de misères
... the lighter freight that can use ordinary HSR locomotives are a more appealing if they can be shipped out of the warehouse in sealed containers, like the standardized containers used for distinct models of aircraft for air freight, except that with HSFR there is an opportunity to develop it in a way that can smoothly and efficiently flow from standard inter-modal shipping containers to HSFR and back, to allow dependence on trucks to be minimize and even, in some cases with the right technology, eliminated altogether. For example, Aerobus has a container by container freight transport option: Aerobus Cargo (RealPlayer Video). I've been accused of being a Marxist, yet while Harpo's my favourite, it's Groucho I'm always quoting. Odd, that.
Any bets on a hi-speeed trans-siberian in our lifetimes ? My sobering projection, also taking potential future delays into account: just in Europe, we won't see national networks worth their name before 2020, the four main national networks (French, Spanish, Italian, German), which may become seven by then (Turkish, Russian, Scandinavian) may not integrate into a real pan-European network (with full-quality link-ups) before 2035... 400 km/h all the way to Beijing: maybe in 2050? But it's insane to project so long, who knows how railways, transport technology, economy, politics and society will look in half a century... *Lunatic*, n. One whose delusions are out of fashion.
Here I disagre!!!! At long last!!! The four main networks will be all integrated by 2020. i think it is really teh day most travel in europe wil be done my train... the key country is Italy. portugal, netherlands, Spain, Frange, GB will eb ready for sure (maybe the ondon Scotland link will eb missing but other than that). So if Italy finishes on time... 2020 is the date.. no need to wait until 2035.... Unless you include Sacandaniva and some easter coutnries.. then .. yes 2035 to get them onboard... maybe 2025 if they do a viable 200km/h network ... A pleasure I therefore claim to show, not how men think in myths, but how myths operate in men's minds without their being aware of the fact. Levi-Strauss, Claude
I have no doubt that high-speed trains will run all across the borders. But I have grave doubts that they will do so at 300 km/h or even 250 km/h. There might be a SPain/France link at Irún by 2020, but I doubt it: we can be happy if Bordeaux-Dax is kicked off in the early 2010s, and finished at the end of that decade. As for Perpignan-Montpellier, not even plans. The French/BeNeLux and German networks would have three link-up points. But there are no German plans for a Düren-Aachen gap (would be short), Saarbrücken-Mannheim is only upgraded for 200 km/h and only in part (and full high-speed would require big tunnels), and the two railways are content with dozens of kilometres of 200 km/h or lower speed connections of their high-speed lines into Strasbourg, and not even thinking of a bypass. Italy-Germany would require a full transsect of the Alps. On the Bologna-Verona-Innsbruck-Munich axis, nothing high-speed is planned, only the Brenner Base Tunnel, which won't turn a reality before 2020 itself. There could be a connection across Switzerland earlier, but the Swiss won't complete just the full Gotthard Axis (Zurich-Italian border) until the 2020s, and think onventional lines suffice for their purposes on the Basel-Zurich route. Now 250 km/h lines all the way from Basel to the Lötschberg Base Tunnel have a greater probability until 2020, but no plans for an Italian continuation including a second Simplon. France-Italy: in effect, that's the Lyon-Turin line with the Mount d'Ambin Base Tunnel. With the delays thanks to Chirac et al, the base tunnel may open only in 2023, and even then, it is likely that the connecting lines won't be all high-speed (current italian plans are to leave passenger trains on the old line and build a line doubling in tunnels for freight). Overall, note that for a high-speed line, ten years from start of planning to opening is often even optimistic. *Lunatic*, n. One whose delusions are out of fashion.
HSR just isn't a viable proposition for trips over 1000 kilometres. Unless fuel for air travel becomes 20 times more expensive, I don't think we are ever going to see a HSR connection to China. We should be too happy if we ever get one to Moscow. We need to spend more money on Trans-European Networks.
Transport Minister Calls for Train Travel Improvements On Thursday, VR reported a record number of passengers last year. Nearly 67 million train trips were recorded, up by more than three percent. By far the largest volume was on commuter trains. Passenger rail travel to Russia went up by close to a fifth. You can't be me, I'm taken
Actually, no. Building an isolated line connecting destinations 1000 kilometre away is what's not viable, the airplane would win on market share. But, on one hand, in a network of connections between cities a few hundred kilometres away, routes thousands of kilometres could become possible. On the other hand, just as today there are passengers riding an express 10 hours or longer (not to mention the weeks on the Transsib), running high-speed trains over such longer routes would make extra trips possible. (Ii's a synergy: linking up two train services at one node will keep all the passengers on the two lines, and add ones who would have viewed changing trains at the node too much of a hassle or lost time.) But this is the far far far future (if it comes at all), and for the question at hand, I only considered the existence of an Eurasia-spanning network,, not through services. *Lunatic*, n. One whose delusions are out of fashion.
In my experience, what you describe here in a network of connections between cities a few hundred kilometres away, routes thousands of kilometres could become possible is not always done right. The international aspect of international train routes can be seriously degraded by too many domestic stops. All too logical: no city wants to be left out. But I guess I could have a 45 minute shorter trip to Amsterdam on the IC-International (now just over 6 hours) if Germany and the Netherlands just scrapped a few stations where only 10 people get on and off. Especially stops which are only a few kilometres in between. On the other hand, I have to admit that it doesn't make much of a difference to me because I'm going to take that train anyway. So you have a point there.
a few stations where only 10 people get on and off What is your estimate, how many people take the Berlin-Amsterdam ICs all the way? My guess would be that they are outnumbered by the sum of domestic passengers, and don't add up to a full train-load (to sustain extra trains with less stops), even considering the extra attractivity of somewhat shorter trip times. For you, this shucks, of course. But 5½ hours vs. 6¼ hours, that's less significant than say 2¾ hours vs. 4 hours, would there be high-speed lines all the way. In short, with less stops and greater distances more rational for high-speed, international relations would be in the 'normal' range, and passengers with your kind of problem would be say Berlin-Moscow or Berlin-Madrid travellers. *Lunatic*, n. One whose delusions are out of fashion.
All the way? Maybe 5%. Depends upon the time. One train on the line goes on to Szczecin, but I'd guess there is on average 0.5 passenger per train taking that all the way. Now to take out some small city/town stops (from Amsterdam) Hilversum - Amersfoort (14 minutes) Apeldoorn - Deventer (11 minutes) Almelo - Hengelo (11 minutes) Stendal - Rathenow (15 minutes) Berlin Spandau - Berlin Hbf (10 minutes) Take out Hilversum, Apeldoorn, Almelo, Rathenow and Berlin Spandau, and you not only have a better international train, but also a better domestic intercity, IMO. Note that the intercity runs only every two hours on a track that also has a normal service. The few people in these minor cities and on connecting lines that may take the car rather than taking on 15 minutes of extra travel time should be compensated by people who do take the train rather than the car or airplane.
I would like a Berlin - Moscow HSR train, but I can only see it work if it goes through Belarus. Unless something changes politically in Belarus, I don't see it happening.
I think Lukashenko will be dead before high-speed construction from Moscow resp. Berlin/Warshaw could progress so far, i.e. I think the political timescale os shorter here... *Lunatic*, n. One whose delusions are out of fashion.
Note, for reference, this thread. The Dutch, unfortunately, have scrapped the idea of a HSL Noord for now. I think it's rather stupid that we don't build an Amsterdam - Groningen - Bremen - Hamburg - Kiel - Copenhagen line as a priority project of the Trans European Networks. With the existing HSL Zuid, we will have connected Europe's three largest ports (Antwerp - Rotterdam - Hamburg) within I would guess a 700-800 kilometre trip from Antwerp to Hamburg (as a quick reference Rotterdam is geographically 78 km from Antwerp and 414 km from Hamburg). There should be plenty business travel. Surely there must be an economic case. To repeat myself, we need more money for TEN (and less for the CAP).
How long would it take to go Copenhagen-Amsterdam with HSR? I'm wondering because the Swedish HSR is supposed to end in Copenhagen... Being able to take the train to Amsterdam from Stockholm (in what, 6 hours?) would be GREAT. Peak oil is not an energy crisis. It is a liquid fuel crisis.
Can only guess at this (unless DoDo can find distances for the existing track). A Copenhagen - Amsterdam service would have stops in Odense (~ 150 km), pass by Kolding and Flensburg (maybe stops on a slower service), Kiel (at ~ 330 km) and Hamburg (~ 430 km). From Hamburg it should be another 460 - 500 km to Amsterdam, with stops in Bremen, Oldenburg, Groningen, (Assen, on a slower service), Zwolle, (Lelystad, on a slower service), Almere, and Amsterdam Zuid. So that is 790 - 830 kilometres and 9 stops from Copenhagen Now the Amsterdam - Paris line is about 550 km in length, I think, and will have 6 stops from Amsterdam Zuid with a travel time of 2:57. I'd guess you should take 1.5 times that travel time, so you would have a 4:30 hour trip, with Amsterdam - Hamburg at 2:30 hours. If we build a dedicated HSR. I'd guess Stockholm - Copenhagen will be 2 hours? With the existing plan, there should be a 200 km/h connection between Amsterdam and Hamburg over Amersfoort, Hengelo, Osnabrück, Bremen somewhere maybe in 2015. That should be about a ~ 4 hour trip (currently the fastest connection is 5:15 hours).
If bold, let's be really bold: envision the high-speed connection along the Vogelfluglinie (via Lübeck), with the long-planned Fehmarn crossing. That would be around 330 km until Hamburg. Hamburg-Bremen is currently 115.6 km, Bremen-Groningen would be around 170 km, HSL Noord again around 170 km. As for times, if I am optimistic, with stopping times: Copenhagen-Hamburg 1h20m, on to Bremen 40m, on to Groningen 50m, on to Amsterdam 50m, 3h40m total. If I am less optimistic about just how through the true high-speed lines would become (e.g. longer upgraded/four-tracked sections along the way, near cities and in a Fehmarn tunnel): 2h, 50m, 1h10m, 1h, together 5h. (Note: Paris-Amsterdam on upgraded conventional line from Brussels to Antwerp, and again Schipol-Amsterdam C). Stockholm-Copenhagen would be roughly 550 km, so your two hours for a true high-speed service sound realistic. (Currently: more than five hours...) *Lunatic*, n. One whose delusions are out of fashion.
Fehmarn belt bridge is going to be built, that's decided. The Swedish HSR program will not be built before 2020, if business is as usual, but with PO coming, it won't be and I am hoping for earlier construction. There have been some very postive signs during the last year or two. I should probably do a diary on it, as the project has the fitting name Europakorridoren, the European Corridor. Peak oil is not an energy crisis. It is a liquid fuel crisis.
Fehmarn belt bridge is going to be built, that's decided. That's not how it works. With such big projects, the decision that really matters is the start of the main construction tenders. When the 'decision' is a joint government or even EU-level declaration, that can be drawn out indefinitely, with repeated joint declarations that now we really mean business. Or worse, the decisionmakers might be only willing to pay for preliminary studies, and sell those as the start of the project, but then solicit ever more studies (an example: Brenner Base Tunnel). Even when the decision is tendering the detailed plans, that may be followed up by several plan modifications, or disputes over the price tag that may delay the construction tender (example: Malmö city tunnel), even indefinitely. I should probably do a diary on it You should! (And with the political boundaries clearly drawn, I'd be curious at contrarian comments from other ETers versed in Swedish politics ;-) ) *Lunatic*, n. One whose delusions are out of fashion.
The HSL Zuid is delayed due to issues with the new Dutch security system and issues with ERTMS. There is also an issue with the rolling stock for the sub-HSR service, which the NS/KLM Hispeed alliance wanted to solve through renting trains for a 160 km/h service from Britain. The HSR service with the Thalys could already be running, but isn't due to problems with ERTMS. For the slower (250 km/h) service the Hispeed alliance will run the AnsaldoBreda V250, which has been designed by Pininfarina. Here's a pic. Don't know if it's any good. The English Wiki states that there are delivery problems with Bombardier, but I don't know where that comes from.
the AnsaldoBreda V250, which has been designed by Pininfarina. Here's a pic. Don't know if it's any good. What might be a bad omen is the Danish IC4. With a little delay, the first was delivered in 2003. Then it was a disaster story, production delays, technical problems, wrong specifications, the trains aren't fully commissioned to this day. They were/are made by Ansaldo-Breda... The Wiki article is garbled, but I won't fix it today, just briefly: the Bombardier part of the story was the idea to run locomotive-pulled trains as an interim solution, using leased Bombardier TRAXX locos. But the problem wasn't delivery, it was getting permissions for the new TRAXX type to run on both the new line and conventional Belgian and Dutch lines, which would have required too much time. *Lunatic*, n. One whose delusions are out of fashion.
Superb diary. I love trains and train travel but never would have imagined myself thinking of them as anything other than big machines that take you somewhere :-) My perception is forever changed.
Thanks so much for this diary. Here in California we're gearing up for the critical year in our HSR plan. In the late 1990s the state legislature directed the creation of a high speed rail plan. In 2002 that plan was delivered, the first stage of which involves a line from downtown SF to downtown LA. It was to go before voters in 2004, but Arnold Schwarzenegger postponed the vote until 2006, and again until 2008. This time it's going to happen, but with a state budget crisis and low public awareness of peak oil it's not at all clear that voters will go for it. So I've appointed myself the task of trying to change that. I spent much of the last couple days working on a high speed rail advocacy site. Once it's ready to go I'll post a link here for you all; I hope you'll give me feedback and thoughts for improvement. One thing I would LOVE is for folks here to submit "testimonials" that I can post on the site about HSR. Something that explains why you have found HSR lines you've used to be so valuable, and/or why it would be compelling for you if CA built such a line. Also, if any of you have useful information, such as stats showing the fiscal viability of HSR, favorable comparisons with air travel (especially in terms of ridership), or hard numbers about greenhouse gas reduction, that would be great too. E-mail's below. And the world will live as one
You should get in touch with BruceMcF. I will also collect some material and email you. *Lunatic*, n. One whose delusions are out of fashion.
I decided to dump material I collected into one or more diaries on ET, posted from tomorrow. Stay tuned. *Lunatic*, n. One whose delusions are out of fashion.
Thanks - and the list of links you added at the end of this diary was very useful. I've been reading ET only since about the beginning of 2007, so the links from earlier years are particularly helpful. And the world will live as one
... action until February 6th or so. I've been accused of being a Marxist, yet while Harpo's my favourite, it's Groucho I'm always quoting. Odd, that.
Why, mate? Work, family, study, politics? *Lunatic*, n. One whose delusions are out of fashion.
... Feb. 5 if the big day ... after that there will be a chance to catch breath before the primary here in Ohio. I've been accused of being a Marxist, yet while Harpo's my favourite, it's Groucho I'm always quoting. Odd, that.
Well, I've been using the Lyon-Paris TGV on an average of around 30 times a year for 20 years and the Lyon-Brussels and Paris-Brussels lines on a monthly basis for 6 years, so I could write a "testimonial". Are you interested? "Dieu se rit des hommes qui se plaignent des conséquences alors qu'ils en chérissent les causes" Jacques-Bénigne Bossuet
Absolutely. That would be fantastic. CA has a lot of people who make frequent trips between San Francisco and Los Angeles, and so they'd be very interested in your experience. Many such commuters can't imagine any other way to make that trip than flying, although as I try to point out, once you factor in travel time to the airport, security, waiting for the plane, etc, HSR is very comparable if not quicker in terms of travel time. And the world will live as one
Two Eurostars sitting in Gare du Nord after the run from St Pancras.
The one on the left is Colman's, the one on the right is Sam's. ;-) "Life shrinks or expands in proportion to one's courage." - Anaïs Nin
I always enjoy your diaries, DoDo and this is another excellent one!
Wonderful detail. Thank you. Am sending the link to a number of train fans, on several continents. And while Europe builds high-speed trains and looks to the future, the U.S. of A. is building more . . . SUVs! And politicians are still squabbling about whether to put any kinds of funds into anything other than expensive roads that are invariably outdated/overcrowded by the time they're built.
Ah, again, those efficient Englishmen and women... The Hun is always either at your throat or at your feet. Winston Churchill
# Top Diaries
## Civic Self Defense Resources
by gmoke - Sep 19
1 comment
## 2034
by Frank Schnittger - Sep 10
## LQD: "I don't know how to be human any more."
by ARGeezer - Sep 7
## The Third Tribe of Ulster
by Frank Schnittger - Sep 2
## Labour grows up?
by Frank Schnittger - Aug 27
# Recent Diaries
## Civic Self Defense Resources
by gmoke - Sep 19
1 comment
by Cat - Sep 14
## 2034
by Frank Schnittger - Sep 10
## LQD: "I don't know how to be human any more."
by ARGeezer - Sep 7
## The Third Tribe of Ulster
by Frank Schnittger - Sep 2
## The Focus Group
by THE Twank - Aug 31
## Labour grows up?
by Frank Schnittger - Aug 27
by Cat - Aug 22
by Cat - Aug 22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25980129837989807, "perplexity": 5126.287578307679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687642.30/warc/CC-MAIN-20170921044627-20170921064627-00353.warc.gz"} |
http://www.cs.nyu.edu/pipermail/fom/2013-August/017551.html | # [FOM] First Order Logic
MartDowd at aol.com MartDowd at aol.com
Sat Aug 31 14:12:25 EDT 2013
The second-order axiom of induction is
$\forall S ( 0\in S \wedge \forall n ( n\in S\Rightarrow n+1\in S ) \Rightarrow \forall n ( n\in S ))$
It is provable in ZFC that $N$ (the integers with 0,1,+,x) is the only
structure satisfying Peano's axioms with the second order induction axiom.
However, the statements in the language of number theory which can be proved
in ZFC to hold in this structure are recursively enumerable. The true
statements are not. Any attempt to remedy the situation by means of providing
axioms for second order validity cannot succeed.
- Martin Dowd
In a message dated 8/30/2013 2:29:24 P.M. Pacific Daylight Time,
hewitt at concurrency.biz writes:
I am having trouble understanding why the proponents of first-order logic
think that second-order systems are unusable.
[Dedekind 1888] and [Peano 1889] thought they had achieved success because
they had presented axioms for natural numbers and real numbers such that
models of these axioms are unique up to isomorphism with a unique
isomorphism
-------------- next part --------------
An HTML attachment was scrubbed...
URL: </pipermail/fom/attachments/20130831/168abbac/attachment.html> | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8954846858978271, "perplexity": 3700.8232159404283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999669442/warc/CC-MAIN-20140305060749-00069-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://link.springer.com/chapter/10.1007%2F978-1-4757-1915-4_18 | 1986, pp 348-370
Measurements of Total Carbon Dioxide and Alkalinity in the North Atlantic Ocean in 1981
* Final gross prices may vary according to local VAT.
Abstract
The ocean uptake of fossil fuel CO, has long been recognized as the principal modulator of the rising atmospheric CO, level. If we are to observe and understand this effect, then an essential step is the accurate measurement of the CO, properties of the ocean. Historically, this has been quite difficult to achieve. Although measurements of some kind date back to the late 19th century, complete, documented, and verifiable measurements are scarce indeed. This chapter describes and documents the series of total CO, and alkalinity measurements of seawater made on the North Atlantic Ocean during the Transient Tracers in the Ocean (TTO) expedition in 1981, and presents briefly the signals these data reveal. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8537163138389587, "perplexity": 1504.5726423734302}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133485.50/warc/CC-MAIN-20140914011213-00011-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://brilliant.org/discussions/thread/problem-in-rating-system/ | ×
# Problem in rating system
Initially, this week, in Number theory and Algebra, I had a nice rating almost equal to level 5. I had solved all questions, except 1 question viewed but unattempted in both subjects. Since Tuesday, these 2 questions were undone. And the problem is, my rating in both the subjects is decreasing intensively, that is, almost 200 points till now! I don't even know why has it happened. If it is supposed to be the same way, then Brilliant staff could tell me why. But if this some sort of a glitch, staff must fix it, as it may further reduce and demote me! I have already lost extreme number of rating. Had those 200 rating not vanished, I would have been on level 5! I hope this will surely be explained to me by the Birlliant staff, waiting for a soon reply, especially before next week, so that the problem doesn't get delayed to one more week!
Note by Akshat Jain
3 years, 2 months ago
Sort by:
Hi Akshat,
It looks like your rating has been dropping because you have been viewing lots of problems outside your level. When you view these problems it affects your rating exactly like . We will fix it to better indicate this by displaying ratings on problems that aren't in your problem set. You can recover your rating, and improve it, by solving the problems you viewed. Staff · 3 years, 2 months ago
Sir, till date i m solving only problems of my level. How can i get acess to see and solve problems of higher level... · 3 years, 2 months ago
What! :O
I have viewed many many problems outside my level, because only 4 problems in each subject every week is not enough food for my mind! Though I have solved most of them, the ones unsolved have got me.
I personally feel this should be changed. I mean, rating should not decrease on viewing a problem outside the problem set, though it should be reduced on giving the wrong answer. I would really like to suggest the staff to make this change! · 3 years, 2 months ago
I didn't know about this at all. So just like every week, I have been solving problems outside my level and didn't understand why my rating was going down. Just like you, my rating has gone down by 400-500. But now that I know, I hope it won't go down again. · 3 years, 2 months ago | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8203195333480835, "perplexity": 1156.686035433998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00141-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/130641/how-to-put-the-qed-symbol-of-a-proof-at-the-right-place-inside-align | # How to put the QED symbol of a proof at the right place inside align?
When a proof ends with a formula in equation or equation* environment, putting \qedhere after the equation would cause the QED symbol to appear in the right place. I.e. at the end of the line in which the equation appears. For example:
$$x = y+z \qedhere$$
However, I cannot get the same result when the formula is inside an align/align* environment using the same method:
\begin{align}
x &= a+b \\
&= y+z \qedhere
\end{align}
The square appears just after (y+z) and is not shifted to the right boundary of the page.
Here is a MWE:
\documentclass[a4paper]{article}
\usepackage{hyperref}
\usepackage{amsthm}
\usepackage[cmex10]{amsmath}
\interdisplaylinepenalty=2500
\usepackage{amssymb}
\title{Example to Show QED is Misplaced}
\author{}
\begin{document}
\begin{proof}
This proof is typeset correctly:
\begin{equation*}
x = y + z \qedhere
\end{equation*}
\end{proof}
\begin{proof}
But this one not!
\begin{align*}
x & = u + v \\
& = y + z \qedhere
\end{align*}
\end{proof}
\end{document}
-
Why don't you use the proof environment of the acm packages? – Willem Van Onsem Aug 29 '13 at 9:50
ntheorem might also be worth a look – moewe Aug 29 '13 at 9:54
It appears at the right margin in my experiment (but it has the side effect of removing the equation number). Can you show a minimal working example (MWE)? – egreg Aug 29 '13 at 9:58
I added a MWE to the end of my question. I am sorry, I don't know how can I upload the MWE file directly. – Mani Bastani Parizi Aug 29 '13 at 11:30
@ManiBastaniParizi The cmex10 option to amsmath may be needed only with very old TeX installations; don't use it unless you get a Math formula deleted error. In this case, first try updating your TeX distribution. – egreg Aug 29 '13 at 16:32
## 2 Answers
Change the order, this works just fine, amsthm after amsmath, otherwise it might be a bit hard for it to hook into align*
\documentclass[a4paper]{article}
\usepackage{amsmath}
\usepackage{amsthm}
\interdisplaylinepenalty=2500
\usepackage{amssymb}
\usepackage{hyperref}
\title{Example to Show QED is Misplaced}
\author{}
\begin{document}
\begin{proof}
This proof is typeset correctly:
\begin{equation*}
x = y + z \qedhere
\end{equation*}
\end{proof}
\begin{proof}
But this one not!
\begin{align*}
x & = u + v \\
& = y + z \qedhere
\end{align*}
\end{proof}
\end{document}
-
Thank you very much :-)! – Mani Bastani Parizi Aug 29 '13 at 12:10
with the package order amsthm before amsmath, a warning is issued: Package amsthm Warning: The \qedhere command may not work correctly here on input line .... it's also documented in the first section of amsthdoc that "amsthm must be loaded after amsmath, not before." – barbara beeton Aug 29 '13 at 14:30
Another caveat, be sure to have all the & in the last line. If the preceding lines contained for example there & don't forget to use all of them on the last line too. – Yrogirg Oct 20 '15 at 1:58
Section 5 of the amsthm package documentation contains the following.
When used with the amsmath package, version 2 or later, \qedhere will position the QED symbol flush right; with earlier versions, the symbol will be spaced a quad away from the end of the text or display. If \qedhere produces an error message in an equation, try using \mbox{\qedhere} instead.
However, when I tried this with your example I got a QED symbol one quad away from the end of the display, despite the fact that my distribution contains amsmath version 2.13. However, using
\tag*{\qedhere}
instead solved the problem.
-
@daleif's solution is preferable, though! – Ian Thompson Aug 29 '13 at 11:55
see the second paragraph of section "1 introduction" of amsthdoc. (i had to check, because if it didn't already say that the loading order is important, i would have to add it to the list of bugs. glad i don't have to.) – barbara beeton Aug 29 '13 at 14:32
@barbarabeeton --- indeed, I overlooked the issue of loading order. My hack first, consult manual later approach wasn't the best here! – Ian Thompson Aug 29 '13 at 15:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9615054130554199, "perplexity": 1827.0168753386295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053379198.78/warc/CC-MAIN-20160524012939-00057-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://mathhelpforum.com/geometry/210916-vertex-triangle-equilateral-print.html | the vertex of the triangle to be equilateral
• January 7th 2013, 06:41 AM
rcs
the vertex of the triangle to be equilateral
A triangle has two vertices at ( -a, 0 ) and (a , 0 ). What must be the third vertex of the triangle to be equilateral?
Thanks
• January 7th 2013, 06:54 AM
Plato
Re: the vertex of the triangle to be equilateral
Quote:
Originally Posted by rcs
A triangle has two vertices at ( -a, 0 ) and (a , 0 ). What must be the third vertex of the triangle to be equilateral?
The third vertex will be $(0,b)$ where the distance $\mathcal{D}[(0.b);(a,0)]=2|a|$
• January 7th 2013, 07:07 AM
skeeter
1 Attachment(s)
Re: the vertex of the triangle to be equilateral
Quote:
Originally Posted by rcs
A triangle has two vertices at ( -a, 0 ) and (a , 0 ). What must be the third vertex of the triangle to be equilateral?
... note that half an equilateral triangle is a 30-60-90 special right triangle. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122163414955139, "perplexity": 696.8814794118399}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00172-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://www.jiskha.com/members/profile/posts.cgi?name=CANDICE | Sunday
January 22, 2017
# Posts by CANDICE
Total # Posts: 156
maths
oxygen occupies about 1/5 percentage of atmospheric air and the rest is taken up by other gases.what space do theother space occupy?
December 2, 2016
Math2
Bob has entered his giant pumpkin into the tops field fair in New England. To qualify for the finals, pumpkins must meet a minimum weight requirement, which is based on the weights of all of the entries. This year, 55.17% of all entries will make it to the finals. With a ...
June 29, 2016
Math( Assignment Due Today)
During a nature trip, Joni, Billy and Larry documented the location of an eagles nest. From point a, the campers observed the Eagles best on tip of a tree directly across a deep stream. The angle of elevation to the nest from point a is 35 degrees. The campers walked 42 m ...
June 29, 2016
For a particular group of people, the number of hours of sleep per night is normally distributed with a mean of 7.7 and a standard deviation of 1.6. To the nearest tenth 67% of people in this group will sleep over blank hours?
June 29, 2016
Math(Help, AssignmentDueToday)
Adora has a photo of the Calgary tower that measures 7cm by 19cm. She scans the photo and enlarges it by 160%. By what percentage will the area of the photo increase? A.) 160% B.) 320% C.) 410% D.) 256% A shop sells a small globe that hangs on a keychain, a medium globe used ...
June 28, 2016
Math(AssignmentDueToday)
Mail, a beaver at a zoo, is 0.90m long. A stuffed animal version of Amik is sold in the gift shop. If the scale factor used to reduce the real Amik to the stuffed animal Amik is 0.15, how long is the stuffed animal Amik? A.) 13.5cm B.) 6cm C.) 13.5m D.) 6m. **When I did the ...
June 28, 2016
Adora has a photo of the Calgary tower that measures 7cm by 19cm. She scans the photo and enlarges it by 160%. By what percentage will the area of the photo increase? A.) 160% B.) 320% C.) 410% D.) 256% A shop sells a small globe that hangs on a keychain, a medium globe used ...
June 28, 2016
By the way, this assignment is due today. Thanks in advance
June 28, 2016
Mail, a beaver at a zoo, is 0.90m long. A stuffed animal version of Amik is sold in the gift shop. If the scale factor used to reduce the real Amik to the stuffed animal Amik is 0.15, how long is the stuffed animal Amik? A.) 13.5cm B.) 6cm C.) 13.5m D.) 6m. **When I did the ...
June 28, 2016
Math
Visitor Numbers Total visitors: 2004. 2005. 2006 3,135,727. 3,164,906. 3,281,435 Total visitor days: 2004. 2005. 2006 7,453,465. 7,518,997. 7,784,044 Total visitors have increased by 4.6% and total visitor days by 4.4% in this period. Group tour visitors have increased ...
June 23, 2016
I Really Need Help
Visitor Numbers Total visitors: 2004. 2005. 2006 3,135,727. 3,164,906. 3,281,435 Total visitor days: 2004. 2005. 2006 7,453,465. 7,518,997. 7,784,044 Total visitors have increased by 4.6% and total visitor days by 4.4% in this period. Group tour visitors have increased ...
June 23, 2016
Visitor Numbers Total visitors: 2004. 2005. 2006 3,135,727. 3,164,906. 3,281,435 Total visitor days: 2004. 2005. 2006 7,453,465. 7,518,997. 7,784,044 Total visitors have increased by 4.6% and total visitor days by 4.4% in this period. Group tour visitors have increased ...
June 22, 2016
There were approximately 28 million adults on Canada when a survey was conducted. In an online survey, 1008 Canadians took part. 68% said that the penny should be abolished. If the margin of error was +- 3.1%, give an upper and lower estimate for the number of Canadian adults ...
June 22, 2016
Table 9: Visitor Numbers Total visitors: 2004. 2005. 2006 3,135,727. 3,164,906. 3,281,435 Total visitor days: 2004. 2005. 2006 7,453,465. 7,518,997. 7,784,044 Total visitors have increased by 4.6% and total visitor days by 4.4% in this period. Group tour visitors have ...
June 22, 2016
There were approximately 28 million adults on Canada when a survey was conducted. In an online survey, 1008 Canadians took part. 68% said that the penny should be abolished. If the margin of error was +- 3.1%, give an upper and lower estimate for the number of Canadian adults ...
June 22, 2016
Table 9: Visitor Numbers Total visitors: 2004. 2005. 2006 3,135,727. 3,164,906. 3,281,435 Total visitor days: 2004. 2005. 2006 7,453,465. 7,518,997. 7,784,044 Total visitors have increased by 4.6% and total visitor days by 4.4% in this period. Group tour visitors have ...
June 22, 2016
A group of 894 women aged 70-79 had their height and weight measured. The mean height was 159 cm with a standard deviation of 5 cm and the mean weight was 65.9kg with a standard deviation of 12.7kg. Both sets of data are fairly normal. A.) Suppose you were asked for a range of...
June 21, 2016
Graphing Ciricles
What is the shortest distance between the circles defined by x^2-10x +y^2-4y-7=0 and x^2+14x +y^2+6y+49=0? Thank you
April 9, 2016
Math
For what values of x is it true that x^2 - 5x - 4 < 10? Express your answer in interval notation. Thank you!
April 7, 2016
Math
For what values of x is it true that x^2 - 5x - 4 < 10? Express your answer in interval notation. Thank you!
April 6, 2016
Math
The perimeter of a rectangle is 24 inches. What is the number of square inches in the maximum possible area for this rectangle?
April 6, 2016
Math
The operation @ is defined as m/n@p/q = (m)(p)(q/n). What is the simplified value of 7/69@23/24$? April 6, 2016 Math For the nonzero numbers a, b, and c, define J(a,b,c) = a/b + b/c + c/a. Find J(2,12, 9). April 6, 2016 Math I'm still a bit confused April 6, 2016 Math Suppose a function f(x) is defined on the domain [-8,4]. If we define a new function g(x) by g(x) = f(-2x), then what is the domain of g(x)? Express your answer in interval notation. April 6, 2016 Math Let A(t) = 3- 2t^2 + 4^t. Find A(2) - A(1). April 6, 2016 Math Let f(x) = x^2 + 4x - 31. For what value of a is there exactly one real value of x such that f(x) = a? Thank you! April 3, 2016 Math Let f(x) = 3x-7/x+1 Find the domain of f. Give your answer as an interval. Thank you! April 3, 2016 Maths - calculus equation for the hyperbola which has vertices (0;9) and asymptotes y= 18/5 x April 3, 2016 maths Equation for the hyperbola which has a vertices of (0;+_9) and asymtotes = y= +-18/5 April 3, 2016 Chemistry (30s) Wow thank you for putting the time to answer that! I appreciate it. That's a simpler understanding. My teacher explained it horribly and much more complex! Thank you again. March 23, 2016 Chemistry (30s) I need help on this question. I can't seem to understand it. Find the molecular formula of a compound with a gram molecular mass of 30 g and an empirical formula of CH3. Please show your work! and briefly explain how you got it! March 23, 2016 History Is there any credible sites that tells how electricity impacted america? February 18, 2016 usHistory Is there any website that tells how electricity impacted america February 17, 2016 Poetry Oh, my. So sorry. I meant B for 3. January 12, 2016 Poetry I’m Nobody! Who are you? Are you – Nobody – too? Then there’s a pair of us! Don’t tell! they’d banish us – you know! How dreary – to be – Somebody! How public – like a Frog – To tell one’s name – the livelong ... January 12, 2016 math A particle moves along the path y=x^3+3x+1 where units are in centimetres.If the horizontal velocity Vx is constant at 2cm/s,find the magnitude and direction of the velocity of the particle at the point (1,5). November 7, 2015 Chemistry Thank you both so much! This makes much more sense then what I was trying to read from the textbook. June 9, 2015 Chemistry Could a buffer system employing H3PO4 as the weak acid and H2PO4^- as the weak base be used as a buffer system within cells? Explain. I know the answer is no but I'm not really quite sure how to explain why. June 9, 2015 Chemistry I've tried it by changing mmHg to atm and I still get 61.1L February 22, 2015 Chemistry That is the equation that I used. P1V1T2/T1P2 = (736mmHg x 25.6 L x 304.9 K)/(365mmHg x 257.4K) = 61.1 L February 22, 2015 Chemistry The answer I got is 61.1 L but this is not correct. I do not understand what I am doing wrong. February 22, 2015 Chemistry A weather balloon is inflated to a volume of 25.6L at a pressure of 736mmHg and a temperature of 31.9∘C. The balloon rises in the atmosphere to an altitude, where the pressure is 365mmHg and the temperature is -15.6∘C. Assuming the balloon can freely expand, ... February 22, 2015 Chemistry A student is instructed to standardize an approximately 2.5N KOH solution using a 0.500N H2SO4 stock solution. Describe the experimental procedure he/she must follow. I do not understand what procedure needs to be followed. January 26, 2015 Chemistry Why is the carbonate solution boiled before titrating? January 18, 2015 chem/math A car traveling with a constant speed travels 150km in 7200s. What is the speed of the car? January 12, 2015 Chemistry Thank you very much! this makes much more sense now! I really appreciate your help. January 12, 2015 Chemistry 1.984 g of a hydrate of K2CO3 is dissolved in 250. mLs of H2O. 10.0 mLs of this solution is tirade against 0.100 M HCl, producing the following results. Flask 1 - Initial 0, final 9.2 Flask 2 - Initial 9.2, final 18.3 Flask 3 - initial 18.3, final 27.8 Flask 4 - initial 27.8, ... January 12, 2015 Math A ladder 10 ft long leans against a wall. The bottom of the ladder is 6 ft from the wall. How much would the lower end of the ladder have to be pulled away so that the top end would be pulled down by 3 ft? December 7, 2014 Physics Batman (mass = 75.6 kg) jumps straight down from a bridge into a boat (mass = 631 kg) in which a criminal is fleeing. The velocity of the boat is initially +10.4 m/s. What is the velocity of the boat after Batman lands in it? December 4, 2014 Physics A 2.86-g bullet, traveling at a speed of 452 m/s, strikes the wooden block of a ballistic pendulum, such as that in Figure 7.14. The block has a mass of 230 g. (a) Find the speed of the bullet/block combination immediately after the collision. (b) How high does the combination... December 4, 2014 Chemistry A 10.00 ml sample of vinegar, density 1.01g/ml, was diluted to 100.0ml volume. It was found that 25.0 ml of the diluted vinegar required 24.15ml of 0.0976 M NaOH to neutralize it. Calculate the strength of CH3COOH in terms of: a) molarity b) grams CH3COOH per litre c) % ... November 18, 2014 Physics The helicopter in the drawing is moving horizontally to the right at a constant velocity. The weight of the helicopter is W=54300 N. The lift force L generated by the rotating blade makes an angle of 21.0° with respect to the vertical. (a) What is the magnitude of the lift... October 27, 2014 Algebra The function f(x) satisfies $f(\sqrt{x + 1}) = \frac{1}{x}$ for all$x \ge -1,x\neq 0. Find f(2).
October 23, 2014
english
compare and contrast themes in the crucible and the film pleasantville
October 16, 2014
Physics
A bird watcher meanders through the woods, walking 1.10 km due east, 0.624 km due south, and 1.09 km in a direction 18.2 ° north of west. The time required for this trip is 1.046 h. Determine the magnitudes of the bird watcher's (a) displacement and (b) average velocity.
October 5, 2014
Physics
A police car is traveling at a velocity of 20.0 m/s due north, when a car zooms by at a constant velocity of 44.0 m/s due north. After a reaction time 0.700 s the policeman begins to pursue the speeder with an acceleration of 6.00 m/s^2. Including the reaction time, how long ...
September 29, 2014
Physics
rom the top of a cliff, a person uses a slingshot to fire a pebble straight downward, which is the negative direction. The initial speed of the pebble is 7.56 m/s. (a) What is the acceleration (magnitude and direction) of the pebble during the downward motion? (b) After 0.950 ...
September 29, 2014
life orientation
identify three study fields/ career paths in order of preference and provide two reasons for each choice
June 4, 2014
Statistics
Hi, Are market returns (with numbers such as -3.45, 4, 2.456 representing the "returns") ordinal or nominal data? Thanks!
April 18, 2014
Chemistry
Calculate the number of moles of ag in 5ml of .004m agno3 and the number of moles of cr04 in 5ml of .0024m k2cro4
March 19, 2014
Economics
According to the following game tree, and if the entrant and incumbent both only care about their own monetary payoff, what is/are the game's Nash equilibrium? Entrant - stays out (2,15) - enters If the entrant enters, Incumbent - co operates (5,5) - punishes (-3,9) So ...
May 14, 2013
finance
Calculate the IRR of the following project: Year Cash Flow 0 -$30,000 1$40,000
May 6, 2013
L.O
identify and describe 3 environmental heath hazards that cause ill health,crises,and or any other community within south africa and globally.
April 29, 2013
Math
When f(x)is divided by x-1 and x+2,the remainders are 4 and -2 respectively . Hence find the remainder when f(x)is divided by x^2+x-2 .
April 20, 2013
Economics
Hi, The demand for inflatable garden gnomes is given by P = 300 – 2Q, while the supply of is P= 100 + Q/2. How many garden gnomes are traded in equilibrium? I found the answer to be Q = 80. The related question was the one I had difficulty with: Suppose that the market ...
April 10, 2013
algebra
Which property would allow you to use mental computation to simplify the problem 27 + 15 + 3 + 5?
April 1, 2013
physics
Mason lifts a box off the ground and places it in a car that is 0.50 meters high. If he applies a force of 8.0 newtons to lift this suitcase, what is the work done?
January 25, 2013
gvernemnt
purpose of a filibuster is to
November 26, 2012
Physics
A soccer player kicks a football frombground level with an initial velocity of 27.0m/s, 30deg. Solve for the: A. Ball's hang time B. Range C. Max Height Ignore air resistance. (Hint: hang time refers to the time the ball is in the air.)
November 21, 2012
financial management
Suppose you invest $2,500 in an account bearing interest at the rate of 14% per year. What will the future value of your investment in six years? November 20, 2012 Math Use mathematical induction to prove that 2^(3n) - 3^n is divisible by 5 for all positive integers. ThankS! October 22, 2012 maths urgent please a particle moves in a straight line such that its position x from a fixed point 0 at time 't' is given by x= 5 + 8sin2t + 6cos2t 1. Find the period and amplitude of the particle. 2. Find the greatest speed of the particle. Could you please explain the steps on how to ... October 22, 2012 maths urgent please the chord PQ joining the points P(2p, p^2) and Q(2q, q^2) on x^2 = 4ay always passes through the point A(2,0) when produced. show that (P+q) = pq thanks! October 20, 2012 Math A particle is projected horizontally with velocity V m/s, from a point h meters above ground. Taking g ms^-2 as the acceleration due to gravity, 1. Show the equation of the path is given by y = (2hV^2 - gx^2)/ 2V^2 2. Find the range of the particle. Thanks October 20, 2012 Math By using the expansion (1 + x)^2. Prove 8[sigma notation]k=0 2^(3K)(N K) = 3^(2n). Thanks October 20, 2012 Mathematics Can you please explain how to get the answers please? October 20, 2012 Mathematics a particle moves in a straight line such that its position x from a fixed point 0 at time 't' is given by x= 5 + 8sin2t + 6cos2t 1. Find the period and amplitude of the particle. 2. Find the greatest speed of the particle. Thanks October 20, 2012 maths by equating the coefficients of sin x and cos x , or otherwise, find constants A and B satisfying the identity. A(2sinx + cosx) + B(2cosx - sinx) = sinx + 8cosx I got A = 2, B = 3, which the answers said were correct. However, another part of the question asked hence integrate... October 19, 2012 Math Prove by induction that 9^(n+2)- 4n is divisible by 5 for n greater than or equal to 1. Thanks. October 19, 2012 maths A box contains 20 balls, of which 5 are red, 5 are blue, 5 are white and 5 are green. Suppose a sample of 5 balls are chosen without replacement. a) find the probability that the sample contains balls of all the same colour (answer is 4/ 20C5, but why isnt it 5/20C5??) b) find... October 11, 2012 Math Find the probability that if the letters of the word "parallel" are randomly arranged that the L's will not be together. In class, I'm studying permutations and combinations. The solutions stated the no. of permutations where 3 L's are together is 6!/2!. ... October 11, 2012 Economics Find the future value one year from now of a$7,000 investment at a 3% annual compound interest rate. Also calculate the future value if the investment is made for 2 yeaars?
August 20, 2012
MATH
You are interested in finding out if a student’s ACTscore is a good predictor of their final college grade point average(GPA). You have obtained the following data and are going to conduct a regression analysis. Follow instructions on page 324 of your textbook under line ...
August 20, 2012
math
could you please help with this question :) Find the speed and direction of a particle which, when projected from a point 15 m above the horizontal ground, just clears the top of a wall 26.25 m high and 30 m away. Thanks in advance
July 15, 2012
math
A batsman hits a cricket ball 'off his toes' towards a fieldsman who is 65 m away. The ball reaches a maximum height of 4.9 m and the horizontal compoenent of its velocity is 28 m/s. Find the constant speed with which the fieldsman must run forward, starting at the ...
July 15, 2012
Math
A batsman hits a cricket ball 'off his toes' towards a fieldsman who is 65 m away. The ball reaches a maximum height of 4.9 m and the horizontal compoenent of its velocity is 28 m/s. Find the constant speed with which the fieldsman must run forward, starting at the ...
July 14, 2012
Math
Hi, could you please help with this question :) Find the speed and direction of a particle which, when projected from a point 15 m above the horizontal ground, just clears the top of a wall 26.25 m high and 30 m away. Thanks in advance
July 14, 2012
Finance
13.4%
June 17, 2012
Math
Evaluate the integral from 0 to sqrt3 cot^-1(x) dx. Thanks :)
April 19, 2012
Math
The region above the curve y = sin^-1(x), (0 greater than or equal to y greater than or equal to pi/4), is rotated about the y-axis through a complete revolution to form a solid. Evaluate the volume. Thanks!
April 19, 2012
Math
Evaluate the integral from 0 to 4 of (x+1)/(16+x^2) dx the answers are log(sqrt2) + pi/16 Could you please take me through how to get to that answer please, thank you very much!
April 19, 2012
general chemistry
How much heat in kilojoules is evolved in converting 1.50moles of steam at 150 degrees celcius to ice at -60.0 degrees celcius? The heat capacity of steam is 1.84J/g degree C and that of ice is 2.09J/g degree C.
April 3, 2012
math
the lawn order lawnmower factory can produce 12 lawnmowers in 8 hours. How many hours will it take the actory to produce 30 lawnmowers.
February 15, 2012
Math
I have a box that is 15cm-wide, 10cm-long, and 3cm- wide. To get it's volume I would add all the sides. Thanks
January 16, 2012
English
Hi, I have to write up paragraphs on these themes from Frankenstein: a) good vs. evil (manichean) b) duty of responsibility c) nature vs. nurture However, I'm confused about what the differences between these points are... For example, in "nature vs. nurture", ...
December 26, 2011
Math
A circular piece of wire has a radius of 12 cm. This is cut then bent to form an arc of a circle whose radius is 60 cm. Find the angle subtended at the centre by this arc. Give your answer to the nearest degree. Please show working out Thank you so much!
December 9, 2011
Maths
What is the area of the minor segment cut off a circle of radius 10 cm by a chord of length 12 cm? Could you please show me the working out for this question? The answer in the textbook is 16 sq cm. Thanks!!!
December 9, 2011
1. Pages:
2. 1
3. 2
4. Next>> | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5221617221832275, "perplexity": 3056.635657648677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00206-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/108121/computing-the-general-form-of-the-coefficients-of-a-power-series | # Computing the general form of the coefficients of a power series
This question is related to at least two previous questions: Finding the power series of a rational function and Computing the nth coefficient of the power series representing a given rational function
However, mine goes in a slightly different (and perhaps more general) direction. I want to obtain the general form of the coefficients of a power series representing some rational function. As it has been observed before, this can be done mechanically, and the Mathematica function SeriesCoefficient does the magic. E.g. I ask
SeriesCoefficient[-2 ((1 + x)^2) (1 + x + x^2)/((1 - x)^4 (x^2 - 1)), {x, 0, n}]
and I get the answer $(1+n)^2(4+2n+n^2)/2$. Now, does anybody know how Mathematica does it? Ultimately, what I want to know is whether you can trust that answer blindly, i.e. whether it is not necessary to PROVE that result in a paper, say. Thank you very much in advance.
-
"what I want to know is whether you can trust that answer blindly" - never trust the output of any computer program blindly. Any complex piece of software can and will have bugs. – J. M. Feb 11 '12 at 13:36
You can blindly distrust that answer. All the coeffieciets of you series must be even (because of the factor $-2$ and the constant term $-1$ of the denominator) but your answer does not have this property (it gives $7$ for $n=1$ for instance). Also note you can simplify your fraction by a factor $x+1$. – Marc van Leeuwen Feb 11 '12 at 14:34
@Marc: Yes, OP copied the output of Mathematica wrong, as the actual result returned has the factor $(1+n)^2$ as opposed to $1+n^2$... – J. M. Feb 11 '12 at 14:38
@J.M. You're right, thanks. I will fix the error. – Manolito Pérez Feb 11 '12 at 14:41
In general (and quite simply in your case) you can break your rational function into partial fractions, then get the series for the partial fractions and add them. I am no where near awake enough to do this by hand right now. I have no idea if this is what Mathematica does. – deinst Feb 11 '12 at 14:59
If I have done this correctly,
$$\frac{-2 ((1 + x)^2) (1 + x + x^2)}{((1 - x)^4 (x^2 - 1))}=-\frac{2}{(x-1)^2}-\frac{10}{(x-1)^3}-\frac{18}{(x-1)^4}-\frac{12}{(x-1)^5}$$
From this and the fact that the power series of $\frac{1}{(x-1)^n}$ is $\sum_i\binom{n-1+i}{n-1}x^i$ you can get the result. I am not going to slog through the algebra, Mathematica is better at that sort of thing.
-
OK, thanks. I assume you can do that for any arbitrary rational function, right? – Manolito Pérez Feb 11 '12 at 15:32
Yes. It gets fiddly, but still straightforward when you need to deal with complex roots, the kind of thing best left to Mathematica (or something similar) – deinst Feb 11 '12 at 16:00
Then that means that in principle I could accept Mathematica's answer without further proof, right? – Manolito Pérez Feb 11 '12 at 16:59
@ManolitoPérez Probably. I'd say that it is more likely that Mathematica will be correct than anything you do by hand. – deinst Feb 11 '12 at 17:09
Whether you can get an explicit expression at all for the coefficients at all depends on your capability of factoring the denominator. For instance, for the Fibonacci numbers, their explicit (Binet's) formula involves the roots to the reversed polynomial $x^2-x-1$ of the denominator $1-x-x^2$ of the rational expression $\frac x{1-x-x^2}$ giving their generating function (the reversal induces the involution $\alpha\mapsto\alpha^{-1}$ on the roots, so the reversed denominator can be factored if and only if the denominator can). Here and in your example the denominator is easily factored, but there is no method that will factor arbitrary polynomials.
You may try what SeriesCoefficient gives you for rational functions with denominators that are known not to be solvable by radicals (try $1-x^4+x^5$), or for those that are but whose roots are given by horrendous expressions (try a quartic polynomial with a not too special form). So while you can probably trust a computer algebra system when it does give a nice expression (and then it's not hard either to check the correctness by hand), you should not trust it to in general give you a nice expression at all.
-
Thanks, I will try that. – Manolito Pérez Feb 12 '12 at 11:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398056030273438, "perplexity": 457.0775949780392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276584.58/warc/CC-MAIN-20140728011756-00299-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/178474/taylor-polynomial-of-fx-1-1-cos-x | Taylor polynomial of $f(x) = 1/(1+\cos x)$
I'm trying to solve a problem from a previous exam. Unfortunately there is no solution for this problem.
So, the problem is:
Calculate the Taylor polynommial (degree $4$) in $x_0 = 0$ of the function: $$f(x) = \frac{1}{1+\cos(x)}$$
What I tried so far:
• calculate all $4$ derivatives
• $1+\cos(x) = 2\cos^2(\frac{x}{2})$ and work with this formula
• $\int\frac{1}{1+\cos(x)}dx = \tan(\frac{x}{2})$ and then use the Taylor series of $\tan(\frac{x}{2})$
• $\frac{1}{1 + \cos(x)} = \frac{1}{1 + \left(1 + \frac{x^2}{2!} + \cdots\right)}$
What do you think, is there a good way to calculate the Taylor polynomial of this function or is there just the hard way (derivatves)?
-
Let's do this in a couple of different ways. We want to expand $\dfrac{1}{1 + \cos x}$.
Method 1
If it's not obvious what the best method is, we might just choose one and go. A fourth degree Taylor Expansion isn't so bad - it's only a few derivatives. So you might just go, suffer through a few derivatives, and save yourself the trouble of deciding the best way.
Alternately, if you happen to know the series for $\tan x$, then that's a great way to proceed (referring to your idea of using the series expansion for $\tan (x/2)$
Method 2
If we are certain it has a Taylor expansion, and we are comfortable then we know it will look like $a_0 + a_1x + a_2x^2/2 + \ldots$ We know that $\cos x = 1 - x^2/2 + x^4/4! + \ldots$, so that $\dfrac{1}{1 + \cos x} = \dfrac{1}{2 - x^2/2 + x^4/4! + \dots}$
So we consider $\dfrac{1}{2 - x^2/2 + x^4/4! + \dots} = a_0 + a_1x + a_2x^2/2 + \ldots$, or equivalently $$(a_0 + a_1x + a_2x^2/2 + \ldots)(2 - x^2/2 + x^4/4! + \dots) = 1$$
By equating the coefficients of powers of $x$ on the left and on the right (which are all $0$ except for $x^0$, which has coefficient $1$), we get that $2a_0 = 1$, $a_1 = 0$, $a_0(-x^2/2) + (a_2x^2/2)(2) = 0$, etc. This isn't too bad, and is just a set of linear equations.
-
wow, +1! thanks for this solution! – lee.O Aug 3 '12 at 16:58
this way of finding a taylorpolynom is really good to know, thanks a lot! – lee.O Aug 3 '12 at 17:18
I'm glad you like it - – mixedmath Aug 3 '12 at 17:23
There are some things that can make it easier, since you are expanding around $x_0 = 0$.
Let $f(x) = \frac{1}{1+\cos x} = \left(1+\cos x\right)^{-1}$. We can compute $df/dx$ as $\frac{df}{dx} = -\left(1+\cos x\right)^{-2}\frac{d (1+\cos x)}{dx} = -\left(1+\cos x\right)^{-2}\sin x$. Now, we have a product rule situation going to higher derivatives.
This makes our life a little easier, since $\sin x_0 = \sin 0 = 0$. In other words, we can simply "not care" about higher-order derivatives that have a $\sin$ term in them.
So, in short, the best way to compute the Taylor expansion quickly for a few terms on an exam, in my opinion, is to compute the derivatives, and note that since we need to compute $\frac{d^nf}{dx^n}\mid_{x = 0}$, to just skip the rigor of computing in detail any term that gets multiplied by $\sin x$.
-
In fact, since the function to be expanded is an even function, one only needs to worry about computing the coefficients of the even-order terms... – J. M. Aug 3 '12 at 17:41
@J.M. Indeed. I'm hoping the OP recognizes the pattern -- it certainly helps in doing such computations quickly within a time limit. – Arkamis Aug 3 '12 at 17:53
Thanks for your answer, I had to try it with your approach :). But @J.M. , what do you mean by "only needs to worry about the even-order-terms"? – lee.O Aug 3 '12 at 20:01
@lee.O He means that "odd" derivatives will always wash out. Consider $$f'(x) = -\left(1+\cos x\right)^{-2}\sin x,$$ but $$f''(x) = 2\left(1+\cos x\right)^{-3}\sin x - \left(1+\cos x\right)^{-2}\cos x.$$ Compute these for $x=0$, and the $\sin x$ term makes $f'(0) = 0$ in the first derivative. For the second derivative, however, the $\sin x$ term only wipes out part of the answer. – Arkamis Aug 3 '12 at 20:59
now i got it, thanks for your explanation! :) – lee.O Aug 3 '12 at 21:23
$$\cos x=\sum_{k=0}^\infty (-1)^k\frac{x^{2k}}{(2k)!}=1+\cos x= 2-\frac{x^2}{2}+\frac{x^4}{24}-...\Longrightarrow$$
$$\frac{1}{1+\cos x}=\frac{1}{2-\frac{x^2}{2}+\frac{x^4}{24}-...}=\frac{1}{2}\frac{1}{\left[1-\left(\frac{x}{2}\right)^2\right]\left(1-\frac{x^2}{24}+...\right)}=$$
$$=\frac{1}{2}\left(1+\frac{x^2}{4}+\frac{x^4}{8}+...\right)$$by taking the development of $$\frac{1}{1-x^2}$$
-
When I start with: $\frac{1}{2}\dfrac{1}{\left[ 1 - (x/2)^2\right](1-x^2/24 + \dots)} = \frac{1}{2}(1 + x^2/4 + x^4/8 + \dots) \cdot \dfrac{1}{(1 - x^2/24 + \dots)}$ That is - what happened to the $(1 - x^2/24 + \dots)$ of the denominator? – mixedmath Aug 3 '12 at 16:56
There's still need to develop it, of course. I'd say that in a similar was as was done above, and multiply: above, it is "swallowed" in the ...part within the parentheses. It all depends, of course, what polynomial approximation is wanted. – DonAntonio Aug 3 '12 at 22:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9033166170120239, "perplexity": 409.91656240310886}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663060.18/warc/CC-MAIN-20140930004103-00345-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://www.tohokugolf.co.jp/res/?c_id=mi034 | IDF [AhXF
{StNuFE\
v[EEIāAuJnv{^IĂB
N
~`
~
R[X
tƕԓ
A^C\
StꂪQSԂRUT\tA\ƂȂ܂B
NGXg\
TCgʂăSt̃v[\̎z\݂s܂B
\͓o^Ă郁[AhX֕ԓ܂B
ԑѕʋ
F
~FȂ
FO
F㔼
kStKCh
Ԙg芄
g芄
I[vRy
I[vRy(1OK)
I[vRy(2ȏ)
I[vRy(3ȏ)
Ztf[
g
1.5Rz
T[rX
Ht
HoCLOt
PhNt
ݕt
Qa\
Qaۏ
ӓ_ \TCg\ꍇɓKpƂȂ܂B ̏ڍדéyzytT[rXz̃ACRNbNĉBB A^Cł̗\́yc[]gzANGXg̏ꍇ́ytzNbNĉB Ԓ̊AT[rXf[̏ڍׂR` ut~v͗\t̏ŕ\܂B
/ Zt4B Lt4B t
T[rX
\@
k p k p A^C NGXg
11/23 () @ 14,980 @ @ ~ 11/24 () @ 6,880 @ @ t 11/25 (y) @ 14,980 @ @ t 11/26 () @ 13,980 @ @ ~ 11/27 () @ 5,980 @ @ t 11/28 () @ 5,980 @ @ t 11/29 () @ 6,580 @ @ t 11/30 () @ 5,980 @ @ t 2017 N 12 12/01 () @ 5,980 @ @ t 12/02 (y) @ 14,980 @ @ ~ 12/03 () @ 13,980 @ @ t 12/04 () @ 5,980 @ @ t 12/05 () @ 5,980 @ @ t 12/06 () @ 6,580 @ @ t 12/07 () @ 6,580 @ @ t 12/08 () @ 6,580 @ @ t 12/09 (y) @ 14,980 @ @ t 12/10 () @ 13,980 @ @ t 12/11 () @ 5,980 @ @ t 12/12 () @ 8,100 @ @ t 5,980 12/13 () @ 6,580 @ @ t 12/14 () @ 6,580 @ @ t 12/15 () @ 6,580 @ @ t 12/16 (y) @ 13,980 @ @ ~ 12/17 () @ 12,980 @ @ t 12/18 () @ 5,780 @ @ t 12/19 () @ 5,780 @ @ t 12/20 () @ 8,100 @ @ t 5,980 12/21 () @ 5,780 @ @ t 12/22 () @ 5,980 @ @ t 12/23 (y) @ 10,980 @ @ ~ 12/24 () @ 9,980 @ @ t 12/25 () @ 5,780 @ @ t 12/26 () @ 5,780 @ @ t 12/27 () @ 5,780 @ @ t 12/28 () @ 6,880 @ @ t 12/29 () @ 14,980 @ @ t 12/30 (y) @ 14,980 @ @ t 12/31 () @ 13,980 @ @ t 2018 N 01 01/01 () @ @ @ @ t 01/02 () @ @ @ @ t 01/03 () @ @ @ @ ~ 01/04 () @ @ @ @ t 01/05 () @ @ @ @ t 01/06 (y) @ @ @ @ t 01/07 () @ @ @ @ t 01/08 () @ @ @ @ t 01/09 () @ @ @ @ t 01/10 () @ @ @ @ t 01/11 () @ @ @ @ t 01/12 () @ @ @ @ t 01/13 (y) @ @ @ @ t 01/14 () @ @ @ @ t 01/15 () @ @ @ @ t 01/16 () @ @ @ @ t 01/17 () @ @ @ @ t 01/18 () @ @ @ @ t 01/19 () @ @ @ @ t 01/20 (y) @ @ @ @ t 01/21 () @ @ @ @ t 01/22 () @ @ @ @ t 01/23 () @ @ @ @ t 01/24 () @ @ @ @ t 01/25 () @ @ @ @ t 01/26 () @ @ @ @ t 01/27 (y) @ @ @ @ t 01/28 () @ @ @ @ t 01/29 () @ @ @ @ t 01/30 () @ @ @ @ t 01/31 () @ @ @ @ t 2018 N 02 02/01 () @ @ @ @ t 02/02 () @ @ @ @ t 02/03 (y) @ @ @ @ t 02/04 () @ @ @ @ t 02/05 () @ @ @ @ t 02/06 () @ @ @ @ t 02/07 () @ @ @ @ t 02/08 () @ @ @ @ t 02/09 () @ @ @ @ t 02/10 (y) @ @ @ @ t 02/11 () @ @ @ @ t 02/12 () @ @ @ @ t 02/13 () @ @ @ @ t 02/14 () @ @ @ @ t 02/15 () @ @ @ @ t 02/16 () @ @ @ @ t 02/17 (y) @ @ @ @ t 02/18 () @ @ @ @ t 02/19 () @ @ @ @ t 02/20 () @ @ @ @ t 02/21 () @ @ @ @ t 02/22 () @ @ @ @ t
TGGiȉTGG\ꍇɓKpj
@X|bgiԗDKpj@
11/22() VjAfB[XRy Q: Q XRAWv@: _uyA 60Έȏ̒jȂ炨llQł܂@ 4BZt 8,100~iQE1,200~Htj I[vRyQ]̕AɁuI[vRyQ]vƂLB tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 8,100 \ \ LT[rXpɂȂȂꍇA܂́ALT[rX̌萔Iꍇ L̗KpƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 6,580 \ \
11/23() jv lC̏jvł@\͂߂Ɂ`I 4BZt 14,980~i1,200~Htj yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 15,500~(1,200~Ht)@ 2B̏ꍇ3,240~̉ZƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 14,980 \ \
11/24(), 12/28() v ̂ȗvł@@@ 4BZt 6,880~i1,200~Htj@ yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 7,800~(1,200~Ht) tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 6,880 \ \
11/27(), 11/30(), 12/01(), 12/04(), 12/11() yzv ̂ȗvł@@@ 4BZt 6,580~i1,200~Htj 5,980~i1,200~Htj@ yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 6,900~(1,200~Ht) tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 5,980 \ \
11/28(), 12/05() yzJ[f[ oCLOṽJ[f[ł@@@ 4BZt 6,580~i1,200~Htj 5,980~iJ[oCLOtj@ ƃ}Ch2ނƃgbsO킹50ʂȏ̃j[Ɂ T_o[t ̃j[͂܂̂łB tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 5,980 \ \
12/12() {nt@@J[f[ Q: Q XRAWv@: _uyA llQł܂@AyXn[tRyI@ 4BZt 8,100~iQEJ[oCLOtj ʏv[oCLOṽJ[f[ł@@@ 4BZt 6,580~i1,200~Htj 6,280~iJ[oCLOtj@ ƃ}Ch2ނƃgbsO킹50ʂȏ̃j[Ɂ T_o[t ̃j[͂܂̂łB I[vRyQ]̕AɁuI[vRyQ]vƂLB tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 8,100 \ \ LT[rXpɂȂȂꍇA܂́ALT[rX̌萔Iꍇ L̗KpƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 5,980 \ \
12/16(y) yxzxv lC̋x@\͂߂Ɂ`I 4BZt 14,980~i1,200~Htj 13,980~i1,200~Htj yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 14,500~(1,200~Ht)@ 2B̏ꍇ3,240~̉ZƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 13,980 \ \
12/17() yxzxv lC̋x@\͂߂Ɂ`I 4BZt 13,980~i1,200~Htj 12,980~i1,200~Htj yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 13,500~(1,200~Ht)@ 2B̏ꍇ3,240~̉ZƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 12,980 \ \
12/18(), 12/21(), 12/25(), 12/27() yzv ̂ȗvł@@@ 4BZt 5,980~i1,200~Htj 5,780~i1,200~Htj@ yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 6,800~(1,200~Ht) tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 5,780 \ \
12/19(), 12/26() yzJ[f[ oCLOṽJ[f[ł@@@ 4BZt 5,980~iHtj 5,780~iJ[oCLOtj ƃ}Ch2ނƃgbsO킹50ʂȏ̃j[Ɂ T_o[t ̃j[͂܂̂łB tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 5,780 \ \
12/20() VjAfB[XI[vRy Q: Q XRAWv@: _uyA 60Έȏ̒jȂ炨llQł܂@ 4BZt 8,100~iQE1,200~Htj I[vRyQ]̕AɁuI[vRyQ]vƂLB tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 8,100 \ \ LT[rXpɂȂȂꍇA܂́ALT[rX̌萔Iꍇ L̗KpƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 5,980 \ \
12/23(y) yxzxv lC̋x@\͂߂Ɂ`I 4BZt 14,980~i1,200~Htj 10,980~i1,200~Htj yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 11,500~(1,200~Ht)@ 2B̏ꍇ3,240~̉ZƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 10,980 \ \
12/24() yxzxv lC̋x@\͂߂Ɂ`I 4BZt 13,980~i1,200~Htj 9,980~i1,200~Htj yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 10,500~(1,200~Ht)@ 2B̏ꍇ3,240~̉ZƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 9,980 \ \
12/29() Nʗ lC̋x@\͂߂Ɂ`I 4BZt 14,980~i1,200~Htj yRypbNz@p[eB\tghNARy^i 3g9lȏœKp@ 4BZtpF 15,500~(1,200~Ht)@ 2B̏ꍇ3,240~̉ZƂȂ܂B tT[rX Ztk4B Ztp4B Ltk4B Ltp4B \ 14,980 \ \
@ @
2017/10/1() ` 2017/12/17()
• \ō\LłB
H1,200~HN[|tƂȂ܂B(K̕ԋ͏o܂)
yj2B 3,240~̉ZƂȂ܂B(T[rXj
yRypbNz@
3g9lȏœKp@1,200~Ht@
F 6,800~`7,900~@
yjF J_[{520~@
p[eB\tghNARytbOtA^it
i^i̓v[̑gElŊmj
4B y j
Ztp6,5806,5806,5806,5806,58014,98013,980
tT[rX
2017/12/18() ` 2017/12/31()F12
• \ō\LłB
H1,200~HN[|tƂȂ܂B(K̕ԋ͏o܂)
yj2B 3,240~̉ZƂȂ܂B(T[rXj
yRypbNz@
3g9lȏœKp@1,200~Ht@
F 6,800~`7,900~@
yjF J_[{520~@
p[eB\tghNARytbOtA^it
i^i̓v[̑gElŊmj
4B y j
Ztp5,9805,9805,9805,9805,98014,98013,980
tT[rX
@ @
@ @ | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719588160514832, "perplexity": 10181.047715062918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806447.28/warc/CC-MAIN-20171122012409-20171122032409-00480.warc.gz"} |
https://www.groundai.com/project/reconstruction-of-the-core-convex-topology-and-its-applications-in-vector-optimization-and-convex-analysis/ | Reconstruction of the core convex topology and its applications in vector optimization and convex analysis
Reconstruction of the core convex topology and its applications in vector optimization and convex analysis
Wayne State University, Detroit, MI 48202;
Majid Soleimani-damaneh
University of Tehran, Tehran, Iran;
[email protected]
April 2017
Abstract
In this paper, the core convex topology on a real vector space , which is constructed just by operators, is investigated. This topology, denoted by , is the strongest topology which makes into a locally convex space. It is shown that some algebraic notions existing in the literature come from this topology. In fact, it is proved that algebraic interior and vectorial closure notions, considered in the literature as replacements of topological interior and topological closure, respectively, in vector spaces not necessarily equipped with a topology, are actually nothing else than the interior and closure with the respect to the core convex topology. We reconstruct the core convex topology using an appropriate topological basis which enables us to characterize its open sets.
Furthermore, it is proved that is not metrizable when X is infinite-dimensional, and also it enjoys the Hine-Borel property. Using these properties, -compact sets are characterized and a characterization of finite-dimensionality is provided. Finally, it is shown that the properties of the core convex topology lead to directly extending various important results in convex analysis and vector optimization from topological vector spaces to real vector spaces.
Keywords: Core convex topology, Functional Analysis, Vector optimization, Convex Analysis.
1 Introduction
Convex Analysis and Vector Optimization under real vector spaces, without any topology, have been studied by various scholars in recent years [1, 2, 3, 8, 16, 18, 19, 20, 29, 30]. Studying these problems opens new connections between Optimization, Functional Analysis, and Convex analysis. Since (relative) interior and closure notions play important roles in many convex analysis and optimization problems [4, 18, 21], due to the absence of topology, we have to use some algebraic concepts. To this end, the concepts of algebraic (relative) interior and vectorial closure have been investigated in the literature, and many results have been provided invoking these algebraic concepts; see e.g. [1, 2, 3, 11, 16, 18, 20, 23, 24, 29, 30] and the references therein. The main aim of this paper is to unifying vector optimization in real vector spaces with vector optimization in topological vector spaces.
In this paper, core convex topology (see [10, 19]) on an arbitrary real vector space, , is dealt with. Core convex topology, denoted by , is the strongest topology which makes a real vector space into a locally convex space (see [10, 19]). The topological dual of under coincides with its algebraic dual [10, 19]. It is quite well known that when a locally convex space is given by a family of seminorms, the locally convex topology is deduced in a standard way and vice versa. In this paper, is reconstructed by a topological basis. It is known that algebraic (relative) interior of a convex set is a topological notion which can be derived from core convex topology [10, 19]. We provide a formula for -interior of an arbitrary (nonconvex) set with respect to the algebraic interior of its convex components. Furthermore, we show that vectorial closure is also a topological notion coming from core convex topology (under mild assumptions). According to these facts, various important results, in convex analysis and vector optimization can be extended easily from topological vector spaces (TVSs) to real vector spaces. Some such results are addressed in this paper. After providing some basic results about open sets in , it is proved that, is not metrizable under topology if it is infinite-dimensional. Also, it is shown that enjoys the Hine-Borel property. A characterization of open sets in terms of there convex components is given. Moreover, -convergence as well as -compactness are characterized.
The rest of the paper unfolds as follows. Section 2 contains some preliminaries and Section 3 is devoted to the core convex topology. Section 4 concludes the paper by addressing some results existing in vector optimization and convex analysis literature which can be extended from TVSs to real vector spaces, utilizing the results given in the present paper.
2 Preliminaries
Throughout this paper, is a real vector space, is a subset of , and is a nontrivial nonempty ordering convex cone. is called pointed if . , , and denote the cone generated by , the convex hull of , and the affine hull of , respectively.
For two sets and a vector , we use the following notations:
A±B:={a±b: a∈A, b∈B},¯a±A:={¯a±a: a∈A},A∖B:={a∈A: a∉B}.
is the set of all subsets of and for ,
∪Γ:={x∈X: ∃A∈Γ; x∈A}
The algebraic interior of , denoted by , and the relative algebraic interior of , denoted by , are defined as follows [16, 19]:
cor(A):={x∈A: ∀x′∈X,∃λ′>0; ∀λ∈[0,λ′], x+λx′∈A},icr(A):={x∈A: ∀x′∈L(A),∃λ′>0; ∀λ∈[0,λ′], x+λx′∈A},
where is the linear hull of . When we say that is solid; and we say that is relatively solid if . The set is called algebraic open if . The set of all elements of which do not belong to and is called the algebraic boundary of . The set is called algebraically bounded, if for every and every there is a such that
x+ty∉A ∀t∈[λ,∞).
If is convex, then there is a simple characterization of as follows: if and only if for each there exists such that for all
Lemma 2.1.
Let be a vector basis for , and be nonempty and convex. if and only if for each there exists scalar such that
Proof.
Assume that for each there exists scalar such that Let . There exist a finite set and positive scalars such that
d=∑j∈Jλjej−∑j∈Jμjej, a±δjej∈A, ∀j∈J.
Let and . Considering , we have
a+2mδλjej∈[a,a+δjej], a−2mδμjej∈[a−μjej,a], ∀j∈J,
where stands for the line segment joining ,. Since is convex,
a+2mδλjej∈A, a−2mδμjej∈A, ∀j∈J,
and then, due to the convexity of again,
a2+∑j∈Jδλjej∈A2, a2−∑j∈Jδμjej∈A2.
This implies
a+∑j∈Jδλjej−∑j∈Jδμjej∈A,
which means Furthermore, the convexity of guarantees that for all Thus The converse is obvious. ∎
Some basic properties of the algebraic interior are summarized in the following lemmas. The proof of these lemmas can be found in the literature; see e.g. [1, 2, 10, 16, 19].
Lemma 2.2.
Let be a nonempty set in real vector space . Then the following propositions hold true:
1. If is convex, then ,
2. ,
3. for each
4. for each
5. If , then is absorbing (i.e. ).
Lemma 2.3.
Let be a convex cone. Then the following propositions hold true:
i. If , then is a convex cone,
ii. ,
iii. If are convex and relatively solid, then
iv. If is a convex (concave) function, then is continuous.
Although the (relative) algebraic interior is usually defined in vector spaces without topology, in some cases it might be useful under TVSs too. It is because the algebraic (relative) interior can be nonempty while (relative) interior is empty. The algebraic (relative) interior preserves most of the properties of (relative) interior.
Let be a real topological vector space (TVS) with topology . We denote this space by The interior of with respect to topology is denoted by . A vector is called a relative interior point of if there exists some open set such that The set of relative interior points of is denoted by
Lemma 2.4.
Let be a real topological vector space (TVS) and . Then . If furthermore is convex and , then .
The algebraic dual of is denoted by , and exhibits the duality pairing, i.e., for and we have . The nonnegative dual and the positive dual of are, respectively, defined by
K+:={l∈X′: ⟨l,a⟩≥0, ∀a∈K},K+s:={l∈X′: ⟨l,a⟩>0, ∀a∈K∖{0}}.
If is a convex cone with nonempty algebraic interior, then
The vectorial closure of , which is considered instead of closure in the absence of topology, is defined by [1]
vcl(A):={b∈X:∃x∈X;∀λ′>0,∃λ∈[0,λ′];b+λx∈A}.
is called vectorially closed if .
3 Main results
This section is devoted to constructing core convex topology via a topological basis. Formerly, the core convex topology was constructed via a family of separating semi-norms on ; see [19]. In this section, we are going to construct core convex topology directly by characterizing its open sets. The first step in constructing a topology is defining its basis. The following definition and two next lemmas concern this matter.
Definition 3.1.
[22] Let be a subset of , where stands for the power set of . Then, is called a topological basis on if and moreover, the intersection of each two members of can be represented as union of some members of .
The following lemma shows how a topology is constructed from a topological basis.
Lemma 3.1.
If is a topological basis on , then the collection of all possible unions of members of is a topology on .
Lemma 3.2 provides the basis of the topology which we are looking for. The proof of this lemma is clear according to Lemma 2.2.
Lemma 3.2.
The collection
B:={A⊆X : cor(A)=A, conv(A)=A}
is a topological basis on .
Now, we denote the topology generated by
B:={A⊆X : cor(A)=A, conv(A)=A}
by ; more precisely
τc:={∪Γ∈P(X) : Γ⊆B}.
The following theorem shows that is the strongest topology which makes into a locally convex TVS. This theorem has been proved in [19] using a family of semi-norms defined on . Here, we provide a different proof.
Theorem 3.1.
i. is a locally convex TVS;
ii. is the strongest topology which makes into a locally convex space.
Proof.
By Lemmas 3.1 and 3.2, is a topology on .
Proof of part i: To prove this part, we should show that is a Hausdorff space, and two operators addition and scalar multiplication are -continuous.
Continuity of addition: Let and let be a -open set containing . We should find two -open sets and containing and , respectively, such that . Since is a basis for , there exists such that
x+y∈A⊆V.
Defining
Vx:=12(A−x−y)+x \textmdand Vy:=12(A−x−y)+y,
by Lemma 2.2, we conclude that and Convexity of , implies that and are the desired -open sets, and hence the addition operator is -continuous.
Continuity of scalar multiplication: Let , , and be a -open set containing . without lose of generality, assume that . We must show that there exist and a -open set containing such that
(α−ε,α+ε)Vx⊆V.
Since , by considering in the definition of algebraic interior, there exists such that
αx+λx∈V, λ∈(−δ,δ).
Define
U:=(V−αx)∩−(V−αx).
We get , , and by Lemma 2.2, . Furthermore, is balanced (i.e. for each ), because is convex and . Now, we claim that
(α−δ2,α+δ2)(12∣α∣+δU+x)⊆V. (1)
To prove (1), let with . Therefore
∣α+t2∣α∣+δ∣=12∣α+t∣α∣+δ2∣≤12∣α∣+∣t∣∣α∣+δ2≤12
Thus,
α+t2∣α∣+δU⊆12U.
Hence,
(α+t)(12∣α∣+δU+x)=α+t2∣α∣+δU+αx+tx⊆12U+αx+tx
⊆12(V−αx)+αx+tx=12V+12(αx+2tx)⊆12V+12V=V.
This proves (1). Setting and proves the continuity of the scalar multiplication operator.
Now, we show that is a Hausdorff space. To this end, suppose . Consider such that , and set and . It is not difficult to see that and while . This implies that is a Hausdorff space.
ii. Let be an arbitrary topology on which makes into a locally convex space. Let be a locally convex basis of topology For each we have (by Lemma 2.4) and hence . Thus we have , which leads to and completes the proof.∎
The interior of with respect to topology is denoted by The following theorem shows that the algebraic interior (i.e. ) for convex sets is a topological interior coming from .
Theorem 3.2.
[19, Proposition 6.3.1] Let be a convex set. Then
intc(A)=cor(A).
Proof.
Since is a TVS, ; see Lemma 2.4. Since is convex, is also convex, and furthermore (by Lemma 2.2). Hence, . Therefore, , because in the biggest subset of belonging to . Thus , and the proof is completed. ∎
Notice that the convexity assumption in Theorem 3.2 is essential; see Example 3.1.
The proof of the following result is similar to that of Theorem 3.2.
Theorem 3.3.
If is a convex subset of , then , where denotes the relative interior of with respect to the topology .
It is seen that the convexity assumption plays a vital role in Theorems 3.2 and 3.3. In the following two results, we are going to characterize the -interior of an arbitrary (nonconvex) nonempty set with respect to the of its convex components. Since and is a basis for the set could be written as union of some subsets of which are algebraic open.
Lemma 3.3.
Let be a nonempty subset of real vector space Then could be uniquely decomposed to the maximal convex subsets of , i.e. where are non-identical maximal convex subsets (not necessary disjoint) of (Here, sets are called convex components of ).
Proof.
For every , let be the set of all maximal convex subsets of containing (the nonemptiness of such is derived from Zorn lemma). Set the index set (this type of defining enables us to avoid repetition). For every define Hence, where are non-identical maximal convex subsets of To prove the uniqueness of s, suppose such that are non-identical maximal convex subsets of Let and ; then , and hence there exists such that This means , and the proof is completed.∎
Theorem 3.4.
Let be a nonempty subset of real vector space . Then
intc(A)=⋃i∈Icor(Ai),
where are convex components of
Proof.
Let There exists such that Set Clearly and hence . Furthermore it is easy to verify that each chain (totaly ordered subset) in has an upper bound within . Therefore, using Zorn lemma, has a maximal element. Let be maximal element of . Obviously, is a convex component of . It leads to the existence of an such that . Therefore, To prove the other side, suppose for some Since is convex, by Theorem 3.2,
Example 3.1.
Consider
A:={(x,y)∈R2:y≥x2}∪{(x,y)∈R2: y≤−x2}∪{(x,y)∈R2: y=0}
as a subset of It can be seen that while However, where are convex components of as follows
A1={(x,y)∈R2:y≥x2}, A2={(x,y)∈R2: y≤−x2},
A3={(x,y)∈R2: y=0}, A4={(x,y)∈R2: x=0}.
The following theorem shows that the topological dual of is the algebraic dual of . In the proof of this theorem, we use the topology which induces on . This topology, denoted by , is as follows:
τ0={∪Γ∈P(X) : Γ⊆Ψ},
where
Ψ={A⊆X : A=f−11(I1)∩f−12(I2)∩...∩f−1n(In)\textmdforsomen∈N,\textmdsome
\textmdopenintervalsI1,I2,...,In⊆R\textmdandsomef1,f2,...,fn∈X′}.
[10]
Proof.
Let denote the topology which induces on . By [26, Theorem 3.10], makes into a locally convex TVS and . By Theorem 3.1, . Hence .∎
The following result provides a characterization of finite-dimensional spaces utilizing and the topology induced by on .
Theorem 3.6.
is finite-dimensional if and only if
Proof.
Assume that is finite-dimensional. Since there is only one topology on which makes this space a TVS, we have
To prove the converse, by indirect proof assume that . Let be an ordered basis of ; and denote the vector of coordinates of with respect to the basis . It is easy to show that, for each , where
l1(I)={{ti}i∈I⊆R : ∥{ti}∥1=∑i∈I∣ti∣<∞}.
Define by Since is a norm on , thus is also a norm on We denote this norm by .
Since is the strongest locally convex topology on , we have , where stands for the unit ball with On the other hand, every -open set containing origin, contains an infinite-dimensional subspace of (see [26]). Hence, This implies , and the proof is completed. ∎
Theorem 3.7 demonstrates that the vectorial closure () for relatively solid convex sets, in vector space , is a topological closure coming from . The closure of with respect to is denoted by
Theorem 3.7.
Let be a convex and relatively solid set. Then
Proof.
Since is a TVS, it is easy to verify that To prove the converse, let and . Without loss of generality, we assume , and then we have and , where stands for the algebraic interior of with respect to the subspace . We claim that To show this, assume that ; then there exists a functional such that and for each . Therefore the set
U:={z∈X: f(z)>12}
is a open neighborhood of with which contradicts Now, we restrict our attention to subspace . By Theorem 3.2, , and thus, by [18, Lemma 1.32], , which means . Therefore , and the proof is completed. ∎
Corollary 3.1.
Let be a nonempty subset of with finite many convex components. Then
clc(A)=n⋃i=1vcl(Ai)
where are convex components of .
Proof.
According to Theorem 3.7, we have
clc(A)=n⋃i=1clc(Ai)=n⋃i=1vcl(Ai).
The equality may not be true in general; even if each convex component, , is relatively solid. For example, consider as the set of rational numbers in . The convex components of are singleton, which are relatively solid. Therefore , while
⋃q∈Qvcl({q})=⋃q∈Q{q}=Q≠R=clc(Q).
A TVS is called metrizable if there exists a metric such that -open sets in coincide with -open sets in . Now, we are going to show that is not metrizable when is infinite-dimensional. Lemma 3.4 helps us to prove it. This lemma shows that every algebraic basis of the real vector space is far from the origin with respect to topology.
Lemma 3.4.
Let be an algebraic basis of . Then
Proof.
Define
A:={∑i∈Fλixi−∑i∈Fμixi:∑i∈F(λi+μi)=1, λi,μi>0; ∀i∈F, \textmd and F\textmdisa\textmdfinitesubsetofI}.
We claim that the following propositions hold true;
a. ,
b. ,
c. .
To prove (a), assume that and Then there exist positive scalars and finite sets such that;
x=∑i∈Fλixi−∑i∈Fμixi, y=∑i∈S¯¯¯λixi−∑i∈S¯¯¯μixi,
and .
Hence,
tx+(1−t)y=∑i∈F∪Sˆλixi−∑i∈F∪Sˆμixi,
where
ˆλi=⎧⎪ ⎪⎨⎪ ⎪⎩tλi+(1−t)¯¯¯λi i∈F∩Stλi i∈F∖S(1−t)¯¯¯λi i∈S∖F ˆμi=⎧⎪⎨⎪⎩tμi+(1−t)¯¯¯μi i∈F∩Stμi i∈F∖S(1−t)¯¯¯μi i∈S∖F
Furthermore,
∑i∈F∪Sˆλi+ˆμi=∑i∈F∩S[t(λi+μi)+(1−t)(¯¯¯λi+¯¯¯μi)]+∑i∈F∖St(λi+μi)+∑i∈S∖F(1−t)(¯¯¯λi+¯¯¯μi)=∑i∈Ft(λi+μi)+∑i∈S(1−t)(¯¯¯λi+¯¯¯μi)=t+(1−t)=1.
Therefore , and hence is a convex set.
To prove (b), first notice that for each ; then due to the convexity of , from (a) we get , because of Lemma 2.1.
We prove assertion (c) by indirect proof. If , then
xj=∑i∈Fλixi−∑i∈Fμixi
for some , and some finite set . Also, and values are positive and . Since is a linear independent set, we have , and also . Hence which is a contradiction. This completes the proof of assertion (c).
Now, we prove the lemma invoking . Since is convex, by theorem 3.2 and claim (b), is a open neighborhood of . On the other hand, (c) implies Thus,
Theorem 3.8.
If is infinite-dimensional, then is not metrizable.
Proof.
Suppose that is an infinite-dimensional metrizable TVS with metric Then for every choose such that and is linear independent. This process generates a linear independent sequence such that, (i.e. is convergent to zero). Furthermore, this sequence can be extended to a basis of , say . This makes a contradiction, according to Lemma 3.4, because
In every topological vector space, compact sets are closed and bounded, while the converse is not necessarily true. A topological vector space in which every closed and bounded set is compact, is called a Hiene-Borel space. Although it is almost rare that an infinite-dimensional locally convex space be a Hine-Borel space (for example, normed spaces), the following result proves that is a Hine-Borel space.
Theorem 3.9.
enjoys the Hine-Borel property. Moreover, every compact set in lies in some finite-dimensional subspace of
Proof.
Let be a -closed and -bounded subset of . First we claim that the linear space , i.e. the smallest linear subspace containing , is finite-dimensional. To see this, by indirect proof, assume that is an infinite-dimensional subspace of ; then there exists a linear independent sequence in . Thus, the sequence is a linear independent set in and also is -bounded. Hence, . Furthermore, has a basis , containing This makes a contradiction, according to Lemma 3.4, because Hence, is a closed and bounded set contained in a finite-dimensional space . Therefore, by traditional Hine-Borel theorem in finite-dimensional spaces, is a compact set in , and hence is a -compact set in
One of the important methods to realize the behavior of a topology defined on a nonempty set is to perceive (characterize) the convergent sequences. Assume that a sequence is -convergent to some (i.e, ). Thus, is a -compact set. Therefore, by Theorem 3.9, lies in a finite-dimensional subspace of . This shows that, convergent sequences of are exactly norm-convergent sequences contained in finite-dimensional subspaces of . Hence, every sequence with infinite-dimensional could not be convergent. So, the convergence in is almost strict; however, this is not surprising because strongest-topology () contains more number of open sets than any other topology which makes into a locally convex space.
4 Applications
In this section, we address some important results in convex analysis and optimization under topological vector spaces which can be directly extended to real vector spaces, utilizing the main results presented in the previous section. Some of these results can be found in the literature with different complicated proofs. Subsection 4.1 is devoted to some Hahn-Banach type separation theorems.
4.1 Separation
Theorem 4.1.
Assume that are two disjoint convex subsets of , and Then there exist some and some scalar such that
f(a)≤α≤f(b), ∀a∈A,b∈B. (2)
Furthermore,
f(a)<α, ∀a∈cor(A). (3)
Proof.
Since is a TVS and , by a standard separation theorem on topological vector spaces (see [26]), there exists some and some scalar satisfying (2) and (3). This completes the proof according to Theorem 3.5.∎
Theorem 4.2.
Two disjoint convex sets | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9889529943466187, "perplexity": 384.8593262552897}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00467.warc.gz"} |
https://eccc.weizmann.ac.il/keyword/17923/ | Under the auspices of the Computational Complexity Foundation (CCF)
REPORTS > KEYWORD > UNCERTAINTY PRINCIPLE:
Reports tagged with Uncertainty principle:
TR12-031 | 4th April 2012
Tom Gur, Omer Tamuz
#### Testing Booleanity and the Uncertainty Principle
Revisions: 1
Let $f:\{-1,1\}^n \to \mathbb{R}$ be a real function on the hypercube, given
by its discrete Fourier expansion, or, equivalently, represented as
a multilinear polynomial. We say that it is Boolean if its image is
in $\{-1,1\}$.
We show that every function on the hypercube with a ... more >>>
TR18-016 | 25th January 2018
Naomi Kirshner, Alex Samorodnitsky
#### On $\ell_4$ : $\ell_2$ ratio of functions with restricted Fourier support
Revisions: 1
Given a subset $A\subseteq \{0,1\}^n$, let $\mu(A)$ be the maximal ratio between $\ell_4$ and $\ell_2$ norms of a function whose Fourier support is a subset of $A$. We make some simple observations about the connections between $\mu(A)$ and the additive properties of $A$ on one hand, and between $\mu(A)$ and ... more >>>
ISSN 1433-8092 | Imprint | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9302639961242676, "perplexity": 1022.7526358996577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00484.warc.gz"} |
https://codegolf.stackexchange.com/questions/12103/generate-a-universal-binary-function-lookup-table/28107 | # Generate a Universal-binary-function Lookup Table
This is tangentially related to my quest to invent an esoteric programming language.
A table of the binary numbers 0 .. 15 can be used to implement a Universal Binary Function using indexing operations. Given two 1-bit inputs X and Y, all 16 possible functions can be encoded in a 4-bit opcode.
X Y F|0 1 2 3 4 5 6 7 8 9 A B C D E F
- - - - - - - - - - - - - - - - - -
0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
0 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
1 0 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
- - - - - - - - -
0 ~X ~Y ^ & Y X | 1
ZERO NOT-Y AND OR
NOT-X XOR ONE
So this set of 16 functions can be applied to binary inputs as the function
U(f,x,y): (f >> ((x<<1) | y)) & 1,
or
U(f,x,y): (f / 2^(x×2 + y)) % 2,
or with indexing or matrix partitioning.
It will be useful to know the most compact way to represent or generate such a table of values for any possible languages to be built upon this type of binary operation.
## The goal:
Generate this exact text output:
0101010101010101
0011001100110011
0000111100001111
0000000011111111
That's it! Shortest-code wins.
• I had an intuition that the APL-family would do well here. :) – luser droog Jul 25 '13 at 9:09
• Also related: A simple logic gate calculator – FireFly Nov 7 '14 at 13:39
• Are leading or trailing newlines accepted? – Titus Sep 20 '16 at 17:10
• Yes, extra newlines are fine. – luser droog Sep 20 '16 at 17:32
## J, 10 (13?) characters
|.|:#:i.16
Number list:
i.16
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
to binary:
#:i.16
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1
Transpose:
|:#:i.16
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
Reverse:
|.|:#:i.16
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
Do we need to remove the spaces? Looking at the other J answer it seems we do so we'll need to add 3 characters and borrow the 1": from Jan's answer.
• Very nice style of explanation. +1 (damn short, too!) – luser droog Jul 25 '13 at 8:55
• As soon as I saw Peter's Golfscript answer I knew I could have done much better. Well, you had already done it. – John Dvorak Jul 26 '13 at 3:56
• Nice to see something shorter than Golfscript... – fuenfundachtzig Jul 27 '13 at 6:12
• Reimplemented your solution in postscript – luser droog Mar 7 '14 at 6:45
• @luserdroog Wow. That's a lot of code. Much more readable than the J source code though. :-) Very cool. – Gareth Mar 7 '14 at 8:47
## Python 2, 40
for n in 1,2,4,8:print 8/n*('0'*n+'1'*n)
## APL (14)
Assuming ⎕IO=0 (that's a setting):
⎕D[⊖(4⍴2)⊤⍳16]
Explanation:
• ⍳16: numbers [0,16)
• (4⍴2)⊤: encode each number in base 2 using 4 digits
• ⊖: horizontal reverse (so the MSB ends up on top)
• ⎕D[...]: select these values from ⎕D which is the string 0123456789. (A numeric matrix is displayed with spaces between the values, a character matrix is not. So this converts each numerical bit to one of the chars '0' '1').
• Is the first character in the source supposed to look like a square, or am I still missing some fonts? – Tim Seguine Mar 20 '14 at 10:37
• @TimSeguine Yes, it's a square, called quad in the APL literature. Variable names beginning with quad are system variables which change the environment. IO = "index origin". – luser droog Nov 9 '14 at 8:33
• Save a byte: (4⍴2)⊤2⊥⍣¯1 – Adám Jun 16 '16 at 23:42
# Jelly, 42 7 bytes (non-competing)
⁴ḶBUz0Y
Try it online!
Thanks to Dennis for his help. Here is the first message, here is the last (other discussions also occured). With his help, I apparently (almost) square-rooted the score.
• Since the language is newer than question, I can't accept it as answer. Definitely in the running for the bounty, tho! – luser droog Sep 18 '16 at 1:41
• @luserdroog That's fine. But, I thought the challenge was newer. – Erik the Outgolfer Sep 18 '16 at 8:42
• I know what you mean, it doesn't feel like that long ago that I posted it. But even my own inca2, at 2 years old, is too young a language. – luser droog Sep 18 '16 at 20:38
• +1 for the 42 to 7 codegolf. That's something you don't see every day (unless done on purpose). – Kevin Cruijssen Sep 19 '16 at 14:18
• @KevinCruijssen Why should it be ever done on purpose? I'm just a Jelly newbie (I know Python 2 and 3 well), so I did it on a string way, while I "need to treat Jelly as an array-manipulating language". – Erik the Outgolfer Sep 19 '16 at 14:33
# ///, 51 bytes
Try it online
/a/0101/aaaa
/b/0011/bbbb
/z/0000//o/1111/zozo
zzoo
• Welcome to PPCG! You beat me to it. – Erik the Outgolfer Sep 15 '16 at 11:28
• @EriktheGolfer Feel free to improve, but I think that's the shortest possible version. :) – Cedric Reichenbach Sep 15 '16 at 11:30
• I'm porting this to Sprects. – Erik the Outgolfer Sep 15 '16 at 11:32
## GolfScript (18 17 15 chars)
(With thanks to Howard)
16,zip{','-~n}%
I don't understand why the 10-char
16,zip{n}/
doesn't work; I suspect that a bug in the standard interpreter is resulting in unsupported types on the stack.
An alternative at 18 characters which I understand fully is:
4,{2\?.2,*$8@/*n}% A more mathematical approach is a bit longer, at 28 chars: 4,{2.@??)2.4??.@/+2base(;n}/ A lot of that is for the base conversion and zero-padding. Without those, it drops to 19 chars, 4,{2.@??)2.4??\/n}/ with output 21845 13107 3855 255 • It was asked for exact text output - why should 16,zip{n}/ work then? – Howard Jul 25 '13 at 16:12 • On the other hand you can do 16,zip{','-~n}% – Howard Jul 25 '13 at 16:26 • @Howard, I think that zip should return an array of arrays, but it actually seems to return an array of Ruby arrays (is my best guess). Whatever the elements are, applying to them doesn't affect the way they print, which is unlike any of the 4 GolfScript data types. You're right that ','- seems to turn them into normal arrays: nice trick. – Peter Taylor Jul 25 '13 at 16:55 • Seems to output 4 extra lines of zeros here – aditsu May 16 '14 at 5:00 • @aditsu, works on the online demo. I wonder why the difference. Ruby version, maybe? – Peter Taylor May 16 '14 at 18:37 # CJam - 16 4,{G,f{\m>2%}N}/ Equivalent java code (as explanation): public class Lookup { public static void main(final String... args) { for (int i = 0; i < 4; ++i) { for (int j = 0; j < 16; ++j) { System.out.print((j >> i) % 2); } System.out.println(); } } } # Javascript (ECMA6), 67 s=(k,n)=>n-.5?s((k<<n/2)^k,n/2)+"0".repeat(n)+k.toString(2)+"\n":"" To use this, call s(255,8) Bitshift! And also XOR and a bit of recursion. The first thing to notice is that if we take any line and you shift it (# of continuous 0's) / 2 left, we get a nice XOR to get the next line up. For example, 0000000011111111 //line 4 0000111111110000 //shifted 4 to the left XOR these bitwise give us 0000111100001111 //XOR'ed. This is line 3! which is the next line up (line 3). Applying the same process for line 3, shift 2 left and we get... 0000111100001111 0011110000111100 XOR'ed gives 0011001100110011 which is line 2. Notice that the amount we shift halves each time. Now we simply call this function recursively, with 2 arguments. The integer value of this line, and N, which is how much we need to shift. When we do recursing just pass in the shifted XOR'ed value and n/2. "0".repeat(n) is to pad 0's to the beginning of each line because toString takes out leading 0's. • +1 Very cool. I had not noticed that pattern before. – luser droog Nov 17 '14 at 4:59 • A couple byes can be cut off this by bit shifting n instead of dividing it, and replacing the new line with a template string: s=(k,n)=>n?s((k<<n/2)^k,n>>1)+"0".repeat(n)+k.toString(2)+ :"" – Shaun H Sep 15 '16 at 19:05 ### J, 21 characters 1":<.2|(2^i.4)%~/i.16 • i.16 is a list of 0..15 • 2^i.4 is a list (1,2,4,8) • %~/ produces the table of divisions where the left argument forms rows but is the right argument to division • 2| calculates the remainder after dividing [each cell] by two • <. floors that value to 0 or 1 • 1": formats the table with one character per cell • I feel like the floor should not be necessary. The domain of 2| is already 0 or 1, right? – luser droog Apr 9 '15 at 18:20 • @luserdroog | operates on floats. 2|3.25 is 1.25. We don't want that. – John Dvorak Apr 9 '15 at 23:23 ### GolfScript, 19 chars Another GolfScript approach 4,{2\?{&!!}+16,%n}% # Ruby (44) Boring and long: Just printing the 0-padded binary representations of the numbers. [21845,13107,3855,255].map{|i|puts"%016b"%i} ## Postscript 1081771267774 70 [43690 52428 61680 65280] {16{dup 2 mod =only 2 idiv}repeat<>=}forall Reversed the values for a simpler mod- off method. ## 151131 119 Applying a more APL-ish approach. edit: replaced string chopping and array zipping with indexing and for-loops. [[0 1 15{}for]{16 add 2 5 string cvrs}forall]4 -1 1{0 1 15{2 index exch get 1 index 1 getinterval =only}for pop<>=}for Indented: [[0 1 15{}for]{16 add 2 5 string cvrs}forall] 4 -1 1{ % [] i 0 1 15{ % [] i j 2 index exch get % [] i [](j) 1 index 1 % [] i [](j) i getinterval % [] i [](j)<i> =only % [] i }for pop<>= % [] }for Reimplementing the functions used in the winning J answer leads to this (with a lot of support code). -1 16 i + #: |: |.{{==only}forall()=}forall i here is 1-based vector described in Iverson's Elementary Functions, hence the -1 ... + to produce 0 .. 15. # Perl (36+1) +1 for say, as usual. the double 0 is not a typo :) map say((00x$_,1x$_)x(8/$_)),1,2,4,8
• No need to add 1 for say. perl -e'...' is standard and this requires perl -E'...', no increase in byte count. Anyway, I think it was decided on Code Golf Meta that -M5.01 is free. – msh210 Sep 20 '16 at 17:07
# JavaScript (ECMA6), 108
Trying a different approach here. Even though it was encouraged to use binary operators, I allowed myself to submit this solution since the challange is also and I was thinking - how can I reduce the amount of code representing those values...? Bases.
['gut','a43','2z3','73'].forEach(n=>{a=parseInt(n,36).toString(2);
(Line break for convenience).
It's a shame I had to mess with padding with leading zeros, but the point of this code is simply representing the target binary result in Base 36, which are exactly those gut, a43, 2z3, 73 values.
Note: I realize it won't be anywhere near the winning answer, but just for the sake of the idea...
• I was about to do basically the same thing when I saw yours. I got it down to 92 bytes using the technique from my answer to a similar question: alert(['gut','a43','2z3',73].map(n=>(1e8+parseInt(n,36).toString(2)).slice(-16)).join('\n')). This approach uses newlines instead of four alert()s. – NinjaBearMonkey Nov 14 '14 at 2:09
# Sprects, 44 bytes
aaaa
bbbb
zozo
zzoo o1111 z0000 b0011 a0101
# MATL (non-competing), 8 bytes
16:qYB!P
Try it online!
### Explanation
16: % Generate range [1 2 ... 16]
q % Subtract 1, element-wise
YB % Convert to binary. Gives a 16×4 char array. Each original number is a row
! % Transpose
P % Reverse vertically. Implicitly display
# CJam (non-competing), 10 9 bytes
Thanks to @Dennis for 1 byte off!
Y4m*zW%N*
Try it online!
### Explanation
Y e# Push 2
4 e# Push 4
m* e# Cartesian power of 2 (interpreted as [0 1]) with exponent 4
z e# Zip
W% e# Reverse the order of rows
N* e# Join with newlines. Implicitly display
# JavaScript (ES6), 58 52 bytes
Builds the string recursively.
f=(n=64)=>n--?f(n)+(!n|n&15?'':
)+(n>>(n>>4)&1):''
### How it works
This recursion is based on the fact that the pattern is made of the vertical binary representation of nibbles 0x0 to 0xF:
0101010101010101 bit #0 <- Y = 0
0011001100110011 bit #1
0000111100001111 bit #2
0000000011111111 bit #3 <- Y = 3
----------------
0123456789ABCDEF
^ ^
X = 0 X = 15
Therefore, each position (X,Y) in this pattern can be expressed as the Y-th bit of X: X & (1 << Y). We can also isolate this bit with: (X >> Y) & 1. Rather than keeping track of X and Y, we iterate on a single variable n ranging from 0 to 63. So, the formula becomes: (n >> (n >> 4)) & 1. It's actually easier to iterate from 63 to 0, so the string is built in reverse order. In other words, character n-1 is appended to the left of character n.
As a side note, recursion doesn't bring anything here except shorter code.
Without the linebreaks, the code is 35 bytes long:
f=(n=64)=>n--?f(n)+(n>>(n>>4)&1):''
We need 17 more bytes to insert the linebreaks. This could be shortened to 14 bytes if a leading linebreak is acceptable.
### Demo
f=(n=64)=>n--?f(n)+(!n|n&15?'':
)+(n>>(n>>4)&1):''
console.log(f());
• In ideone with both languages JavaScript not compile in the exapme above there is one let more.... It is good the idea of one recursive function... – RosLuP Sep 18 '16 at 8:30
• What would it take to split after the 35 bytes? – Titus Sep 21 '16 at 9:25
• @Titus - Well. At first glance, I don't have any good solution for that. Here is a (very bad) attempt: (f=(n=64)=>n--?f(n)+(n>>(n>>4)&1):'')().match(/.{16}/g).join\n (63 bytes) – Arnauld Sep 21 '16 at 9:41
• hmm ... and .replace(/.{16}/g,"$0\n") has the same length. Too bad. – Titus Sep 21 '16 at 9:59 # Bash + coreutils, 65 bytes Not the shortest, but not the longest either: for i in {1,2,4,8};{ eval echo \$\[$${0..15}\&i$$/$i];}|tr -d \ (The last character is a space) ## NARS2000 APL, 22 "01"[⊖1+(4⍴2)⊤(⍳16)-1] Derived from marinus's APL answer, which doesn't seem to work on NARS2000. Generate vector ⍳16 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Change to zero-based (⍳16)-1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Generate shape for encode (4⍴2) 2 2 2 2 Encode (4⍴2)⊤(⍳16)-1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 Adjust for 1-based indexing 1+(4⍴2)⊤(⍳16)-1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 Reverse primary axis ⊖1+(4⍴2)⊤(⍳16)-1 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 Index "01"[⊖1+(4⍴2)⊤(⍳16)-1] 0101010101010101 0011001100110011 0000111100001111 0000000011111111 • You can set ⎕IO to 0, so that you don't have to adjust for 1-based indexing. That brings it down to 16 characters. – Elias Mårtenson Mar 24 '14 at 13:32 • Yes, but then I fear it's too similar to the other APL answer and wouldn't deserve to be here at all. – luser droog Apr 2 '14 at 4:44 # C, 73 chars i;main(){for(;i<64;)i&15||puts(""),putchar(48|1&~0xFF0F0F33335555>>i++);} This is just a general solution for outputting 64 bits in four 16-bit blocks; you just need to change the number 0xFF0F0F33335555 to output an other bit sequence. simplified & ungolfed: int main() { int i; for(i = 0; i < 64; i++) { if(i % 16 == 0) { puts(""); } int bit = ~0xFF0F0F33335555 >> i; bit &= 1; putchar('0' + bit); } } # Haskell, 73 Yikes, 73 chars! I can't for the love of god get this any smaller though. r=replicate f n=r(div 8n)("01">>=r n)>>=id main=mapM(putStrLn.f)[1,2,4,8] The real sad part about this is that if you were to echo the output using bash, you'd only need 74 characters. # JavaScript (ES5) 69 for(x="";4>x;x++){z="";for(n=0;16>n;)z+=1-!(n++&1<<x);console.log(z)} # inca2, 3327 24 4 16#(,2|(~16)%.2^~4){D This is based on Jan Dvorak's answer. inca2 is able to execute this as of yesterday's bugfixes. Technically invalid since the language was invented after the question, but invention of a language was part of my goal in posing the question. So here's some payback in gratitude to the other answers. :) Explanation: 4 16#(,2|(~16)%.2^~4){D (~16) integers 0 .. 15 2^~4 first 4 powers of 2: 1 2 4 8 (~16)%.2^~4 division table 2| mod 2 (and floor) transpose , ravel ( ){D map to chars '0'..'9' 4 16# reshape to 4x16 Some of the parentheses should be unnecessary, but apparently there are some remaining issues with my interpretation of the grammar. And "ravel => map => reshape" is really clumsy: map needs to be smarter. Edit: bugfixes allow elimination of parens. Factoring the base conversion into a separate function N:x|y%.x^~1+[]/x.y yields this 19 16 char version. 4 16#(,2N~16){D And while I'm cheating anyway here, I've gone ahead and made this a built-in function. But, even though it's a niladic function (not requiring an argument), there is no support for niladic functions, and it must be supplied with a dummy argument. # inca2, 2 U0 # Pyth 24 / 26 The shortest method was grc's answer translated to Pyth which I felt was cheap so I did my own method: ### Mine: 26 characters mpbjk*/8dS*d[0 1)[1 2 4 8 ### grc's: 24 characters Fd[1 2 4 8)*/8d+*\0d*\1d ## C++ 130 Converts hex to binary #define B std::bitset<16> #define C(x) cout<<x<<endl; void main(){ B a(0xFF),b(0xF0F),c(0x3333),d(0x5555); C(d)C(c)C(b)C(a) } # Haskell (Lambdabot), 47 bytes unlines$reverse$transpose$replicateM 4['1','0']
Kinda cheaty because it uses transpose from Data.List and replicateM from Control.Monad, however both are loaded by default from Lambdabot.
Also, I'm sure there is room for improvement, just wanted to share the idea
# Julia (39 Bytes)
Second script I've ever written in Julia, gotta admit I'm liking Julia, she's a pretty beast.
hcat(map(x->collect(bin(x,4)),0:15)...)
Returns
[0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]
Explanation:
• bin(x,4) - Convert int to binary integer with padding to 4 characters.
• collect(_) - Split string into char array.
• map(x->_,0:15) - Do this for the first 16 digits in the range.
• hcat(_...) - Splat and horizontally concatenate into a matrix.
## C 83777674 71
x;f(n){for(;x<4;x++,puts(""))for(n=0;n<16;)putchar(49-!(n++&(1<<x)));}
Pretty straightforward.
x;
f(n){
for(;x<4;x++,puts(""))
for(n=0;n<16;)
putchar(49-!(n++&(1<<x)));
}
• There's an easy saving of 2 by not using ?:, and another saving of 1 by moving a ++. – Peter Taylor Jul 27 '13 at 12:10
• Saved 3 by changing main to f. lol – luser droog Nov 10 '16 at 0:57
# R, 53 41 bytes
A translation of @grc's python answer. Shaved off 12 bytes from the original translation through use of rep()'s each and length arguments (and partial argument matching), and by remembering that 0:1 is equivalent to c(0,1).
for(n in 2^(0:3))print(rep(0:1,e=n,l=16))
for(n in 2^(0:3))print(rep(c(rep(0,n),rep(1,n)),8/n))
You can also attempt a translation of @Gareth's J answer, something like this (34 bytes):
t(e1071::bincombinations(4))[4:1,]
However, it uses a function that's not part of base R, and outputs a matrix which is hard to format into exact printed text like in the specification. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2870066165924072, "perplexity": 976.4595426734991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00095.warc.gz"} |
http://math.stackexchange.com/questions/52481/how-do-we-prove-that-a-sphere-maximizes-the-volume-enclosed-among-all-simple-clo | How do we prove that a sphere maximizes the volume enclosed among all simple closed surfaces of given surface area?
How do we prove that among all closed surfaces with a given surface area, the sphere is the one that encloses the largest volume, and not do it by cases?
so far I've tried is that I know the formula for the surface of the sphere and volume of sphere
-
You certainly cannot prove this result simply by examining a bunch of closed surfaces and comparing volume enclosed. There are infinitely many different closed surfaces, so you cannot test them all. The problem is not entirely trivial (compare with the 2-dimensional case, with closed simple curves, area, and length). – Arturo Magidin Jul 19 '11 at 20:02
The is a really good elementary discussion of this problem at cut-the-knot.org/do_you_know/isoperimetric.shtml – John M Jul 20 '11 at 19:03
Perhaps use the calculus of variations and the Euler-Lagrange formula. – asmeurer Nov 14 '12 at 6:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9269478917121887, "perplexity": 370.9013345523405}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043062723.96/warc/CC-MAIN-20150728002422-00094-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://www.maplesoft.com/support/help/Maple/view.aspx?path=LinearAlgebra/StronglyConnectedBlocks | LinearAlgebra - Maple Programming Help
Home : Support : Online Help : Mathematics : Linear Algebra : LinearAlgebra Package : Solvers : LinearAlgebra/StronglyConnectedBlocks
LinearAlgebra
StronglyConnectedBlocks
compute the strongly connected blocks of a square Matrix
Calling Sequence StronglyConnectedBlocks(M, opts)
Parameters
M - n x n square matrix with some sparsity opts - options controlling the output
Description
• The StronglyConnectedBlocks(M) function computes and returns a list containing the nonzero strongly connected blocks contained in the input Matrix M, i.e. [M_1, ..., M_r]. The strongly connected blocks are square sub-matrices of the input Matrix after a sequence of row and column exchanges have been performed to minimize the size of the blocks. Note that these sub-matrices do not contain all the entries present in the input Matrix, but rather only those needed to compute the determinant, characteristic polynomial, or eigenvalues of the Matrix. In addition, any zero blocks are not output.
• In order for StronglyConnectedBlocks to provide a benefit, the input Matrix should have some sparsity. If the input Matrix is fully dense, the command will output a list containing only the input Matrix (without sparsity no break-down into blocks is possible).
• For an input n x n Matrix M, if we let m = sum( Row/ColumnDimension(M_i), i=1..r ) for the output Matrix list [M_1, ..., M_r], then m <= n, where the inequality is needed for any zero blocks contained in the matrix.
• The output blocks satisfy the following:
Determinant(M) is zero if m < n and product(Determinant(M_i), i=1..r) otherwise.
CharacteristicPolynomial(M,x) = x^(n-m) * product( CharacteristicPolynomial(M_i,x), i=1..r)
• The option returnsingular is true by default, but when set to false, the block matrices will not be formed when the input Matrix is singular (i.e. has zero blocks). This is useful for efficient computation of a determinant (as a zero block means the determinant is zero, so there is no sense in forming the block matrices).
• The option outputform is matrixlist by default, which returns a list of the strongly connected blocks in Matrix form as described above. If outputform is set to rowlist instead, it provides a list of the rows (columns) of the Matrix for each block.
Examples
> $A≔\mathrm{Matrix}\left(\left[\left[a,b,c,d,e\right],\left[f,g,h,i,j\right],\left[0,0,0,0,k\right],\left[0,0,0,v,w\right],\left[0,0,0,x,y\right]\right]\right):$
> $\mathrm{LinearAlgebra}:-\mathrm{StronglyConnectedBlocks}\left(A\right)$
$\left[\left[\begin{array}{cc}{y}& {x}\\ {w}& {v}\end{array}\right]{,}\left[\begin{array}{cc}{a}& {b}\\ {f}& {g}\end{array}\right]\right]$ (1)
> $\mathrm{LinearAlgebra}:-\mathrm{StronglyConnectedBlocks}\left(A,\mathrm{returnsingular}=\mathrm{false}\right)$
${0}$ (2)
> $\mathrm{LinearAlgebra}:-\mathrm{StronglyConnectedBlocks}\left(A,\mathrm{outputform}=\mathrm{rowlist}\right)$
$\left[\left[{5}{,}{4}\right]{,}\left[{3}\right]{,}\left[{1}{,}{2}\right]\right]$ (3) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238011240959167, "perplexity": 1114.2343934716987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257836399.81/warc/CC-MAIN-20160723071036-00322-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://codeforces.com/blog/entry/6815 | By RAD, 9 years ago, translation,
278B - New Problem
The total number of different strings of 2 letters is 262 = 676, but the total length of the input strings is no more than 600. It means that the length of answer is no more than 2. So just check all the strings of length 1 and 2.
277A - Learning Languages
Build bipartite graph with n nodes for employees and m nodes for languages. If an employee initially knows a language, than there will be an edge between corresponding nodes. Now the problem is simple: add the minimal number of edges in such a way, that all the n employees will be in the same connected component. Obviously, this number equals to the number of initially connected components, containing at least one employee, minus one. But there is one exception (pretest #4): if initially everyone knows no languages, we'll have to add n edges, because we can't add the edges between employees (remember that the graph is bipartite).
277B - Set of Points
For m = 3, n = 5 and m = 3, n = 6 there is no solution.
Let's learn how to construct the solution for n = 2m, where m ≥ 5 and is odd. Set up m points on a circle of sufficiently large radius. This will be the inner polygon. The outer polygon will be the inner polygon multiplied by 2. More precisely (1 ≤ i ≤ m):
If m is even, construct the solution for m + 1 and then delete one point from each polygon. If n < 2m, delete 2m - n points from the inner polygon.
Unfortunately, this solution doesn't work for m = 4, n = 7 and m = 4, n = 8.
Another approach is to set up m points on a convex function (for example, y = x2 + 107), and set up the rest n - m points on a concave function (for example, y = - x2 - 107). Take a look at rng_58's solution — 3210150.
277C - Game
At first, notice that horizontal and vertical cuts are independent. Consider a single horizontal line. It contains m unit segments. And in any game state it's always possible to decrease the number of uncut units as the player wants. Imagine, that she starts growing a segment from a border, increasing it's length by 1 at a time. Each time the total uncut length decreases by either 0 or 1. In the end it obviously reaches 0.
The same holds for vertical lines as well. So if there are no initial cuts, the game is a nim with n - 1 piles of m stones and m - 1 piles of n stones. Could be solved with simple formula.
Initial k cuts should be just a technical difficulty. For any vertical/horizontal line, which contains at least one of the cuts, it's pile size should be decreased by the total length of all segments on this line.
How to make a first move in nim: let res is the result of state (grundy function), and ai is the size of the i-th pile. Then the result of the game without i-th pile is . We want to replace ai with some x, so that . Obviously, the only possible . The resulting solution: find a pile for which , and decrease it downto .
277D - Google Code Jam
Suppose we have fixed set of inputs that we have to solve. Let's learn how to determine the optimal order. Obviously, Small inputs (and Large inputs with probFail = 0) won't fail in any case. It means that our penalty time is no less than submission time of last such safe'' inputs. So we will solve such inputs before all the others. Inputs with probFail = 1 are just a waste of time, we won't solve such inputs. Now we have only inputs with 0 < probFail < 1. Let i and j be two problems that we are going to solve consecutively at some moment. Let's check, if it is optimal to solve them in order i, j, or in reversed order. We can discard all the other inputs, because they don't affect on the relative order of these two.
(timeLargei + timeLargej)(1 - probFailj) + timeLargei(1 - probFaili)probFailj < (timeLargei + timeLargej)(1 - probFaili) + timeLargej(1 - probFailj)probFaili
- probFailj·timeLargej - timeLargei·probFailj·probFaili < - probFaili·timeLargei - timeLargej·probFaili·probFailj
timeLargei·probFaili(1 - probFailj) < timeLargej·probFailj(1 - probFaili)
timeLargei·probFaili / (1 - probFaili) < timeLargej·probFailj / (1 - probFailj)
Now we've got a comparator for sort, which will give us the optimal order. Note, that inputs with probFail = 0, 1 will be sorted by the comparator correctly as well, so it's not a corner case.
Let's return to the initial problem. First of all, sort problems with the optimal comparator (it's clear that any other order won't be optimal by time, and the score doesn't depend on the order). Calculate the DP: z[i][j] = pair of maximal expected total score and minimal expected penalty time with this score, if we've already decided what to do with the first i problems, and we've spent j real minutes from the contest's start. There are 3 options for the i-the problem:
1. skip: update z[i + 1][j] with the same expected values
2. solve the Small input: update z[i + 1][j + timeSmalli], the expected total score increases by scoreSmalli, and the expected penalty time increases by timeSmalli (we assume that this input is solved in the very beggining of the contest)
3. solve both inputs: update z[i + 1][j + timeSmalli + timeLargei], the expected total score increases by scoreSmalli + (1 - probFaili)scoreLargei, and the expected penalty time becomes timeSmalli + (1 - probFaili)(j + timeLargei) + probFaili·penaltyTime(z[i][j]), where penaltyTime(z[i][j]) is the expected penalty time from DP
The resulting answer is the best of z[n][i], (0 ≤ i ≤ t).
The expected total score could be a number around 1012 with 6 digits after decimal point. So it can't be precisely stored in double. And any (even small) error in calculating score may lead to completely wrong expected time (pretest #7). For example, you can multiply all the probabilities by 106 and store the expected score as integer number to avoid this error.
277E - Binary Tree on Plane
If there is no "binary" restriction, the solution is simple greedy. Each node of the tree (except the root) must have exactly 1 parent, and each node could be parent for any number of nodes.
Let's assign for each node i (except the root) such a node pi as a parent, so that ypi > yi and distance between i and pi is minimal possible. Renumerate all the nodes in order of non-increasing of y. Now it's clear that pi < i (2 ≤ i ≤ n). So we've just built a directed tree with all the arcs going downwards. And it has minimal possible length.
Let's recall the "binary" restriction. And realize that it doesn't really change anything: greedy transforms to min-cost-max-flow on the same distance matrix as edge's costs, but each node must have no more than 2 incoming flow units.
• +18
» 9 years ago, # | 0 There should be said a word about overlapping in the editorial on 278B. it s not obvious that it doesnt matter
» 9 years ago, # | 0 although there could be several different solutions for a particular problem, I guess using disjoint set data structure(including some modifications) would be an easier solution for 277A, isn't it?
• » » 9 years ago, # ^ | +10 The tutorial only talks about the idea, involving connected component, and of course you can use disjoint set to deal with it, while bfs,Floyd-Warshall,etc. also works.
» 9 years ago, # | 0 Please explain me, why in 277B there is no solution for n=6, m=3?{(0,0), (10,0), (0,10),(1,1),(3,2),(6,3)}
• » » 9 years ago, # ^ | ← Rev. 2 → 0 (0,0),(1,1),(3,2),(6,3),(10,0) makes a convex polygon of 5 vertices, which is more than m=3
• » » » 9 years ago, # ^ | 0 Thank you! :) I read it wrong...:(
» 9 years ago, # | +3 Could someone explain 227E — Binary Tree: how will the flow graph look like? Solution codes don't give me a hint :(
• » » 9 years ago, # ^ | +6 For each vertex of the tree you need to create two vertices, the first one having an incoming edge of capacity equal to 2 and cost equal to zero from the source, the second one must have an outcoming edge to the sink with unit capacity and, again, zero cost. Also, you should add edges from all vertices of the first cathegory to all vertices of the second cathegory (unit capacity and cost equal to the distance). If your maxflow equals to n — 1, then your answer is correct, else there is no suitable binary tree.
» 9 years ago, # | +5 in 278-B if the range in bigger like n<=10^6 and length of each string be atmost 1000 or 2000 then what will be the approach?
• » » 9 years ago, # ^ | 0 i think may be the use of hashing can be good.
• » » 9 years ago, # ^ | ← Rev. 2 → -11 There exists a very easy solution using suffix structures such as suffix tree, suffix automaton and suffix array. In the first two cases you need just to find a shortest and lexicografically smallest path, which is impossible to follow, in the last case you can build an LCP (largest common prefix) array and binsearch for the length of the answer (you can check whether the answer exists just by counting the number of different substrings of some length in linear time).UPD. I'd advise you to read yourself about these structures (you should start with suffix array).
• » » » 9 years ago, # ^ | ← Rev. 2 → +16 A solution involving a suffix structure is by definition not "very easy" ;)But consider all strings of length <= 5. There are over 11 million of them, and only around under 5 million of these can be substrings of any string from a collection with total size under 1000000. So just mark off the used substrings, and then pick the smallest unused one.
• » » » » 9 years ago, # ^ | +10 By the phrase "very easy" I meant: you don't need to think a lot to guess such a solution.But your solution is great and "very easy" in all means:).
» 9 years ago, # | 0 Hi,i'm solved Div 1. B too. But i cant understand how this problem check program work?..i'm very intrested of this..can sb tell me ?
» 9 years ago, # | 0 can anyone give me some proof according to problem 277B — Set of Points about rng_58's solution ?
» 9 years ago, # | 0 ah, maybe it is a little silly...I have a little problem to compute the expected with penalty. Denote the corresponding var with ti, pi, then we have 4 situations: probability penalty i WA, j WA pi*pj 0 i AC, j WA (1-pi)pj ti i WA, j AC pi(1-pj) ti+tj i AC, j AC (1-pi)(1-pj) ti+(ti+tj) then sum it up, it is ti(1-pi)+(ti+tj)(1-pj), not ti(1-pi)pj+(ti+tj)(1-pi). where am I wrong?
• » » 9 years ago, # ^ | +3 Read the problem statement: "By the Google Code Jam rules the time penalty is the time when the last correct solution was submitted."
• » » 9 years ago, # ^ | ← Rev. 5 → +3 Read problem statement more carefully: By the Google Code Jam rules the time penalty is the time when the last correct solution was submitted.so, the last line i AC, j AC (1-pi)(1-pj) ti+tj
• » » 9 years ago, # ^ | 0 ahhh, I thought it would be for each problem solved. many thx to all.
» 4 years ago, # | 0 https://ideone.com/e.js/xhi3fi why my code is getting runtime error for problem 277A
» 3 years ago, # | 0 Neat and Easy disjoint set union using solution 277A — Learning Languageshttps://github.com/joy-mollick/Problem-Solving-Solutions/blob/master/Codeforces-277A%20-%20Learning%20Languages.cpp
» 2 years ago, # | 0 Why in 277D GOOGLE CODE JAM, we are not doing time penalty calculation in an integer as doing the calculation in double may lead to precision error. Is it because the time is only 1560? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6151232123374939, "perplexity": 702.4007515275697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00601.warc.gz"} |
https://dsp.stackexchange.com/questions/64342/the-complexity-of-such-function-run-in-matlab | # The complexity of such function run in Matlab
The below function is representing an algorithm, so how can I get its complexity? I don't mean the time of running by using the tic .. toc, I mean how many operation (Additions and multiplications) are performed in this loop.
for times=1:m;
for col=1:N;
product(col)=abs(T(:,col)'*r_n);
end
[val,pos]=max(product);
Aug_t=[Aug_t,T(:,pos)];
T(:,pos)=zeros(M,1);
aug_y=(Aug_t'*Aug_t)^(-1)*Aug_t'*Yy;
r_n=Yy-Aug_t*aug_y;
pos_array(times)=pos;
end
Size of parameters, m = 256 , N = 256, T= [256,256] and M = [256,1]
You should know what each operator (i.e., *) and each called function (i.e. product) does. Then add up those operations. For instance, I'm pretty sure that in Matlab, the way you're building up Aug_t means that Aug_t' * Aug_t generates a vector dot-product, so for each element in Aug_t there's a multiply-accumulate. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592966794967651, "perplexity": 1825.5759002975522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038078021.18/warc/CC-MAIN-20210414185709-20210414215709-00149.warc.gz"} |
https://www.physicsforums.com/threads/bose-equilibrium-distribution-and-atomic-units.855304/ | # Bose Equilibrium Distribution and Atomic Units
Tags:
1. Feb 2, 2016
### Raptor112
1. The problem statement, all variables and given/known data
For my project I need to compute the average the number of photons given by the expression:
$\bar{n}= \frac{e^{-\bar{h}\omega/\kappa T}}{1-e^{-\bar{h} \omega / \kappa T}}$
where $\kappa$ is the Boltzmann constant and $\omega$ is the oscillator frequency. For the Hamiltonian in my project simulation, $\bar{h} =1$ so how would $\bar{n}$ be expressed?
2. Relevant equations
Is it as simple as $\bar{h} =1$ in the expression of $\bar{n}$ so:
$\bar{n}= \frac{e^{-\omega/\kappa T}}{1-e^{\omega / \kappa T}}$
but then doesn't the argument of the exponential has dimensions, as opposed to being dimensionless which is what it's supposed to be?
2. Feb 2, 2016
### Staff: Mentor
You have to express all quantities in atomic units. For instance, ω will be in units of the inverse of the atomic unit of time. There is no atomic unit of temperature, so T will still be in kelvin, but you have to calculate the correct value for the Boltzmann constant.
Last edited: Feb 2, 2016
3. Feb 2, 2016
### Raptor112
According to wikipedia it's just one by definition:
https://en.wikipedia.org/wiki/Boltzmann_constant
4. Feb 2, 2016
### Staff: Mentor
Draft saved Draft deleted
Similar Discussions: Bose Equilibrium Distribution and Atomic Units | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173685312271118, "perplexity": 932.8675592481362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948599156.77/warc/CC-MAIN-20171217230057-20171218012057-00631.warc.gz"} |
https://codegolf.stackexchange.com/questions/99155/polyglot-anagrams-cops-thread/99202 | Your challenge is to choose an OEIS sequence and write two full programs in two different languages that produces that nth item in the sequence when given an n via STDIN, or an other forms of standard input, where n is any positive number. However your two programs must be anagrams, meaning each can be rearranged from the other's letters.
Programs must output the decimal of the number followed by optional whitespace to STDOUT. Programs may output to STDERR however it should be ignored and if the hidden case does so it must be clearly stated that this is the case.
If you wish you may also output by character code. However if you do so in your hidden solution you must state such in the body of your submission.
You will then present the OEIS number, the source code for and the name of one language it is in.
Robbers will crack your submission if they find an anagram of the original submission that runs in a language other than the one you already presented. To crack an answer they must only find any language and program which produces the sequence and is an anagram of the original, not necessarily the answer the you were thinking about.
Thus you are incentivized to make it as hard as possible to find any language that does the task using their list of symbols.
## Scoring
This is so the shortest un-cracked program is the winner.
## Languages
Languages will be considered different if the two proposed solutions do not complete the task in both languages. This will include different versions of the same language as long as neither the cop's solution or the robber's solution produce the correct output in the other's language.
i.e. If the there are two solutions 1 and 2 that are in language A and B respectively solution 2 must not produce the correct output in language A and solution 1 must not produce the correct output in language B.
# Safety
Once your submission has been uncracked for a week you may post your solution and declare your post safe. If after a week you choose not to post a solution your answer may still be cracked.
• To browse through random OEIS sequences for ideas, go to oeis.org/webcam – mbomb007 Nov 9 '16 at 18:15
• How would it work with languages that like to use flags to the interpreter, such as perl? Are they disqualified? Are flags counted as part of the code? Are flags "free" (not included in code or divulged at all)? – Emigna Nov 9 '16 at 19:19
• Can the hidden program exit with an error (after producing the output)? Should that be indicated in the answer? – Luis Mendo Nov 9 '16 at 19:34
• Not sure if this would be helpful to anyone else but this highlights any remaining missing characters or any duplicated ones: codepen.io/anon/pen/BQjxRK – Dom Hastings Nov 11 '16 at 8:12
• It'd be nice if there was a stack snippet to show uncracked answers, oldest first. – mbomb007 Nov 16 '16 at 15:57
# Python 2, 118 bytes, A042545Cracked
i=input();s=1/(801**.5-28);a=[0,1]
for p in range(i):a+=[a[-2]+a[-1]*int(s)];s=1/(s-int(s))
print a[i]#,,,.//000fhlmo|
I didn't feel like implementing a trivial sequence, so I decided to go with my PPCG user ID. I wrote this in the other language first, which should give you a clue about what that language is, though I'd bet 100 dollars that this will be cracked in a golfing language before it's cracked in the intended other language.
Note: Due to floating-point precision errors, this is only accurate up to an input of 14. The intended solution is the same way.
## Intended solution, JavaScript (ES7)
for(i=prompt(),s=1/(801**.5-28),a=[1,0];i--;s=1/(s-n))
n=s|0,a.unshift(a[1]+a[0]*n);
Works in pretty much the same way as the Python solution, though the sequence is stored largest-first rather than smallest-first due to the fact that JS does not support negative indexing.
• I can't get the case of A042545(15) to work. OEIS says that it is 53000053, but your program says that it is 27666361 (at least on my machine). – boboquack Nov 10 '16 at 22:38
• @boboquack The output for 16 is actually 53000053, but after that there doesn't seem to be any matching terms. I wonder why... – ETHproductions Nov 11 '16 at 1:33
• Maybe a floating point error that gets progressively worse? – boboquack Nov 11 '16 at 1:45
• Cracked. – Dennis Nov 11 '16 at 6:11
• Dammit, I was right! :( This was as close as I got: gist.github.com/dom111/bd9be933cb8ccd0e303601bf73d525b6 Thanks for the workout anyway, I needed |() but just couldn't get them! – Dom Hastings Nov 11 '16 at 14:34
# Brain-Flak, 24 bytes, A000290, Safe
Yet another square solution. This time there is nothing but parentheses
({(({}[()])()){}[()]}{})
The intended solution was in Brain-Flueue, a version of brain-flak that uses queues instead of stacks. The program was:
({(({})[()]){}}{})[()()]
The languages are considered distinct because neither of the two programs halt when run in the other language.
• This would work in Glypho if input/output using character code is allowed... – jimmy23013 Nov 9 '16 at 21:51
• @jimmy23013 what is Glypho? – Wheat Wizard Nov 9 '16 at 21:51
• esolangs.org/wiki/Glypho ((([{}{}{]]}[)))((){))(} – jimmy23013 Nov 9 '16 at 21:52
• @WheatWizard If it's cracked, can you edit the answer to show that? – mbomb007 Nov 11 '16 at 16:52
• @mbomb007 It is not cracked – Wheat Wizard Nov 11 '16 at 16:59
# Python 2, 38 bytes, A000290Cracked by Emigna
def e(X):return X*X
print e(input())##
This will probably be very easy to crack. I'm mostly posting this as a starting point.
Orignial solution in CJam:
ri:XX*e#def ()return X
e#pnt (input())
• Cracked – Emigna Nov 9 '16 at 15:56
# CJam, 7 bytes, A005843Cracked!
ri2*e#^
This is a basic 2*n sequence.
Explanation:
r e# read input
i e# convert to integer
2* e# multiply it by 2
e#^ e# this is a comment that is ignored by the interpreter
Try it online!
# Original Solution, Carrot
#^i*2er
Carrot is an esolang created by me. I have stopped developing it a long time ago. The reason I chose this is because I hoped that it would be hard for other languages to comment out the unnecessary parts of the code.
Explanation:
#^ This pushes the input to the stack (anything before the ^ is the stack)
i Convert stack to integer
*2 Multiply it by 2
er These are ignored because they are not Carrot commands
Implicit output
Try it online!
• ri#e^*2 would work in Jelly if * were multiplication instead of exponentiation. So close... – ETHproductions Nov 9 '16 at 17:31
• Cracked :). – Adnan Nov 9 '16 at 19:38
• I had everything but the r in pyth. Exciting to see the original code for this one. – Emigna Nov 9 '16 at 20:34
• @Emigna I added the original code – user41805 Nov 10 '16 at 7:45
# 2sable, 15 bytes, A000290, Cracked!
Hopping on the same n2 train :p.
*?"!#$&<=@\^{|} Try it online! • Almost looks like Malbodge with the symbols in order like that :P – ETHproductions Nov 9 '16 at 17:05 • Cracked :D – Conor O'Brien Nov 9 '16 at 23:55 • @ConorO'Brien Hahaha, I was pretty certain this wasn't possible in Jelly, Pyth, 05AB1E and MATL. Nice job! :) – Adnan Nov 10 '16 at 10:31 # Brain-Flak, 44 bytes, A000290Cracked <({({})({}[()])}{}))()()()turpentine/"*"*4splint> Try it online! ## Original solution, Python 2 print(input()**(len(set("{}{}{}[]()<>"))/4)) • Cracked :) – Adnan Nov 9 '16 at 16:00 • Now I'm really curious. What was your original intended solution? I can tell it's python because I see len set input and print (and because I know you like python) but I can't figure out how that squares a number – James Nov 9 '16 at 16:02 • @DrMcMoylex added – Wheat Wizard Nov 9 '16 at 16:05 # Excel, 12 bytes, A000012Cracked =IF(1=1,1,1) Maybe not the toughest, but a fun one to crack. • Cracked – Emigna Nov 9 '16 at 20:31 # Python 2, 25 bytes, A000583, cracked Y=input("");printY**4,X This program exits with an error after printing the output. My hidden code (substantially different from the cracked solution!): ## Actually, 25 bytes 4,n*Y")ii(*nppruttY;="X Try it online! Explanation: 4,n*Y")ii(*nppruttY;="X 4,n input, repeat 4 times *Y do * until the stack stops changing (fixed-point combinator) ")ii(*nppruttY;="X push this string and immediately pop and discard it • Cracked – Adnan Nov 10 '16 at 11:40 # Python, 118 bytes, A042545, Safe i=int(input());s=pow(801.0,0.5);a=[0|0,1] for Moshprtflmah in range(i):s=1./(s%1);a+=[a[-2]+a[-1]*int(s)]; print(a[i]) This time it works in both 2 and 3. And there's no comments! What will you do? Note: As with the old solution, this loses precision after the first 15 terms due to floating-point arithmetic errors. ## Intended solution, JavaScript (ES6) giiiiinnnnprt: i=prompt([n=+2]);s=Math.pow(801,.5);for(a=[1,0];i--;a.unshift(a[1]+a[0]*(s|0)))s=1/(s%1) alert(a[0]) Though I kept several old versions, I somehow managed to lose this copy, but fortunately piecing it together from the others wasn't too hard. I see now that I had an extraneous prt in both programs that could have been golfed out. Oh well. • I thought I would remind you that you can mark this as safe if you wish. – Wheat Wizard Jan 1 '17 at 23:39 • @WheatWizard Thanks, I've added my intended solution. – ETHproductions Jan 18 '17 at 18:28 # Python 2, 124 bytes, A144945, [Safe] Cracking this would have earned you a 500 rep bounty! Too late! Number of ways to place 2 queens on an n X n chessboard so that they attack each other. I hope it's not too easy. I arranged my code so the whitespace is clearly visible. Those are spaces and newlines only. Note: intended solution outputs via character code n=input();print((3+2)*n*n+~0*6*n+1)*n/3; +6; +7+7+7+7+7+7+7+7+7;+++++++++++++++9+9*9*9 Try it online ### Intended Solution, Headsecks: r2=ni***p** ( p((0 ;3+++3;+;/ ) i+++nn +)7 n n+++ 17+~ +)7;97++++7 69+9n+ ++7+n 69 +7+ ++7 **7+++tut This is equivalent to the following BF program: >>,[->>>+>>>+>>>+++++<<<<<<<<<]>>>->>>>>>-<<<[[>+<-]>[>>[<<<+>>+>-]<[>+<-]<-]<<<<]>>+++>[-<-[<+<<]<[+[->+<]<+<<]>>>>>]<<<.,. # Fuzzy Octo Guacamole, 26 bytes, A070627 [Safe] 49++*5^pm#]%:"?:.=:#,|"1:@ Test cases: 1 -> 1 3 -> 23 5 -> 1 Solution: ^::::|*?1=#@]","%.#49++5pm Works in Magistack. • Hello! Just reminding you that this answer can be marked as safe. No need to rush but no one has cracked it in a week. Good job, I am eager to see a solution – Wheat Wizard Nov 17 '16 at 19:51 • Cool, I'll do it and the other once when I get home. – Rɪᴋᴇʀ Nov 17 '16 at 21:30 # Pyth, 75 bytes, A004526Cracked, milk More of a playful test than anything, but: /Q/////////////////****22222 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2;;;;; Try it online! Milk's solution (Convex): 2/Q2 2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2*2*2*; 2*; 2; 2; 2; Try it online Intended solution (///): /*///;2/;// ///22/Q//2;///;//;***2222222222222222222222 Try it online Takes input in the form of 2's before the last semicolon and outputs the correct number of Q's. The sequence is 0-indexed (i.e. 0 is 0, 1 is 0, 2 is 1, ...) Found slight syntactical mistakes in the ///, so edited all solutions. • Cracked – milk Nov 10 '16 at 7:32 # MATL, 7 bytes, A000217, cracked :sp{1}x The sequence is n(n+1)/2 (triangular numbers), starting at input n=1 as specified by the challenge: 1, 3, 6, 10, ... (Output for input 0 is not guaranteed to be the same in the two programs). The program in the other language exits with an error (after producing the correct output in STDOUT). Try it online! : % Push [1 2 ... n], where n is implicit input s % Sum of that array. Gives the desired result p % Product of that. Gives the same number {1} % Push a cell array containing number 1 x % Delete it • Cracked! – Steven H. Nov 10 '16 at 3:38 • @StevenH.Well done! My original solution was x:ps{}1 – Luis Mendo Nov 10 '16 at 7:54 # Python 2, 37 bytes, A000290Cracked print(input()**(1+1)) "'10°3¢','m'" • Cracked – Emigna Nov 9 '16 at 16:31 # Python 3, 27 bytes, A000012, Cracked No input this time! if 1: if 1: print( '1' ) The indents are tabs, but not to save bytes - they are required for whitespace. I don't think it needs a TIO link or explanation! (Probably won't take long to crack in some way) Intended answer (Whitespace): -Start- if1:if1:print('1') -End- (Start and end not part of the program) Sorry, I forgot to add that it prints to STDERR: Try it online! • – milk Nov 10 '16 at 22:11 • I feel like this is supposed to be Whitespace, but that would print an error to STDERR as it's lacking the required linefeeds to end in [LF][LF][LF]. – Martin Ender Nov 10 '16 at 22:12 • @milk Not you again! :D – boboquack Nov 10 '16 at 22:16 • @boboquack It does work, but it does print to STDERR (which you can see by activating the Debug mode on TIO), and the challenge says that answers need to specify whether the hidden language writes to STDERR. – Martin Ender Nov 10 '16 at 22:23 • Cracked again. – Oliver Ni Nov 11 '16 at 5:01 # Fuzzy Octo Guacamole, 11 bytes, A001844 [Safe!] hha02^d+**+ A crack that sort-of works is dh*h++^2*0a, in Pyth. It's not the right output format though. My code is still out there! (and it's not in Pyth) Test Cases: 0 -> 1 1 -> 5 Solution: ^++d0ah*2*h In Jolf. • I swear, this looks like it was made for Jolf, but I just can't figure out that d... – ETHproductions Nov 10 '16 at 2:14 • Cracked (I hope...) – ETHproductions Nov 10 '16 at 2:33 • @ETHproductions ah, nice. Not sure if it counts though? See edit. – Rɪᴋᴇʀ Nov 10 '16 at 2:56 • My code prints a newline, but no space. Same with the valid code. – Rɪᴋᴇʀ Nov 10 '16 at 2:57 • @EasterlyIrk Despite what I said I do not consider the answer provided a crack. I everything I said still holds true however I do not consider leading whitespace valid output and I will amend the question to reflect that. – Wheat Wizard Nov 10 '16 at 3:58 # WinDbg, 39 bytes, A000007Cracked by jimmy23013 ~e.block{j(0>=@$t0)?@$t0+(1<7);??0}t":" The difficult sequence of 0**n. Input is done by passing a value in the pseudo-register $t0.
My original solution was C#:
@t=>(object)(0<@t?0:1)??"$$700~lk.{}+"; # JavaScript ES6, 38 bytes, A000290, Cracked J=>eval(Array(J).fill(J).join+)|2-2; This square train is pretty nifty, but isn't going anywhere fast. (Get it? square train? as in, wheels? no? okay, fine. critics.) Intended answer: Reticular (Try it online!), in2Jo;=>eval(Array(J).fill(J).j+)|-2 in take input, convert to number 2J raise to the second power o; output and terminate; ignores following chars • Your code is trying to convince me that the other language is J :P – ETHproductions Nov 9 '16 at 23:50 • – Wheat Wizard Nov 10 '16 at 0:37 # MATL, 13 bytes, A002275 Cracked! i:"@ax'1'] v! Try it online! Explanation: i % Grab input : % Push (range(1,input)) " % For each element in this range: @ % Push it a % Is is truthy? x % Delete it '1' % Push '1' ] % End loop v % Join all of these '1's together ! % Transpose and display • I feel like this is in Vim but for the life of me I can't crack it – Wheat Wizard Nov 9 '16 at 19:47 • Cracked :) – Wheat Wizard Nov 10 '16 at 1:13 # 2sable, 13 bytes, A002378, Cracked! Hoping I didn't miss something. Computes a(n) = n × (n + 1): >*?"!&)<=@\\} My version: ?"\>@&*})<\=! Or the unfolded version: ? " \ > @ & * } ) < \ = ! . . . . . . Note that the > in the top-left corner is unused (except for the 2sable program). I did this to confuse the robbers (but that obviously didn't work haha). Try it online! • Cracked. :) – Martin Ender Nov 10 '16 at 12:46 • @MartinEnder Nice job! I'll update my answer with the original submission :). – Adnan Nov 10 '16 at 13:08 # 2sable, 15 bytes, A087156 D1QiA0*<}.;2->> Try it online The sequence of non-negative numbers, except for 1. # Befunge 93, 14 bytes, A121377, Cracked by milk! &52* %68*+ .@Q Fun fact: The intended solution to this is the first time I've ever used that language. My solution in Pyth. &@ print an error, but that goes to STDERR which according to the OP is ignored. +%Q*5 2*6 8.&@ • Cracked – milk Nov 11 '16 at 6:55 # Python 2, 35 bytes, A048735, Safe print(lambda u:u&u<<1)(input())>>1 The original solution was in my own programming language Wise. :<<>&>print(lambda uuu1)(input())1 Most of the characters are irrelevant no-ops. The important characters are the first six. : creates two copies of the first item on the stack. <<> bit shifts twice to the left and once to the right which is equivalent to bit shifting once to the left. & takes the bitwise and of the top and second item (the original and the bit shifted copy). Lastly > bit shifts once to the right. # 05AB1E, 5 bytes, A000012, Safe ;1? Sequence of 1's. Try it online ### Intended Solution: Arcyou 1;$$?
Try it online. I couldn't find documentation for this language, so don't have an explanation of how it works exactly.
• Stupid semicolon... I could almost use Retina, but I can't have both the 1 and ;. – mbomb007 Nov 11 '16 at 16:47
• This has not been cracked – Wheat Wizard Nov 12 '16 at 17:44
• It looks like this answer can now be marked as safe. Since I spent quite a deal of time trying to crack this one I am quite eager to see the intended answer. – Wheat Wizard Nov 17 '16 at 19:59
• Shouldn't this answer be marked as "accepted" now? – mbomb007 Feb 9 '17 at 16:49
# Python 2, 70 Bytes, A000217Cracked!
I have a feeling this won't be cracked in the language I used for the other version, we will see :)
o=input()
v=0
i=1
while o:
v+=i
i+=1
print v
#| d00->1@@@++-^,,[
I realized afterwards I had incorrectly obfuscated the code (it doesn't change the posted answer's validity). Here's the code I started with in Haystack:
v
0
v
0
i
1
-
> d0[v
^-1@+@d+1@?,,o|
• – jimmy23013 Nov 9 '16 at 20:47
# 05AB1E, 9 bytes, A000042Cracked!
1×,1*-^$) This is the Unary representation of natural numbers (OEIS). So if the input was 3, for example, then output would be 111. Explanation: # implicit input 1 # pushes 1 to the stack × # pushes "1" × (the input) , # outputs the stack 1*-^$) # irrelevant
Try it online!
# Original Solution, Carrot
1^*$-1×^) Explanation 1^ Push "1" to the stack * Multiply the string by$-1 ...the input (as an integer) minus 1 times
×,) Ignored by the interpreter
The * multiplies the string by (n+1) times, so that a^*3 results in aaaa and not aaa. So that is why I subtracted 1 from the input.
Only now I realise that the ) has been irrelevant in both the languages :D
Try it online!
• Cracked. – Oliver Ni Nov 10 '16 at 17:39
• What was the original hidden language? – Wheat Wizard Nov 10 '16 at 17:53
• @WheatWizard Whoops, thanks for finding that. I added the language now – user41805 Nov 10 '16 at 17:54
# J, 2 bytes, A000290, Cracked
*~
Well, might as well start going for those two-byters. Yields n × n, or n2.
## intended solution, Jolf, 2 bytes
*~
Well. Yeah. This is my own language and I think it works because ~ looks for an extended character, but doesn't find one, so it just ignores it. ¯\_(ツ)_/¯ Oops.
• Cracked! – Steven H. Nov 11 '16 at 1:27
• @StevenH. nice job! I edited with intended solution. – Conor O'Brien Nov 11 '16 at 1:30
• @ConorO'Brien your intended solution was not a valid solution. In order for languages to be considered distinct neither the original or the solution can be a polyglot in both languages – Wheat Wizard Nov 11 '16 at 1:31
• @WheatWizard Oh. That's awkward. – Conor O'Brien Nov 11 '16 at 1:33
# ABCR, 24 bytes, A023443Cracked!
70: Quit xi. Classy queue!
There's a bunch of no-ops. Calculates n - 1.
# 05AB1E, 8 bytes, A000042, Cracked
F1}J,(1&
Test cases:
1 -> 1
2 -> 11
3 -> 111
## Ouroboros, 6 bytes, A000012Cracked
)49.o(
Always outputs 1.
• Cracked. – ETHproductions Nov 11 '16 at 17:41
• @DLosc sorry for posting in the wrong place but you haven't been on chat in ages ;_; – ASCII-only Nov 12 '16 at 0:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24772995710372925, "perplexity": 2262.416897989115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00287.warc.gz"} |
https://www.physicsforums.com/threads/can-we-measure-time-and-accelaration-at-the-same-time.185383/ | # Can we measure time and accelaration at the same time?
1. Sep 18, 2007
### goksen
1. The problem statement, all variables and given/known data
can we measure time and accelaration at the same time?
what exactly acceleration operator is?
2. Relevant equations
heisenberg's uncertainity principle
3. The attempt at a solution
I guess acc. operator is dp/dt i.e. $$\frac{\partial^2 }{\partial t \partial x}$$(ignoring constants)
and $$\delta a \delta t$$ is <p>
which say we may not measure
2. Sep 18, 2007
### dextercioby
To the first question one cannot answer in the context of quantum mechanics. The accelerator operator is
$$\hat{a}(t)=\frac{1}{m}\frac{d\hat{p}(t)}{dt}$$
in the Heisenberg picture. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9874891042709351, "perplexity": 3308.0877902378024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720153.61/warc/CC-MAIN-20161020183840-00270-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://quantumcomputing.stackexchange.com/questions/4163/how-scalable-are-quantum-computers-when-measurement-operations-are-considered/4168 | # How scalable are quantum computers when measurement operations are considered?
From a high-level point of view, given a quantum program, typically the last few operations are measurements.
In most cases, in order to extract a useful answer, it is necessary to run multiple times to reach a point where it is possible to estimate the probability distribution of the output q-bits with some level of statistical significance.
Even when ignoring noise, if the number of q-bits of the output increases, the number of required runs will increase too. In particular, for some output probability distributions it is likely that the number of required samples can be very high to obtain some sensible statistical power.
While increasing the number of q-bits seems to indicate an exponential growth, the number of measurements required (each run taking x nanoseconds) seem to limit how will quantum computation do scale.
Is this argument sensible or there is some critical concept that I am missing? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8992127180099487, "perplexity": 484.74265626603693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00274.warc.gz"} |
https://wattsworth.net/joule/cli.html | # Command Line Interface¶
The command line interface (CLI) can be used to interact with the local Joule service or any Joule node on the network. Arguments can often be supplied in both short and long forms, and many are optional. The documentation uses the following conventions:
• An argument that takes an additional parameter is denoted -f FILE.
• The syntax -f FILE, --file FILE indicates that either the short form (-f) or long form (--file) can be used interchangeably.
• Square brackets ([]) denote optional arguments.
• Pipes (A|B) indicate that either A or B can be specified, but not both.
• Curly braces ({}) indicate a list of mutually-exclusive argument choices.
Usage
joule ~ [--help] [-u] [-v] {module, stream, folder, data} ...
Arguments
-u URL, --url URL: default http://localhost:8088 Joule node URL. Must be specified before the subcommand --help: Print a help message with usage information on all supported command-line arguments. This can also be specified after the subcommand in which case the usage and arguments of the subcommand are shown instead -v, --version: print the joule CLI version subcommand: The subcommand followed by its arguments. This is required
## base¶
### info¶
show information about the joule service
Usage
joule info ~ [--help]
Arguments
none
Example
### info¶
Usage
joule module info ~ [--help] NAME
Arguments
NAME: module name (from configuration file)
Example
## stream¶
### list¶
show the contents of the stream database
Usage
joule stream list ~ [-l] [-s] [--help]
Arguments
-l, --layout: include stream layout -s, --status: include stream status
Example
### destroy¶
Completely remove the stream at the specified path
Usage
joule stream destroy ~ PATH
Arguments
PATH: path of stream to destroy
### move¶
Move a stream into a new folder.
Usage
joule stream move ~ PATH DESTINATION
Arguments
PATH: path of stream to move DESTINATION: path of destination folder
Notes
The folder will be created if it does not exist. A stream cannot be moved into a folder which has a stream with the same name
## folder¶
### move¶
move a folder into a new parent folder.
Usage
joule folder move ~ PATH DESTINATION
Arguments
PATH: path of folder to move DESTINATION: path of new parent folder
Note:
The parent folder will be created if it does not exist. A folder cannot be moved into a parent folder which has a folder with the same name
### remove¶
remove a folder
Usage
joule folder remove ~ [-r] PATH
Arguments
-r, --recursive: remove subfolders PATH: path of folder to remove
Notes
If the folder has subfolders, add -r to recursively remove them. The folder and its subfolders may not have any streams, if they do more or remove them first.
## data¶
### copy¶
copy data between streams
Usage
joule data copy ~ [-s] [-e] [-U] SOURCE DESTINATION
Arguments
-s, --start: timestamp or descriptive string, if omitted start copying at the beginning of SOURCE -e, --end: timestamp or descriptive string, if omitted copy to the end of SOURCE -U, --dest-url: destination URL if different than source (specify source URL with top level -u flag)
extract data from a stream
Usage
joule data read ~ [-s] [-e] [-r|-d] [-b] [-m] PATH
Arguments
-s, --start: timestamp or descriptive string, if omitted start reading at the beginning -e, --end: timestamp or descriptive string, if omitted read to the end -r: limit the response to a maximum number of rows (this will produce a decimated result) -d: specify a particular decimation level, may not be used with -r, default is 1 -b: include min/max limits for each row of decimated data -m: include [# interval break] tags in the output to indicate broken data intervals PATH: path of stream to read
Example
# write the last hour of data from /demo/random into data.txt $> joule data read /demo/random --start="1 hour ago" > data.txt$> head data.txt 1538744825370107 0.383491 0.434531 1538744825470107 0.317079 0.054972 1538744825570107 0.572721 0.875278 1538744825670107 0.350911 0.680056 1538744825770107 0.839264 0.189361 1538744825870107 0.259714 0.394411 1538744825970107 0.027148 0.963998 1538744826070107 0.828187 0.704508 1538744826170107 0.738999 0.082351 1538744826270107 0.828530 0.916019
### remove¶
remove data from a stream
Usage
joule data remove ~ [--from] [--to] STREAM
Arguments
-s, --start: timestamp or descriptive string, if omitted start reading at the beginning of SOURCE -e, --end: timestamp or descriptive string, if omitted read to the end of SOURCE | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4559093713760376, "perplexity": 14592.542569948908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999948.3/warc/CC-MAIN-20190625213113-20190625235113-00186.warc.gz"} |
https://arxiv.org/abs/1501.00855 | gr-qc
# Title:Closure constraints for hyperbolic tetrahedra
Abstract: We investigate the generalization of loop gravity's twisted geometries to a q-deformed gauge group. In the standard undeformed case, loop gravity is a formulation of general relativity as a diffeomorphism-invariant SU(2) gauge theory. Its classical states are graphs provided with algebraic data. In particular closure constraints at every node of the graph ensure their interpretation as twisted geometries. Dual to each node, one has a polyhedron embedded in flat space R^3. One then glues them allowing for both curvature and torsion. It was recently conjectured that q-deforming the gauge group SU(2) would allow to account for a non-vanishing cosmological constant Lambda, and in particular that deforming the loop gravity phase space with real parameter q>0 would lead to a generalization of twisted geometries to a hyperbolic curvature. Following this insight, we look for generalization of the closure constraints to the hyperbolic case. In particular, we introduce two new closure constraints for hyperbolic tetrahedra. One is compact and expressed in terms of normal rotations (group elements in SU(2) associated to the triangles) and the second is non-compact and expressed in terms of triangular matrices (group elements in SB(2,C)). We show that these closure constraints both define a unique dual tetrahedron (up to global translations on the three-dimensional one-sheet hyperboloid) and are thus ultimately equivalent.
Comments: 24 pages Subjects: General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph) Journal reference: Class.Quant.Grav. 32 (2015) 13, 135003 DOI: 10.1088/0264-9381/32/13/135003 Cite as: arXiv:1501.00855 [gr-qc] (or arXiv:1501.00855v1 [gr-qc] for this version)
## Submission history
From: Christoph Charles [view email]
[v1] Mon, 5 Jan 2015 13:34:13 UTC (154 KB) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8453622460365295, "perplexity": 1442.2975086334702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00193.warc.gz"} |
http://isiarticles.com/article/25963 | دانلود مقاله ISI انگلیسی شماره 25963
عنوان فارسی مقاله
مرزهای دقیق برای تجزیه و تحلیل حساسیت سازه با پارامترهای نامشخص ولی محدود شده
کد مقاله سال انتشار مقاله انگلیسی ترجمه فارسی تعداد کلمات
25963 2008 15 صفحه PDF سفارش دهید محاسبه نشده
خرید مقاله
پس از پرداخت، فوراً می توانید مقاله را دانلود فرمایید.
عنوان انگلیسی
Exact bounds for the sensitivity analysis of structures with uncertain-but-bounded parameters
منبع
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Applied Mathematical Modelling, Volume 32, Issue 6, June 2008, Pages 1143–1157
کلمات کلیدی
حساسیت - عدم قطعیت - روش تجزیه و تحلیل فاصله - مقادیر ویژه سازه -
پیش نمایش مقاله
چکیده انگلیسی
Based on interval mathematical theory, the interval analysis method for the sensitivity analysis of the structure is advanced in this paper. The interval analysis method deals with the upper and lower bounds on eigenvalues of structures with uncertain-but-bounded (or interval) parameters. The stiffness matrix and the mass matrix of the structure, whose elements have the initial errors, are unknown except for the fact that they belong to given bounded matrix sets. The set of possible matrices can be described by the interval matrix. In terms of structural parameters, the stiffness matrix and the mass matrix take the non-negative decomposition. By means of interval extension, the generalized interval eigenvalue problem of structures with uncertain-but-bounded parameters can be divided into two generalized eigenvalue problems of a pair of real symmetric matrix pair by the real analysis method. Unlike normal sensitivity analysis method, the interval analysis method obtains informations on the response of structures with structural parameters (or design variables) changing and without any partial differential operation. Low computational effort and wide application rang are the characteristic of the proposed method. Two illustrative numerical examples illustrate the efficiency of the interval analysis.
مقدمه انگلیسی
The purpose of sensitivity analysis is to work out the structure response or the variety of performance through the transformation of parameters or designing variables [1]: equation(1) u=u(b1,b2,…,bn),u=u(b1,b2,…,bn), Turn MathJax on where b1,b2,…,bnb1,b2,…,bn are structure parameters or designing variables. Thus, via partial differential operation and bring b0=(b10,b20,…,bn0)Tb0=(b10,b20,…,bn0)T into Eq. (1), we have equation(2) View the MathML source∂u∂bb0=∂u(b10,b20,…,bn0)∂b. Turn MathJax on The absolute value of Eq. (2) denotes the sensitivity degree of structure responses or capability to structure parameters. On condition that View the MathML source∂u∂bb0>0 the structure response or performance u is monotone increased around the parameter b0=(b10,b20,…,bn0)Tb0=(b10,b20,…,bn0)T. If View the MathML source∂u∂bb0<0 the structure response or performance u is monotone degressive around the parameter b0=(b10,b20,…,bn0)Tb0=(b10,b20,…,bn0)T. Bear in mind Eq. (2), we also gain the transformation of the structure response or performance as equation(3) View the MathML sourceδu=∂u∂bδb. Turn MathJax on In engineering practice, very often difference operation methods are used instead of differential operation methods, but the results are often unreliable. However, the above-mentioned normal sensitivity analysis method has many problems: Case I. The mathematical foundation of the normal sensitivity analysis method is the differential calculus of real analysis. In terms of differential calculus principle, based on partial differential, the sensitivity analysis result is only local information. Namely, in this local bound the structure response or performance is most sensitive to this parameter; but in another local bound, the structure response or performance is likely least sensitive to this parameter. In the same way, the structure response or performance is monotone increased to this parameter in this local bound; but in another local bound, the structure response or performance is likely monotone degressive to this parameter. However, in practice analysis and designing, people were concerned with the sensitivity information of the structure response or performance in a certain large bound. The sensitivity analysis which based on differential calculus cannot satisfy the requirement of such global information. Although we can process normal sensitivity analysis many times and receive the sensitivity information in a certain large bound, the efficiency of the calculation will decrease seriously. Case II. For most practical engineering, it is impossible to present the parse expressions of structure response or performance in virtue of complexity and legion dimensions. So, usually we use difference or perturbation analysis instead of differential calculus. However, in engineering practice, the variety of parameters or designing variables oversize or undersize will all impact the precision of the sensitivity analysis and present complete incorrect information. As shown in Fig. 1, we have equation(4) View the MathML sourceδu1δb1=u(b1)-u(b0)b1-b0=u1-u0b1-b0>0 Turn MathJax on and equation(5) View the MathML sourceδu2δb2=u(b2)-u(b0)b2-b0=u2-u0b2-b0<0. Turn MathJax on It presents an opposite sensitivity information. So, sometimes the results of the difference sensitivity analysis method are without confidence to the distinct nonlinearity structures. Full-size image (7 K) Fig. 1. Distinct nonlinearity function. Figure options Case III . Having found the results of the sensitivity analysis, how can we work out the variation of structure responses or performances through the variation of parameters or designing variables? The normal sensitivity analysis calculate the variation by Eq. (3), but it is only present the local information and δb should not be too large, otherwise the result will be without confidence. For instance, as shown in Fig. 1, we process a calculation according to δu=(∂u/∂b)δbδu=(∂u/∂b)δb as following: equation(6) View the MathML sourceu1=u0+δu1δb1δb1. Turn MathJax on From Eq. (6) we get that u 1 is increased when compared with u 0. Also, we process another calculation according to δu=(∂u/∂b)δbδu=(∂u/∂b)δb as following: equation(7) View the MathML sourceu2=u0+δu2δb2δb2. Turn MathJax on From Eq. (7) we work out that u2 is decreased when compared with u0. Case IV. In the difference calculation of sensitivity, the result that calculated by δu is the distance u(b)-u(b0)u(b)-u(b0) of structure responses or parameters between b and b 0 rather than the distance umax-uminumax-umin of structure responses or parameters between b and b 0. Where u(b)≠umaxu(b)≠umax and u(b0)≠uminu(b0)≠umin will be correct with distinct nonlinearity problem. Therefore, in engineering practice the variation of parameters or design variables δb=b-b0δb=b-b0 oversize or undersize will all impact the precision of sensitivity analysis and present complete incorrect information indeed. Case V. If the structure parameters in the expressions of structure responses, performances or mathematical calculations are uncertain, especially in the large-scale structure, the result of the sensitivity analysis is often without confidence due to the cumulation of the uncertainty. If we use interval mathematics to process sensitivity analysis, the above problem of normal sensitivity analysis will be completely solved.
نتیجه گیری انگلیسی
In this paper, considering the properties of the stiffness matrix and the mass matrix in structural engineering, making use of the structural parameters and the non-negative decomposition of a matrix, we present a new method to determine the lower and upper bounds on the sensitivity due to uncertain-but-bounded parameters for the interval sensitivity analysis problem. Without any partial differential operations, the interval analysis method obtained the information on the response of the structure with the structural parameters (or design variables) changing. The effectiveness and correctness of the algorithm was illustrated by two numerical examples. For large-scale structures with interval parameters, a fast computing technique for obtaining the approximate sensitivity and the corresponding errors is desirable.
خرید مقاله
پس از پرداخت، فوراً می توانید مقاله را دانلود فرمایید. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8568086624145508, "perplexity": 1817.4525644678658}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607786.59/warc/CC-MAIN-20170524035700-20170524055700-00057.warc.gz"} |
https://annals.math.princeton.edu/2013/178 | Volume 178
## The Witten equation, mirror symmetry, and quantum singularity theory
Pages 1-106 by Huijun Fan, Tyler Jarvis, Yongbin Ruan | From volume 178-1
## Traveling waves for nonlinear Schrödinger equations with nonzero conditions at infinity
Pages 107-182 by Mihai Maris | From volume 178-1
## A bounded linear extension operator for $L^{2,p}(\mathbb{R}^2)$
Pages 183-230 by Arie Israel | From volume 178-1
## A class of superrigid group von Neumann algebras
Pages 231-286 by Adrian Ioana, Sorin Popa, Stefaan Vaes | From volume 178-1
## Disparity in Selmer ranks of quadratic twists of elliptic curves
Pages 287-320 by Zev Klagsbrun, Barry Mazur, Karl Rubin | From volume 178-1
## Quasi-isolated blocks and Brauer’s height zero conjecture
Pages 321-384 by Radha Kessar, Gunter Malle | From volume 178-1
## Small eigenvalues of the Laplacian for algebraic measures in moduli space, and mixing properties of the Teichmüller flow
Pages 385-442 by Artur Avila, Sébastien Gouëzel | From volume 178-2
## Optimal asymptotic bounds for spherical designs
Pages 443-452 by Andriy Bondarenko, Danylo Radchenko, Maryna Viazovska | From volume 178-2
## On irreducible representations of compact $p$-adic analytic groups
Pages 453-557 by Konstantin Ardakov, Simon Wadsley | From volume 178-2
## Solving the KPZ equation
Pages 559-664 by Martin Hairer | From volume 178-2
## The survival probability and $r$-point functions in high dimensions
Pages 665-685 by Remco van der Hofstad, Mark Holmes | From volume 178-2
## Anosov flows and dynamical zeta functions
Pages 687-773 by Paolo Giulietti, Carlangelo Liverani, Mark Pollicott | From volume 178-2
## Cantor systems, piecewise translations and simple amenable groups
Pages 775-787 by Kate Juschenko, Nicolas Monod | From volume 178-2
## Stratifications of Newton polygon strata and Traverso’s conjectures for $p$-divisible groups
Pages 789-834 by Eike Lau, Marc-Hubert Nicole, Adrian Vasiu | From volume 178-3
## Representations of semisimple Lie algebras in prime characteristic and the noncommutative Springer resolution (with an Appendix by Eric Sommers)
Pages 835-919 by Roman Bezrukavnikov, Ivan Mirković, Eric Sommers | From volume 178-3
## Counting local systems with principal unipotent local monodromy
Pages 921-982 by Pierre Deligne, Yuval Z. Flicker | From volume 178-3
## A problem on completeness of exponentials
Pages 983-1016 by Alexei Poltoratski | From volume 178-3
## Stationary measures and invariant subsets of homogeneous spaces (III)
Pages 1017-1059 by Yves Benoist, Jean-François Quint | From volume 178-3
## Finite time singularities for the free boundary incompressible Euler equations
Pages 1061-1134 by Angel Castro, Diego Córdoba, Charles Fefferman, Francisco Gancedo, Javier Gómez-Serrano | From volume 178-3
## Characters of relative $p’$-degree over normal subgroups
Pages 1135-1171 by Gabriel Navarro, Pham Huu Tiep | From volume 178-3
## Topologies and structures of the Cremona groups
Pages 1173-1198 by Jérémy Blanc, Jean-Philippe Furter | From volume 178-3 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5832290053367615, "perplexity": 14749.166860589989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00175.warc.gz"} |
https://arxiv.org/abs/1210.3857 | Skip to main content
# Mathematics > Analysis of PDEs
arXiv:1210.3857 (math)
# Title:Regularity criterion for 3D Navier-Stokes Equations in Besov spaces
Download PDF
Abstract: Several regularity criterions of Leray-Hopf weak solutions $u$ to the 3D Navier-Stokes equations are obtained. The results show that a weak solution $u$ becomes regular if the gradient of velocity component $\nabla_{h}{u}$ (or $\nabla{u_3}$) satisfies the additional conditions in the class of $L^{q}(0,T; \dot{B}_{p,r}^{s}(\mathbb{R}^{3}))$, where $\nabla_{h}=(\partial_{x_{1}},\partial_{x_{2}})$ is the horizontal gradient operator. Besides, we also consider the anisotropic regularity criterion for the weak solution of Navier-Stokes equations in $\mathbb{R}^3$. Finally, we also get a further regularity criterion, when give the sufficient condition on $\partial_3u_3$.
Comments: arXiv admin note: text overlap with arXiv:1005.4463 by other authors Subjects: Analysis of PDEs (math.AP) Cite as: arXiv:1210.3857 [math.AP] (or arXiv:1210.3857v1 [math.AP] for this version)
## Submission history
From: Daoyuan Fang [view email]
[v1] Sun, 14 Oct 2012 23:32:49 UTC (17 KB)
Full-text links:
## Download:
Current browse context:
math.AP
Change to browse by: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424248695373535, "perplexity": 2097.5029771263175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413901.34/warc/CC-MAIN-20200601005011-20200601035011-00403.warc.gz"} |
https://brilliant.org/problems/fun-with-calculus-1/ | # Fun With Calculus 1
Calculus Level 4
Tangents to the curve $\large y=\dfrac{1+3 x^2}{3+x^2}$ drawn at the points for which $$y=1$$ intersect at $$(a,b)$$. Find the value of $${(a+b)}$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6826733946800232, "perplexity": 944.0633001963594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00507-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://blog.xogeny.com/blog/mbe030/ | Posted: May 12, 2014
# Announcing Modelica by Example
## Free Interactive Book: Modelica by Example
During Modelica'2014, I unveiled the very first public release of my new book, "Modelica by Example". Since then, the book has been available on a special site.
### Kickstarter
"Modelica by Example" started off as a Kickstarter project. As a result of the generous contributions, that project was funded. At the recent Modelica'2014 conference an "Early Access" version of the book was released.
This was a great example of a community driven project to make Modelica more accessible.
The book is distributed under a creative commons license. This means that the contents of the book an be freely redistributed (as long as it is done on a non-commercial basis and the contents of the book are not changed).
I'm also pleased to announce that there are plans for translations of the book into Spanish, Chinese and Italian (and hopefully more languages in the future).
### Free HTML Version
A free HTML version of the book is available at http://book.xogeny.com/. This book can be used by anyone interested in learning Modelica.
### Electronic Formats
I am also producing ePub and PDF versions of the book. These are available for download to those who have purchased a copy of the book. Pricing of the electronic formats is "Pay what you can". The idea here is to make the book affordable for students by allowing them to set their own price. My hope is that other people using the book for professional purposes will pay what they think the book is worth to them.
### Integrated Browser Simulation
The free HTML version of the book features "Integrated Browser Simulation". In practice, what this means is that example models from the book can be simulated directly in the browser. This allows readers to adjust parameter values given in the example models to see how the response of the example will change for different parameter values.
We will be using this same technologies in some new products we are currently working on. To learn more, sign up for the Xogeny mailing list and we'll keep you up to date on all the things we are working on.
### Early Access Version
This version is an "Early Access" version. The book is mostly complete. There are a few unfinished sections that I hope to wrap up soon. But the core material is there so anybody interested in learning Modelica can start now.
The goal of the "Early Access" version is to get this material into the hands of people who can use it as quickly as possible and to collect feedback on the book before publishing a print version of the book.
But today I'm releasing v0.3.0 of the book with the following changes since the initial public beta release (v0.2.0):
#### Enhancements
• Moved the site to book.xogeny.com with redirects from beta site.
• Lots of cleanup of annotations by @tbeu and @dietmarw
• Switched back to using MathJax (looks nicer, but requires JS)
• Updated the README to help orient people who want to contribute.
• Incorporated a bunch of excellent fixes and improvements from @mrtiller related to #42.
#### Bug Fixes
• Merged a the following pull requests from @tbeu: #143, #142, #141, #139, #137, #117 and #93
• Merged a change from @tbeu regarding a heat transfer example in the discussion on packages.
• Merged a bunch of changes from @tbeu that improve the external function examples and clean up a few other things.
• Fixed an error in the source code for the 1D heat conduction equation examples and fixed a recurring error (see issue #53) in the equations presented along with the source code.
• Fixed issue #61 which involved a misplaced annotation.
• Fixed an error in one of the Lotka-Volterra equations raised in issue #50.
• Corrected the explanation on the unit attribute raised in #59
### GitHub
The source material is available on GitHub. If you find an issue with the book, you can report it on the GitHub issue tracker or, even better, submit a pull request with a suggested fix.
### Conclusion
My goal is to provide a definitive reference for Modelica that is widely accessible and affordable. Today marks release 0.3.0 and I'll be working toward a 1.0 release that we can publish in print form. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16403339803218842, "perplexity": 1803.226018276634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578681624.79/warc/CC-MAIN-20190425034241-20190425060054-00070.warc.gz"} |
https://www.oist.jp/news-center/photos/m%C3%B6bius-strip | # Möbius strip
6 Oct 2020
When two Möbius strips (pictured) come together, they form a Klein bottle, an unusual bottle-like object, which intersects with itself.
Free for anyone to re-use, but must be credited to OIST. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9006353616714478, "perplexity": 8001.432108000537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00320.warc.gz"} |
https://math.libretexts.org/Bookshelves/PreAlgebra/Book%3A_Prealgebra_(OpenStax)/10%3A_Polynomials | $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
# 10: Polynomials
$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
Expressions known as polynomials are used widely in algebra. Applications of these expressions are essential to many careers, including economists, engineers, and scientists. In this chapter, we will find out what polynomials are and how to manipulate them through basic mathematical operations.
• 10.1: Add and Subtract Polynomials
In this section, we will work with polynomials that have only one variable in each term. The degree of a polynomial and the degree of its terms are determined by the exponents of the variable. Working with polynomials is easier when you list the terms in descending order of degrees. When a polynomial is written this way, it is said to be in standard form. Adding and subtracting polynomials can be thought of as just adding and subtracting like terms.
• 10.2: Use Multiplication Properties of Exponents (Part 1)
In this section, we will begin working with variable expressions containing exponents. Remember that an exponent indicates repeated multiplication of the same quantity. You have seen that when you combine like terms by adding and subtracting, you need to have the same base with the same exponent. But when you multiply and divide, the exponents may be different, and sometimes the bases may be different, too. We’ll derive the properties of exponents by looking for patterns in several examples.
• 10.3: Use Multiplication Properties of Exponents (Part 2)
All the exponent properties hold true for any real numbers, but right now we will only use whole number exponents. The product property of exponents allows us to multiply expressions with like bases by adding their exponents together. The power property of exponents states that to raise a power to a power, multiply the exponents. Finally, the product to a power property of exponents describes how raising a product to a power is accomplished by raising each factor to that power.
• 10.4: Multiply Polynomials (Part 1)
In this section, we will begin multiplying polynomials with degree one, two, and/or three. Just like there are different ways to represent multiplication of numbers, there are several methods that can be used to multiply a polynomial by another polynomial. The Distributive Property is the first method that you have already encountered and used to find the product of any two polynomials.
• 10.5: Multiply Polynomials (Part 2)
The FOIL method is usually the quickest method for multiplying two binomials, but it works only for binomials. When you multiply a binomial by a binomial you get four terms. Sometimes you can combine like terms to get a trinomial, but sometimes there are no like terms to combine. Another method that works for all polynomials is the Vertical Method. It is very much like the method you use to multiply whole numbers.
• 10.6: Divide Monomials (Part 1)
In this section, we will look at the exponent properties for division. A special case of the Quotient Property is when the exponents of the numerator and denominator are equal. It leads us to the definition of the zero exponent, which states that if a is a non-zero number, then a^0 = 1. Any nonzero number raised to the zero power is 1. The quotient to a power property of exponents states that to raise a fraction to a power, you raise the numerator and denominator to that power.
• 10.7: Divide Monomials (Part 2)
We have now seen all the properties of exponents. We'll use them to divide monomials. Later, you'll use them to divide polynomials. When we divide monomials with more than one variable, we write one fraction for each variable. Once you become familiar with the process and have practiced it step by step several times, you may be able to simplify a fraction in one step.
• 10.8: Integer Exponents and Scientific Notation (Part 1)
The negative exponent tells us to re-write the expression by taking the reciprocal of the base and then changing the sign of the exponent. Any expression that has negative exponents is not considered to be in simplest form. We will use the definition of a negative exponent and other properties of exponents to write an expression with only positive exponents.
• 10.9: Integer Exponents and Scientific Notation (Part 2)
When a number is written as a product of two numbers, where the first factor is a number greater than or equal to one but less than 10, and the second factor is a power of 10 written in exponential form, it is said to be in scientific notation. It is customary to use × as the multiplication sign, even though we avoid using this sign elsewhere in algebra. Scientific notation is a useful way of writing very large or very small numbers. It is used often in the sciences to make calculations easier.
• 10.E: Polynomials (Exercises)
• 10.S: Polynomials (Summary)
• 10.10: Introduction to Factoring Polynomials
Earlier we multiplied factors together to get a product. Now, we will be reversing this process; we will start with a product and then break it down into its factors. Splitting a product into factors is called factoring. In The Language of Algebra we factored numbers to find the least common multiple (LCM) of two or more numbers. Now we will factor expressions and find the greatest common factor of two or more expressions. The method we use is similar to what we used to find the LCM.
Figure 10.1 - The paths of rockets are calculated using polynomials. (credit: NASA, Public Domain) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109172224998474, "perplexity": 290.28459746955343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514046.20/warc/CC-MAIN-20210117235743-20210118025743-00580.warc.gz"} |
http://mathhelpforum.com/algebra/176029-simulatenous-equations-quadratic-linear.html | # Math Help - Simulatenous equations (Quadratic + linear)
1. ## Simulatenous equations (Quadratic + linear)
If (2,1) is a solution fo the simultaneous equations,
x^2 + xy + ay = b
2ax + 3y = b
A) Find the value of a and of b. [a=1, b =7]
B) Find also the other solution [x=-7 and y = 7)
I have solved both parts, but I face a problem.
After part a), I substituted values of a and b into the equation to obtain
x^2 + xy + y = 7 and 2x + 3y = 7
Substituting x = (7-3y)/2 into the equation x^2 + xy + y = 7
I obtain the quadratic expression y^2 - 9y + 14 = 0
Hence, I get y = 7 and y =2 ; x = -7 and x = 1/2
Why do I reject y =2 and x = 1/2 combination ?
I know that substituting the above into 2x + 3y = 7 satisfies the equation
But it doesn't satisfy x^2 + xy + y = 7. How would I know?
Thanks
2. Originally Posted by Drdj
If (2,1) is a solution fo the simultaneous equations,
x^2 + xy + ay = b
2ax + 3y = b
A) Find the value of a and of b. [a=1, b =7]
B) Find also the other solution [x=-7 and y = 7)
I have solved both parts, but I face a problem.
After part a), I substituted values of a and b into the equation to obtain
x^2 + xy + y = 7 and 2x + 3y = 7
Substituting x = (7-3y)/2 into the equation x^2 + xy + y = 7
I obtain the quadratic expression y^2 - 9y + 14 = 0
Hence, I get y = 7 and y =2 ; x = -7 and x = 1/2
Why do I reject y =2 and x = 1/2 combination ?
I know that substituting the above into 2x + 3y = 7 satisfies the equation
But it doesn't satisfy x^2 + xy + y = 7. How would I know?
Thanks
you don't ... extraneous solutions show up when you sub them back into the original equations
for example ...
$\sqrt{x+2} = x$
$x+2 = x^2$
$0 = x^2 - x - 2$
$0 = (x - 2)(x + 1)$
$x = 2$ , $x = -1$
$x = -1$ is an extraneous solution caused by the resulting quadratic ... $\sqrt{whatever} \ge 0$
3. Originally Posted by Drdj
If (2,1) is a solution fo the simultaneous equations,
x^2 + xy + ay = b
2ax + 3y = b
A) Find the value of a and of b. [a=1, b =7]
B) Find also the other solution [x=-7 and y = 7)
I have solved both parts, but I face a problem.
After part a), I substituted values of a and b into the equation to obtain
x^2 + xy + y = 7 and 2x + 3y = 7
Substituting x = (7-3y)/2 into the equation x^2 + xy + y = 7
I obtain the quadratic expression y^2 - 9y + 14 = 0
Hence, I get y = 7 and y =2 ; x = -7 and x = 1/2
Why do I reject y =2 and x = 1/2 combination ?
I know that substituting the above into 2x + 3y = 7 satisfies the equation
But it doesn't satisfy x^2 + xy + y = 7. How would I know?
Thanks
There are no extraneous solutions. Your quadratic is incorrect. I get $y^2 - 8y + 7 = 0$
-Dan
4. Thanks skeeter ! So, for all quadratic equations, I have to substitute the final answers back, in order to check if both sets of equations are valid! Many thanks for your quick reply !
Originally Posted by skeeter
you don't ... extraneous solutions show up when you sub them back into the original equations
for example ...
$\sqrt{x+2} = x$
$x+2 = x^2$
$0 = x^2 - x - 2$
$0 = (x - 2)(x + 1)$
$x = 2$ , $x = -1$
$x = -1$ is an extraneous solution caused by the resulting quadratic ... $\sqrt{whatever} \ge 0$
5. Hello, Drdj!
A simple algebra error . . .
$\text{If }(2,1)\text{ is a solution of the simultaneous equations:}$
. . $\begin{array}{c} x^2 + xy + ay \;=\;b \\ 2ax + 3y \;=\;b \end{array}$
$\text{(A) Find the values of }a\text{ and }b.\;\;[a=1,\:b =7]$
$\text{(B) Find also the other solution. }\;\;[x=-7,\:y = 7)$
$\text{I have solved both parts, but I face a problem. }$
$\text{After part (A), I substituted values of }a\text{ and }b\text{ into the equations}$
. . $\text{to obtain: }\:x^2 + xy + y \:=\: 7\:\text{ and }\:2x + 3y \:=\: 7$ . Yes!
$\text{Substituting }x \:=\:\frac{7-3y}{2}\text{ into the equation }x^2 + xy + y \:=\: 7$
. . $\text{I obtain the quadratic expression: }\:y^2 - 9y + 14 \:=\: 0$ . No
You should have had: . $\left(\dfrac{7-3y}{2}\right)^2 + \left(\dfrac{7-3y}{2}\right)\!y \,+\, y \;=\;7$
. . . . . . . . . . . . . . . $\displaystyle \frac{49-42y + 9y^2}{4} + \frac{7y - 3y^2}{2} + y \;=\;7$
Multiply by 4: . $49 - 42y + 9y^2 + 14y - 6y^2 + 4y \;=\;28$
. . which simplifies to: . $3y^2 - 24y + 21 \:=\:0$
Divide by 3: . $y^2 - 8y + 7 \:=\:0$
Factor: . $(y - 1)(y - 7) \:=\:0$
Hence, we have: . $y \;=\;1,\:7$
. . .which yields: . $x \;=\;2,\:\text{-}7$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8659614324569702, "perplexity": 525.7272820797175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021878262/warc/CC-MAIN-20140305121758-00009-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/problem-involving-moment-components-and-angles.712375/ | # Problem involving moment components and angles
1. Sep 24, 2013
### Loopas
1
Gripper C of the industrial robot is accidentally subjected to a 60 lb side load directed perpendicular to BC (see attachment). The lengths of the robot's links are AB = 22 in and BC = 18 in. By using the moment components method, determine the moment of the force about the center of joint A.
2
M = Fd (d is the lever arm)
3
I'm not sure what angle to use when calculating the components of the 60 lb force. Is the 60 lb force at a 140° angle with the coordinate plane given at point A? Since this is the first step of the problem, I want to make sure I'm doing it correctly.
#### Attached Files:
• ###### 20130924_160837.jpg
File size:
23.1 KB
Views:
257
Last edited: Sep 24, 2013
2. Sep 24, 2013
### SteamKing
Staff Emeritus
I don't know where you get 140 deg. from. Most right angles are 90 deg., and BC is 55 deg. above the horizontal.
3. Sep 24, 2013
### Loopas
The 140 degrees was referring to the direction of the force (in relation to the xy coordinate frame thats given at point A). I don't know which angles to use to calculate the force components. Would I use 55 degrees?
4. Sep 24, 2013
### SteamKing
Staff Emeritus
You want the moment calculated about A in component form. I suggest you find out what the components of the 60-lb Force are relative to the x-y axis. You know what angle BC makes with the x-axis, and the force is acting a right angles to BC.
5. Sep 24, 2013
### Loopas
So the correct angle would be:
180-90-55 = 35 degrees?
I think I may have just over thought this...
6. Sep 25, 2013
### SteamKing
Staff Emeritus
Take a look at the force vector. Look at the position of the arrow head relative to the opposite end of the vector. What quadrant is the arrow head in. Is that the same quadrant that an angle of 35 degrees would be in?
If these things are confusing, you can always use a protractor to check yourself. They are handy tools for this sort of work.
7. Sep 25, 2013
### Loopas
35 should be the reference angle, which means that 145 would be the real angle, since Fx needs to be negative. Right?
8. Sep 25, 2013
### SteamKing
Staff Emeritus
Draft saved Draft deleted
Similar Discussions: Problem involving moment components and angles | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109554648399353, "perplexity": 1222.2394318264735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823731.36/warc/CC-MAIN-20171020044747-20171020064747-00217.warc.gz"} |
http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.565868 | Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.565868
Title: Exceptional Lebesgue densities and random Riemann sums
Author: Grahl, J.
Awarding Body: University College London (University of London)
Current Institution: University College London (University of London)
Date of Award: 2012
Availability of Full Text:
Access through EThOS: Access through Institution:
Abstract:
We will examine two topics in this thesis. Firstly we give a result which improved a bound for a question asking which values the Lebesgue density of a measurable set in the real line must have (joint work with Toby O'Neil and Marianna Csörnyei). We also show how this result relates to the results obtained by others. Secondly, we give several results which indicate when a Lebesgue measurable function has a random Riemann integral which converges, in either the weak and strong sense. A Lebesgue measurable set A, subset of R, has density either 0 or 1 at almost every point. Here the density at some point x refers to the proportion of a small ball around x which belongs to A, in the limit as the size of the ball tends to 0. Suppose that A is not either a nullset, which has density 0 at every single point, or the complement of a nullset, which similarly has density 1 everywhere. Then there are certain restrictions on the range of possible values at those exceptional points where the density is neither 0 nor 1. In particular, it is now known that if δ < 0.268486 ..., where the exact value is the positive root of 8δ^3+8 δ^2+ δ1 = 0, then there must exist a point at which the density of A is between δ and 1 - δ, and that this does not remain true for any larger value of δ. This was proved in a recent paper by Ondrej Kurka. Previous to his work our result given in this thesis was the best known counterexample. We give the background to this, construct the counterexample, and discuss Kurka's proof of the exact bound. The random Riemann integral is defined as follows. Given a Lebesgue measurable function f : [0, 1] \rightarrow R and a partition of [0, 1] into disjoint intervals, we can choose a point belonging to each interval, independently and uniformly with respect to Lebesgue measure. We then use these random points to form a Riemann sum, which is itself a random variable. We are interested in knowing whether or not this random Riemann sum converges in probability to some real number. Convergence in probability to r means that the probability that Riemann sum differs from r by more than \varepsilon, is less than \varepsilon, provided that the maximum length of an interval in the partition is sufficiently small. We have previously shown that this type of convergence does take place provided that f is Lebesgue integrable. In other words, the random Riemann integral, defined as the limit in probability of the random Riemann sums, has at least the power of the Lebesgue integral. Here we prove that the random Riemann integral of f does not converge unless |f|^1-e is integrable for e > 0 arbitrarily small. We also give another, more technical, necessary condition which applies to functions which are not Lebesgue integrable but are improper Riemann integrable. We have also done some work on the question of almost sure convergence. This works slightly differently. We must choose, in advance, a sequence of partitions (Pn)n∞=1, with the size of the intervals of Pn tending to zero. We form a probability space on which we can take random Riemann sums independently on each partition of the sequence. Almost sure convergence means that the sequence of random Riemann sums converges to some (unique) limit with probability 1 in this space. There are two complementary results; firstly that almost sure convergence holds if the function is in Lp and the sequence of partition sizes is in l^p-1 for some p \ge 1. Secondly, we have a partial converse which only applies to nonnegative functions, and if the ratio between the lengths smallest and biggest intervals in each partition is bounded uniformly. This says that if for some p \ge 1 f is not in L^p and the partition sizes are not in l^p-1, then the sequence of Riemann sums diverges with probability 1.
Supervisor: Not available Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.565868 DOI: Not available
Share: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9790269732475281, "perplexity": 293.15967435907083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820487.5/warc/CC-MAIN-20171016233304-20171017013304-00071.warc.gz"} |
http://mathoverflow.net/questions/108926/the-complete-list-of-continued-fractions-like-the-rogers-ramanujan | The complete list of continued fractions like the Rogers-Ramanujan?
I have two questions about q-continued fractions, but a little intro first. Given Ramanujan's theta function,
$$f(a,b) = \sum_{n=-\infty}^{\infty}a^{n(n+1)/2}b^{n(n-1)/2}$$
then the following, $$A(q) = q^{1/8} \frac{f(-q,-q^3)}{f(-q^2,-q^2)}$$
$$B(q) = q^{1/5} \frac{f(-q,-q^4)}{f(-q^2,-q^3)}$$
$$C(q) = q^{1/3} \frac{f(-q,-q^5)}{f(-q^3,-q^3)}$$
$$D(q) = q^{1/2} \frac{f(-q,-q^7)}{f(-q^3,-q^5)}$$
$$E(q) = q^{1/1} \frac{f(-q,-q^{11})}{f(-q^5,-q^7)}$$
are q-continued fractions of degree $4,5,6,8,12$, respectively, namely,
$$A(q) = \cfrac{q^{1/8}}{1 + \cfrac{q}{1+q + \cfrac{q^2}{1+q^2 + \ddots}}},\;\;B(q) = \cfrac{q^{1/5}}{1 + \cfrac{q}{1 + \cfrac{q^2}{1 + \ddots}}}$$
$$C(q) = \cfrac{q^{1/3}}{1 + \cfrac{q+q^2}{1 + \cfrac{q^2+q^4}{1 + \ddots}}},\;\;\;\;D(q) = \cfrac{q^{1/2}}{1 + q +\cfrac{q^2}{1+q^3 + \cfrac{q^4}{1+q^5 + \ddots}}}$$
$$E(q) = \cfrac{q(1-q)}{1-q^3 + \cfrac{q^3(1-q^2)(1-q^4)}{(1-q^3)(1+q^6)+\cfrac{q^3(1-q^8)(1-q^{10})}{(1-q^3)(1+q^{12}) + \ddots}}}$$
The first three are by Ramanujan, the fourth is the Ramanujan-Gollnitz-Gordon cfrac, while the last is by Naika, et al (using an identity by Ramanujan). Let $q = e^{2\pi i \tau}$ where $\tau = \sqrt{-n}$ and these can be simply expressed in terms of the Dedekind eta function $\eta(\tau)$ as,
$$\tfrac{1}{A^4(q)}+16A^4(q) = \left(\tfrac{\eta(\tau/2)}{\eta(2\tau)}\right)^8+8$$
$$\tfrac{1}{B(q)}-B(q) = \left(\tfrac{\eta(\tau/5)}{\eta(5\tau)}\right)+1$$
$$\tfrac{1}{C(q)}+4C^2(q) = \left(\tfrac{\eta(\tau/3)}{\eta(3\tau)}\right)^3+3$$
$$\tfrac{1}{D(q)}-D(q) = \big(\tfrac{1}{A(q^2)}\big)^2$$
$$E(q) = \;???$$
Question 1: Does anybody know how to express $E(q)$ in terms of $\eta(\tau)$? (It's SO frustrating not to complete this list. I believe there might be a simple relationship between orders 6 and 12, just like there is between 4 and 8.) This cfrac can be found in "On Continued Fraction of Order 12", but the authors do not address this point.
Question 2: Excluding these five and the Heine cfrac which gives $\eta(\tau)/\eta(2\tau)$, are there any other q-continued fractions which yield an algebraic value at imaginary arguments?
-
[Edited again to give a second identity relating $E$ to eta products]
Continued fraction or not, an expression $q^{\frac{(r-s)^2}{8(r+s)}} f(\pm q^r, \pm q^s)$ is a modular form of weight $1/2$ for all integers $r,s$ with $r+s>0$, because it is a sum $\sum_{n=-\infty}^\infty \pm q^{(cn+d)^2}$ with rational $c,d$ and periodic signs. Therefore the quotient of two such expressions is a modular function, and takes algebraic valus at quadratic imaginary values.
The quotient $$E(q) = q - q^2 + q^6 - q^7 + q^8 - q^9 + q^{11} - 2q^{12} + 2q^{13} - 2q^{14} + 2q^{15} \cdots$$ looks like a modular unit $-$ its logarithmic derivative has small coefficients $-$ but not quite an eta product; instead it seems to be a quotient of Klein forms: $$E(q) = q \prod_{n=1}^\infty (1-q^n)^{\chi(n)},$$ where $\chi$ is the Dirichlet character of conductor $12$, given by
$$\chi(n) = \cases{ +1,& if n \equiv \pm 1 \bmod 12; \cr -1,& if n \equiv \pm 5 \bmod 12; \cr 0,& otherwise. }$$ Two identities relating $E$ to $\eta$ products, similar to but somewhat more complicated than the ones you give for $A,B,C,D,$ are $$\frac1{E(q)} - E(q) = \frac{\eta(2\tau)^2 \eta(6\tau)^4}{\eta(\tau)\eta(3\tau)\eta(12\tau)^4},$$ and (a bit simpler) $$\frac1{E(q)} + E(q) = \frac{\eta(4\tau)}{\eta(\tau)} \Bigl(\frac{\eta(3\tau)}{\eta(12\tau)}\Bigr)^3.$$
-
Thanks, Dr. Elkies. It is good to finally know a relation of $E(q)$ to an eta product! However, it may be possible to simplify this considering the other cfracs also have more complicated expressions. For example, in Duke's "Continued Fractions and Modular Functions", we have $\frac{1}{D^2(q)}+D^2(q) = \frac{\eta(4\tau)^2\eta(\tau)^4}{\eta(2\tau)^2\eta(8\tau)^4}+6$. I'll see if I can tweak your relation to find a reasonably simple one between $C(q)$ and $E(q)$. – Tito Piezas III Oct 6 at 15:55 The $E^{-1} - E$ relation might not simplify much, but the one I just added for $E^{-1} + E$ might come closer to what you're hoping for. – Noam D. Elkies Oct 7 at 21:05 [It seems that meanwhile Somos also called attention to $E^{-1} + E$...] – Noam D. Elkies Oct 7 at 21:07
Based on Elkies' answer and an email by Michael Somos, we can give an alternative expression to my Question 1. If a sum is used, instead of a difference,
$$u = \frac{1}{E(q)}+E(q)$$
then,
$$\frac{u(u-4)^3}{(u-1)^3} = \left(\frac{\eta(\tau)}{\eta(4\tau)}\right)^8$$
Since this eta quotient and $C(q)$ appear in j-function formulas, we can relate the two. Let,
$$v = \frac{u}{2}-1$$ then, $$2\left(\frac{1}{v}+v+2\right) = \left(\frac{1}{C(q)}\right)^3$$
I knew there was a relatively simple relationship, with "relatively" being the operative word.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294124841690063, "perplexity": 317.8978513038462}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706624988/warc/CC-MAIN-20130516121704-00011-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://www.ask.com/question/how-many-miles-over-the-speed-limit-is-considered-reckless-driving-in-il | # How Many Miles over the Speed Limit Is Considered Reckless Driving in Il?
In Illinois, it is considered reckless driving when someone is driving 15 MPH over the speed limit. Reckless driving is described as a disregard for the safety of persons or property.
Q&A Related to "How Many Miles over the Speed Limit Is Considered..."
Tennessee law does not define a http://www.chacha.com/question/how-many-miles-over...
In most states its 15 miles over the speed limit. There may be variations on this such as if you are in a city, or a construction or school zone. http://www.kgbanswers.com/how-many-miles-over-the-...
Any amount over the speed limit is considered reckless driving. The DMV says even if you drive one mile more you can get a ticket. http://answers.yahoo.com/question/index?qid=200711...
16 mph you can be cited for reckless driving in WV. http://www.transportation.wv.gov/dmv/Dri…. http://answers.yahoo.com/question/index?qid=201203...
Similar Questions
Top Related Searches
Explore this Topic
The reckless driving depends on the speed limit which varies from place to place. Speaking in context of California, 20 miles in excess of the posted speed limit ...
In the state of California if you are caught speeding 26 miles over the speed limit, the amount of points that it will cost you on your driving record will depend ...
22 miles an hour over the speed limit is typically considered a major speed violation and can sometimes result in a license suspension not to mention the points. ... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067685008049011, "perplexity": 1383.7883968906715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://velement.io/community-acupuncture-yewrru/codeforces-rating-system-b618a3 | Fun Facts About Savannah River, East River Trail Green Bay, Ocean Beach House Rentals, Lenovo Ideapad 330s Specs I5 8th Generation, How To Retrieve Data From Database In Php Using Mysqli, Michelangelo - Ryde, Conference Pear Calories, Cannondale Quick Cx 3 2019, " />
Technologies
My Blog
Scroll down to discover
### codeforces rating system
The problems were created and prepared by ssense and SlavicG for users with a rating range from 0 to 1400 but anyone is welcome to participate in the round! And moreover, it does not even solve the problem of one-contest profiles, just moves it 100 points lower. See here: https://codeforces.com/blog/entry/77890. A least on top level. Its quite disencouraging tbh. ;). That is very rude! That is how Elo works. The most obvious example are the massive ties at $$0$$$points, whereas performances drawn from logistic or normal distributions would have no such bias. Most of this seems to be due to the weakest members not having converged, since they participate less. (No personal issues). It allows you to Hide/Show problems' tags in an easier way (i.e. We may see marked increase in participation in the upcoming Div2 round as people who are confident enough might not want to wait for 5-6 round to get to their deserved rating. As of 2018, it has over 600,000 registered users. - Codeforces contests (rounds and gym) - Rating and Ranks - How to register in a contest ? The only programming contests Web 2.0 platform. For example, getting red in 2015 became much easier than in 2013. He probably just googled some keywords, stumbled on this blog, and commented. See screenshots, read the latest customer reviews, and compare ratings for Codeforces Notifier. Because you are reading a ten year old blog, and there's a more recent addition to rating calculation — Link. Also, I think it discourages you to care if you start with a super low rating too, like on atcoder I tend to not care about the contests at all since I haven't done many so my rating usually goes up no matter how much I mess around. Is it fair? I spent some time learning Rust and implementing "TrueSkill from SPb" and now I have a couple of questions about your rating system. Can you add an option to Codeforces that will tell the contestant what rank is he/she expected to take? Since 2013, Codeforces claims to surpass Topcoder in terms of active contestants. Contribute to ffutop/Codeforces-Rating-System-Simulation development by creating an account on GitHub. Or in other words, is $$\Sigma\Delta$$$ in each round around 0? In simpler words there are no sudden variations in rating. I didn't find an implementation of this rating system and didn't tested it on CF history, but I think that I'll try to do it next week if it would be interesting. See screenshots, read the latest customer reviews, and compare ratings for Codeforces Notifier. So for new participants most graphs will show increasing at the start, generally. 1, based on Moscow Team Olympiad) 2 days This could promote people to make new account. See your rank in the contest. I think that good and bad days happen because of the way we competing. Close • Posted by 2 minutes ago. The top ratings... are actually less than they were with $$\sigma=350$$$! I had created an account 4 month earlier. at the same time, It may discourage freshers. It's better to choose a better one. Tell me this if solving Practice Problems create and improve rating or not or just by participating in rated contests does? The displayed rating is $$1500 - 2(\sigma - 100)$$$, where $$\sigma$$$starts at $$350$$$ and approaches $$100$$$as members become more experienced. Word Capitalization2 2 Problem 2B. Read the blog once more... and you will find out this : As at TopCoder all users are divided into two divisions: the first (rating over 1500 1650) and the second (rating not more than 1500 1650)._. 2) MikeMirzayanov → Codeforces Rating System 23K likes. 1 + Div. Thanks for the discussion and experiment ideas! i took part in some contest. It's mildly annoying and I'd prefer to avoid that. Yes , but this might encourage cheating . Before stream 13:01:51. What is the approximately maximum value of d1+500 + d2+ 350 + ...+ d6+ 50. You will hold codeforces for ages. winning probability), which is worse IMO. Best Luck to participants for the next contest rounds. By this probabilities we can count your approximate place(seed), then get your real place(rank) and find change of rate, based on them. As is, it's pretty difficult for rating predictor extensions, which are used by a significant portion of the community, to accurately evaluate projected rating changes. I think you shouldn't add remaining promotions once the displayed rating increases more than 1400 otherwise people would become master just by giving div3/div4 rounds. It's explained properly in this blog i think. To get correct expected place one should calculate Elo-based probabilities of losing versus every other contestant and add these values (and also add 1 to result, because standings are 1-based). As users compete more, they usually improve and become better than their rating, so they win more than the system expects, taking away rating points from others. But does Codeforces discourage it? The rating distribution might converge more slowly with this method, but it seems to end up in the same place. Codeforces. It reduces inflation (but doesn't eliminate it), reduces the influence of new members, limits the size of rating jumps for experienced contestants, and produces (I believe) a more reasonable distribution of ratings (see the table here for a comparison). If we peek top 10% or top 25% users before round, their rating sum after round should be less than before round. Run spider_txt.py (for C++) or spider_json.py (for Python) with argument contest_id to get Codeforces rating data for test. If you can't be patient, and are using Chrome, add the Codeforces Enhancer to Chrome. ), where the starting rating is just an additive constant which does not affect any computations (just because 1500 rating looks prettier than 0). Participant’s current rating will determine the division he’s able to … But, what the answer says is two things: When you add something to starting rating globally (for all players, not our case), it just adds this constant to all ratings and does not affect anything else. You also didn't provide any reasons for this change. NOOOOOOOOOOOOOOOOOOOOBSS, LEEEEEEEEEEEEEEEETS 1337 X 1337 X 1337 X 1337 ... HACK BUG NOOB. Greate Boss! $$rating_{shown} = rating_{true} - f(competitions)$$$, where $$f(x)$$$is a magic function that starts at $$1200$$$ and tends to $$0$$$as $$x$$$ tends to infinty. But I think your rating will remain low for 6 rounds only. Your skill is like a sum of problems you know how to solve and your performance is roughly the sum of solved problems minus the sum of mistakes you made. With low $$\sigma$$\$, my system might become even slower than others since it puts less weight on surprising performances (outliers). And if I remember and understand it correctly, the thing you are reinventing is TrueSkill (apart from the displayed rating feature, which is nice, but not so important now). The expected rank is calculated with 2 people ' rating? An implementation of Codeforces rating system as described on http://codeforces.com/contest/1/submission/13861109 Yeah, I get your point. You guys helped me in more than 7 projects and recruited critical resources with expertise in niche SAP skills. No, it does not. changes in the ranking of contestants are multiplied by a correction factor such that allows the sum of ratings of the participants to remain unchanged (before and after the round). Народ дайте возможность менять ники я ток из за этого акк новый и создал, Maybe it's a good choice, but we may need time to adapt to it, bruh wtf im mad bro sir i lost my exopert im sad bro. 2, based on Zed Code Competition) Finished Practice Virtual contest is a way to take part in past contest, as close as possible to participation on time. Regular programming contests held on Codeforces are open to all registered users. The delta calculation is done in real time a̶n̶d̶ ̶i̶s̶ ̶1̶0̶0̶%̶ ̶a̶c̶c̶u̶r̶a̶t̶e̶ (see note below). However, all of you who wish to take part and have rating … I suggest initial rating to 500, in practice as well as during calculations. The Application of Lagrange Interpolation in Mathematics. Regular programming contests held on Codeforces are open to all registered users. Thanks a ton, Mike. See my rating graph, you will get the idea! So orz Flakire. I am brand new to code forces. This is how your question sounds. I don't know why you got so much downvote in that comment . Codeforces is a website that hosts competitive programming contests. Then who will be person B? I have given seven contests till date and in each one my rating has only fallen, even though ive solved at least two problems in each contest(apart from one). The models are similar, but I'll point out differences: My model uses logistic distributions instead of Gaussians: a major theme of my paper is to look at the implications of this. 1) Register another account 2) Login & enter DIV.2 contest 3) Read DIV2-C .. DIV2-E problems 4) Decide, login main account or not. If the former happens, one can get a really high rating from participating at Div3 and Div2, which is not fair. Edit2: I added some math on page 8 of the paper for a slight modification that I want to test out soon. In a couple of last contests I've solved 1 problems each.but instead of increasing, my rating is decreasing.I can find any specific reason for that.Can anyone help me to understand whats going on? This effect shouldn't be visible when CF rating was initially expanding (before 2017 or so I think). Your position in this list shows your expected position in the contest. This is not correct:) Your expected position depends not only on your position in list, sorted by rating, but also on all other ratings. Please try again later. Have you tried to model CF competitions and make computational experiments? Not cruel, approx. Of course unparticipated will be 0. Yes force each account to also provide a credit card/bank account number to Mike to "guarantee no fake accounts". Just look on the list of registered users before the contest and sort them by rating in descending order. 1500 was a 'starting point' and it can be any number. It will be completely non-transparent to assign division by true ratings but show displayed ratings. No, you need to participate in at least 2 contest and get a rank < 5. If you do attempt and don't manage to solve anything, you'll likely get a huge negative delta, which will balance out the displayed rating bias increase. For an active contest: Carrot calculates rating changes according the current standings when you open the ranklist, and displays them in a new column. Your rating change is calculated based solely in your position in the ranking and the expected position according to the rating you had before the contest. Programming competitions and contests, programming community. Ask a question, post a review, or report the script. What is a reason for leaving rating unchanged of guys who have registered for a contest but do not make any submissions? Adding more to this, new accounts became masters just by solving 2 problems last night.
01. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2634490728378296, "perplexity": 2410.196773763563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360107.7/warc/CC-MAIN-20210228024418-20210228054418-00079.warc.gz"} |
https://math.stackexchange.com/questions/2804580/proving-x3-9-0-has-no-solutions-in-mathbbz-31-mathbbz/2804582 | # Proving $x^3 - 9 = 0$ has no solutions in $\mathbb{Z}/31\mathbb{Z}$
This is from Paolo Aluffi's book "Algebra: Chapter 0". First, find the order of $[9]_{31}$ in the group $( \mathbb{Z}/31\mathbb{Z})^*$. Then, does the equation $x^3 - 9 = 0$ have any solutions in $\mathbb{Z}/31\mathbb{Z}$?
The order of $9$ is $15$: Repeated squaring shows $9^{16} = 9$. The order has to divide the order of the group, which is $30$, so that looks fine.
But I get stuck trying to prove $x^3=9$ does not have a solution. I tried to say something like:
"$9^{16} = 9$ but $3 \nmid 16$ so there is no solution"
... but that does not seem correct. Can I get a hint (not solution) on how to solve this? I am learning Group Theory, and have not yet gotten to Rings, Fields, or Modules.
• What's $9^{10}$? Jun 1 '18 at 17:17
Knowing that the (multiplicative) order of $9$ modulo $31$ is $15$, suppose there is an element $x$ such that $x^3 = 9$. What will its order be? You should find a contradiction.
• This and Lord Shark The Unknown helped out a lot. Since $x^{30} = 9^{10} \neq 1$, but $a^{30} = 1$ for all $a$ in the group, there cannot be a solution for $x$ in this group. Jun 1 '18 at 17:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8742470741271973, "perplexity": 124.96052409745289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00410.warc.gz"} |
http://blog.computationalcomplexity.org/2010/07/seventh-mil-problem.html | ## Monday, July 26, 2010
### A Seventh Mil. Problem
Richard Lipton had a wonderful post asking for a seventh Millennium Prize now that Poincare's conjecture has been solved. I posted a suggestion on his blog but got no comments. I'll expand on it and see if I get any comments here.
HISTORY: The original proof of VDW's theorem , in 1927, yields INSANE (not primitive recursive ) bounds on the VDW numbers. (Shelah (1988) later got primitive recursive bounds and Gowers (2001) got bounds you can actually write down!) Inspired by VDW's proof Erdos and Turan (1936), made two conjectures:
1. If A is a subset of N of positive upper density then A has arbitrarily long arithmetic sequences. Proven by Szemeredi in 1975 (see here for more.)
2. If &Sigmax ∈ A 1/x diverges then A has arbitrarily long arithmetic sequences. (This conjecture implies the first one.)
A proof of either of these yields a proof of VDW theorem. The hope was that it would lead to a proof with better bounds. Szemeredi's proof of the first conjecture did not yield better bounds; however, Gowers proved the first conjecture a different way that did yield better bounds on the VDW numbers.
The second conjecture is still worthwhile since it may yield even better bounds and because it is interesting in its own right. So, I propose the second conjecture of Erdos-Turan as the 7th Millennium problem. (It might need a snazzier name. The Divergence Conjecture? The k-AP Conjecture? Suggestions are welcome!)
1. Greene and Tao have already shown that the primes have arbitrarily large arithmetic progressions.
2. The work that has gone into Szemeredi's theorem and the Greene-Tao theorem spanned many areas of mathematics. Hence this is not just an isolated problem.
3. The problem has been open since 1936. Hence it is a hard problem.
4. Will more connections to other parts of math be made? Is the problem too hard? A NO answer to both of these would make it not that good a problem.
5. The converse to the conjecture is not true. Note the following set:
A = &cupk&isin N {2^k + i : 0 &le i < k }
The set A has arbitrarily long arithmetic sequences but If &Sigmax ∈ A 1/x converges.
6. Is there a plausible condition that characterizes the sets that have arbitrarily long arithmetic sequences?
7. There is already (I think) a 3000 dollar bounty on the second conjecture. So the Clay Math Institute will have to just give 997,000 dollars.
1. how about finding the probability that nobody will ever find proofs for any one of those conjectures, not now, not ever?
2. I think it's a great problem for
generating public attention,
as it is easy to understand
even for non mathematicians.
As the Tao-Green-Theorem was
widley recognized as a big
breaktrough in mathematics,
i would like to ask, if there are
other sets where a proof of the
special case would be similar
important.
This could be an advantage for
choosing the conjecture, as even
if the main problem is to hard,
many impotant special cases
would benefit from the status of a
millenium problem.
I think this might lower the risk of
working on such a problem and
making no progress whatsoever.
3. Now we need 2 new problems!
And i'll get \$1.200.000!
Just awesome | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9467771649360657, "perplexity": 1192.979490580712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462751.85/warc/CC-MAIN-20150226074102-00177-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://forum.allaboutcircuits.com/threads/problem.2041/#post-13326 | # problem
Joined Dec 29, 2004
83
Hi,
I am lost with this problem.
A bottle containig air is closed with a watertight yet smoothly moving piston. The bottle with its air has a total mass of 0.30kg. At the surface of a body of water whose temperature is a uniform 285 K throughout, the volume of air contained in the bottle is 1.5L.
Recall that the pressure of water increases with depth below the surface, D, as p=po+ρ *g*D, where po is the surface pressure and ρ=1.0 kg/L.
The bottle is submerged.
a- What is the volume of the air in the bottle as a function of depth?
b- Calculate the buoyant force on the bottle as a function of depth?
There are other questions but I need to understand thoses first.
B
#### CoulombMagician
Joined Jan 10, 2006
37
a) Boyle's law P*V/T = constant. T is constant here so at a depth D the pressure is p and the piston compresses the gas inside until the internal pressure is also p. At this point the new volume V is
V = V0*p0/p = (p0/p)*1.5L
B) The bouyant force is the weight of the displaced water less the weight of the bottle
F = 1.0kg/L * 1.5L * (p0/p) -0.3kg
When Archimedes discovered this principle while he was bathing he was reportedly so excited that he ran naked thorough the city streets.
Joined Dec 29, 2004
83
Originally posted by CoulombMagician@Jan 21 2006, 04:12 AM
a) Boyle's law P*V/T = constant. T is constant here so at a depth D the pressure is p and the piston compresses the gas inside until the internal pressure is also p. At this point the new volume V is
V = V0*p0/p = (p0/p)*1.5L
B) The bouyant force is the weight of the displaced water less the weight of the bottle
F = 1.0kg/L * 1.5L * (p0/p) -0.3kg
When Archimedes discovered this principle while he was bathing he was reportedly so excited that he ran naked thorough the city streets.
[post=13305]Quoted post[/post]
Thank you,
I understand your reasonning but how can we find the p0??
#### CoulombMagician
Joined Jan 10, 2006
37
It is one atmosphere, ~15 psi if my memory is working. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8443030118942261, "perplexity": 1616.5253339845876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711121.31/warc/CC-MAIN-20221206225143-20221207015143-00309.warc.gz"} |
http://biophysics.fr/2019/05/08/seplot-a-simple-command-line-and-python-plotting-tool/ | # Seplot : a simple command line (and python) plotting tool
Welcome seplot, a plotting tool to be used from the command line, or from a python script. It is a front-end for my favorite plotting program, PyX.
To install it, just use :
$pip3 install seplot Then you can use it to plot data stored in a text file or in a csv file ; for instance : $ seplot data.txt
will plot the second column of file data.txt as a function of the first column.
That is equivalent to (using Python’s indexing convention starting at 0)
$seplot data.txt x=0 y=1 out=plot.pdf By default, seplot exports to plot.pdf, but any .pdf, .eps, or .svg filename can be specified. But one might want to do more, for example plot a function of the input data, and plot error bars : $ seplot data.txt x='sqrt(A[:,0])/2' dy='sqrt(y)'
You can also plot according to a condition, e.g. y>0 :
$seplot data.txt if='y>0' And to do a bit more, specify a style according to y values : $ seplot data.txt if='y>0' color=red andif='y<=0' color=blue
Also, seplot supports LaTeX so you can label plots like :
$seplot data.txt y='sin(A[:,0])' title='$\sin{x}$' xlabel='$v$in$\mu m / s$' You can also plot an arbitrary function y(x). For instance, to get the image above, the command was : $ seplot data.txt y='abs(y)' title='$\sqrt{x^2}$' style=o color=blue dy=2 function='y(x)=x' ylabel='velocity $v$ (m s$^-1$)' xlabel='time $t$ (s)'
You can also use seplot from Python :
import seplot plot=seplot.Splotter(xlabel='$v$') plot.add_plot(file='data.txt',cond='A[:,0]>0') plot.make_and_save(out='nice_data_plot.svg')
For all the possibilities, see the README.
This entry was posted in Uncategorized. Bookmark the permalink. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7367222905158997, "perplexity": 13887.969560447551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525187.9/warc/CC-MAIN-20190717121559-20190717143559-00403.warc.gz"} |
https://www.talks.cam.ac.uk/talk/index/170972 | # Homotopical Lagrangian Monodromy
• Noah Porcelli, Cambridge
• Wednesday 09 March 2022, 16:00-17:00
• MR13.
Given a Lagrangian submanifold L in a symplectic manifold X, a natural question to ask is: what diffeomorphisms f:L → L can arise as the restriction of a Hamiltonian diffeomorphism of X? Assuming L is relatively exact, we will extend results of Hu-Lalonde-Leclercq about the action of f on the homology of L, and deduce that f must be homotopic to the identity if L is a sphere or K(\pi, 1). The proof will use various moduli spaces of pseudoholomorphic curves as well as input from string topology. While motivated by HLL ’s Floer-theoretic proof, we will not encounter any Floer theory.
This talk is part of the Differential Geometry and Topology Seminar series. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598932862281799, "perplexity": 824.032954211317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00176.warc.gz"} |
https://ompf2.com/viewtopic.php?p=5132&sid=ed15daf50aa33180ef86a7250b2773e5 | ## Generalizing the Golden Ratio Sequence to Higher Dimensions
Practical and theoretical implementation discussion.
Paleos
Posts: 16
Joined: Tue Apr 08, 2014 1:32 am
### Generalizing the Golden Ratio Sequence to Higher Dimensions
I am trying to figure out a way to generalize the golden ratio sequence to higher dimensions in a way that is extensible(does not make any assumption about the amount of samples that are going to taken), as well as scaling well to 6 or more dimensions.
The golden ratio sequence is desirable because it uses the irrationality of golden ratio to avoid the structured noise and moire artifacts of other sequences, and it is the cheapest low discrepancy sequence known, requiring only an addition and check for overflow.
update: I have tried doing 2 steps for the second, 1 for first, the result is that the diagonal resulting from using the same values for both x and y just repeats twice instead of once.
The original paper https://www.graphics.rwth-aachen.de/pub ... /2/jgt.pdf uses a permutation like the faure sequence to generalize sequence to 2 dimensions,
which assumes that i know how many samples are going to be taken, which I do not want to know.
A later paper http://www.researchgate.net/profile/Col ... 000000.pdf generalizes the sequence using a Hilbert Curve,
which I am not so confident about the performance in 32 bit precision, especially 6 or more dimensions(tell me if I am wrong) or the quality of it mapped to a sphere compared to the original.
One idea for the unit square i have is to take inspiration from the Halton sequence
What i would prefer is one generalization for the unit square and another one specifically for the unit sphere.
Last edited by Paleos on Fri Feb 27, 2015 12:55 am, edited 6 times in total.
hobold
Posts: 56
Joined: Wed Dec 21, 2011 6:08 pm
### Re: Generalizing the Golden Ratio Sequence to Higher Dimensi
I cannot back this up with a citation or even only with plausible reasoning. But my first experiment would be bit interleaving, i.e. Z-order, also known as Morton code. In other words, instead of the Hilbert-Curve, I'd try a cheaper mapping from the unit interval to the unit square. I feel that the golden ratio's "maximal irrationality" should ensure that sample points are spread evenly across the "tiles" of any recursively defined self-similar curve. I would be surprised if that wasn't true on all scales of that fractal curve.
You need high precision arithmetic for the golden ratio constant and its wrap-around addition. If you want a mapping to 64 bit wide X and Y coordinates on the unit square, then 128 bits are required on the unit interval.
Paleos
Posts: 16
Joined: Tue Apr 08, 2014 1:32 am
### Re: Generalizing the Golden Ratio Sequence to Higher Dimensi
hobold
Posts: 56
Joined: Wed Dec 21, 2011 6:08 pm
### Re: Generalizing the Golden Ratio Sequence to Higher Dimensi
I gave golden ratio sampling along a Z-curve a try. It does cover the unit square nicely, but the sample spacing is less regular than true 2D low discrepancy sequences. I suspect that is due to the discontinuities of the Z-curve; the example images with the Hilbert curve looked a bit better in that regard. But overall the difference was not large. The Z-curve might well be good enough, depending on what exactly you need it for. It is certainly much better than purely random sample points; neither clusters nor gaps are nearly as bad.
More random remarks:
- Bit interleaving generalizes to an arbitrary number of dimensions, but the required precision for the golden ratio accumulator variable will grow fairly quickly.
- Independent threads of computation can be decorrelated nicely in the case of golden ratio sampling: just initialize each thread's accumulator independently at random. This is particularly neat for GPU computing, as each thread will continue to run identical code.
- Unfortunately, inverse bit interleaving isn't exactly cheap: http://graphics.stanford.edu/~seander/b ... bleObvious
- Some exotic RISC ISA extensions used to provide support for bit interleaving back in the 1990s. Not sure if any current (i.e. 'x86) machines still do.
- GPUs used to lay out textures in Z order. I don't know if they still do that, and if that address computation is available to programmers.
Paleos
Posts: 16
Joined: Tue Apr 08, 2014 1:32 am
### Re: Generalizing the Golden Ratio Sequence to Higher Dimensi
What I have figured out that I could do is take the Golden point set as described in the original paper and and then assume that a total of, for instance 2^64 will be generated, and then choose the index within the period with the van der corput sequence. The problem is however is given the how second coordinate is selected through a permutation, how to query the permuted index directly? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8993270397186279, "perplexity": 965.9759482128779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00279.warc.gz"} |
https://www.arxiv-vanity.com/papers/hep-ex/9908054/ | # Uctp-112-99 Recent Charm Results From Fermilab Experiment E791
A.J. SCHWARTZ
Department of Physics, University of Cincinnati,
Cincinnati, Ohio 45221 (USA)
\abstracts
Fermilab experiment E791 studied weak decays of , , and mesons produced in collisions of 500 GeV/ negative pions with Pt and C targets. The experiment collected over 200 000 fully reconstructed charm decays. Four recent results are discussed here: (a) measurement of the form factor ratios , , and in and decays; (b) measurement of the difference in decay widths between the two mass eigenstates; (c) search for rare and forbidden decays to dilepton final states; and (d) search for a “Pentaquark,” a bound state of .
## 1 Introduction
Fermilab E791 is a charm hadroproduction experiment studying the weak decays of charmed mesons and baryons. The experiment took data from September, 1991 to January, 1992, recording over interactions and reconstructing over 200 000 charm decays. This large sample has led to numerous published results. [1] Four recent results are discussed here: (a) measurement of the form factor ratios , , and in and decays; [2] (b) measurement of the difference in decay widths between the two mass eigenstates; [3] (c) search for rare and forbidden decays; [4] and (d) search for a “Pentaquark,” a bound state of . [5, 6]
The E791 collaboration comprises approximately 70 physicists from 17 institutions. 111 C.B.P.F. (Brazil), Tel Aviv (Israel), CINVESTAV (Mexico), Puebla (Mexico), U.C. Santa Cruz, University of Cincinnati, Fermilab, Illinois Institute of Technology, Kansas State, University of Massachusetts, University of Mississippi, Princeton, University of South Carolina, Stanford, Tufts, University of Wisconsin, and Yale. The experiment produced charmed mesons and baryons using a beam of momentum 500 GeV/ incident on five thin target foils (one platinum, four carbon). The foils were separated along the beamline by approximately 1.5 cm such that most charm decays occurred in air rather than in solid material. Immediately downstream of the target was a silicon strip vertex detector consisting of 17 planes of silicon strips oriented along the and directions, where and point from the vertical direction (). Following the vertex detector was a spectrometer consisting of two large-aperture dipole magnets providing kicks of 212 MeV/ and 320 MeV/, and 37 planes of wire drift chambers and proportional chambers. Downstream of the second magnet were two threshold Čerenkov counters used to discriminate among pions, kaons, and protons. Following the Čerenkov counters was a Pb/liquid-scintillator calorimeter used to measure the energy of electrons and photons, and an Fe/plastic-scintillator calorimeter used to measure the energy of hadrons. Downstream of the calorimeters was approximately 1.0 m of iron to range out any remaining hadrons, and following the iron were two stations of plastic scintillator – one -measuring and one -measuring – to identify muons. The experiment used a loose transverse energy trigger ( GeV) that was almost fully efficient for charm decays.
After events were reconstructed, those with evidence of a decay vertex separated from the interaction vertex were retained for further analysis. The analyses presented here selected events using numerous kinematic and quality criteria; the most important of these are listed in Table 1. An especially effective criterion for enhancing signal over background was “SDZ,” the longitudinal distance between the production and decay vertices divided by the total measurement error in this quantity.
## 2 D+ and D+s Form Factors
Semileptonic decays such as and proceed via spectator diagrams. As such, all hadronic effects are parametrized by four Lorentz-invariant form factors: , , , and . Unfortunately, the limited size of current data samples precludes measurement of the dependence, and we assume this dependence to be given by a nearest pole dominance model: , where GeV/ for the vector form factor and 2.5 GeV/ for the axial-vector form factors . Because appears in every term in the differential decay rate, we factor out and measure the ratios , , and . These ratios are insensitive to the total decay rate and to the weak mixing matrix element .
To select and decays, we identify 3-track vertices in which one track is identified as a kaon and one track as a lepton. We cut on the “transverse” mass , where ( ). In this expression, is the momentum of the neutrino transverse to the direction of the as inferred by momentum balance. The distribution of semileptonic decays forms a Jacobian peak with an endpoint at , and thus we require that lie in the range 1.6–2.0 GeV/ (1.7–2.1 GeV/) for the () sample. We also require that either or . The resulting samples contain very little background, and we do a maximum likelihood fit for the form factors using a likelihood function based on three angles: , the polar angle in the () rest frame between the () and the (); , the polar angle in the rest frame between the and the (); and , the azimuthal angle in the () rest frame between the () and decay planes. The results of the fit to the data for the samples combined are: and . We measure from the sample alone. The results for are: and . These results are compared with theoretical predictions in Table 2; the errors in the measurements are smaller than the spread in theoretical predictions.
## 3 D0-¯¯¯¯¯D0 Mixing and ΔΓ
E791 has published a limit on the - mixing rate using semileptonic decays [19] and hadronic and decays. [20] The flavor of the or when produced is determined by combining the with a low momentum pion to reconstruct a or decay. The semileptonic decays yield a 90% C.L. limit % [where , while the hadronic decays yield a 90% C.L. limit %. The latter limit assumes no violation in the mixing and no violation in a doubly-Cabibbo-suppressed (DCS) amplitude which also contributes to the rate. However, violation is allowed in the interference between the mixing and DCS amplitudes. Since the DCS amplitude is in fact substantially larger than that expected from mixing, the presence of “wrong-sign” decays in the hadronic data – while a signature for mixing – is more easily interpreted as evidence for DCS decays. If we assume no mixing, then the numbers of wrong-sign decays observed in our data, corrected for acceptance, imply ratios of DCS decays to Cabibbo-favored decays of % and %.
Since , where and are the differences between the masses and decay widths of the mass eigenstates, the upper limit for implies an upper limit for the difference in widths: ps. E791 has made a direct measurement of using and decays. Since the former results in a -even eigenstate, only the -even component contributes and the lifetime distribution is proportional to . The final state, however, is a admixture and the lifetime distribution is proportional to . [21] Over the range of lifetimes for which the experiment has sensitivity, and thus: .
Our samples of and are shown in Fig. 1a. We bin these events by reduced proper lifetime, which is the distance traveled by the candidate beyond that required to survive our selection criteria, multiplied by mass and divided by momentum. For each bin of reduced lifetime we fit the mass distribution for the number of signal events. Plotting this number (corrected for acceptance) as a function of reduced lifetime gives the distributions shown in Fig. 1b. Fitting these distributions to exponential functions yields ps, ps, and thus ps. This implies ps at 90% C.L., which is more stringent than the constraint resulting from .
## 4 Rare and Forbidden D Decays
E791 has searched for rare and forbidden dilepton decays of the , , and . The decay modes can be classified as follows:
1. flavor-changing neutral current decays and , in which is a pion or kaon;
2. lepton-flavor violating decays , , and , in which the leptons belong to different generations; and
3. lepton-number violating decays , in which the leptons belong to the same generation but have the same sign charge.
Decay modes belonging to (1) occur within the Standard Model via higher-order diagrams, but the branching fractions are estimated [22] to be only to . This is below the sensitivity of current experiments. Decay modes belonging to (2) and (3) do not conserve lepton number and thus are forbidden within the Standard Model. However, a number of theoretical extensions to the Standard Model predict lepton number violation, [23] and the observation of a signal in these modes would indicate new physics. We have searched for 24 different rare and forbidden decay modes and have found no evidence for them. We therefore present upper limits on their branching fractions. Eight of these modes have no previously reported limits, and fourteen are reported with substantial improvements over previously published results.
For this study we used a “blind” analysis technique. Before our selection criteria were finalized, all events having masses within a window around the mass of the , , or were masked so that the presence or absence of potential signal candidates would not bias our choice of selection criteria. All criteria were then chosen by studying signal events generated by Monte Carlo simulation and background events obtained from the data. The background events were chosen from mass windows above and below the signal window . The criteria were chosen to maximize the ratio , where and are the numbers of signal and background events, respectively. Only after this procedure were events within the signal window unmasked. The signal windows used for decay modes containing electrons are asymmetric around to allow for the bremsstrahlung low-energy tail.
We normalize the sensitivity of our search to topologically similar Cabibbo-favored decays. For the modes we use ; for the modes we use ; and for the we use . The upper limit on the branching fraction for decay mode is:
BX=NXNnormεnormεX⋅Bnorm , (1)
where is the upper limit on the mean number of signal events, is the number of normalization events, and and are overall detection efficiencies. The geometric acceptances and reconstruction efficiencies are found from Monte Carlo simulation, and the particle identification efficiencies are measured from data.
The background consists of random combinations of tracks and vertices, and reflections from more copious hadronic decays. The former is essentially flat in the reconstructed invariant mass, and we estimate this background by scaling the level from mass regions above and below the signal region . The hadronic decay background in which a is misidentified as a lepton is explicitly removed via a or invariant mass cut. The hadronic background in which a is misidentified as a lepton cannot be removed in this manner, as the reflected mass and true mass are too close and such a cut would remove a substantial fraction of signal events. We thus estimate this background by multiplying the number of , , or decays falling within the signal region by the rate for double particle misidentification , or . The misidentification rates were measured from data using decays misidentified as . Because the latter samples have substantial feedthrough background from the former (which is Cabibbo-favored), we do not attempt to establish a limit for decays. Rather, we use the observed signals to measure the lepton misidentification rates under the assumption that all decays observed arise from misidentified . Most of our final event samples are shown in Fig. 2, and all results are tabulated in Table 3.
## 5 Search for the Pentaquark P0¯csuud
E791 has searched for a “Pentaquark” , which is a bound state of five quarks having flavor quantum numbers . This state was originally proposed by Lipkin [25] and Gignoux et al.[26] over ten years ago, but no experimental searches have been undertaken. The is predicted to have a mass below the threshold for strong decay ( GeV/) by 10–150 MeV/. The lifetime is expected to be similar to that of the shortest-lived charm meson, 0.4–0.5 ps. We have searched for decays into and final states.
The analysis proceeds by first selecting four-track vertices in which one track is identified as a proton and two opposite-sign tracks are identified as kaons. We require that either or and remove events in which either or the or momentum projects back to the production vertex. We normalize the sensitivity of the search to and decays; these are topologically similar to and (except for the proton) and several systematic errors cancel. After all selection criteria are applied, we observe no excess of events above background in either decay channel. We thus obtain upper limits for the product of production cross section and branching fraction, relative to that for the . The expression used is (here for ):
σP⋅BP→ϕpπσDs⋅BDs→ϕπ = NP→ϕpπNDs→ϕπ εDs→ϕπεP→ϕpπ , (2)
where is the upper limit on the mean number of decays, and is the number of events observed in the normalization channel. All numbers and the resulting limits are listed in Table 4. When calculating acceptance, we assume the lifetime to be 0.4 ps. The limits are given for two possible values of ; the difference in the limits is due mainly to the difference in the numbers of events observed in the mass spectrum around these mass values. Our upper limits are 2–4% of that for the corresponding decay, which is similar to the theoretical estimate ( 1%).
## 6 Summary
We have presented four recent results from Fermilab experiment E791: a measurement of the form factors governing and decays; a measurement of the difference in decay widths between the two mass eigenstates of ; new limits on two dozen rare and forbidden dilepton decays of , , and ; and a limit on for a “Pentaquark” relative to that for the . Almost all measurements and limits are superior to previously published results. In the case of the and eight of the rare and forbidden dilepton decays, our limits are the first such limits reported.
## References
• [1]
• [2] E. M. Aitala et al., Phys. Lett. B 440, 435 (1998); Phys. Lett. B 450, 294 (1999).
• [3] E. M. Aitala et al., Phys. Rev. Lett. 83, 32 (1999).
• [4] E. M. Aitala et al., FERMILAB Pub-99/183-E, to appear in Phys. Lett. B.
• [5] E. M. Aitala et al., Phys. Rev. Lett. 81, 44 (1998).
• [6] E. M. Aitala et al., Phys. Lett. B 448, 303 (1999).
• [7] D. Scora and N. Isgur, Phys. Rev. D 52, 2783 (1995).
• [8] M. Wirbel, B. Stech, and M. Bauer, Z. Phys. C 29, 637 (1985).
• [9] J. G. Körner and G. A. Schuler, Phys. Lett. B 226, 185 (1989).
• [10] T. Altomari and L. Wolfenstein, Phys. Rev. D 37, 681 (1988);
F. J. Gilman and R. L. Singleton Jr., Phys. Rev. D 41, 142 (1990).
• [11] B. Stech, Z. Phys. C 75, 245 (1997).
• [12] C. W. Bernard et al., Phys. Rev. D 47, 998 (1993); Phys. Rev. D 45, 869 (1992).
• [13] V. Lubicz et al., Phys. Lett. B 274, 415 (1992).
• [14] A. Abada et al., Nucl. Phys. B 416, 675 (1994).
• [15] C. R. Alton et al., Phys. Lett. B 345, 513 (1995).
• [16] K. C. Bowler et al., Phys. Rev. D 51, 4905 (1995).
• [17] P. Ball, V. M. Braun, and H. G. Dosch, Phys. Rev. D 44, 3567 (1991).
• [18] T. Bhattacharya and R. Gupta, Nucl. Phys. B 47, 481 (1996).
• [19] E. M. Aitala et al., Phys. Rev. Lett. 77, 2384 (1996).
• [20] E. M. Aitala et al., Phys. Rev. D 57, 13 (1998).
• [21] Here we have neglected the very small rate of (DCS) decays.
See: A. J. Schwartz, U. of Cincinnati Report UCTP-104-99.
• [22] A. J. Schwartz, Mod. Phys. Lett. A8, 967 (1993);
P. Singer and D.-X. Zhang, Phys. Rev. D 55, 1127 (1997).
• [23] See for example: S. Pakvasa, hep-ph/9705397 (1997); Chin. J. Phys. 32, 1163 (1994).
• [24] C. Caso et al. (Particle Data Group), Eur. Phys. J. C 3, 1 (1998).
• [25] H. J. Lipkin, Phys. Lett. B 195, 484 (1987).
• [26] C. Gignoux et al., Phys. Lett. B 193, 323 (1987). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863521456718445, "perplexity": 1594.8540903816072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057403.84/warc/CC-MAIN-20210922223752-20210923013752-00531.warc.gz"} |
https://math.stackexchange.com/questions/4064539/define-a-relation-r-on-mathbbz-by-arb-iff-3a-5b-is-even-prove-r | Define a relation $R$ on $\mathbb{Z}$ by $aRb$ iff $3a - 5b$ is even. Prove $R$ is an equivalence relation and describe equivalence classes.
I think I got most of the proof, but feel free to critique anything you would like :).
First, notice that $$aRa$$ means $$3a - 5a = a(3 - 5) = 2(-a)$$. This is even, thus, $$R$$ is reflexive.
Second we show that $$aRb \implies bRa$$. Well, $$aRb$$ means that $$3a - 5b$$ is even. Then we have $$3a - 5b = 2k$$ for some integer k. Rearrange like so $$$$\begin{split} 3a - 5b &= 2k \\ -5b &= 2k - 3a \\ -5b &= 2(k - a) + (-a) \end{split}$$$$ Since, we stated that the LHS and RHS were both even in the beginning and still are (i think?), this forces $$a$$ and $$b$$ to be even. Thus, $$3b - 5a$$ is even because we are dealing with two even numbers. Therefore $$bRa$$ which means $$R$$ is symmetric.
Now for transitive. We have show that $$(aRb \wedge bRc) \implies aRc$$. Since $$aRb \wedge bRc$$, we have $$3a - 5b$$ even and $$3b - 5c$$ even. Note when we add, the sum is even. Observe $$(3a - 5b) + (3b - 5c) \\ 3a - 2b - 5c \\ 2(-b) + (3a - 5c)$$ $$2(-b)$$ is even and this forces $$3a - 5c$$ to be even. Therefore $$R$$ is transitive. QED.
For the equivalences classes we know that if $$a$$ is even we have that $$b$$ must be even. Similarly, if $$a$$ is odd then $$b$$ is odd. Therefore, the equivalence classes are the set of odd integers and the set of even integers.
My question is on proving that $$R$$ is symmetric. Is it mathematically sound?
• I see. Question @JMoravitz, how did you go from $3b - 8b + 8b - 5a +8a - 8a$ to $(5a - 3b) - 8a + 8b = 2(k- 4a + 4b)$? That is, it seems that you simply changed the sign of $3b$ and $5a$ to make it from $3b - 5a \to 5a - 3b$. – Owen Mar 16 at 20:22
• We started with $3b-5a$ since this is what we want to show is even. We then "added zero" twice which is always allowed and doesn't change anything, zero here being $-8b+8b$ and $8a-8a$ since something minus itself is zero. We then used part of those expressions for zero to be grouped with the initial expression to make it look like $3a-5b$ since we know that is even and grouped the remaining portions of the expression, and factoring a $2$ out of everything at the end... showing that $3b-5a$ is also $2$ times an integer given that $3a-5b$ was as well. – JMoravitz Mar 16 at 20:24
• and I seem to have a typo in the above, I meant of course $(3a-5b)-8a+8b$, not $(5a-3b)-8a+8b$. The point though still holds... you should be showing that $3b-5a$, the expression you are interested in, is equal to $2$ times an integer. – JMoravitz Mar 16 at 20:26
• @JMoravitz, Ah, yes, that makes sense! That is where I was a little confused! Thank you for the explanation! – Owen Mar 16 at 20:27
• It boils down to $3a - 5b = a -b + 2(a-2b)$ is even iff $a-b$ is even iff $a,b$ are both odd or both even. – lhf Mar 16 at 21:08
Suppose that $$a\sim b$$. That is to say, $$3a-5b$$ is even. That is to say, $$3a-5b$$ is equal to $$2$$ times some integer, we'll call it $$k$$ so $$3a-5b=2k$$
We ask whether or not this implies that $$b\sim a$$, that is if $$3b-5a$$ can be written as $$2$$ times an integer as well (note, not necessarily the same integer as before)
$$\begin{array}{l|l}~~~~3b-5a&\text{original}\\=3b+0-5a+0&\text{add zero twice}\\=3b+(-8b+8b)-5a+(8a-8a)&\text{replace zeroes}\\=(3b-8b)+8b+(-5a+8a)-8a&\text{adjust parentheses}\\=(3a-5b)-8a+8b&\text{simplify and rearrange}\\=2k-8a+8b&\text{use hypothesis}\\=2(k-4a+4b)&\text{factor out two}\end{array}$$
This shows that $$3b-5a$$ can also be written as $$2$$ times an integer and so is also even.
Similarly, for transitivity, we suppose $$3a-5b$$ is even and $$3b-5c$$ is even and we ask about $$3a-5c$$.
$$3a-5c = 3a+0-5c=3a-5b+3b+2b-5c = (3a-5b)+(3b-5c)+2b$$ is the sum of three even numbers and thus even as well.
3a-5b is even implie 3a=5b[2]
• a$$R$$a implie $$3a=5a[2]\Rightarrow -2a=0[2]$$ so R is reflexive
• a$$R$$b$$\Rightarrow 3a=5b[2]\Rightarrow 9a=15b[2] \Rightarrow 5a=3b[2]\Rightarrow$$b$$R$$a
(because $$9a=5a[2]$$ and $$15b=3b[2]$$)
So R is symmetry
• a$$R$$b$$\Rightarrow 3a=5b[2]$$
and
b$$R$$c$$\Rightarrow 3b=5c[2]$$ that implies $$9b=15c[2]\Rightarrow 5b=5c[2]$$
So $$3a=5c[2]\Rightarrow$$ a$$R$$c
So a$$R$$b and b$$R$$c $$\Rightarrow$$a$$R$$c
So R is Transitive
Finally After (reflexivity, symmetry, transitivity) we Kan see R is equivalence relation | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 60, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9888927340507507, "perplexity": 196.49973911447174}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613453.9/warc/CC-MAIN-20210614201339-20210614231339-00254.warc.gz"} |
https://brilliant.org/problems/combinatronics-jee/ | # Combinatronics JEE
All four digit numbers of the form $$\overline{x_1 x_2 x_3 x_4}$$ form by using the digits 1 to 9 such that $$x_1 \leq x_2 \leq x_3 \leq x_4$$ and are list in ascending order (smallest to biggest).
If the number with rank 460 is of the form $$\overline{abcd}$$, find $$\frac {a+b+c+d}{4} - 2$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5461835861206055, "perplexity": 514.3873369129052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891706.88/warc/CC-MAIN-20180123032443-20180123052443-00584.warc.gz"} |
https://unapologetic.wordpress.com/2007/12/18/ | # The Unapologetic Mathematician
## Movie news
I just heard this:
• MGM and New Line will co-finance and co-distribute two films, The Hobbit and a sequel to The Hobbit. New Line will distribute in North America and MGM will distribute internationally.
• Peter Jackson and Fran Walsh will serve as Executive Producers of two films based on The Hobbit. New Line will manage the production of the films, which will be shot simultaneously.
• Peter Jackson and New Line have settled all litigation relating to the Lord of the Rings Trilogy.
So great. We’re going to finally have a Hobbit movie. And a seq…what?
Look, I love Tolkien as much as the next guy, and in an intellectual (opp. fantasy fanboy) way. I grew up with it, I’ve dabbled in Quenya and Sindarin, I’ve read the archives of Vinyar Tengwar, and I wasn’t horribly disappointed by the LotR trilogy. But honestly, people, there’s just not that much there in the Hobbit. It won’t really support two movies on its own. So either they’re reeeeeeeeeally stretching the script to squeeze the money out; they’re bringing in a lot of stuff from Unfinished Tales, or maybe even HoME (unlikely given how much of LotR proper was cut); or they’re creating new Hobbit material out of whole cloth.
A friend of mine says that he trusts Jackson’s vision. I trusted Lucas’ vision, and see where that got us. This may be good, but buckle up just in case.
December 18, 2007 Posted by | Uncategorized | 5 Comments
## Real-Valued Functions of a Single Real Variable
At long last we can really start getting into one of the most basic kinds of functions: those which take a real number in and spit a real number out. Quite a lot of mathematics is based on a good understanding of how to take these functions apart and put them together in different ways — to analyze them. And so we have the topic of “real analysis”. At our disposal we have a toolbox with various methods for calculating and dealing with these sorts of functions, which we call “calculus”. Really, all calculus is is a collection of techniques for understanding what makes these functions tick.
Sitting behind everything else, we have the real number system $\mathbb{R}$ — the unique ordered topological field which is big enough to contain limits of all Cauchy sequences (so it’s a complete uniform space) and least upper bounds for all nonempty subsets which have any upper bounds at all (so the order is Dedekind complete), and yet small enough to exclude infinitesimals and infinites (so it’s Archimedean).
Because the properties that make the real numbers do their thing are all wrapped up in the topology, it’s no surprise that we’re really interested in continuous functions, and we have quite a lot of them. At the most basic, the constant function $f(x)=1$ for all real numbers $x$ is continuous, as is the identity function $f(x)=x$.
We also have ways of combining continuous functions, many of which are essentially inherited from the field structure on $\mathbb{R}$. We can add and multiply functions just by adding and multiplying their values, and we can multiply a function by a real number too.
• $\left[f+g\right](x)=f(x)+g(x)$
• $\left[fg\right](x)=f(x)g(x)$
• $\left[cf\right](x)=cf(x)$
Since all the nice properties of these algebraic constructions carry over from $\mathbb{R}$, this makes the collection of continuous functions into an algebra over the field of real numbers. We get additive inverses as usual in a module by multiplying by $-1$, so we have an $\mathbb{R}$-module using addition and scalar multiplication. We have a bilinear multiplication because of the distributive law holding in the ring $\mathbb{R}$ where our functions take their values. We also have a unit for multiplication — the constant function $1$ — and a commutative law for multiplication. I’ll leave you to verify that all these operations give back continuous functions when we start with continuous functions.
What we don’t have is division. Multiplicative inverses are tough because we can’t invert any function which takes the value zero anywhere. Even the identity function $f(x)=\frac{1}{x}$ is very much not continuous at $x=0$. In fact, it’s not even defined there! So how can we deal with this?
Well, the answer is sitting right there. The function $\frac{1}{x}$ is not continuous at that point. We have two definitions (by neighborhood systems and by nets) of what it means for a function between two topological spaces to be continuous at one point or another, and we said a function is continuous if it’s continuous at every point in its domain. So we can throw out some points and restrict our attention to a subspace where the function is continuous. Here, for instance, we can define a function $f:\mathbb{R}\setminus\{0\}\rightarrow\mathbb{R}$ by $f(x)=\frac{1}{x}$, and this function is continuous at each point in its domain.
So what we should really be considering is this: for each subspace $X\subseteq\mathbb{R}$ we have a collection $C^0(X)$ of those real-valued functions which are continuous on $X$. Each of these is a commutative $\mathbb{R}$-algebra, just like we saw for the collection of functions continuous on all of $\mathbb{R}$.
But we may come up with two functions over different domains that we want to work with. How do we deal with them together? Well, let’s say we have a function $f\in C^0(X)$ and another one $g\in C^0(Y)$, where $Y\subseteq X$. We may not be able to work with $g$ at the points in $X$ that aren’t in $Y$, but we can certainly work with $f$ at just those points of $X$ that happen to be in $Y$. That is, we can restrict the function $f$ to the function $f|_Y$. It’s the exact same function, except it’s only defined on $Y$ instead of all of $X$. This gives us a homomorphism of $\mathbb{R}$-algebras $\underline{\hphantom{X}}|_Y:C^0(X)\rightarrow C^0(Y)$. (If you’ve been reading along for a while, how would a category theorist say this?)
As an example, we have the identity function $f(x)=x$ in $C^0(\mathbb{R})$ and the reciprocal function $g(x)=\frac{1}{x}$ in $C^0(\mathbb{R}\setminus\{0\})$. We can restrict the identity function by forgetting that it has a value at ${0}$ to get another function $f|_{\mathbb{R}\setminus\{0\}}$, which we will also denote by $x$. Then we can multiply $f|_{\mathbb{R}\setminus\{0\}}g$ to get the function $1\in C^0(\mathbb{R}\setminus\{0\})$. Notice that the resulting function we get is not the constant function on $\mathbb{R}$ because it’s not defined at ${0}$.
Now as far as language goes, we usually drop all mention of domains and assume by default that the domain is “wherever the function makes sense”. That is, whenever we see $\frac{1}{x}$ we automatically restrict to nonzero real numbers, and whenever we combine two functions on different domains we automatically restrict to the intersection of their domains, all without explicit comment.
We do have to be a bit careful here, though, because when we see $\frac{x}{x}$, we also restrict to nonzero real numbers. This is not the constant function $1:\mathbb{R}\rightarrow\mathbb{R}$ because as it stands it’s not defined for $x=0$. Clearly, this is a little nutty and pedantic, so tomorrow we’ll come back and see how to cope with it.
December 18, 2007 Posted by | Analysis, Calculus | 2 Comments
## The Orbit Method
Over at Not Even Wrong, there’s a discussion of David Vogan’s talks at Columbia about the “orbit method” or “orbit philosophy”. This is the view that there is — or at least there should be — a correspondence between unitary irreps of a Lie group $G$ and the orbits of a certain action of $G$. As Woit puts it
This is described as a “method” or “philosophy” rather than a theorem because it doesn’t always work, and remains poorly understood in some cases, while at the same time having shown itself to be a powerful source of inspiration in representation theory.
What he doesn’t say in so many words (but which I’m just rude enough to) is that the same statement applies to a lot of theoretical physics. Path integrals are, as they currently stand, prima facie nonsense. In some cases we’ve figured out how to make sense of them, and to give real meaning to the conceptual framework of what should happen. And this isn’t a bad thing. Path integrals have proven to be a powerful source of inspiration, and a lot of actual, solid mathematics and physics has come out of trying to determine what the hell they’re supposed to mean.
Where this becomes a problem is when people take the conceptual framework as literal truth rather than as the inspirational jumping-off point it properly is.
December 18, 2007 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 55, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8169996738433838, "perplexity": 297.5160438087805}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121618.46/warc/CC-MAIN-20160428161521-00179-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://mathhelpforum.com/statistics/143277-how-did-they-get.html | # Math Help - How did they get this?
1. ## How did they get this?
My textbook makes a calculation and just says "with some simplications", and then gets the answer.
We have L0=(1/sqrt(2pi))^n*(e^(-1/2*sum(xi-m0)^2)...i.e. this the max likelihood function for a normal with variance 1 and mean m0. Sum is from 1 to n. We have L1 is the same except replace m0 with m1.
Then they compute L0/L1 and get
e^((n/2)*(m1^2-m0^2)+(m0-m1)*sum(xi))
I do not see where this step comes from. Any ideas?
Thank you.
2. Originally Posted by zhupolongjoe
My textbook makes a calculation and just says "with some simplications", and then gets the answer.
We have L0=(1/sqrt(2pi))^n*(e^(-1/2*sum(xi-m0)^2)...i.e. this the max likelihood function for a normal with variance 1 and mean m0. Sum is from 1 to n. We have L1 is the same except replace m0 with m1.
Then they compute L0/L1 and get
e^((n/2)*(m1^2-m0^2)+(m0-m1)*sum(xi))
I do not see where this step comes from. Any ideas?
Thank you.
This is tedious to type out, but I literally have it sitting in my notes...
What part are you stuck on?
3. Just how they get that form of Lo/L1...I understand the analysis that follows using the Neyman Pearson lemma, but I just don't understand this, and maybe it's an algebraic thing I'm missing, but I don't see how they are getting the e^((n/2)*(m1^2-m0^2)+(m0-m1)*sum(xi)) from the two likelihood functions.....Maybe there is some property of sums I am missing, but I am not sure.
4. $\frac{f_0}{f_1} = \frac{\exp{[-\frac{1}{2\sigma^2}\sum^n (X_i - \mu_0)^2]}}{\exp{[-\frac{1}{2\sigma^2}\sum^n (X_i - \mu_1)^2]}}$ Apply the property $\frac{e^a}{e^b} = e^{a-b}$ and cancel.
$= \exp\{-\frac{1}{2\sigma^2}\sum^n[(X_i - \mu_0)^2 - (X_i - \mu_1)^2)]\}$ Now expand this.
$= \exp{[2\bar X(\mu_0 - \mu_1) + \mu_0^2 - \mu_1^2]}$
Let me know if you have any specific questions. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9259684085845947, "perplexity": 712.6947006987433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701168076.20/warc/CC-MAIN-20160205193928-00290-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=35B53 | # American Mathematical Society
My Account · My Cart · Customer Services · FAQ
Publications Meetings The Profession Membership Programs Math Samplings Policy and Advocacy In the News About the AMS
You are here: Home > Publications
AMS eContent Search Results
Matches for: msc=(35B53) AND publication=(all) Sort order: Date Format: Standard display
Results: 1 to 9 of 9 found Go to page: 1
[1] Sanjiban Santra. On the Lazer-McKenna conjecture and its applications. Proc. Amer. Math. Soc. Abstract, references, and article information View Article: PDF [2] Alessia E. Kogoj. A Liouville-type Theorem on half-spaces for sub-Laplacians. Proc. Amer. Math. Soc. 143 (2015) 239-248. Abstract, references, and article information View Article: PDF [3] Amir Moradifam. Sharp counterexamples related to the De Giorgi conjecture in dimensions $4\leq n \leq 8$. Proc. Amer. Math. Soc. 142 (2014) 199-203. Abstract, references, and article information View Article: PDF [4] Luciano Mari and Daniele Valtorta. On the equivalence of stochastic completeness and Liouville and Khas'minskii conditions in linear and nonlinear settings. Trans. Amer. Math. Soc. 365 (2013) 4699-4727. Abstract, references, and article information View Article: PDF [5] Roberta Filippucci. A Liouville result on a half space. Contemporary Mathematics 595 (2013) 237-252. Book volume table of contents View Article: PDF [6] Lorenzo D’Ambrosio and Enzo Mitidieri. An application of Kato's inequality to quasilinear elliptic problems. Contemporary Mathematics 595 (2013) 205-218. Book volume table of contents View Article: PDF [7] A. I. Nazarov and N. N. Ural′tseva. The Harnack inequality and related properties for solutions of elliptic and parabolic equations with divergence-free lower-order coefficients. St. Petersburg Math. J. 23 (2012) 93-115. MR 2760150. Abstract, references, and article information View Article: PDF [8] Alberto Farina, Yannick Sire and Enrico Valdinoci. Stable solutions of elliptic equations on Riemannian manifolds with Euclidean coverings. Proc. Amer. Math. Soc. 140 (2012) 927-930. MR 2869076. Abstract, references, and article information View Article: PDF [9] Xiaobao Zhu. Hamilton’s gradient estimates and Liouville theorems for fast diffusion equations on noncompact Riemannian manifolds. Proc. Amer. Math. Soc. 139 (2011) 1637-1644. MR 2763753. Abstract, references, and article information View Article: PDF
Results: 1 to 9 of 9 found Go to page: 1 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8577122092247009, "perplexity": 4656.909261181077}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981969.11/warc/CC-MAIN-20150728002301-00057-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://scales.r-lib.org/reference/unit_format.html | This function is kept for backward compatiblity; you should either use label_number() or label_number_si() instead.
unit_format(
accuracy = NULL,
scale = 1,
prefix = "",
unit = "m",
sep = " ",
suffix = paste0(sep, unit),
big.mark = " ",
decimal.mark = ".",
trim = TRUE,
...
)
## Arguments
accuracy A number to round to. Use (e.g.) 0.01 to show 2 decimal places of precision. If NULL, the default, uses a heuristic that should ensure breaks have the minimum number of digits needed to show the difference between adjacent values. Applied to rescaled data. A scaling factor: x will be multiplied by scale before formatting. This is useful if the underlying data is very small or very large. Symbols to display before and after value. The units to append. The separator between the number and the unit label. Symbols to display before and after value. Character used between every 3 digits to separate thousands. The character to be used to indicate the numeric decimal point. Logical, if FALSE, values are right-justified to a common width (see base::format()). Other arguments passed on to base::format().
## Examples
# Label with units
demo_continuous(c(0, 1), labels = unit_format(unit = "m"))
#> scale_x_continuous(labels = unit_format(unit = "m"))# Labels in kg, but original data in g
km <- unit_format(unit = "km", scale = 1e-3, digits = 2)
demo_continuous(c(0, 2500), labels = km)
#> scale_x_continuous(labels = km) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2964458465576172, "perplexity": 4171.510273794462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00303.warc.gz"} |
https://grocid.net/2017/02/14/bsides17-delphi/ | # BSides’17 – Delphi
In this challenge, we are given a server which accepts encrypted commands and returns the resulting output. First we define our oracle `go(cmd)`.
```import urllib2
def go(cmd):
```
This simply return the status from the server. It is common for this kind of CTF challenges to use some block-cipher variant such as some of the AES modes.
The first guess I had was that AES-CBC was being used. That would mean that if we try to flip some bit in a block somewhere in the middle of the ciphertext, the first decrypted part would remain intact, whilst the trailing blocks would get scrambled.
Assume that we have four ciphertext blocks $C_0, C_1, C_2, C_3$ and the decryption is $\textsf{dec}_k : C_0\| C_1\| C_2\| C_3 \mapsto P_0\|P_1\|P_2\|P_3$. Now, we flip a bit in $C_1$ so that we get $C_1'$, then we have $\textsf{dec}_k : C_0\| C'_1\| C_2\| C_3 \mapsto P_0\|P'_1\|P'_2\|P'_3$. (This is not true, thanks to hellman for pointing that out in the comments).
Turns out this is not the case. In fact, the error did only propagate one block and not further, i.e.,$\textsf{dec}_k : C_0\| C'_1\| C_2\| C_3 \mapsto P_0\|P'_1\|P'_2\|P_3$. Having a look at the Wikipedia page, I found that this is how AES-CFB/(CBC) would behave (image from Wikipedia):
Since $\textsf{dec}_k(C_0) \oplus C_1 = P_1$, we can inject some data into the decrypted ciphertext! Assume that we want $P'_1 = Q$. Then, we can set $C'_1 = C_1 \oplus P_1 \oplus Q$, since then $\textsf{dec}_k(C_0) \oplus C'_1 = P_1\oplus P_1 \oplus Q = Q$. Embodying the above in Python, we might get something like
```def xor(a, b):
return ''.join(chr(ord(x) ^ ord(y))
for x, y in zip(a, b))
response = ' to test multiple-block patterns' # the block we attack
split_blocks = [cmd[i * 32: i * 32 + 32]
for i in range(len(cmd) / 32)]
block = 3 # this is somewhat arbitrary
# get command and pad it with blank space
append_cmd = ' some command'
append_cmd = append_cmd + '\x20' * (16 - len(append_cmd))
new_block = xor(split_blocks[block].decode("hex"),
response).encode('hex')
new_block = xor(new_block.decode("hex"),
append_cmd).encode('hex')
split_blocks[block] = new_block
cmd = ''.join(split_blocks)
#print cmd
print go(cmd)
```
We can verify that this works. Running the server, we get
`This is a longer string th\x8a\r\xe4\xd9.\n\xde\x86\xb6\xbd*\xde\xf8X\x15I some command e-block patterns\n`
OK, so the server accepts it. Nice. Can we exploit this? Obviously — yes. We can guess that the server does something like
```echo "{input string}";
```
First, we break off the echo statement. Then we try to `cat` the flag and comment out the rest. We can do this in one block! Here is how:
```append_cmd = '\"; cat f* #'
```
Then, the server code becomes
```echo "{partial + garbage}"; cat f* #{more string}";
```
The server gives the following response:
```This is a longer string th:\xd7\xb1\xe8\xc2Q\xd7\xe8*\x02\xe8\xe8\x9c\xa6\xf71\n
FLAG:a1cf81c5e0872a7e0a4aec2e8e9f74c3\n```
Indeed, this is the flag. So, we are done!
## 2 thoughts on “BSides’17 – Delphi”
1. Nice writeup!
In CBC decryption also, the modified block is “scrambled”, the next block is simply bitflipped and all the following blocks are untouched too. Since the attack is basically the same (bit flips), are you sure it was CFB actually? 🙂
1. You are of course right, an error does not propagate in decryption of CBC either — only in encryption. Fixed! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884464502334595, "perplexity": 1197.7099377094128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00464.warc.gz"} |
http://careyoukeep.com/ginseng-nootropic-top-10-brain-pills.html | The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that that there is a lack of human research, the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote:
The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don’t have ADHD, you may experience more focus. But what many people don’t know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events of drug dependence, overdose and suicide attempts [80]. Taking a drug for another reason than originally intended is stupid, irresponsible and very dangerous.
“Cavin’s personal experience and humble writing to help educate, not only people who have suffered brain injuries, but anyone interested in the best nutritional advice for optimum brain function is a great introduction to proper nutrition filled with many recommendations of how you can make a changes to your diet immediately. This book provides amazing personal insight related to Cavin’s recovery accompanied with well cited peer reviewed sources throughout the entire book detailing the most recent findings around functional neurology!
Blinding stymied me for a few months since the nasty taste was unmistakable and I couldn’t think of any gums with a similar flavor to serve as placebo. (The nasty taste does not seem to be due to the nicotine despite what one might expect; Vaniver plausibly suggested the bad taste might be intended to prevent over-consumption, but nothing in the Habitrol ingredient list seemed to be noted for its bad taste, and a number of ingredients were sweetening sugars of various sorts. So I couldn’t simply flavor some gum.)
These are some of the best Nootropics for focus and other benefits that they bring with them. They might intrigue you in trying out any of these Nootropics to boost your brain’s power. However, you need to do your research before choosing the right Nootropic. One way of doing so is by consulting a doctor to know the best Nootropic for you. Another way to go about selecting a Nootropic supplement is choosing the one with clinically tested natural Nootropic substances. There are many sources where you can find the right kind of Nootropics for your needs, and one of them is AlternaScript.
Actually, researchers are studying substances that may improve mental abilities. These substances are called "cognitive enhancers" or "smart drugs" or "nootropics." ("Nootropic" comes from Greek - "noos" = mind and "tropos" = changed, toward, turn). The supposed effects of cognitive enhancement can be several things. For example, it could mean improvement of memory, learning, attention, concentration, problem solving, reasoning, social skills, decision making and planning.
Related to the famous -racetams but reportedly better (and much less bulky), Noopept is one of the many obscure Russian nootropics. (Further reading: Google Scholar, Examine.com, Reddit, Longecity, Bluelight.ru.) Its advantages seem to be that it’s far more compact than piracetam and doesn’t taste awful so it’s easier to store and consume; doesn’t have the cloud hanging over it that piracetam does due to the FDA letters, so it’s easy to purchase through normal channels; is cheap on a per-dose basis; and it has fans claiming it is better than piracetam.
Modafinil is a prescription smart drug most commonly given to narcolepsy patients, as it promotes wakefulness. In addition, users indicate that this smart pill helps them concentrate and boosts their motivation. Owing to Modafinil, the feeling of fatigue is reduced, and people report that their everyday functions improve because they can manage their time and resources better, as a result reaching their goals easier.
I never watch SNL. I just happen to know about every skit, every line of dialogue because I'm a stable genius.Hey Donnie, perhaps you are unaware that:1) The only Republican who is continually obsessed with how he or she is portrayed on SNL is YOU.2) SNL has always been laden with political satire.3) There is something called the First Amendment that would undermine your quest for retribution.
Still, the scientific backing and ingredient sourcing of nootropics on the market varies widely, and even those based in some research won't necessarily immediately, always or ever translate to better grades or an ability to finally crank out that novel. Nor are supplements of any kind risk-free, says Jocelyn Kerl, a pharmacist in Madison, Wisconsin.
On the other end of the spectrum is the nootropic stack, a practice where individuals create a cocktail or mixture of different smart drugs for daily intake. The mixture and its variety actually depend on the goals of the user. Many users have said that nootropic stacking is more effective for delivering improved cognitive function in comparison to single nootropics.
There are a number of treatments for the last. I already use melatonin. I sort of have light therapy from a full-spectrum fluorescent desk lamp. But I get very little sunlight; the surprising thing would be if I didn’t have a vitamin D deficiency. And vitamin D deficiencies have been linked with all sorts of interesting things like near-sightedness, with time outdoors inversely correlating with myopia and not reading or near-work time. (It has been claimed that caffeine interferes with vitamin D absorption and so people like me especially need to take vitamin D, on top of the deficits caused by our vampiric habits, but I don’t think this is true34.) Unfortunately, there’s not very good evidence that vitamin D supplementation helps with mood/SAD/depression: there’s ~6 small RCTs with some findings of benefits, with their respective meta-analysis turning in a positive but currently non-statistically-significant result. Better confirmed is reducing all-cause mortality in elderly people (see, in order of increasing comprehensiveness: Evidence Syntheses 2013, Chung et al 2009, Autier & Gandini 2007, Bolland et al 2014).
“Cavin’s enthusiasm and drive to help those who need it is unparalleled! He delivers the information in an easy to read manner, no PhD required from the reader. 🙂 Having lived through such trauma himself he has real empathy for other survivors and it shows in the writing. This is a great read for anyone who wants to increase the health of their brain, injury or otherwise! Read it!!!”
…Phenethylamine is intrinsically a stimulant, although it doesn’t last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.
20 March, 2x 13mg; first time, took around 11:30AM, half-life 3 hours, so halved by 2:30PM. Initial reaction: within 20 minutes, started to feel light-headed, experienced a bit of physical clumsiness while baking bread (dropped things or poured too much thrice); that began to pass in an hour, leaving what felt like a cheerier mood and less anxiety. Seems like it mostly wore off by 6PM. Redosed at 8PM TODO: maybe take a look at the HRV data? looks interestingly like HRV increased thanks to the tianeptine 21 March, 2x17mg; seemed to buffer effects of FBI visit 22 March, 2x 23 March, 2x 24 March, 2x 25 March, 2x 26 March, 2x 27 March, 2x 28 March, 2x 7 April, 2x 8 April, 2x 9 April, 2x 10 April, 2x 11 April, 2x 12 April, 2x 23 April, 2x 24 April, 2x 25 April, 2x 26 April, 2x 27 April, 2x 28 April, 2x 29 April, 2x 7 May, 2x 8 May, 2x 9 May, 2x 10 May, 2x 3 June, 2x 4 June, 2x 5 June, 2x 30 June, 2x 30 July, 1x 31 July, 1x 1 August, 2x 2 August, 2x 3 August, 2x 5 August, 2x 6 August, 2x 8 August, 2x 10 August, 2x 12 August: 2x 14 August: 2x 15 August: 2x 16 August: 1x 18 August: 2x 19 August: 2x 21 August: 2x 23 August: 1x 24 August: 1x 25 August: 1x 26 August: 2x 27 August: 1x 29 August: 2x 30 August: 1x 02 September: 1x 04 September: 1x 07 September: 2x 20 September: 1x 21 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 28 September: 2x 29 September: 2x 5 October: 2x 6 October: 1x 19 October: 1x 20 October: 1x 27 October: 1x 4 November: 1x 5 November: 1x 8 November: 1x 9 November: 2x 10 November: 1x 11 November: 1x 12 November: 1x 25 November: 1x 26 November: 1x 27 November: 1x 4 December: 2x 27 December: 1x 28 December: 1x 2017 7 January: 1x 8 January: 2x 10 January: 1x 16 January: 1x 17 January: 1x 20 January: 1x 24 January: 1x 25 January: 2x 27 January: 2x 28 January: 2x 1 February: 2x 3 February: 2x 8 February: 1x 16 February: 2x 17 February: 2x 18 February: 1x 22 February: 1x 27 February: 2x 14 March: 1x 15 March: 1x 16 March: 2x 17 March: 2x 18 March: 2x 19 March: 2x 20 March: 2x 21 March: 2x 22 March: 2x 23 March: 1x 24 March: 2x 25 March: 2x 26 March: 2x 27 March: 2x 28 March: 2x 29 March: 2x 30 March: 2x 31 March: 2x 01 April: 2x 02 April: 1x 03 April: 2x 04 April: 2x 05 April: 2x 06 April: 2x 07 April: 2x 08 April: 2x 09 April: 2x 10 April: 2x 11 April: 2x 20 April: 1x 21 April: 1x 22 April: 1x 23 April: 1x 24 April: 1x 25 April: 1x 26 April: 2x 27 April: 2x 28 April: 1x 30 April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x ran out, last day: 25 December 2017 –>
###### Perceptual–motor congruency was the basis of a study by Fitzpatrick et al. (1988) in which subjects had to press buttons to indicate the location of a target stimulus in a display. In the simple condition, the left-to-right positions of the buttons are used to indicate the left-to-right positions of the stimuli, a natural mapping that requires little cognitive control. In the rotation condition, the mapping between buttons and stimulus positions is shifted to the right by one and wrapped around, such that the left-most button is used to indicate the right-most position. Cognitive control is needed to resist responding with the other, more natural mapping. MPH was found to speed responses in this task, and the speeding was disproportionate for the rotation condition, consistent with enhancement of cognitive control.
After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale so bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it’s always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013. In March 2014, I spent$19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts’s claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn’t believe Roberts’s claims for a second - my only reason to do it would be to prove the claim wrong but he’d just ignore me and no one else cares.) I didn’t try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September or 178 days, so ~5.6g & \$0.11 per day.
28,61,36,25,61,57,39,56,23,37,24,50,54,32,50,33,16,42,41,40,34,33,31,65,23,36,29,51,46,31,45,52,30, 50,29,36,57,60,34,48,32,41,48,34,51,40,53,73,56,53,53,57,46,50,35,50,60,62,30,60,48,46,52,60,60,48, 47,34,50,51,45,54,70,48,61,43,53,60,44,57,50,50,52,37,55,40,53,48,50,52,44,50,50,38,43,66,40,24,67, 60,71,54,51,60,41,58,20,28,42,53,59,42,31,60,42,58,36,48,53,46,25,53,57,60,35,46,32,26,68,45,20,51, 56,48,25,62,50,54,47,42,55,39,60,44,32,50,34,60,47,70,68,38,47,48,70,51,42,41,35,36,39,23,50,46,44,56,50,39
Recent developments include biosensor-equipped smart pills that sense the appropriate environment and location to release pharmacological agents. Medimetrics (Eindhoven, Netherlands) has developed a pill called IntelliCap with drug reservoir, pH and temperature sensors that release drugs to a defined region of the gastrointestinal tract. This device is CE marked and is in early stages of clinical trials for FDA approval. Recently, Google announced its intent to invest and innovate in this space.
Of course, there are drugs out there with more transformative powers. “I think it’s very clear that some do work,” says Andrew Huberman, a neuroscientist based at Stanford University. In fact, there’s one category of smart drugs which has received more attention from scientists and biohackers – those looking to alter their own biology and abilities – than any other. These are the stimulants.
One curious thing that leaps out looking at the graphs is that the estimated underlying standard deviations differ: the nicotine days have a strikingly large standard deviation, indicating greater variability in scores - both higher and lower, since the means weren’t very different. The difference in standard deviations is just 6.6% below 0, so the difference almost reaches our usual frequentist levels of confidence too, which we can verify by testing:
70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don’t even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of:
The resurgent popularity of nootropics—an umbrella term for supplements that purport to boost creativity, memory, and cognitive ability—has more than a little to do with the recent Silicon Valley-induced obsession with disrupting literally everything, up to and including our own brains. But most of the appeal of smart drugs lies in the simplicity of their age-old premise: Take the right pill and you can become a better, smarter, as-yet-unrealized version of yourself—a person that you know exists, if only the less capable you could get out of your own way.
Nootropics – sometimes called smart drugs – are compounds that enhance brain function. They’re becoming a popular way to give your mind an extra boost. According to one Telegraph report, up to 25% of students at leading UK universities have taken the prescription smart drug modafinil [1], and California tech startup employees are trying everything from Adderall to LSD to push their brains into a higher gear [2]. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7636612057685852, "perplexity": 12162.14475550827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525524.12/warc/CC-MAIN-20190718063305-20190718085305-00102.warc.gz"} |
https://www.quantumdiaries.org/2009/11/30/visiting-damtp-cambridge/ | ## View Blog | Read Bio
### Visiting DAMTP, Cambridge.
It’s been long since I have visited Cambridge before, probably it was already 4 years ago. I lived in Cambridge for half a year, as a visiting researcher then at DAMTP (Department of Applied Math and Theoretical Physics), universtiy of Cambridge. I have good memories on my days at Cambridge.
Last time when I was here at Cambridge, I couldn’t imagine that I would be working on nuclear physics. The talk I gave here 4 years ago was on ADHM construction of instantons and its D-brane realization, and the motivation was purely mathematical physics. But this time, again in my talk I told about ADHM construction, but now applied to nucleon nucleon interaction realized and computed in string theory. It is interesting that, although the mathematical tools are quite similar to each other (and almost the same), the motivations were totally different.
Discussions with my friends at DAMTP were joyful, and are always insightful. And, one more thing — winter in Cambridge, it is beautiful. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8924689292907715, "perplexity": 1342.459316041499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00463.warc.gz"} |
https://brilliant.org/practice/si-fluid-measure/ | ×
Classical Mechanics
# SI Fluid Measure
If a cylindrical container has a base area of $$12 \text{ cm}^2$$ and a height of $$6 \text{ cm}$$, how many mL of water can it hold?
Liquefied natural gas is filled in a horizontal cylindrical tank, as shown above. The diameter of the circular base and the length of the cylindrical tank are $$d=6\text{ m}$$ and $$l=14\text{ m},$$ respectively, and the height of the liquefied natural gas is $$h=3\text{ m}.$$ Find the volume of the liquefied natural gas in the tank.
$$18 0\text{ mL}$$ of water is poured into a cylindrical cup with base area $$30\text{ cm}^2$$ and height $$15 \text{ cm}.$$ What is the height of the water inside the cup?
In the figure above, a water reservoir has a rectangular base floor with dimensions $$3\text{ km}\times 2\text{ km}$$ and a depth of $$8 \text{ m}.$$ If this reservoir is half-full, what is the volume of the water in liters?
What is $$8\text{ L}$$ in $$\text{m}^3?$$
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.942249596118927, "perplexity": 281.84112236712303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690376.61/warc/CC-MAIN-20170925074036-20170925094036-00265.warc.gz"} |
http://culebra.ch/2020/04/04/no-register-best-rated-dating-online-service-for-men-in-florida/ | # No Register Best Rated Dating Online Service For Men In Florida
Primary markets markets in which corpo- rations raise capital by issuing new securities. Perhaps more interesting, though, are the object and background removal tools. A rail pass allows you to ride many trains in italy without a reservation. These fan sites for the original aoc and it’s communities sponsor some large events, such as $20,000 nations cup or the$50,000 war is coming. He knows that he tells the truth, and he testifies so that you also may believe.“ { [[amalberga has learned that the bandits occupying the quiveron manse have obtained a quantity of nashachite, which they are trading for gil and arms. However, when he became a member no register best rated dating online service for men in florida of a breakdance troupe, van winkle’s stage name was „vanilla ice“ combining his nickname „vanilla“ with one of his breakdance moves; In short, you are promised for definite success with student-friendly preparatory solutions. The qmark hbb500 baseboard heater is wonderful for households with kids and pets. Finally take the monogrammed handkerchief and the sponge that is under it. This is the perfect webpage for anybody who hopes to find out about this topic. My clients are demanding, modern, sophisticated and understand that beachwear is not just a fad, and it makes a big difference. The residual heating value in the cooled rog is recovered in the burner. Neighbors read that dieta charlesa clarka kate walsh improves appearance. *also this will be my first vacation with my new step mom, so i want it to be special! P. graber, a. e. proudfoot, f. talabot, a. bernard, m. mckinnon, m. banks, d. fattah, r. solari, m. c. peitsch, and t. n. wells, j. biol. Sentiment analysis has been recently used for determining sentiment polarity of why-questions so as to find the intention of users with which he is looking for getting information related to products. Howard took off from too far away and threw it through the cylinder instead of dunking it. Prior to this case we were screening all patients for signs that could indicate the disease, however this is a fine art and an even finer balance between help and harm. On the ribbon, on the modelbuilder tab, in the insert group, click the tools button. Your request must be in writing, and we must verify your identity before allowing the requested access. Compare travel credit cards and you might find the perfect alternative. The police later stated that there were no drugs found in the room, with the caveat that they may have been there. [127] this only makes sense, considering his best friend salomonson was extremely close to the royal family. Sun-loungers, large sun towels and umbrellas are provided and there is plenty of space around the pool to relax. In mexico, the spanish language journal chiapas is an ongoing academic project dedicated to exploring various aspects of the rebellion. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17211304605007172, "perplexity": 4086.186077207337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194171.48/warc/CC-MAIN-20201127191451-20201127221451-00525.warc.gz"} |
https://physicshelpforum.com/threads/are-the-resistors-in-series-vs-in-parallel.12286/ | # Are the resistors in series vs in parallel?
#### SophiaL
Nov 2016
3
0
1. The problem statement, all variables and given/known data A metal wire of resistance R is cut into two pieces of equal length. The two pieces are connected together side by side. What is the resistance of the two connected wires?
2. Relevant equations RSeries = R1 + R2 + ... RParallel = (1/R1 + 1/R2 + ...)-1
3. The attempt at a solution Initially I thought that, being connected side by side, the same current would be passing through both halves of the wire thus making them resistors in series. However, the answer turns out to be R/4 as a because they are apparently in parallel. Can someone explain how we know?
Last edited:
#### HallsofIvy
Aug 2010
434
174
You say the wires are "connected together side by side". Are we to assume they are insulated so this does not just form one short wire? If so then, yes, the two wires are parallel so this is the same as to resistors "in parallel".
You say "the same current would be passing through both halves of the wire thus making them resistors in series." Where did you get the idea that "the same current" means they are "in series"?
Reactions: 1 person | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026190042495728, "perplexity": 476.60077002024343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251796127.92/warc/CC-MAIN-20200129102701-20200129132701-00044.warc.gz"} |
http://mymathforum.com/real-analysis/343727-following-functions-differentiable-x-1-a.html | My Math Forum Which of the following functions are differentiable at $x = 1$ ?
Real Analysis Real Analysis Math Forum
March 23rd, 2018, 07:32 PM #1 Senior Member Joined: Nov 2015 From: hyderabad Posts: 232 Thanks: 2 Which of the following functions are differentiable at $x = 1$ ? Which of the following functions are differentiable at $x = 1$ ? a) $f(x) = ||x| - 1/2|$ b) $f(x) = max${$|x-1|, |x+1|$} c) $f(x) =|x-1| + e^x$ d) $f(x) = |e^x -1|$ I have tried solving above functions but I didnt get any of them differentiable. I have doubt in solving option (a) incorrectly. Someone help on this!!! Thanks
March 23rd, 2018, 07:59 PM #2 Senior Member Joined: Sep 2016 From: USA Posts: 578 Thanks: 345 Math Focus: Dynamical systems, analytic function theory, numerics Answer is (d). Now can you see why ?
March 23rd, 2018, 08:01 PM #3 Senior Member Joined: Aug 2012 Posts: 2,193 Thanks: 645 When you graph (a) what does it do at x = 1?
March 23rd, 2018, 08:04 PM #4 Math Team Joined: Dec 2013 From: Colombia Posts: 7,616 Thanks: 2605 Math Focus: Mainly analysis and algebra The absolute value function $|x|$ has a derivative everywhere except where it is equal to zero. So you should be looking to find out whether any of the four functions contain an absolute value function that has value zero at $x=1$.The inner absolute value function has derivative except at $x=0$, so it's smooth for $x \gt 0$, where $\big| |x| - \frac12 \big| = \big| x - \frac12 \big|$. Where does this function not have a derivative? $\max{(|x-1|, |x+1|)} = |x+1|$ for all $x \gt 0$. And $|x+1| = x+1$ for all $x \gt -1$. Treat each half of the function separately. When does $e^x-1 = 0$? Last edited by v8archie; March 23rd, 2018 at 08:06 PM.
March 23rd, 2018, 09:17 PM #5
Senior Member
Joined: Nov 2015
Posts: 232
Thanks: 2
Quote:
Originally Posted by SDK Answer is (d). Now can you see why ?
I assume $|e^x - 1|$is same as$|x|$. So I took $-x$ (i.e.,$-e^x+1$) when $x \leq 1$ and $x$(i.e., $e^x-1$) when $x \geq 1$. In this way they are neither differentiable nor continuous.
Let me know how to do it 😢
March 24th, 2018, 06:00 AM #6
Math Team
Joined: Jan 2015
From: Alabama
Posts: 3,264
Thanks: 902
Quote:
Originally Posted by Lalitha183 I assume $|e^x - 1|$is same as$|x|$. So I took $-x$ (i.e.,$-e^x+1$) when $x \leq 1$ and $x$(i.e., $e^x-1$) when $x \geq 1$. In this way they are neither differentiable nor continuous. Let me know how to do it 😢
If you mean, by "is the same as |x|", that you can treat it in the same way, yes.
For any positive x, $e^x> 1$ so $e^x- 1> 0$ and $|e^x- 1|= e^x- 1$ which is differentiable. This problem would be more complicated if the question were about x= 0 rather than x= 1.
March 27th, 2018, 08:05 AM #7
Senior Member
Joined: Nov 2015
Posts: 232
Thanks: 2
Quote:
Originally Posted by Country Boy If you mean, by "is the same as |x|", that you can treat it in the same way, yes. For any positive x, $e^x> 1$ so $e^x- 1> 0$ and $|e^x- 1|= e^x- 1$ which is differentiable. This problem would be more complicated if the question were about x= 0 rather than x= 1.
I have finally solved all 4 problems. And the results are like this :
1. L.H.L = R.H.L = $\frac{1}{2}$
L.H.D = R.H.D = $1$ and $f(1) = \frac{1}{2}$
2. L.H.L = R.H.L = $2$
L.H.D = R.H.D = $0$ and $f(1) = 2$
3. L.H.L = R.H.L = $e$
L.H.D = R.H.D = $0$ and $f(1) = e > 1$
4. L.H.L = R.H.L = $e - 1$
L.H.D = R.H.D = 0 and $f(1) = e-1$
In this, $e-1 >0$ then how will it be equal to $0$ to say $|e^x-1|$ is differentiable at $x = 1$
March 29th, 2018, 07:38 AM #8 Math Team Joined: Jan 2015 From: Alabama Posts: 3,264 Thanks: 902 You say you have solved all four problems but the original problem was to determine whether or not each function is differentiable at x= 1 and nowhere do you answer that question! a)f(x)= ||x|- 1/2|. For x larger than 1/2, |x|= x and x- 1/2> 0 so ||x|- 1/2|= x- 1/2 for all x> 1/2. Near x= 1, ||x|- 1/2|= x- 1/2 which is differentiable (and has derivative 1) at x= 1. b)f(x)= max(|x- 1|, |x+1|). For x larger than 1, both x- 1 and x+ 1 are positive so |x- 1|= x- 1 and |x+ 1|= x+1. Obviously x+1 is larger than x- 1 so for x> 1, f(x)= x+ 1. If x< 1 then x- 1 is negative so |x- 1|= -(x- 1)= 1- x. So for x< 1, but close to 1, the function is max(1- x, x+ 1)= x+ 1 also. The "difference quotient" is (f(1+ h)- f(1))/h= h/h= 1 for x>1 or x< 1. The derivative at x= 1 is the limit of that as h goes to 0, 1. The function is differentiable at x= 1 (and has derivative 1). c) f(x)= |x- 1|+ e^x. For x larger than 1, |x- 1|= x- 1, for x less than 1, |x- 1|= -(x- 1)= 1- x. The difference quotient, (f(1+h)- f(1))/h is, for h positive, so that 1+h> 1, (1+ h- 1+ e^{1+h}- e)/h= 1+ (e^{1+ h}- e)/h. As h goes to 0, that goes to 1+ 1= 2. For h negative, so that 1+h< 1, the difference quotient is (1- (1+h)+ e^{1+h}- e)/h= -1+ (e^{1+h}- e)/h. As h goes to 0, that goes to -1+ 1= 0. The two limits are not the same so this function is not differentiable at x= 1. d) f(x)= |e^x- 1|. For x close to 1, e^x is close to e and e- 1> 0 so for x close to 1, f(x)= e^x- 1. That is differentiable so f(x) is differentiable for x= 1 (and the derivative is e- 1).
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post ZMD Calculus 0 April 23rd, 2017 05:23 AM DuncanThaw Real Analysis 7 September 27th, 2013 03:45 PM ziaharipur Calculus 1 March 4th, 2011 08:19 PM waytogo Calculus 1 October 16th, 2010 12:43 PM knowledgegain Calculus 1 May 9th, 2009 03:38 AM
Contact - Home - Forums - Cryptocurrency Forum - Top | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929449915885925, "perplexity": 2458.2896112781114}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202003.56/warc/CC-MAIN-20190319163636-20190319185636-00351.warc.gz"} |
https://sinews.siam.org/Details-Page/the-jungle-of-stochastic-optimization | SIAM News Blog
SIAM News
# The Jungle of Stochastic Optimization
There is a vast range of problems that fall under the broad umbrella of making sequential decisions under uncertainty. While there is widespread acceptance of basic modeling frameworks for deterministic versions of these problems from the fields of math programming and optimal control, sequential stochastic problems are another matter.
Motivated by a wide range of applications, entire fields have emerged with names such as dynamic programming (Markov decision processes, approximate/adaptive dynamic programming, reinforcement learning), stochastic optimal control, stochastic programming, model predictive control, decision trees, robust optimization, simulation optimization, stochastic search, model predictive control, and online computation. Problems may be solved offline (requiring computer simulation) or online in the field, which opens the door to the communities working on multi-armed bandit problems. Each of these fields has developed its own style of modeling, often with different notation ($$x$$ or $$S$$ for state, $$x/a/u$$ for decision/action/control), and different objectives (minimizing expectations, risk, or stability). Perhaps most difficult is appreciating the differences in the underlying application. A matrix $$K$$ could be $$5 \times 5$$ in one class of problems, or $$50,000 \times 50,000$$ in another (it still looks like $$K$$ on paper). But what really stands out is how each community makes a decision.
Despite these differences, it is possible to pull them together in a common framework that recognizes that the most important (albeit not the only) difference is the nature of the policy being used to make decisions over time (we emphasize that we are only talking about sequential problems, consisting of decision, information, decision, information, …). We start by writing the most basic canonical form as
\begin{align} & {\min\nolimits_{\pi \in \Pi} \mathbb{E}^{\pi} \sum_{t=0}^T C(S_t, U_t^{\pi}(S_t))} \: \: \: \: \: \: \: \: \: (1)\\ & \text{where }S_{t+1} = S^M(S_t, u_t, W_{t+1}). \nonumber \end{align}
Here we have adopted the notational system where $$S_t$$ is the state (physical state, as well as the state of information, and state of knowledge), and $$u_t$$ is a decision/action/control (alternatives are $$x_t$$, popular in operations research, or $$a_t$$, popular in operations research as well as computer science). We let $$U_t^{\pi}(S_t)$$ be the decision function, or policy, which is one member in a set $$\Pi$$ where $$\pi$$ specifies both the type of function, as well as any tunable parameters $$\theta \in \Theta^\pi$$. The function $$S^M(S_t, u_t, W_{t+1})$$ is known as the transition function (or system model, state model, plant model, or simply “model”). Finally, we let $$W_{t+1}$$ be the information that first becomes known at time $$t+1$$ (control theorists would call this $$W_t$$, which is random at time $$t)$$.
Important problem variations include different operators to handle uncertainty; we can use an expectation in $$(1)$$, a risk measure, or worst case (robust optimization), as well as a metric capturing system stability. We can assume we know the probability law behind $$W_t$$, or we may just observe $$S_{t+1}$$ given $$S_t$$ and $$u_t$$ (model-free dynamic programming).
While equation $$(1)$$ is well-recognized in certain communities (some will describe it as “obvious”), it is actually quite rare to see $$(1)$$ stated as the objective function with anything close to the automatic writing of objective functions for deterministic problems in math programming or optimal control. We would argue that the reason is that there is no clear path to computation. While we have powerful algorithms to solve over real-valued vector spaces (as required in deterministic optimization), equation $$(1)$$ requires that we search over spaces of functions (policies).
Lacking tools for performing this search, we make the argument that all the different fields of stochastic optimization can actually be described in terms of different classes of policies. In fact, we have identified four fundamental (meta) classes, which are the following:
1. Policy function approximations (PFAs). These are analytical functions that map states to actions. PFAs may come in the form of lookup tables, parametric, or non-parametric functions. A simple example might be
$U^{\pi}(S_t\,|\,\theta) = \sum_{f\in F} \theta_f \phi_f(S_t) \: \: \: \: \: \: \: \: \: (2)$
where $$F$$ is a set of features, and $$\bigl(\phi_f(S_t)\bigr), f \in F$$ are sometimes called basis functions.
2. Cost function approximations (CFAs). Here we are going to design a parametric cost function, or parametrically modified constraints, producing a policy that we might write as
$U^{\pi}(S_t\,|\,\theta) = \arg \min\nolimits_{u \in \mathrm{U}_t^{\pi}(\theta)} C_t^{\pi} (S_t, u\,|\,\theta) \: \: \: \: \: \: \: \: \: (3)$
where $$C_t^{\pi} (S_t, u\,|\,\theta)$$ is a parametrically modified set of costs (think of including bonuses and penalties to handle uncertainty), while $$U_t^{\pi}(\theta)$$ might be a parametrically modified set of constraints (think of including schedule slack in an airline schedule, or a buffer stock).
3. Policies based on value function approximations (VFAS). These are the policies most familiar under the umbrella of dynamic programming and reinforcement learning. These might be written as
\begin{align} & U^{\pi}_t(S_t\,|\,\theta) = \arg \min\nolimits_{u \in \mathrm{U}_t^{\pi}(\theta)} C(S_t, u)+{} \nonumber\\ & \quad \mathbb{E}\bigl\{\overline{V}_{t+1}^{\pi}(S^M(S_t, u, W_{t+1})\,|\,\theta)\,|\,S_t\bigr\} \end{align} \: \: \: \: \: \: \: \: \: (4)
where $$\overline{V}_{t+1}^{\pi}(S_{t+1})$$ is an approximation of the value of being in state $$S_{t+1} = S^M(S_t, u, W_{t+1})$$, where $$\pi$$ captures the structure of the approximation and $$\theta \in \Theta^\pi$$ represents any tunable parameters.
\begin{align} & U^{\pi}_t(S_t\,|\,\theta) = \arg \min\nolimits_u\Biggl(C(S_t, u) + \min\nolimits_{\pi \in \Pi} \nonumber \\ & \qquad \mathbb{E}^{\pi}\Biggl\{\sum_{t' = t + 1}^T C(S_{t'}, U_{t'}^{\pi}(S_{t'}))\,|\,S_t, u\Biggr\}\Biggr) \end{align}\: \: \: \: \: \: \: \: \: (5)
The problem is that the second term in $$(5)$$ is not computable (if this were not the case, we could have solved the objective function in $$(1)$$ directly). For this reason, we create a lookahead model which is an approximation of the real problem. Common approximations are to limit the horizon (e.g. from $$T$$, which might be quite long, to $$t+H$$ for some appropriately chosen horizon $$H$$), and (most important) to replace the original stochastic information process with something simpler. The most obvious is a deterministic approximation, which we can write as
$U_t^{\pi}(S_t|\theta)=arg~min_{u_t, \tilde{u}_{t,t+1} ,\ldots,\tilde{u}_{t,t+H}}\ \Biggl(C(S_t,u_t)+\sum^{t+H}_{t'=t+1}C(\tilde{S}_{tt'},\tilde{U}_{tt'})\Biggr).\: \: \: \: \: \: \: \: \: (6)$
To make the distinction from our original base model in $$(1)$$, we put tildes on all our variables (other than those at time $$t$$), and we also index the variables by $$t$$ (to indicate that we are solving a problem at time $$t$$), and $$t’$$ (which is the point in time within the lookahead model).
A widely-used approach in industry is to start with $$(6)$$ and then introduce modifications (often to the constraints) so that the decisions made now are more robust to uncertain outcomes that occur later. This would be a form of (hybrid) cost function approximation.
We may instead use a stochastic lookahead model. For example, the stochastic programming community most often uses
${U}_t^{\pi}(S_t|\theta)=arg~min_{u_t, \tilde{u}_{t,t+1} ,\ldots,\tilde{u}_{t,t+H}}\\ \Biggl(C(S_t,u_t)+\sum_{\omega\in\tilde{\Omega_t}}p(\omega)\sum^{t+H}_{t'=t+1}C(\tilde{S}_{tt'}(\omega),\tilde{U}_{tt'}(\omega))\Biggr).\: \: \: \: \: \: \: \: \: (7)$
Here, we would let $$\theta$$ capture parameters such as the planning horizon, and the logic for constructing $$\hat\Omega_t$$.
Other variations include a robust objective (which minimizes over the worst outcome rather than the expected outcome), or a chance-constrained formulation, which approximates the costs over all the uncertain outcomes using simple penalties for violating constraints.
All of these policies involve tunable parameters, given by $$\theta$$. We would represent the policy $$\pi$$ as the policy class $$f\in F$$, and the parameters $$\theta \in \Theta^f$$. Thus, the search over policies $$\pi$$ in equation $$(1)$$ can now be thought of as the search over policy classes $$f\in F$$, and then over the tunable parameters $$\theta \in \Theta^f$$.
No, this is not easy. But with this simple bit of notation, all of the different communities working on sequential stochastic optimization problems can be represented in a common framework.
Why is this useful? First, a common vocabulary facilitates communication and the sharing of ideas. Second, it is possible to show that each of the four classes of policies can work best on the same problem, if we are allowed to tweak the data. And finally, it is possible to combine the classes into hybrids that work even better than a pure class.
And maybe some day, mathematicians will figure out how to search over function spaces, just as Dantzig taught us to search over vector spaces.
Warren B. Powell is a faculty member of the Department of Operations Research and Financial Engineering at Princeton University. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152292966842651, "perplexity": 651.3496980428753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866938.68/warc/CC-MAIN-20180525024404-20180525044404-00188.warc.gz"} |
http://cms.math.ca/cmb/msc/31A05?fromjnl=cmb&jnl=CMB | location: Publications → journals
Search results
Search: MSC category 31A05 ( Harmonic, subharmonic, superharmonic functions )
Expand all Collapse all Results 1 - 2 of 2
1. CMB 2003 (vol 46 pp. 373)
Laugesen, Richard S.; Pritsker, Igor E.
Potential Theory of the Farthest-Point Distance Function We study the farthest-point distance function, which measures the distance from $z \in \mathbb{C}$ to the farthest point or points of a given compact set $E$ in the plane. The logarithm of this distance is subharmonic as a function of $z$, and equals the logarithmic potential of a unique probability measure with unbounded support. This measure $\sigma_E$ has many interesting properties that reflect the topology and geometry of the compact set $E$. We prove $\sigma_E(E) \leq \frac12$ for polygons inscribed in a circle, with equality if and only if $E$ is a regular $n$-gon for some odd $n$. Also we show $\sigma_E(E) = \frac12$ for smooth convex sets of constant width. We conjecture $\sigma_E(E) \leq \frac12$ for all~$E$. Keywords:distance function, farthest points, subharmonic function, representing measure, convex bodies of constant widthCategories:31A05, 52A10, 52A40
2. CMB 2002 (vol 45 pp. 154)
Weitsman, Allen
On the Poisson Integral of Step Functions and Minimal Surfaces Applications of minimal surface methods are made to obtain information about univalent harmonic mappings. In the case where the mapping arises as the Poisson integral of a step function, lower bounds for the number of zeros of the dilatation are obtained in terms of the geometry of the image. Keywords:harmonic mappings, dilatation, minimal surfacesCategories:30C62, 31A05, 31A20, 49Q05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9802815318107605, "perplexity": 617.282961996021}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.usgs.gov/publications/rainfall-effects-rare-annual-plants | # Rainfall effects on rare annual plants
January 1, 2008
1. Variation in climate is predicted to increase over much of the planet this century. Forecasting species persistence with climate change thus requires understanding of how populations respond to climate variability, and the mechanisms underlying this response. Variable rainfall is well known to drive fluctuations in annual plant populations, yet the degree to which population response is driven by between-year variation in germination cueing, water limitation or competitive suppression is poorly understood.
2. We used demographic monitoring and population models to examine how three seed banking, rare annual plants of the California Channel Islands respond to natural variation in precipitation and their competitive environments. Island plants are particularly threatened by climate change because their current ranges are unlikely to overlap regions that are climatically favourable in the future.
3. Species showed 9 to 100-fold between-year variation in plant density over the 5–12 years of censusing, including a severe drought and a wet El Niño year. During the drought, population sizes were low for all species. However, even in non-drought years, population sizes and per capita growth rates showed considerable temporal variation, variation that was uncorrelated with total rainfall. These population fluctuations were instead correlated with the temperature after the first major storm event of the season, a germination cue for annual plants.
4. Temporal variation in the density of the focal species was uncorrelated with the total vegetative cover in the surrounding community, suggesting that variation in competitive environments does not strongly determine population fluctuations. At the same time, the uncorrelated responses of the focal species and their competitors to environmental variation may favour persistence via the storage effect.
5. Population growth rate analyses suggested differential endangerment of the focal annuals. Elasticity analyses and life table response experiments indicated that variation in germination has the same potential as the seeds produced per germinant to drive variation in population growth rates, but only the former was clearly related to rainfall.
6. Synthesis. Our work suggests that future changes in the timing and temperatures associated with the first major rains, acting through germination, may more strongly affect population persistence than changes in season-long rainfall. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166307806968689, "perplexity": 4294.06775766574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00522.warc.gz"} |
https://www.physicsforums.com/threads/lattice-field-theory.5678/ | # Lattice Field Theory
#### gnl
21
0
Hi everyone! I would like to post a new thread, related to my research work: QFT on a lattice, i.e. on computers! Is anyone interested?
#### jcsd
Gold Member
2,074
11
Sure, I only know the basic theory behind quantum computing rather than the practicalities, how is the problem of decoherence being overcome?
#### gnl
21
0
lattice
This is what I am talking about. Take some QFT. Write a Euclidean space-time discrete version of the action and then use numerical methods to evaluate Green´s functions. The inverse lattice spacing serves as a momentum cutoff...
#### jcsd
Gold Member
2,074
11
Staff Emeritus
Gold Member
Dearly Missed
6,691
5
This is certainly a very interesting and hot topic, and the opportunity to get some info from the horse's mouth is not to be missed. Fire away, gnl!
#### gnl
21
0
Lattice QCD
One of the most interesting field theories to be studied on the lattice is QCD. QCD is a very complicated theory, with many non-perturbative aspects. The lattice offers a way to investigate, from first principles, such aspects. In the low-energy regime, the QCD coupling becomes too large for any perturbative expansion to make sense. Confinement and hadron structure are among the things one can study in Lattice QCD: hadron masses (QCD spectroscopy in general, including glueballs), hadronic matrix elements.
A good intro can be found in:
hep-lat/9807028
Agreement with experiment has been striking in many cases.
Staff Emeritus
Gold Member
Dearly Missed
6,691
5
I am working my way through the tutorial, and I wondered, gnl what is your topic? And are you going to be doing monte carlo estimations of path integrals like it says?
#### gnl
21
0
my field
My field of research, so far, has been lattice QCD. I have done works on hadron spectroscopy and on the study of leptonic anc semileptonic decays. These decays involve some non-perturbative quantity, like decay constants or form factors.
These objects are calculated as MC estimates (numerical path integral!) of time-ordered products of fields. For example, given the operator that creates a meson with given quantum numbers from the vacuum, one that creates another meson , and a current, lots of things can be calculated.
Lattice QCD needs BIG CPUS!!! However, lots of interesting physics can still be explored with scalar models. The Higgs boson, after all, is such a field!
### The Physics Forums Way
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599304556846619, "perplexity": 2395.924060448368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205597.84/warc/CC-MAIN-20190326160044-20190326182044-00014.warc.gz"} |
https://www.semanticscholar.org/author/Yonglin-Cao/1781136 | # Yonglin Cao
• Yonglin Cao
• 2011
For R a Galois ring and m 1, . . . , m l positive integers, a generalized quasi-cyclic (GQC) code over R of block lengths (m 1, m 2, . . . , m l ) and length $${\sum_{i=1}^lm_i}$$ is an R[x]-submodule of $${R[x]/(x^{m_1}-1)\times\cdots \times R[x]/(x^{m_l}-1)}$$ . Suppose m 1, . . . , m l are all coprime to the characteristic of R and let {g 1, . . . , g t(More)
• 2
• Yonglin Cao
• 2012
Let R be an arbitrary commutative finite chain ring with $$1\ne 0$$ . 1-generator quasi-cyclic (QC) codes over R are considered in this paper. Let $$\gamma$$ be a fixed generator of the maximal ideal of R, $$F=R/\langle \gamma \rangle$$ and $$|F|=q$$ . For any positive integers m, n satisfying $$\mathrm{gcd}(q,n)=1$$ , let \mathcal{R}_n=R[x]/\langle(More)
• 1
• 2015
Keywords: Additive cyclic code Galois ring Linear code Dual code Trace inner product Self-dual code Quasi-cyclic code a b s t r a c t Let R = GR(p ϵ , l) be a Galois ring of characteristic p ϵ and cardinality p ϵl , where p and l are prime integers. First, we give a canonical form decomposition for additive cyclic codes over R. This decomposition is used to(More)
• 2015
In this paper, we study the construction of cyclic DNA codes by cyclic codes over the finite chain ring 2 4 1 F u u . First, we establish a 1-1 correspondence between DNA pairs and the 16 elements of the ring 2 4 1 F u u . Considering the biology features of DNA codes, we investigate the structure and properties of self-reciprocal(More)
• 2015
Let $\mathbb{F}_{p^m}$ be a finite field of cardinality $p^m$ and $R=\mathbb{F}_{p^m}[u]/\langle u^2\rangle=\mathbb{F}_{p^m}+u\mathbb{F}_{p^m}$ $(u^2=0)$, where $p$ is a prime and $m$ is a positive integer. For any $\lambda\in \mathbb{F}_{p^m}^{\times}$, an explicit representation for all distinct $\lambda$-constacyclic codes over $R$ of length $p^sn$ is(More) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9686315059661865, "perplexity": 714.6627973804013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691977.66/warc/CC-MAIN-20170925145232-20170925165232-00536.warc.gz"} |
https://chemistry.stackexchange.com/questions/33818/bond-angles-for-the-hydrides/33824 | # Bond angles for the hydrides
I noticed the fact that all the hydrides of the elements belonging to group IV has bond angle 109.5° while in group V it varies from 107.3° for $\ce{NH3}$ to 91.3° for $\ce{SbH3}$. Similarly we see in group VI that water has highest bond angle then it significantly decreases to 89.5° for $\ce{H2Te}$. Is there any specific reason for hydrides of group IV metals to have remarkably equal bond angles, and other groups to have varying angles?
The simplest way to look at this trend is through VSEPR theory, which produces a very good qualitative understanding of molecular geometry even if it is not compatible with modern molecular orbital theory. However, VSEPR is not the whole story for the heavier Group V and Group VI hydrides.
According to VSEPR, electron domains (i.e. bonds and lone pairs) are arranged around the central atom in geometries that minimize electrostatic repulsion between the electron domains. These geometries also happen to be solutions to the Schrödinger equation for the set of molecular orbitals as linear combinations of atomic orbitals, and in the molecular orbital case, they arise not from electrostatic repulsion, but strictly from the mathematics of the linear combinations producing a minimum energy self-consistent set of orbitals.
VSEPR is qualitatively simpler for small molecules. All of the binary hydrides in Group IV, V, and VI have steric number four around the central atom. This means that the number of lone pairs and the number of bonding domains add up to four. These species can also be described using the $\ce{AXE}$ method, where $\ce{A}$ denotes the central atom, $\ce{X}$ denotes an atom bonded to $\ce{A}$ and $\ce{E}$ denotes a lone pair of electrons.
• The Group IV hydrides all have four bonds and no lone pairs. We describe this arrangement as $\ce{AX4}$.
• The Group V hydrides all have three bonds and one lone pairs. We describe this arrangement as $\ce{AX3E}$.
• The Group VI hydrides all have two bonds and two lone pairs. We describe this arrangement as $\ce{AX2E2}$.
There are several factors that affect the degree of repulsion:
1. Lone pairs - Lone pair domains are more diffuse and have higher repulsion than bonding domains.
2. The nature of $\ce{X}$ - Larger and/or more electropositive atoms have higher repulsion.
3. Bond order - Bonds with higher bond orders have higher repulsion than bonds with lower bond orders.
Since $\ce{X=H}$ in all cases, we only need to worry about the first factor: lone pairs.
The arrangements $\ce{AX4}$, $\ce{AX3E}$, and $\ce{AX2E2}$ all have four electron domains, and thus the base electron domain geometry group is tetrahedral.
The differences in bond angle between $\ce{CH4}$, $\ce{NH3}$, and $\ce{H2O}$ originate from the increasing number of lone pairs:
• $\ce{CH4}$ has no lone pairs: the bond angles are the ideal tetrahedral bond angles of $109.5^\circ$.
• $\ce{NH3}$ has one lone pair: the bond angles are contracted a little because of the grater repulsion of the lone pair to $107.8^\circ$.
• $\ce{H2O}$ has two lone pairs: the bond angle is contracted further due to the repulsion of two lone pairs to $104.5^\circ$.
All of the Group IV hydrides will have perfect tetrahedral geometry due to having four bonds to the same atom and no lone pairs. Thus $\ce{CH4}$, $\ce{SiH4}$, $\ce{GeH4}$, $\ce{SnH4}$, and $\ce{PbH4}$ all have bond angles of $109.5^\circ$.
For Group V, the bond angles are:
$$\begin{array}{c|c} \ce{AH3}&\text{angle}\\ \hline \ce{NH3}&107.8^\circ \\ \ce{PH3}&93.5^\circ \\ \ce{AsH3}&91.8^\circ \\ \ce{SbH3}&91.7^\circ \\ \ce{BiH3}&90.5^\circ \\ \end{array}$$
Notice that there is a large decrease from $\ce{NH3}$ to $\ce{PH3}$ and then very small changes for $\ce{AsH3}$, $\ce{SbH3}$, and $\ce{BiH3}$.
You could make the argument that the larger orbitals of the heavier elements will have high repulsion than the smaller orbitals of nitrogen. This is where VSEPR breaks down. VSEPR works well for some compounds because there is significant $s-p$ mixing when constructing the molecular orbitals. This mixing need not happen for the heavier elements. One molecular orbital solution for $\ce{PH3}$, for example, indicates very little contribution from the $\ce{P}\ 3s$ orbital. We can describe the $\ce{P-H}$ bonds as predominantly $\ce{P}_{3p}\ce{-H}_{1s}$ bonds. Thus the geometry around $\ce{P}$ is determined mostly by the orientation of the $p$ orbitals.
If we look at the Group VI hydrides, we see the same trend: a large decrease from $\ce{H2O}$ to $\ce{H2S}$ and then very little change. The same argument can be made for primarily $\ce{A}_{p}\ce{-H}_{1s}$ bonding in the heavier Group VI hydrides as for the heavier Group V hydrides.
$$\begin{array}{c|c} \ce{AH2}&\text{angle}\\ \hline \ce{H2O}&104.5^\circ \\ \ce{H2S}&92.1^\circ \\ \ce{H2Se}&91.0^\circ \\ \ce{H2Te}&90.0^\circ \\ \end{array}$$
VSEPR does not do a good job in predicting these geometries; it only works by chance for the second period.
For the 14th group, (the IV group in older nomenclature) things are pretty much as Ben explained them. We have a consistent number of substituents (four) and the same number of lone pairs (zero) around the central atom. Therefore, we can arrange the substituents as far away from each other as possible to generate a tetrahedric structure, and molecular orbital calculations indicate that this will lead to the lowest energy. You can predict what the molecular orbitals will look like from symmetry considerations. Changing the central atom only changes the bond length.
Once we move on to group 15, we suddenly have a lone pair on the central atom in ground state. This lone pair will occupy an s-orbital in the isolated atom, and it would be most stable to keep this lone pair in an s-atom for two reasons:
• p-orbitals actually extend into a direction of space, so the overlap with a bonding partner will be larger
• the s-orbital is ‘closer to the nucleus’ (in a non-quantum view) meaning that the negative charge of the electrons is better stabilised
As such, there is a general tendency across the entire periodic table to keep lone pairs in orbitals with an s-character as large as possible. The largest s-character is, of course, a pure s-orbital. Therefore, we should predict a bond angle of $90^\circ$ for all group 15 hydrides — which is well met for every case except ammonia. (This is similarly true for the group 16 ones with the exception of oxygen. One lone pair must reside in a p-orbital since only one s-orbital is available.)
So what’s wrong with ammonia and water? Well, the oxygen and nitrogen atoms are much smaller than their heavier homologues and therefore the hydrogen atoms would be much closer to each other at an ideal $90^\circ$ angle. This, however, would induce new steric stress between the hydrogen atoms, destabilising the entire structure. Therefore, the hydrogens shift out just far enough to where they are adequately far away from each other while still allowing the central atom to have as much s-character in its lone pair as possible. The sweet spot happens to be $104.5^\circ$ for water and $107.8^\circ$ for ammonia. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7367776036262512, "perplexity": 799.7296696063423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00051.warc.gz"} |
https://dml.cz/handle/10338.dmlcz/108519 | # Article
Full entry | PDF (0.6 MB)
References:
[1] Atkinson V. V.: On second-order non-linear oscillations. Pacific J. Math. 5 (1955), 643-647. MR 0072316 | Zbl 0065.32001
[2] Belohorec S.: Oscillatory solutions of certain nonlinear differential equations of the second order. (Slovak), Mat.-Fyz. Casopis Sloven. Akad. Vied. 11 (1961) 4, 250-255.
[3] Belohorec S.: On some properties of the equation $y" + f(x) y(x) = 0$, $0 < a < 1$. Mat. Časopis Sloven. Akad. Vied. 17 (1967) 1, 10-19. MR 0214854
[4] Coffman C. V., Ullrich D. F.: On the continuation of solutions of a certain nonlinear differential equation. Monatsh, Math., 71 (1967), 385-392. MR 0227494
[5] Gollwitzer H. E.: On nonlinear oscillations for a second order delay equation. J. Math. Anal. Appl., 26 (1969), 385-389. MR 0239224 | Zbl 0169.11401
[6] Gollwitzer H. E.: Nonlinear second order differential equations and Stieltjes integrals. University of Tennessee Report, 1969.
[7] Hastings S. P.: Boundary value problems in one differential equation with a discontinuity. J. of Differential Equations, 1 (1965), 346-369. MR 0180723 | Zbl 0142.06303
124
[8] Heidel J. W: A short proof of Atkinson's oscillation theorem. SI AM Review, in press.
[9] Heidel J. W: Rate of growth of non-oscillatory solutions of $y" + q(t)y^\gamma = 0$, $0 < y < 1$. to appear.
[10] Izjumova D. V., Kiguradze I. T: Certain remarks on solutions of the equation $u" + + a(t)f(u) = 0$. Differencialnye Uravenija 4 (1968), 589-605 (Russian). MR 0227544
[11] Moore R. A., Nehari Z.: Non-oscillation theorems for a class of nonlinear differential equations. Trans. Amer. Math. Soc. 93 (1959), 30-52. MR 0111897
[12] Nehari Z.: On a class of nonlinear second order differential equations. Trans. Amer. Math. Soc. 95 (I960), 101-123. MR 0111898 | Zbl 0097.29501
[13] Wong J. S. W: On second order nonlinear oscillation. Funkcialaj Ekvacioj, 11 (1968), 207-234. MR 0245915 | Zbl 0157.14802
Partner of | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235695600509644, "perplexity": 4015.963235233367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00728.warc.gz"} |
https://www.physicsforums.com/threads/arithmetic-series-and-triangular-numbers.185566/ | # Arithmetic Series and Triangular Numbers
1. Sep 18, 2007
### ramsey2879
Re: Arithmetic Series and Factors of Triangular Numbers
A+C*n, B+F*n, A+E*n, B+D*n are all arithmetic series which I define below
It is still not clear to me as why
(A+C*n)*(B+F*n) and (A+E*n)*(B+D*n) are both triangular numbers for all integers n
Can someone please visit my blog and explain it?
Perhaps someone can show somehow that
$$8*(A+C*n)*(B+F*n) + 1$$ and $$8*(A+E*n)*(B+D*n) + 1$$ are both perfect squares for all n using Mathematica or some other program.
--- In [email protected], "ramsey2879"
<ramseykk2@...> wrote:
>
> I found a previously unknow property of Triangular Numbers AFAIK.
> Given that T is a triangular number having exactly M distinct ways
to
> pair the product into two factors, A*B with A </= B. For each of
these
> M distinct pairs, there are two coprime pairs of integers (C,E) and
> (E,D)where C*D = the perfect square of integral part of the square
root
> of 2*T and E*F = the next higher perfect square and each of the
> products (A+Cn)*(B+Fn) and (A+En)*(B+Dn) are triangular numbers for
all
> integer values of n.
> As an example.
> Let T = 666 which can be factored into 6 distinct pairs A,B. The
six
> sets (A,B,C,D,E,F) are as follows
>
> 1. (1,666,1,1369,2,648)
> 2. (2,333,1,1369,8,162)
> 3. (3,222,1,1369,18,72)
> 4. (6,111,1,1369,72,18)
> 5. (9,74,1,1369,162,8)
> 6. (18,37,1,1369,648,2)
>
> I don't have a proof of the general result but am working on it.
>
Still no proof, however, let ab = T(r) = r(r+1)/2, gcd(n,m) = the
greatest common divisor of m and n, then the formula for C,D,E and F
as a function of A and B is
Case 1, r is even
C = (gcd(A,r+1))^2, F = 2*(gcd(B,r))^2
E = 2*(gcd(A,r))^2, D = (gcd(B,r+1))^2
The determinant
|C F|
|E D| = (r+1)^2 - r^2 = 2r+1
Case 2 r is odd
C = 2*(gcd(A,r+1))^2, F = (gcd(B,r))^2
E = (gcd(A,r))^2, D = 2*(gcd(B,r+1))^2
The determinant
|C F|
|E D| = (r+1)^2 - r^2 = 2r+1
Can you offer guidance or do you also need help?
Draft saved Draft deleted
Similar Discussions: Arithmetic Series and Triangular Numbers | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8022586107254028, "perplexity": 4993.41371435566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00651-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://brilliant.org/problems/quadratic-binomial-fusion-2/ | # Quadratic Binomial Fusion
Algebra Level 5
$\displaystyle 1 + \sum_{r = 1}^{10} \left [ 3^r \binom{10}{r} + r \binom{10}{r} \right ] = 2^{10} \left (\alpha \cdot 4^5 + \beta \right )$
Consider the above summation, where $\alpha, \beta \in \mathbb N$ and $f(x) = x^2 - 2x -k^2 + 1$.
If $\alpha, \beta$ lies strictly in between the roots of $f(x) = 0$, then find the smallest positive integral value of $k$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913963079452515, "perplexity": 416.37315108307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153899.14/warc/CC-MAIN-20210729234313-20210730024313-00598.warc.gz"} |
http://tex.stackexchange.com/questions/2899/how-can-one-determine-where-in-a-document-a-font-is-used | # How can one determine where in a document a font is used?
Looking at the output of pdffonts, I see fonts I expect and some I do not. How can I determine where a particular font is used? For example, I see that SFTT1095 is being used somewhere in one of my documents. Google tells me this is from the cm-super package (which I am not using) and that it is the computer modern typewriter font. I don't believe I have any text in typewriter font and I don't see a difference when I change the typewriter font to a different font.
Is there a good way to determine what is causing this particular font to be embedded? Some sort of visual pdf debugger would be handy.
I suspect that in the end, I'll just end up binary searching my way through the document, but I'm hoping there's a better way.
-
Not sure either. I'd also love something similar in a PDF viewer. – Will Robertson Sep 8 '10 at 15:41
I would at first look when t1cmtt.fd is loaded. This should give you some indication when the font is used the first time.
You could also manipulate the map-line. With a bit chance you can see where the font is used, or you get at least informations in the log about the glyph names used.
\documentclass{scrartcl}
\usepackage[ansinew]{inputenc}
\usepackage[T1]{fontenc}
\pdfmapline{=ectt1095 SFTT1095 <SkakNew-Figurine.pfb}
\begin{document}
abc {\ttfamily abc KQR}
\end{document}
-
The main problem with the manipulated map line is that "replacement" font can miss some of the glyph, which will give gap. One can improve the idea by using an encoding: – Ulrike Fischer Sep 9 '10 at 12:34
\pdfmapline{=ectt1095 SFTT1095 " TestEncodingBell ReEncodeFont " <test-enc.enc <wasy10.pfb} where the content of test-enc.enc is /TestEncodingBell[/bell /bell ...256 times] def. This will replace each glyph of the font with a bell. – Ulrike Fischer Sep 9 '10 at 12:35
You can do this with Acrobat Professional, but it's a little bit obscure. Here's the recipe for Acrobat 9:
1. Advanced->Preflight...
2. Select the Profiles tab, and the left of the three buttons (with the tooltip Select profiles, and an icon that's maybe supposed to be a basket?)
3. This step is optional, but if you do it then you can choose the fonts from a list instead of having to type them:
• Select any one of the profiles (I like Report PDF syntax issues) and execute it by pressing the Analyze button.
• Assuming there were no errors, go back to the profile tab
4. From the Options button/menu in the top right of the window, select Create New Preflight Profile...
5. Give the profile a name (eg "Check usage of specific fonts").
6. In the tree panel on the left, on the Fonts branch of your new profile.
7. Use the dropdown next to A font is used which is to change it to Info.
8. Click on the Add... button and choose the font(s) you wish to investigate (if you skipped the optional step above, then you have to simply type the font name, ignoring the 6-character random prefix).
9. Click OK, select your new profile (under Custom profiles) and press Analyze
10. Expand the Text uses font tree branch, to see all the uses. Choose the different uses of your font(s) and either double click to jump to the location in the document (with a box around it for emphasis), or click Show in snap to see a preview of only the text in question.
Another useful tool for investigating the usage of fonts in PDFs is also available from the Acrobat 9 preflight tool. Click on the Options button/menu and choose Create inventory.... Check only Fonts and click OK. Now you have a document showing the use of all the fonts, including which glyphs are included in the subset, what are their Unicode names, which pages they appear on, as well as other arcana such as the PostScript name and the italic angle.
-
Thanks. Those look pretty good...except that I don't own Acrobat so I can't try them out. =/ – TH. Sep 9 '10 at 18:31
I usually open the PDF in FontForge, select the font in question and see what subset of glyphs are used and then search for them in the PDF. Might not be the best method, but I used it a lot while testing unicode-math package to see what symbols were still taken from CM and not from my fonts.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7809954881668091, "perplexity": 1942.4179230740215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430458866468.44/warc/CC-MAIN-20150501054106-00019-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/201823/show-p-1pn-1-equiv-1-mod-pn-for-n-in-mathbb-n | Show $[(p-1)!]^{p^{n-1}} \equiv -1$ (mod $p^n$) for n $\in \mathbb N$
Show $[(p-1)!]^{p^{n-1}} \equiv -1$ (mod $p^n$) for n $\in \mathbb N$, by induction. p a prime and p>2.
I can't seem to prove the inductive step for this. Would appreciate help.
My approach has been:
n=1 is just from Wilson.
Assume true for n=m: $[(p-1)!]^{p^{m-1}} \equiv -1$ (mod $p^m$)
Then,
$[(p-1)!]^{p^{(m+1)-1}} = ([(p-1)!]^{p^{m-1}})^p \equiv (-1)^p \equiv -1$ (mod $p^m$)
But, how do I get this to say anything in terms of mod $p^{m+1}$? Since I need the RHS to end up as: -1 (mod $p^{m+1}$).
One thing I could draw from this congruence is that $[(p-1)!]^{p^{(m+1)-1}}$ is not a multiple of p, since multiples of p must be greater than -1 apart from each other.
Hence, GCD($[(p-1)!]^{p^{(m+1)-1}}, p^{m+1})=1.$ I wasn't sure how I might use this.
Alternatively, I could express it as: $[(p-1)!]^{p^{(m+1)-1}} = [(p-1)!]^{p^m}$. But this didn't seem the right way to go about it, since by cancelling the -1+1, there doesn't seem to be any way to use the inductive hypothesis/assumption above.
Another useful result might be that: GCD((p-1)!,$p^{m+1}$)=1.
-
Hint: If $x\equiv -1 \pmod {p^{m-1}}$ show that $x^p\equiv -1 \pmod {p^m}$ when $p$ is an odd prime, when $m>1$ – Thomas Andrews Sep 24 '12 at 20:25
Hint: More generally, if $x\equiv -1\pmod {p^{m-1}}$ then $x^p \equiv -1 \pmod {p^m}$ for $m>1$ and $p$ an odd prime.
-
Thanks @Thomas. how do you derive this result? – confused Sep 24 '12 at 20:48
Showing this is roughly the same as Martini's answer, namely, that if $x=pk-1$ then using the binomial theorem. – Thomas Andrews Sep 24 '12 at 20:53
Am I able to do it without the binomial though? The notes that I saw this problem in don't touch on binomial expansion. – confused Sep 24 '12 at 20:55
Alternatively, you could write $x^p+1 = (x+1)(x^{p-1}-x^{p-2}+x^{p-3}...)$ and note that the terms of $x^{p-1}-x^{p-2}+...$ have to add up to something divisible by $p$. – Thomas Andrews Sep 24 '12 at 20:55
In this case, couldn't (x+1) be divisible by p instead? – jack Sep 24 '12 at 21:04
$(p-1)!^{p^{n-1}} = kp^n - 1$ for some $k\in \mathbb Z$ by induction hypothesis, so \begin{align*} (p-1)!^{p^n} &= \bigl((p-1)!^{p^{n-1}}\bigr)^p\\ &= (kp^n - 1)^p\\ &= \sum_{l=0}^p \binom{p}l (kp^n)^l(-1)^{p-l}\\ &= (-1)^p + p \cdot kp^n(-1)^{p-1} + k^2p^{2n}\sum_{l=2}^p \binom pl (kp^n)^{l-2}(-1)^{p-l}\\ &\equiv (-1)^p + 0 + 0\\ &= -1 \pmod{p^{n+1}} \end{align*} and we are done.
-
I don't really understand the use of combinations like this. – confused Sep 24 '12 at 20:48
Better now?${}{}$ – martini Sep 24 '12 at 21:01
Thanks. @martini I can follow it better now. In the term, $k^2p^{2n}\sum_{l=2}^p \binom pl (kp^n)^{l-2}(-1)^{p-l}$, wouldn't some of the terms of this sum be non-integer fractions, since $\binom pl$ can be a non-integer fraction? Won't this create a problem when working modulo $p^{n+1}$? – confused Sep 24 '12 at 21:36
$\binom{n}{k}$ is always an integer, since it is the number of ways to choose a committee of $k$ people from a group of $n$ people. – André Nicolas Sep 24 '12 at 21:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9675956964492798, "perplexity": 485.67573705653626}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445080.12/warc/CC-MAIN-20151124205405-00048-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://uv-cdat.llnl.gov/Jupyter-notebooks/vcs/Mathematical_Expressions_and_Symbols/Mathematical_Expressions_and_Symbols.html | # How to plot mathematical expressions and symbols in vcs?¶
VCS can take advantage of matolotlib's text capabilites, this Jupyter notebook essentially shows how to implement in vcs the following matplolib tutorial
You can use a subset TeX markup in any vcs text string by placing it inside a pair of dollar signs ($). Note that you do not need to have TeX installed, since matplotlib ships its own TeX expression parser, layout engine and fonts. The layout engine is a fairly direct adaptation of the layout algorithms in Donald Knuth’s TeX, so the quality is quite good. Any text element can use math text. You should use raw strings (precede the quotes with an 'r'), and surround the math text with dollar signs ($), as in TeX. Regular text and mathtext can be interleaved within the same string. Mathtext can use DejaVu Sans (default), DejaVu Serif, the Computer Modern fonts (from (La)TeX), STIX fonts (with are designed to blend well with Times), or a Unicode font that you provide. The mathtext font can be selected with the customization variable mathtext.fontset see Customizing matplotlib
Simple example
The CDAT software was developed by LLNL. This tutorial was written by Charles Doutriaux. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
In [1]:
import vcs
x=vcs.init(bg=True, geometry=(800,60))
# Convenience function for later in the script
def update():
x.clear()
return x.plot(text)
text = vcs.createtext()
# Position string
text.x = [.5]
text.y = [.5]
text.halign = "center"
text.valign = "half"
text.height = 250
text.string = "alpha > beta"
x.plot(text)
Out[1]:
produces “alpha > beta”.
Whereas this:
In [2]:
# mathematical text
text.string = r"$\alpha > \beta$"
update()
Out[2]:
Note
Mathtext should be placed between a pair of dollar signs (\$). To make it easy to display monetary values, e.g., “$100.00”, if a single dollar sign is present in the entire string, it will be displayed verbatim as a dollar sign. This is a small change from regular TeX, where the dollar sign in non-math text would have to be escaped (‘\$’). Note While the syntax inside the pair of dollar signs (\$) aims to be TeX-like, the text outside does not. In particular, characters such as:
x.plot(text2)
`
Out[15]: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9460310935974121, "perplexity": 5187.918602305093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00381.warc.gz"} |
http://www.ntg.nl/pipermail/ntg-context/2008/032380.html | # [NTG-context] Feature request: \digits command
Morgan Brassel morgan.brassel at free.fr
Fri Jun 20 16:10:04 CEST 2008
Thank you, Wolfgang.
In fact, I was looking for a solution to have
\digits{e-5} printed as 10^5
\digits{2e-5} printed as 2 . 10^5
Is it possible to do that? to detect if there is a number before the 'e'
in \digits?
Regards,
Morgan
Wolfgang Schuster a écrit :
> On Thu, Jun 19, 2008 at 10:10 PM, Morgan Brassel <morgan.brassel at free.fr> wrote:
>
>> Hi everyone,
>>
>> The \digits command is really great when it comes to typeset numbers in
>> different languages. However, I miss one functionality from the numprint
>> package in latex: when you type for example $e-5$, you get 10^{-5} (with
>> no dot in front of it).
>>
>> Would it be possible to add an option to \digits to reproduce this?
>>
>
> \starttext
> \def\digitpowerseparator{10}
> \digits{e-5}
> \stoptext
>
>
> Regards
> Wolfgang
> ___________________________________________________________________________________
> If your question is of interest to others as well, please add an entry to the Wiki!
>
> maillist : ntg-context at ntg.nl / http://www.ntg.nl/mailman/listinfo/ntg-context
> webpage : http://www.pragma-ade.nl / http://tex.aanhet.net
> archive : https://foundry.supelec.fr/projects/contextrev/
> wiki : http://contextgarden.net
>
> ___________________________________________________________________________________
> | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687015175819397, "perplexity": 12631.913141262841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707437545/warc/CC-MAIN-20130516123037-00016-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://csharp-book.softuni.org/Content/Chapter-11-tricks-and-hacks/debugging-techniques/debugging-techniques.html | # Code Debugging Techniques
Debugging plays an important role in the process of creating software, which is to allow us to follow the implementation of our program step by step. With this technique we can follow the values of the local variables, because they are changing during the execution of the program, and to remove possible errors (bugs). The process of debugging includes:
• Finding the problems (bugs).
• Locating the code, which causes the problems.
• Correcting the code, which causes the problems, so that the program works correctly.
• Testing to make sure that the program works correctly after the corrections we have made.
## Debugging in Visual Studio
Visual Studio gives us a built-in debugger, thanks to which we can place breakpoints at places we have chosen. When it reaches a breakpoint, the program stops running and allows step-by-step running of the remaining lines. Debugging allows us to get in the details of the program and see where exactly the errors occur and what is the reason for this.
In order to demonstrate the debugger, we will use the following program:
static void Main(string[] args)
{
for (int i = 0; i < 100; i++)
{
Console.WriteLine(i);
}
}
We will place a breakpoint at the function Console.WriteLine(…). For this we will need to move our cursor to the line, which prints on the console, and press [F9]. A breakpoint appears, showing where the program will stop running:
## Starting the Program in Debug Mode
In order to start the program in debug mode, we choose [Debug] -> [Start Debugging] or press [F5]:
After starting the program, we can see that it stops executing at line 11, where we placed our breakpoint. The code in the current line is colored in yellow and we can run it step by step. In order to go to the next line, we use the key [F10]. We can see that the code on the current line hasn't executed yet. It executes when we go to the next line:
From the Locals window we can observe the changes in the local variables. In order to open the window, you must choose [Debug] -> [Windows] -> [Locals]. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2847091853618622, "perplexity": 830.9328430947235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524517.31/warc/CC-MAIN-20190716075153-20190716101153-00469.warc.gz"} |
http://www.maa.org/press/periodicals/loci/thinking-outside-the-box-or-maybe-just-about-the-box-2 | # Thinking Outside the Box -- or Maybe Just About the Box
Author(s):
Thomas Hern (Bowling Green State Univ.) and David Meel (Bowling Green State Univ.)
#### The second Box Problem applet
At first glance, this applet, ClosedBox2, contains many of the same components as the first Box Problem applet; however, the cut length determines the positioning of the cut so that in each case the box volume is relatively maximized.
Figure 9: The second Box Problem applet
Warning: The second Box Problem applet page, entitled ClosedBox2, is best viewed in 1024 x 768 resolution or greater and may take up to a minute to load.
In this applet there are a variety of elements that can be seen. First, the point P is no longer adjustable but is rather determined by the length of the cut defined by the segment BQ. In addition, the grey box in the lower left-hand of the applet contains a dynamic graphical depiction of the functional relationship between cut length and volume. That is, it contains a graphical depiction of
$V(l) = \left( {B - 2l} \right)\left( {{1 \over 2}A - 2l} \right)\left( {2l} \right)$
where l corresponds to $m( \overline {BQ} )$ and currently B = 8.5 and A = 14.0.
One element that students need to grapple with when working with this particular applet is the graphical depiction of the function. In particular, graphing $V(l)$ using a graphing calculator or computer algebra system yields a figure similar to the following:
Figure 10: Graph of the Box Problem Function
The graphical depiction of the function presented in the applet is a truncated version of the one in figure 10. Students will need to come to grips with the fact that the applet is only concerned with the volume of boxes that are physically constructible whereas the graph of the function shown in figure 10, does not necessarily concern itself with the constructability of the box. Instead, it provides a graph of the functional relationship between an independent variable l and a dependent variable V. In essence, this graph in figure 10 is less concerned with cut length and volume and more concerned with expressing the relationship for all possible values of l, irrespective if these values are possible cut lengths or if those cut lengths yield appropriate volumes.
Too often, we see students focused on the algebraic elements of a problem without considering the physical (or mathematical) constraints on that problem. We seek in this applet to guide students to grapple with the interplay of these two seemingly disparate forces. In turn, we hope to lead them to reconcile for themselves how functional relationships that model real-world phenomena require a careful examination of the domain for which that relationship actually does model the phenomena. For instance, students might at first think that l's only restriction is that it must be less than ${1 \over 2}m( {\overline {BB'} } )$ since a cut cannot exceed half the width of the cardboard and maintain its connectedness. However, there is another constraint: the cut length, l, cannot exceed ${1 \over 4}m( {\overline {AA'} } )$ . This ''hidden'' constraint comes directly from the relationship of cut length and position of the cut and depends on the relationship between length and width of the rectangular piece of cardboard.
Thomas Hern (Bowling Green State Univ.) and David Meel (Bowling Green State Univ.), "Thinking Outside the Box -- or Maybe Just About the Box," Convergence (February 2010), DOI:10.4169/loci003321 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7389068603515625, "perplexity": 520.769908189462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824185.14/warc/CC-MAIN-20160723071024-00075-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://www.studypug.com/ca/differential-equations/step-functions | # Step functions
## Step Functions
What is a step function?
The main characteristic of a step function, and the reason why it truly looks like a staircase doodle, is that this function happens to be constant in intervals. These intervals do not have the same value and we end up with a function which "jumps" from one value to the next (following its own conditions) in a certain pattern. Every time the function jumps to a new interval it has a new constant value and we can observe the "steps" on its graphic representation as horizontal lines. Take a look at the graphic representation below for a typical step function where is plain to see how the name of the function came about.
Notice from the figure above that we can also define a step function as a piecewise function of constant steps, meaning that the function is broken down in a finite number of pieces, each piece having a constant value, in other words, each piece is a constant function.
When talking about step functions we cannot forget to talk about the Heaviside function also known as the unit step function. This is a function that is defined to have a constant value of zero up to a certain point on t (the horizontal axis) at which it jumps to a value of 1. This particular value at which the function jumps (or switches from zero to one) is usually taken as the origin in the typical coordinate system representation, but it can be any value c in the horizontal axis.
Consequently, the Heaviside step function is defined as:
Where c represents the point at t in which the function goes from a 0 value to a value of 1. So, if we want to write down the mathematical expression of the heaviside function depicted in figure 3, we would write it as: u3(t). Notice how we only wrote the math expression using the first type of notation found in equation 1, this is because that notation happens to be the most commonly used and is also the one we will continue to use throughout our lesson. Is still important though, that you know about the other notations and keep them in mind just in case you find them throughout your studies.
From its definition, we can understand why the Heaviside function is also called the "unit step" function. As it can be observed, a Heaviside function can only have values of 0 and 1, in other words, the function is always equal to zero before arriving to a certain value t=c at which it "turns on" and jumps directly into having a value of 1, and so, it jumps in a step size of one unit.
Notice how we have used the phrase "turning on" to describe the process of the unit step function to jump from a zero value to a unit value, this is a very common way to refer to Heaviside step functions' behavior and interestingly enough, it is all due to their real life usage and comes from the reason why they were invented. Historically, physicist, self-taught engineer and mathematician Oliver Heaviside, invented Heaviside step functions in order to describe the behaviour of a current signal when you click the switch on of an electric circuit, thus, allowing you to calculate the magnitude of the current from zero when the circuit is off, to a certain value when is tuned on.
There is an important note to clarify here. Electric current does not magically jump from a zero value to a high value. When having a constant current through a circuit, we know that this constant value obviously started off from zero from when the circuit was off and then it arrived until a certain value after gradually increasing in time, the thing is that electric current travels at a very high speed, and so it is impossible for us (in a simple everyday life setting) to see this gradual increase of the current from zero to its final value since it happens in a very short instant of time, and so, we take it as a "jump" from one value to the next and describe it accordingly in graphic representations.
## Heaviside function properties
Although the Heaviside function itself can only have the values of 0 or 1 as mentioned before, this does not mean we cannot obtain a graphic representation of a higher jump using Heaviside step functions. It is actually a very simple task to obtain a higher value since you just need to multiply it for any constant value that you want to have as the size jump, in other words, if you have a step function written as 3u5(t) in here you have a step function which has a value of zero until it gets to a value of t=5, and which point, the function has a final value of 3. This can be seen in the figure below:
One of the greatest properties of Heaviside step functions is that they allow us to model certain scenarios (like the one described for current in an on/off circuit) and mathematically be able to solve for important information on these scenarios. Such cases tend to require the use of differential equations and so here we have yet another tool to solve them.
Being this a course on differential equations, the most important point of this lesson is to give an introduction to a function which will aid in the solution of certain differential equations, such tool will be used along others seen before, such as the Laplace transform. This will serve to come up with important formulas to be used and to be prepared for the next lesson in which you will be solving differential equations with step functions.
We will talk a bit more about this on the last section of this lesson, meanwhile for a review on the definition of the unit step function and a list of a few of its identities, we recommend the next Heaviside step function article.
## Heaviside step function examples
Let us take a look into an example in which you will have to write all of the necessary unit step functions in order to completely describe the graphic representation found in figure 5.
#### Example 1
Write the mathematical expression for the following graph in terms of the Heaviside step function:
Let's divide this in parts so we can see how the functions of the graph behave at each different value given. And so, we will write a separate expression for each of the next pieces: for t < 3, for t=3 to t=4, for t=4 to t=6 and for t>6.
• For t < 3: Notice that this is a regular Heaviside function in which c=0, multiplied by a 2 in order to obtain the jump of 2 units in size. This would fit the requirement of the function being zero for all negative values of t, and then to have a value of 2 for values of t from 0 to 3. And so, the expression is: 2uo(t)=2.
• For t=3 to t=4:
In this range we have to cancel the unit step function that we had before, that means we need a negative unit step function in here but this one will start to be applied at t=3 and will have to be multiplied by 2 again in order to cancel the value of the previous expression. Thus, our expression for this part of the function is: -2u3(t).
• For t=4 to t=6:
If we weren't to add any function at t=4, the value of the function would remain as zero to infinity since the second function cancelled the first one, but instead, we see in the graph that we have a diagonal line increasing one unit step size for each one unit of distance traveled on t.
Thus, since the function is increasing at the same rate as t, we could easily multiply a new unit step function which starts at 4 by t and be over with, this would produce a diagonal line following the same behavior. The problem is that just multiplying u4(t) by t would produce a line that would come out of the origin instead of t=4, and for that, we need to multiply the unit step function by (t-4) so the function starts at the same time as the unit step function will be applied (which is at t=4). And so the expression is: (t-4)u4(t).
Notice (t-4)u4(t) produces the values for y of: 0 (when t=4), 1 (when t=5) and 2 (when t=6), which is what the graph requires.
• For t > 6:
For this last piece of the graph should be already easy that we just need to cancel our last function in order to have a value of zero back to the graph. For that we use a negative unit step function which starts at t=6, and which should be multiplied again by (t-4). And so, the expression to cancel our last one and completes the graph is: -(t-4)6(t).
We add all of the four pieces of function we found to produce the expression that represents the whole graph shown in figure 5:
Due to this particular example having multiple step function samples, we continue onto the next section to work on more complicated problems.
If you would like to continue practicing how to write down Heaviside step functions, we recommend you to visit these notes of step functions for more Heaviside function examples along with a little introduction. Notice these notes also introduce the topic for our next section: unit step function Laplace transforms and the overall use of the Laplace transform when associated to Heaviside functions.
# Laplace transform of Heaviside function
You have already had an introduction to the Laplace transform in recent past lessons, still, at this time we do recommend you to give the topic a review if you think it appropriate or necessary. The lesson on calculating Laplace transforms is of special use for you to be prepared for this section.
Let us continue with the Heaviside step function and how we will use it along the Laplace transform. The Laplace transform will help us find the value of y(t) for a function that will be represented using the unit step function, so far we have talked about step functions in which the value is a constant (just a jump from zero to a constant value, producing a straight horizontal line in a graph) but we can have any type of function to start at any given point in time (which is what we represent with t mostly).
What are we saying? Well, think on the graphic representation of a function, you can have any function, with any shape, but this special case comes from the fact that the function will be zero through time, until a certain point in which the signal will turn on, and then it will "jump" into this "any shape" function behavior. This is what we call a "shifted function" and this is because we can think of any regular function graphed, and then saying "oh but we want this function to start happening later" and we "switch" it to start at a later point in time (at t=c).
Since these shifted functions will be equal to zero until at certain point c in which they will "turn on", they can be represented as:
The shifted function then is defined as:
Now let's see what happens when we take the Laplace transform of a shifted function!
First, remember that the mathematical definition of the Laplace transform is:
Therefore the Laplace transform for the shifted function is:
Notice how the Laplace transform gets simplified from an improper integral to a regular integral where you have dropped the unit step function. The reason for this is that although the range of the whole Laplace transform integral is from 0 to infinity, before c (whatever value c has) the unit step function is equal to zero, and therefore the whole integral goes to zero. After c, the unit step function has a value of 1, and thus, we can just take it as a constant value of 1 multiplying the rest of the integral, which range is not from c to infinity.
Continuing with the simplification of the Laplace transform of the shifted function, we set x = t - c, which means that t = x+c and so the transformation looks as follows
By making the integral to be in relation to one single variable rather that a t-c term, we have simplified this transformation so we could obtain a quickly manageable formula we can readily use with future problems and our already known table of Laplace transforms from past lessons.
Having solved equation 5 makes it easier to obtain another important formula: the unit step function Laplace transform. Notice, not the transform for a shifted function, but the Laplace transform of the unit step function (Heaviside function) itself and alone.
If you notice, equation 5 was useful while obtaining equation 6 because taking the Laplace transformation of the Heaviside function by itself can be taken as having a shifted function in which the f(t-c) part equals to 1, and so you end up with the Laplace transform of a unit step function times 1, which results in the simple and very useful formula found in equation 6.
Now let us finish this lesson by working on some transformation of a unit step function examples.
#### Example 2
Find the Laplace transform of each of the following step functions:
• Applying the Laplace transform to the function and using equations 5 and 6 we obtain:
• Using equation 5, we set x=t-c (for this case x=t-5) and work through the transformation on the second term of the last equation:
Notice that in order to solve the last term, we used the method of comparison with the table of Laplace transforms. You can find such table on the past lessons related to the Laplace transform.
• Now let us solve the third transformation (remember we set x = t-c, which in this case is x=t-7):
• Now we put everything together to form the final answer to the problem:
#### Example 3
Find the Laplace transform for each of the following functions in g(t):
• Now we use equation 5 and set up x=t-c to solve the Laplace transform:
• We separate the two terms found on the right hand side of the equation, and solve for the first one (for this case x=t-) using the trigonometric function: sin(a+b)=sin(a)cos(b)+cos(a)sin(b).
Therefore:
• Now let us solve the second term, where x = t-4, therefore, t=x+4:
• Putting the whole result together:
And now we are ready for our next section where we will be solving differential equations with what we learned today. If you would like to see some extra notes on the Heaviside function and its relation with another function which we will study in later lessons, the Dirac delta function, visit the link provided.
### Step functions
#### Lessons
A Heaviside Step Function (also just called a "Step Function") is a function that has a value of 0 from 0 to some constant, and then at that constant switches to 1.
The Heaviside Step Function is defined as,
The Laplace Transform of the Step Function:
$L${$u_{c}(t)$ $f(t - c)$} = $e^{-sc}$$L${$f(t)$}
$L${$u_{c}(t)$} = $\frac{e^{-sc}}{s}$
These Formulae might be necessary for shifting functions:
$\sin{(a + b)} = \sin(a)\cos(b) + \cos(a)\sin(b)$
$\cos{(a + b)} = \cos(a)\cos(b) - \sin(a)\sin(b)$
$(a + b)^{2} = a^{2} + 2ab +b^{2}$
• Introduction
a)
What is the Heaviside Step Function?
b)
What are some uses of the Heaviside Step Function and what is the Laplace Transform of a Heaviside Step Function?
• 1.
Determining Heaviside Step Functions
Write the following graph in terms of a Heaviside Step Function
• 2.
Determining the Laplace Transform of a Heaviside Step Function
Find the Laplace Transform of each of the following Step Functions:
a)
$f(t) = 6u_{3}(t) - e^{3t - 15}u_{5}(t) + 3(t - 7)^{2}u_{7}(t)$
b)
$g(t) = -\sin{(t)}u_{\pi}(t) + 2t^{2}u_{4}(t)$
• 3.
Determining the Inverse Laplace Transform of a Heaviside Step Function
Find the inverse Laplace Transform of the following function:
$F(s) = \frac{4e^{-3s}}{(s - 2)(s + 3)}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8628491163253784, "perplexity": 239.58458368028974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890105.39/warc/CC-MAIN-20200706042111-20200706072111-00087.warc.gz"} |
https://www.freeconvert.com/unit/meters-to-inches | # Convert Meters to Inches
Enter a value below and we will automatically convert it to Inches
## How to convert Meters to Inches?
"Inches"="Meters"xx39.370078740157
Example: Convert 10 Meters to Inches
"Inches"="10 Meters"xx39.370078740157 = 393.7007874 "Inches"
### Meters to Inches Conversion Table
Meters Inches
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
12 12
13 13
14 14
15 15
16 16
17 17
18 18
19 19
20 20
Want to convert large files without a queue or Ads? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5147109627723694, "perplexity": 955.4850929886563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00182.warc.gz"} |
https://www.splashlearn.com/math-vocabulary/geometry/rectangle | # Rectangle – Definition with Examples
## What is Rectangle?
A rectangle is a closed 2-D shape, having 4 sides, 4 corners, and 4 right angles (90°).The opposite sides of a rectangle are equal and parallel. Since, a rectangle is a 2-D shape, it is characterized by two dimensions, length, and width. Length is the longer side of the rectangle and width is the shorter side.
## Rectangles Around Us
Rectangle being the most common shape forms a part of our day to day life. Some real-life examples of the rectangle are given below.
## What Else Can We Call a Rectangle?
• Since, all the angles of a rectangle are equal, we also call it an equiangular quadrilateral. A quadrilateral is a 4-sided closed shape.
• Since, a rectangle has parallel sides, we can also call it a right-angled parallelogram. A parallelogram is a quadrilateral whose opposite sides are equal and parallel. Rectangles are special case of paralleograms.
## What is a Diagonal of a Rectangle?
The line segments that join the opposite corners of a rectangle are called diagonals. In the given figure, the two diagonals of the rectangle are AC and BD. The diagonals of a rectangle are the same in length. Therefore, AC = BD.
If we know the length and width of a rectangle, we can use Pythagorean Theorem to find the length of its diagonal.
In the given figure, ADB forms a triangle right-angled at A. The diagonal (BD) of the rectangle forms its hypotenuse.
so, using the pythagorean theorem, we get,
diagonal$^{2}$ = length$^{2}$ + width$^{2}$
∴ diagonal = $\sqrt{length^{2} + width^{2}}$
## Properties of a Rectangle
The properties of a rectangle are as given below:
1. It is a flat and closed shape.
2. It has 4 sides, 4 angles, and 4 corners (vertices).
3. It has 2 dimensions, namely, length and width
4. Every angle of a rectangle measures 90°.
5. Opposite sides are equal and parallel.
6. It has 2 diagonals of equal length.
## Area and Perimeter of a Rectangle
### Area of a Rectangle
The space occupied by a rectangle is termed as its area. The area of a rectangle can be calculated by finding the product of its length and width. So,
Area of a rectangle = Length × Width
Since the area of a rectangle is the product of the length and width, it is measured in square units, like, square meters (m2), square inches (in2) and so on.
### Perimeter of a Rectangle
The perimeter of a rectangle is the sum of the length of its four sides. Let’s find the formula for the perimeter of a rectangle.
Perimeter of a rectangle = length + width + length + width
= 2 × length + 2 × width
= 2 (length + width)
Since, we are adding the units of length to find the perimeter, its unit is also the
The perimeter is measured in the unit of length (inches or feet or metre and so on) because we add the length of the sides to find the answer.
## Fun Facts
1. All rectangles are parallelograms, but all parallelograms are not rectangles
2. The diagonals of a rectangle divide the rectangle into four triangles
3. Every square is a rectangle, but every rectangle is not a square
## Solved Examples
1. Identify rectangles in the given figures.
Solution:
Shape A and D are rectangles because they have opposite sides equal and parallel and all four right angles.
1. Identify the length, width and diagonal in the given rectangle.
Solution:
Length → PQ and RS
Width → SP and RQ
Diagonals → PR and QS
1. The length and width of a rectangle are 7 inches and 21 inches respectively. Find its perimeter.
Solution:
Perimeter of a rectangle = 2 × (Length + Width)
= 2 × (7 + 21) inches
= 2 × (28) inches
= 56 inches
1. The length and width of a rectangle are 0.3 m and 15 cm. Find its area.
Solution:
Length = 0.3 m and Width = 15 cm
The length and width of the rectangle are in different so, so we convert one of them. Let’s convert length into centimeters by multiplying it by 100 because 1 m = 100 cm.
So, length = 0.3 ✕ 100 cm = 30 cm
Area = length ✕ width = 30 cm ✕ 15 cm = 450 cm2
1. Find the length of the diagonal of a rectangle whose sides are 8 inches and 5 inches.
Solution:
Length of the diagonal = $\sqrt{length^{2} + width^{2}} = \sqrt{8^{2} + 6^{2}} = \sqrt{64+36} = \sqrt{100} = \sqrt{10 ✕ 10}$ = 10 inches
## Practice Problems
1
### What is the area of a rectangular cardboard 1 m long and 30 cm wide?
30 sq. cm
300 sq. cm
3,000 sq. cm
30,000 sq. cm
CorrectIncorrect
Correct answer is: 3,000 sq. cm
Length of the cardboard = 1 m = 100 cm and width = 30 cm
Area of rectangular cardboard = 100 cm ✕ 30 cm = 3,000 sq. cm
2
### What is the perimeter of a rectangle with length 16 feet and width 7 feet?
23 feet
46 feet
112 feet
305 feet
CorrectIncorrect
Correct answer is: 46 feet
Perimeter = 2 × (16 + 7) cm = 2 × 23 feet = 46 feet
3
### What is the width of the rectangle whose length and area are 8 cm and 32 cm2 respectively?
256 cm
4 cm
40 cm
80 cm
CorrectIncorrect
Correct answer is: 4 cm
Area = Length × Width
So, Width = Area ÷ Length of the rectangle = 32 ÷ 8 = 4 cm.
## Frequently Asked Questions
A rectangle is a closed 2-D shape, having 4 sides, 4 corners, and 4 right angles (90°).The opposite sides of a rectangle are equal and parallel.
Some real-life examples of a rectangle include books, door, table top, blackboard etc.
All sides of a square are equal whereas opposite sides of a rectangle are equal and parallel.
All rectangles are not squares because by definition of a rectangle, its opposite sides must be equal but Its adjacent may or may be be equal. So, only rectangles with equal adjacent sides are squares. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7841063737869263, "perplexity": 919.0892532833566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00604.warc.gz"} |
http://parabon.com/dev-center/sdk/FrontierSDKUG/index.html | # Introduction
Welcome to the Frontier® SDK User's Guide & Tutorial. This document describes the Parabon® Frontier Software Development Kit (SDK), a collection of software tools and libraries that enables you to create distributed, compute-intensive applications that run on the Frontier grid platform.
The tutorial is designed to:
• Provide instructions on the installation and setup of the Frontier SDK.
• Review the basics of developing applications that run on the Frontier Internet computing platform.
• Illustrate useful techniques and "best practices" that you can use when you develop your own Frontier applications.
Before starting the tutorial you should first read the white paper The Frontier Platform Application Programming Interface , which explains many of the concepts the tutorial explores.
Frontier is the first commercial grid computing platform that aggregates the unused power of computers, connected exclusively within an enterprise or across the Internet, transforming idle computing resources into a general-purpose, high-performance computing environment. Frontier does this using three main components: the client application, the Frontier® server, and the Frontier® Compute Engine.
A Frontier client application starts by creating a job and sending smaller units of work called tasks within the job to the Frontier server. The server then forwards these tasks to providers-computer users who have the Frontier Compute Engine installed-for execution using their computers' spare power. While a task runs on a provider node, progress information and results are reported back to the Frontier server. A Frontier application can run a listener which queries the server for any results that have been collected, and listens for new results from the Frontier server. The listener can be stopped and restarted at will, each time obtaining the latest results from the Frontier server. When the job is complete and all results have been received the application removes the job from the server.
## Frontier SDK Directory Structure
The following is a brief overview of the Frontier SDK directory hierarchy.
Subdirectory Contents
bin SDK tools and scripts
conf Configuration files for the SDK tools and the client trusted certificate authorities (i.e. client.truststore)
demo Sample applications
doc Contains this document, the Frontier API white paper, and JavaDocs for the Frontier API
lib Frontier SDK libraries
simulator Directory where the Frontier Grid Simulator stores temporary files. This directory is automatically created when a demo application is launched in "simulator" mode.
This User's Guide & Tutorial is divided into the following sections:
Section Description
Introduction Provides an introduction to the Frontier SDK and Parabon's Frontier grid computing platform, and document and Technical Support contact information.
Setup Provides procedures for setting up the Frontier SDK for Remote Mode execution. It also includes instructions for building and running the sample applications included in the Frontier SDK package.
Tutorial Contains lessons that provide a step-by-step review of the fundamentals of developing a Frontier application.
Examples Complete sample applications that illustrate advanced Frontier programming techniques. These are a useful reference when developing your own Frontier applications. They are presented in a format similar to the lessons in the tutorial.
Appendix A: Using Client-Scope and Global Elements Provides instructions re-using task elements across multiple jobs.
Appendix B: Glossary Contains terms specific to the Frontier computing environment.
## Typographical Conventions
The following typographical conventions are used in this document:
Typeface Description Example
FixedWidth Application and task code, program output, file, and class names. To create a task we must first create and populate a TaskSpec object.
Bold Exact commands and characters typed by the user. The command
prime -listen
launches the "prime" sample application in "monitor" mode.
Italic
BoldItalic
Indicates a placeholder that should be replaced by the actual value. To build the sample, enter the following commands, replacing app_name with the name of the lesson's sample application (e.g., "local" for Lesson 1):
1. cd frontier-sdk-home/demo/app_name/src
2. ant
## Related Documents
The Frontier Platform Application Programming Interface
## Technical Support
For technical support, troubleshooting, and other questions concerning Frontier application development, please feel free to email or call the technical contacts provided by Parabon or fill out and submit our email form at http://www.parabon.com/MyFrontier/support.jsp.
# Setup
The Frontier SDK can run an application in one of three modes: local, simulator, or remote. This section describes how to configure an application to run in each mode.
## Local Mode
Local mode runs an application's tasks on the local machine and is used primarily for program testing and debugging. By default, all applications run in local mode unless explicitly configured to run in one of the other two modes.
## Simulator Mode
When running an application in simulator mode, the Frontier SDK will automatically launch the Frontier Grid Simulator thread to process the application's tasks. The simulator thread will emulate a small grid consisting of 3 compute engines, and will display the following window to show the application's progress. This mode is also used for program testing and debugging, but also provides the programmer a means to test the remote components in a controlled environment prior to submitting the job to the real Frontier Grid.
### Frontier Grid Simulator Configuration
The following system properties can be set to control the Frontier Grid Simulator. Default values are defined in the frontier-sdk-home/conf/frontier.properties file.
Property Description
com.parabon.gridSimulator.displayGui If set to true, the Frontier Grid Simulator window will appear when an application is launched in simulator mode. If false, the Frontier Grid Simulator will run quietly in the background. Default true.
com.parabon.gridSimulator.maxJavaHeap The maximum Java Virtual Machine (JVM) heap size per task when running an application in simulator mode. Format is equivalent to Java's "-Xmx" parameter. Default 256m.
Examples:
128m => 128 Megabytes 128k => 128 Kilobytes 1g => 1 Gigabyte
## Remote Mode
In remote mode, the application is launched from the local machine, and tasks are sent to the Frontier server. The Frontier server sends the tasks to provider machines to execute the tasks. When the provider machines complete the tasks, the results are sent back through the server to the application. Remote mode lets your application take full advantage of the power of the Frontier grid computing platform.
### Client Registration
Before running an application in remote mode, you must first register for an account with the Frontier Grid Server. This will provide you with the necessary credentials to connect to, and exchange data securely with the server. If you haven't already done so, visit the Frontier Grid Server web site at http://www.parabon.com/MyFrontier/ to register for an account.
## Runtime Configuration
The parameters the Frontier SDK uses to connect to the Frontier Grid Server are specified by a group of Java system properties. These properties are configured by the Frontier SDK installation, but may be overridden by your application by either defining the properties on the java command line, or coding your application to set the properties before it connects to Frontier.
To set system properties on the java command line, do:
java -Dproperty=value -Dproperty=value... application-main-class
To set system properties from within your application use the System.setProperty() method as follows:
System.setProperty(property,value);
These methods can be combined; for example, your application could set default values for properties not specified on the command line. Note that if your Frontier application is a web application running in a J2EE application server, the J2EE server may restrict the system properties your application can modify.
The frontier-sdk-home/conf/frontier.properties file sets the default values for most of the system properties used by the Frontier SDK. This property file is loaded by the SDK Session Manager at initialization time.
Note: All properties set via the java command line take precedence over properties set in the frontier-sdk-home/conf/frontier.properties file.
The system properties used by the Frontier SDK and their typical values are shown below.
Property Description
mode A value of remote runs the application in remote mode; any other value runs in local mode. Default local.
javax.net.ssl.trustStore SSL certificate truststore. The Frontier SDK uses SSL for all communication between the application and the Frontier server.
Normally:
frontier-sdk-home/conf/certs/client.truststore
Note: This property should not be modified.
com.parabon.frontier.user.username
com.parabon.frontier.user.password
Your user name and password registered with the Frontier server (i.e. Your Frontier account credentials -- user name is typically your email address). If any of these properties are not set (default), you will be prompted for them when you run a Frontier application in remote mode.
com.parabon.io.ssl.proxyHost
com.parabon.io.ssl.proxyPort
Set to the host name and port number of your proxy server. Only required if you use a proxy to connect to the Internet.
com.parabon.io.ssl.proxyAuthRequired
com.parabon.io.ssl.proxyAuthUser
com.parabon.io.ssl.proxyAuthPassword
When proxyAuthRequired is set to true, the Frontier SDK will authenticate to the proxy server using the username and password specified by the proxyAuthUser and proxyAuthPassword properties. Default value for proxyAuthRequired property is false .
# Sample Applications
Included with the Frontier SDK are several sample programs that demonstrate how to write a Frontier application. The samples include the files needed to run the application, complete source code, and an Ant build script for each sample. The sample files are located in the demo subdirectory of the Frontier SDK installation.
## Building the Samples
Each sample application provides an Ant build.xml file to build the application. Ant is a popular, platform-independent Java build tool that can be run from a command line or integrated into most popular Java IDEs. UNIX make and Microsoft® Windows® nmake makefiles, and a Windows batch file, are also provided for systems where Ant is unavailable.
To build a sample application, first set the JAVA_HOME and ANT_HOME environment variables to point to the Java JDK and Ant installation directories, and add the JDK and Ant executables to the system path:
• UNIX or Linux (Bourne/Bash/ksh shells):
JAVA_HOME=path-to-JDK
ANT_HOME=path-to-Ant-install
PATH=ANT_HOME/bin:$PATH export JAVA_HOME ANT_HOME PATH • UNIX or Linux (csh/tcsh shells): setenv JAVA_HOME path-to-JDK setenv ANT_HOME path-to-Ant-install set path=( ANT_HOME/bin$path)
• Microsoft® Windows®:
set JAVA_HOME=path-to-JDK
set ANT_HOME=path-to-Ant-install
path %JAVA_HOME%\bin;%ANT_HOME%\bin;%path%
For Windows these setting can be made persistent using the Control Panel. Select:
and set these environment variables in the User Variables pane.
To build a sample, enter the following commands replacing app_name with the name of the sample application directory (e.g., "local" for the sample used in Lesson 1) and frontier-sdk-home with the directory the Frontier SDK was installed in:
• UNIX or Linux:
cd frontier-sdk-home/demo/app_name/src
ant
• Microsoft® Windows®:
cd frontier-sdk-home\demo\app_name\src
ant
If Ant is unavailable, build the sample using the supplied batch file:
cd frontier-sdk-home\demo\app_name\src
buildwin32.bat
Two jar files are created by the build: one with the application code (app_nameApp.jar) and one that contains the task code (app_nameTask.jar).
You can use the Ant build files as examples when developing your own Frontier applications.
## Running the Samples
Each sample includes a UNIX shell script and Windows batch file to run the application. Run the samples with these commands:
• UNIX or Linux:
cd frontier-sdk-home/demo/app_name
./app_name.sh
• Microsoft® Windows®:
cd frontier-sdk-home\demo\app_name
app_name.bat
The examples will be explained in detail as we proceed through the tutorial on how to write a Frontier application.
# Tutorial Overview
This tutorial teaches the fundamentals of using the Frontier SDK to develop applications for the Frontier platform. The tutorial consists of a series of lessons and corresponding example programs, with each lesson illustrating progressively more complex programming techniques using the Frontier API.
The tutorial expects you to be already familiar with the fundamentals of Java programming, but does not require any prior experience with grid computing.
The lessons in the tutorial are:
The name in parentheses next to each lesson is the name of the directory under frontier-sdk-home/demo containing the sample application and source code explained in that lesson.
# Lesson 1: Running in Local Mode
## Overview
The Frontier Application Programming Interface (API) allows you to run your program in one of three modes: remote, local, or simulator.
Remote mode is the normal operating mode. In remote mode the application is launched from your local machine, tasks are sent to the Frontier server, the server sends the tasks to provider machines, the provider machines execute the tasks, and the results are sent back to the application through the Frontier server.
Local mode performs the above operations entirely on the local machine, executing the tasks locally instead of sending them to the Frontier server. Local mode allows you to run tasks directly on your machine, and is useful both for small tasks, and for application development and testing where the full power of Frontier is not necessary. However, local mode is tuned for efficiency and does not attempt to enforce many of the rules which come into play when running remotely, such as disallowing task-to-task communication or limiting available executable code and security.
Finally, simulator mode also runs on a single machine but simulates much of the behavior of the distributed Frontier system by employing multiple JVMs, messaging between tasks, runtime partitions, et cetera, allowing one to ensure that an application behaves as expected before running it remotely.
LocalApp is coded to run only in simulator mode, though it could just as easily be used in local mode. In the next lesson we'll see how to make this application run remotely.
## Files
The files used in this example are:
File Description
local/src/LocalApp.java Application code to start the tasks and collect their results
local/src/LocalTask.java The task code that actually does the work
You should have a copy of the source files available while you proceed through the tutorial.
## Application
The LocalApp application computes the squares of a set of numbers. It creates a Frontier job and starts a task for each number to compute that number's square. A listener registered by the application receives the results of the tasks as they complete and prints them to the console. Once all the tasks have completed the application terminates.
### main
Let's start in LocalApp.java. The main method instantiates a LocalApp object and calls launch with the numbers whose squares we're going to compute. The launch method does all the work of the application.
public static void main(String[] args) throws Exception {
int[] inputValues = new int[] { 5, 7, 83 };
LocalApp localApp = new LocalApp();
localApp.launch(inputValues);
localApp.waitUntilComplete();
localApp.closeSession();
}
In the constructor, we invoke the createSession method which in turn creates a session manager. This is our first use of the Frontier API.
manager = new SimulatorSessionManager();
The type of session manager created dictates the mode the library is operating in for all jobs and tasks associated with that session manager. SimulatorSessionManager is a version of SessionManager which uses simulator mode to execute all tasks locally in an environment meant to mimic remote execution. For true remote sessions we'll use a different type of SessionManager but most of the rest of the API used here will still apply.
Note that we could modify LocalApp to use full local mode simply by changing this single line to:
manager = new LocalSessionManager();
Once the session is established, we call LocalApp.launch to create a job, populate it with tasks, start those tasks, and register a listener. We then call waitUntilComplete which simply blocks until all tasks have returned results. In the meantime, the LocalApp -- which has been registered as a listener -- will wait for tasks to send back events, reporting results as they're obtained, and noting when all tasks have been run to completion.
Shutting down the application requires several steps. First we call LocalApp.remove which deletes the job -- including all tasks it contains -- via Job.remove. Next we call LocalApp.closeSession, which shuts down the connection to the session manager by calling SessionManager.destroy.
### launch
Now that we have a session we can create a job. Jobs have a set of attributes associated with them, each of which is a key string associated with a string value. We'll visit attributes in more depth later; for now we'll simply provide an empty map.
job = manager.createJob(new HashMap());
Next we need to provide the job with the executable code for the task. The task implementation itself -- LocalTask.class, as described below -- plus any Java classes needed by the task (not including those provided by the task runtime environment, notably the Java standard library plus the relevant portions of the Frontier SDK, such as com.parabon.runtime and com.parabon.client.SerializableTaskContext, etc) must be provided explicitely in a set of jar files associated with each task. Those jar files are described as "executable elements", and can be added directly to either individual tasks or, if they're shared amongst multiple tasks, as most usually are, they can be added to the job and merely marked as 'required' for the task, as described later. For now, we'll merely create job-level executable elements containing the jar with our LocalTask.class.:
File demoDirectory = new File(System.getProperty("demo.home", "."));
File taskJarFile = new File(demoDirectory, EXECUTABLE_ELEMENT_FILENAME);
try {
}
catch (IOException e) {
System.exit(0);
}
The "executable element ID" (taskJarID) is saved so that we can use it later to refer to this new executable element.
We're now able to create tasks -- but first, a little housekeeping. First, we loop through the set of input values which will each be passed to a single task and take note of what will soon be the 'pending tasks'. This will let our listener know what it's listening for and, specifically, when all the tasks we've started have completed. Then, we register the LocalApp object itself as a listener to the job's tasks, so that we'll be sent the results and other status information for all tasks in the job. We do this before starting any tasks, so that we won't miss out on any events that might be sent between starting tasks and registering a listener afterwards.
for (int inputValue : inputValues) {
}
Finally, we're ready to create and start the tasks themselves. We start by creating a SerializableTaskSpec, which serves as a description of a single task. We create only one task spec object and reuse it for each task simply for efficiency, but we could just as well create one task spec per task.
SerializableTaskSpec taskSpec = new SerializableTaskSpec();
Next, we add our single executable element, created above, as a "required executable element" for each task, so that Frontier knows that each task needs a reference to that executable element. If we didn't do this, then even though that executable element exists in the job, the tasks would simply ignore it. By specifying it explicitely, this allows us to put many elements in the job, and pick-and-choose which are needed for each task, hence allowing us to have many tasks performing different functions in a single job. Note that, similarly, this would not work if the executable element ID we pass in wasn't associated with an element in the job -- in this case, our task will simply come back with an error later on.
taskSpec.addRequiredExecutableElement(taskJarID);
Now we loop through out input values so that we can start one task for each. For each task, we first specify a task attribute. Task attribute -- like job attributes discussed above -- are a set of string -> string mappings. We can define any key we want, and associate it with any string value, though we keep both reasonably small for bandwidth reasons -- it would be inefficient to store a large deal of data in this manner. Specifically, what we want to record here is the input value associated with each task, so that when we hear back from that task, we can identify the task and what it was working on. Alternately, we could've created a unique ID for each task and used a local data structure to associate that ID with the input value; it's entirely up to the developer to decide what identifying information to store in attributes, and how to use it.
for (int inputValue : inputValues) {
Next we do two important steps in rapid succession. Don't let the brevity fool you -- these two steps form the heart and soul of the Frontier API. The first step is creating the task object itself. That is, we instantiate an object which implements the SerializableTask interface, and pass to it any parameters we need to define the task. This can be as involved a process as is necessary, though here, we only need a single parameter to define a task -- the 'input value' -- and so we just pass it in the constructor. The actual class we're instantiating is completely up to us; the LocalTask used in this demo is explained in more detail further down, as is the SerializableTask interface it implements.
The second step is to pass that task object to SerializableTaskSpec.setTask. This method will serialize that task object and store it internally; the resulting byte-stream, containing the 'frozen' task object, will be passed around from client to server and finally to a Frontier compute engine where it will be de-serialized and reconstituted into a mirror of the SerializableTask we originally passed in to setTask. From there, it will be executed, as we'll describe later; for now, we're merely sending the original task object in to be frozen and stored in the task spec. Note that task instances can be reused and sent to multiple taskSpecs -- once we call SerializableTaskSpec.setTask with a given task object, we keep ownership of that object and can continue to change it without affecting the contents of the task spec, which serialized and recorded the contents of the object we passed it and now no longer stores a reference to the object itself. We create one object per task here simply because the task object is so small it's not worth keeping across iterations.
taskSpec.setTask(new LocalTask(inputValue));
Finally, we add the task to the job and start it. When we add the task to the job, it's merely stored, dormant, but we receive back a TaskProxy which is now our single conduit of communication with and control of that task (note, however, that we can get a reference to that TaskProxy via other means at a later time if we wish -- for instance, through a task event or directly from the job). We could use this TaskProxy to remove the task at a later time, to register a listener for events produced by that task (which we do not need to do simply because we've already registered a job listener which will receive events from all tasks in that job, which is similar to registering a listener for each task individually), or get the set of attributes we'd associated with the task when creating it. However, what we want to use that TaskProxy for now is to start the task, letting the system know to begin execution as soon as possible. This means starting it in another thread (or queuing it until other tasks have finished) in local mode, or, in remote mode, sending it to the server and letting the server know that it's ready to be sent down to a compute engine.
TaskProxy task = job.addTask(taskSpec, taskAttributes);
}
After the tasks and listener have been created, control returns to the main method.
### Listener
In addition to providing the overall application structure and control, the LocalApp class also serves as a listener for task events. Not that it is not required that the listener by the main application class; in fact, most complex applications will use a separate class (or classes) altogether to serve as task event listeners. As we've registered the LocalApp instance as a job listener, we'll receive events from all tasks in that job; the types of events we'll receive is dictated by which listener interfaces we implement, discussed further below. The goals of this particular listener are twofold: first, report task results, progress, and errors; and second, keep track of when tasks have completed so that we know when all pending tasks have finished. This latter goal allows the waitForComplete method to return and for the application to subsequently clean up and shut down.
As a task listener, we're required to implement TaskEventListener, but this interface itself does nothing for us. Instead, what we want to do is implement one or more sub-interfaces: TaskProgressListener, TaskIntermediateResultListener , TaskResultListener, and/or TaskExceptionListener . Alternatively, we could implement UniversalTaskListener in order to have all events sent to us via a single method and let the listener sort through, but more often than not these explicit single-event-type interfaces are easier to use. We don't sent progress values -- a sort of generalized "percentage complete" report used to help provide a user-visible gauge of how far along a task is in its work, as well as other uses -- as our task is very short; nor do we use intermediate results, a fancier type of status report sent from the task, with contents defined by the task developer. Hence, we'll leave those two interfaces out, and implement only the two most important: TaskResultListener, and TaskExceptionListener.
public class LocalApp implements TaskResultListener, TaskExceptionListener {
In all cases, each interface adds a single method to which the corresponding event type will be sent. In all cases, the task events implement at least the TaskEvent interface, which provides a means to access task attributes in order to identify the task; access to a TaskProxy from which we can control the task and add listeners, stop the task, etc; and get basic information about the task's current state and progress.
First, let's take a look at the resultsPosted method, inherited from the TaskResultListener, which is sent the final results of a given task via a TaskResultEvent object. The first thing we do in this method is to use LocalApp 's getInputValue method to look at the task attributes for the task which produced the event in question. Specifically, this method will pull out the value of the "InputValue" attribute created in our launch method.
private int getInputValue(TaskEvent event) {
return Integer.parseInt(
}
int inputValue = getInputValue(event);
Next, we want to get the actual task results. The task results are the value returned from the task's run method (described later) -- after those results have been serialized, sent around as a byte-stream, and de-serialized. In our case, LocalTask.run returns an Integer, so that's what we expect to receive here. After obtaining the results, we simply print them to the console; most applications would of course do something more meaningful with task results.
int result = (Integer)event.getResultsObject();
System.out.println("The square of " + inputValue + " is " + result);
Finally, the fact that we've received the task's final results implies that the task is complete. We'll call our taskComplete method with the input value associated with this task, and taskComplete will first record the fact that this particular task is no longer 'pending' and, if this was the last active task, notify waitForComplete that the entire job appears to have been completed.
taskComplete(inputValue);
}
private synchronized void taskComplete(int inputValue) {
notifyAll();
}
}
Next we'll take a look at exceptionThrown, inherited from TaskExceptionListener, which receives a TaskExceptionEvent. This listener is notified when an error occurs while running this task. This exception could be the result of an Exception or other Throwable being thrown from the task's run method, an error during de-serialization, a missing executable element, a security violation, or some other persistent error caused by a bug in the task or task specification; these are referred to as "user exceptions", and the TaskExceptionEvent.isUserException method will return true in these cases. Alternatively, the error may have been caused by a systemic error: for instance, a corrupted installation of the engine running the task, or insufficient memory or other resources. When possible, in remote mode, the Frontier server will attempt to rectify these problems by re-running the task on another engine; however, if this is not feasible or does not resolve the issue -- or if the task is running in local or simulator mode, as it is here -- such errors will be reported back to the client. All errors have an associated description encoded as a string; if the error was caused by a thrown Exception, this description will generally contain the exception's message, stack trace, and cascading nested exceptions as applicable. The exception event will also, when possible, contain codes giving more precise information about the categories of specific error conditions, such as "out of memory" or "security violation"; these are returned by TaskExceptionEvent.getCode and .getSubcode.
Our exceptionThrown method is similar to our resultsPosted method. First, the task is identified via its attributes. Second, the error condition is printed to the console. Third, because an exception means that a task is effectively (if unsuccessfully) complete -- as all tasks end with either results or an exception, but never return both or continue executing after either have been sent -- we'll record the task's completion so that the application can terminate when everything is complete.
public void exceptionThrown(TaskExceptionEvent event) {
int inputValue = getInputValue(event);
System.out.println("Exception (" + event.getCode() + ")"
+ " while working on input value " + inputValue);
System.out.println(event.getDescription());
System.out.println();
}
Now that we've seen how to start tasks and receive their results let's look at the task itself.
The code for the task used in this lesson is in LocalTask.java . It's a simple task that takes a single integer as input, squares it, and sends back the result. Of course, using the Frontier system to perform such a simple task is overkill, but we don't want to concentrate on a complex task here, merely the means to execute it; suffice to say that the work done by the task will in general be significantly more complex and longer-running than squaring a number, but the basic principles of defining a task and returning results remain the same.
Although a task may involve dozens of arbitrary Java classes, each task is ultimately defined by a single entry-point class (or, rather, an instance of this class). This class must implement com.parabon.client.SerializableTask (or com.parabon.runtime.Task, but that's a topic for another lesson), which in turn extends the java.io.Serializable interface. Hence, implementing a task first means following the rules of the Serializable interface (including the option of using Externalizable instead); this common part of the Java platform is described by many pages and books, so we won't go into it here beyond noting that one should be careful when making a task serializable in order to avoid serializing either large or unnecessary pieces of data (a separate mechanism for sending data to tasks is described in Lesson 3, Using Data Elements) or classes which are not included in the executable elements associated with a task; for instance, the transient keyword should be used liberally when applicable, and in some cases a task may even with to use Externalizable for finer-grained control of task serialization. When serialization seems entirely inappropriate for specifying a given task, one may wish to switch to the 'flat' or byte-stream mode of the API, based on com.parabon.runtime.Task, which avoids the use of Serializable altogether, giving a task fine-grained control of the bytes used for specifying both tasks. The remainder of this lesson, however, will assume the use of SerialiableTask.
LocalTask is straightforward and contains only a single member variable: inputValue. Given this simple data structure, it requires very little to make it properly Serializable, though it does include a serialVersionUID as recommended for Serializable classes.
public class LocalTask implements SerializableTask {
private static final long serialVersionUID = -1;
private int inputValue;
The task developer should keep in mind that task instances exist in two places: on the client side and on an engine. In the latter case, the task is being specified, and so the constructor and whatever other mutator methods are required to set up a task and it's parameters should be included here. How this is done is up to the task developer, but it should be kept in mind that little work should be done on this side, and extraneous initialization which could be just as easily be performed on the engine side should be avoided, both to ensure that the task creation is as efficient as possible and does not become a bottleneck, and to keep the size of the serialized task small by avoiding predigested, redundant pieces of data. One exception to this guideline is when precomputation is fast and reduces the size of a task -- for instance, when only a subset of a larger data structure is actually needed for the task to perform its work.
In the case of LocalTask, the only client-side interface is the constructor, which accepts an integer to use as the inputValue parameter for the task.
public LocalTask(int inputValue) {
this.inputValue = inputValue;
}
The other face of a task is the runtime interface. After the task is defined by the client application, serialized, sent down to an engine, and de-serialized, then the methods in the SerializableTask interface become important. At this point -- once the run method is invoked -- the task has been 'fully baked', so to speak, and is free to perform further initialization, knowing that it's parameters have been fully specified and it can afford to use both more computation and more data. This division between serialized vs. started tasks is softened by the introduction of checkpoints, but these will be dealt with in detail in Lesson 4, Using Checkpoints.
SerializableTask contains only two methods: run and stop. run is the first method that's called on a task after it's de-serialized, and that's where a task does its work, returning the final results when complete. In the case of LocalTask, its work is simple: square inputValue and return an Integer representing the result. As the result isn't strongly typed -- if the client application expects a different type than the task returns, it will generally result in an exception on the client side when the cast is made -- it's a good idea to be very explicit about the type being returned, for instance tightly specifying the return type of run rather than using the broader type of Serializable.
public Integer run(SerializableTaskContext context) {
return (Integer)(inputValue*inputValue);
}
Note that run takes a single parameter of type SerializableTaskContext. This context may be used to communicate with the runtime, performing such operations as reporting progress and intermediate results, obtaining data (as described in Lesson 3, Using Data Elements), logging checkpoints (as described in Lesson 4, Using Checkpoints), and other operations. None are used in LocalTask, so the context parameter is never referenced.
The other method in the SerializableTask interface is stop. The concept of this method is simple: if the engine needs to temporarily (or permanently) stop executing a task for any reason, such as when the engine's compute resources are temporarily needed by a user, it invokes this method to request that the task shut down gracefully. This gives the task the opportunity to register a last checkpoint, send back one last intermediate result, or do other cleanup before being temporarily put on a shelf, likely to be restarted from the last checkpoint at a later time, and then to exit gracefully from the run method by throwing a TaskStoppedException . The task is under no obligation to actually do anything at all based on this call, it's merely a courtesy from the engine; if the task doesn't shut down -- or doesn't shut down quickly enough -- the engine will usually stop the JVM or otherwise forcefully shut down the task. Similarly, there are no guarantees that the engine will actually call this method before shutting the task down.
Another means for a task to accomplish the same thing as the stop method is to periodically check the value returned by SerializableTaskContext.shouldStop. For some tasks, this mechanism is more convenient than being interrupted by the stop method -- for instance, it could be checked each time through a loop. Both provide the same information from the engine, however, so it's up to the task which to take advantage of, if either.
As the LocalTask is very short-running and doesn't have any loops, it wouldn't be meaningful to have it be able to stop; hence, LocalTask has an empty stop method.
public void stop() {
// Stopping not supported
}
## Running the Application
We now have a complete and functional Frontier application. You can run the application as show below:
• UNIX or Linux:
cd frontier-sdk-home/demo/local
./local.sh
• Microsoft® Windows®:
cd frontier-sdk-home\demo\local
local.bat
The application's output should look like this:
Connecting to session manager
Connected to simulator
Starting task to compute the square of 5
Starting task to compute the square of 9
Starting task to compute the square of 100
Waiting for results...
The square of 5 is 25
The square of 9 is 81
The square of 100 is 10000
Note that task results are received in a random order, due to variations in the scheduling of the tasks.
# Lesson 2: Running in Remote Mode
Note: You must complete the additional Frontier SDK configuration described in Remote Mode Setup before your application can run in remote mode. If you have not already done so, perform this step before continuing with this lesson.
## Overview
In Lesson 1 we saw how to create and run a simple Frontier application on our local machine. We'll now extend this application to make it capable of running either on our local machine, or remotely on the Frontier platform.
This lesson will also show you how to implement the Launch-and-Listen design pattern used by Frontier applications. In Launch-and-Listen, an application launches its tasks on the Frontier server and then often exits without waiting for the task results. At a later time, the application is run again to obtain any pending results from the server. The application can be run repeatedly until all results have been obtained for the tasks.
Without using Launch-and-Listen, an application would have to stay connected to the Frontier server until all its tasks have completed. For a complex Frontier application, this might be hours or days, and if the application was terminated during this time (user logged off, system crashed, etc.), the task results would be lost.
## Files
The files used in this lesson are:
File Description
remote/src/RemoteApp.java Application code to start the tasks and collect results
remote/src/RemoteTask.java The task that actually does the work
## Application
The RemoteApp application performs the same calculations as the LocalApp application in the previous lesson: it computes the squares of a set of numbers. Depending on the command line flags passed when the application is run, the RemoteApp application will do one or more of the following:
• Create a Frontier job and start tasks to compute the squares of the numbers.
• Connect to an already-existing Frontier job (unless the job was launched in the same invocation) and monitor task status and results.
• Delete the Frontier job and its tasks from the Frontier server.
You would normally run RemoteApp a first time to create the job and tasks, then run it again to obtain the task results. Once all task results have been received, you would run the application one last time to delete the job from the Frontier server. Multiple flags may be used to do more than one of these operations in sequence -- for instance, launching a job and immediately starting to listen for results, or listening for results and deleting the job once all results have been obtained.
### main
Let's start with the RemoteApp.main method. The method starts by determining what operation should be performed based on the command line arguments.
boolean launch = false;
boolean listen = false;
boolean remove = false;
if (args.length == 0) {
usage();
System.exit(1);
}
for (String arg : args) {
if (arg.equals("launch")) {
launch = true;
} else if (arg.equals("listen")) {
listen = true;
} else if (arg.equals("remove")) {
remove = true;
} else {
usage();
System.exit(1);
}
}
Next we create a new RemoteApp instance, similarly to LocalApp but using a "job name" derived from an environment variable; more on this later. During the RemoteApp constructor, a connection to the remote server is initiated -- more on this later, as well.
String jobName = System.getProperty("jobName", DEFAULT_JOB_NAME);
RemoteApp remoteApp = new RemoteApp(jobName);
Following this, we perform the operation specified by the command line arguments. We either create a new job or connect to an existing one; optionally listen for results and wait for completion; and finally, optionally stop that job and remove it from the server.
int[] inputValues = new int[] { 5, 9, 100 };
if (launch) {
if (remoteApp.findJob()) {
System.err.println(
"Job of type " + JOB_TYPE + " with name " + jobName +
System.err.println("Use the \"remove\" mode to remove the job.");
System.exit(1);
}
remoteApp.launch(inputValues, listen);
} else {
if (!remoteApp.findJob()) {
System.err.println(
"Job of type " + JOB_TYPE + " with name " + jobName +
" does not exist.");
System.err.println("Use the \"launch\" mode to create a new job.");
System.exit(1);
}
}
if (listen) {
if (!launch) {
}
remoteApp.waitUntilComplete();
}
if (remove) {
remoteApp.remove();
}
This main method illustrates a relatively straightforward version of the launch-and-listen paradigm. One may launch a job by issuing RemoteApp launch, gather results via RemoteApp listen, and remove the job after completion -- or before -- via RemoteApp remove. The only long-running step in this sequence is the "listen" stage, and if the user quits the application -- or suffers a crash, loss of power, etc -- during this stage, he merely needs to restart the application to continue monitoring the tasks' progress. However, more complex applications need only follow the spirit of this sequence, not the letter; for instance, they may launch tasks, listen to progress, and stop jobs explicitely within a single session, re-attaching to existing jobs only as an exception to recover from a crash or similar event.
### createSession
The createSession method, invoked from the constructor, differs from its counterpart in LocalApp in two important ways.
manager = new RemoteSessionManager();
manager.reestablish();
The first difference is the creation of a RemoteSessionManager rather than a LocalSessionManager; this kicks off a session in 'remote' mode, initiating a connection to the Frontier server and performing all subsequent interactions through that SessionManager -- such as creation of jobs and tasks -- in the context of that server, using a message transfer protocol to submit jobs and tasks, register listeners, obtain status and results, and perform other queries.
The second difference is the fact that immediately after creating the session, the createSession method calls RemoteSessionManager.reestablish. This method will communicate with the server to obtain a list of the currently running jobs. Before calling this method, the RemoteSessionManager didn't know about any jobs except those that were created by the current machine during this session -- that is, none. reestablish will block until the latest list of jobs is returned by the server.
### launch
The launch method does the work of creating a new job, populating it with tasks, and submitting it to the server. The code to do this is nearly identical to its counterpart in LocalApp, because the API for local and remote modes is, for most purposes, exactly the same; this makes it easy for a single application to function in either mode, simply by 'flipping a switch'. The RemoteApp version of the launch method differs in only two ways: first, it adds an addListener parameter, so that a job listener won't be added unless the app is in 'listen' mode. The second change is more significant: the specification of job attributes. Technically, job attributes could be used in local mode as well, they simply aren't as necessary. Job attributes are similar to task attributes, and in fact use the same mechanism. Here, we add three attributes; none are specifically required and, in fact, none actually do anything special, they're merely used to help identify a running job when we wish to re-attach with a new session of the application -- or a different application, for that matter, such as the "job listener" demo.
• JobType: Used to help identify the application which created the job and give a rough indication of what the job is intended to do. Although nothing in the Frontier API requires the use of this attribute, it is often used by convention.
• JobName: Used as a unique identifier for a job. In the case of this demo, this attribute merely helps ensure that multiple redundant jobs aren't started by accident, as described below; however, more complex applications might use this identifier to, for instance, associate a job with local resources such as log files, results storage tables in databases, et cetera.
• InputValues: This attribute gives some specific information about the parameters of this job instance -- specifically, listing the values used as input. New sessions which re-attach to this job later can use this attribute to know what the job is working on and, significantly, know it is complete when results for all input values have been obtained. This attribute is merely an example of how arbitrary, small pieces of information, which may be useful later in reestablishing a connection with a job, may be stored in attributes.
StringBuilder inputValuesAttribute = new StringBuilder();
for (int inputValue : inputValues) {
if (inputValuesAttribute.length() != 0) {
inputValuesAttribute.append(' ');
}
inputValuesAttribute.append(inputValue);
}
Map jobAttributes = new HashMap ();
jobAttributes.put(JOB_TYPE_ATTRIBUTE, JOB_TYPE);
jobAttributes.put(JOB_NAME_ATTRIBUTE, jobName);
jobAttributes.put(JOB_INPUT_VALUES_ATTRIBUTE,
inputValuesAttribute.toString());
job = manager.createJob(jobAttributes);
### findJob
The findJob method attempts to locate and connect to an existing job on the server with the same type and name attributes (as described above), returning true if such a job was found. This is used in 'listen' and 'remove' modes -- after reestablishing the list of jobs -- in order to locate the job we wish to listen to or remove. It is also used in 'launch' mode in order to ensure that a matching job doesn't already exist on the server.
public boolean findJob() {
assert(job == null);
for (Job currJob : manager.getJobs()) {
String currJobType = currJob.getAttributes().get(JOB_TYPE_ATTRIBUTE);
String currJobName = currJob.getAttributes().get(JOB_NAME_ATTRIBUTE);
if ((currJobType != null) && currJobType.equals(JOB_TYPE) &&
(currJobName != null) && currJobName.equals(jobName)) {
String[] inputValueStrings =
currJob.getAttributes().get(JOB_INPUT_VALUES_ATTRIBUTE).split(" ");
for (int i = 0; i <inputValueStrings.length; i++) {
int inputValue = Integer.parseInt(inputValueStrings[i]);
}
job = currJob;
return true;
}
}
return false;
}
### closeSession
The closeSession method in RemoteApp is functionally similar to that of LocalApp. However, it's worth noting what goes on behind the scenes. Specifically, when RemoteSessionManager.destroy is called, it first waits until all communication with the Frontier server has completed. As the Frontier SDK uses asynchronous messaging and queues up both outgoing and incoming messages, simply exiting the application could cause unexpected behavior -- for instance, even though task.start may have been called on one or more tasks, the server may not have yet been sent the messages which will actually create and start these tasks. Simply exiting may cause some unknown number of messages to be 'dropped on the floor'; hence, it is generally very important to shut down a session cleanly via RemoteSessionManager.destroy before exiting an application (for instance, via System.exit or an uncaught exception).
The job listener functionality in RemoteApp is identical to that of LocalApp. However, it's worth noting three subtle but important difference in listener functionality between the two modes.
First, in remote mode, adding a listener isn't just a simple, passive operation; rather, it informs the server that event messages should be sent. Hence, it makes sense to add listeners only when they're actually needed, and then add them in the narrowest possible context. For instance, if an app cares only about task progress, it should only add a TaskProgressListener, not a TaskResultListener or a UniversalTaskListener, as the latter two will result in full results being sent over the network even though they're not needed.
Second, when a listener is first added, the server will generally create a 'latest status' event if applicable for each task (meaning, in the case of a job listener, all tasks in that job). That means that the latest progress will often be sent for started tasks, intermediate results may be sent down if the task has posted any, and -- most significantly -- for completed tasks, either results or exceptions will be sent down to a registered result/exception listener. This is not necessarily the case for all new listeners, however, just the first applicable listeners in a new session -- that is, adding a TaskResultListener a few minutes after adding a UniversalTaskListener will not generally result in another "latest status" event being generated for the benefit of the new listener. This combined with the first point above mean that one should be careful about when and where listeners are added in an application.
Third, unlike local mode, developers should keep in mind that not all events will be make it back to the listeners. Multiple progress or intermediate result events may be thrown on the floor by the server to conserve bandwidth, for instance, and checkpoint events generally will not be sent at all -- in fact, checkpoints are rarely sent to the server from engines. The rules governing which events are and are not guaranteed are laid out in the Frontier API Documentation.
The rest of RemoteApp is similar to LocalApp.
RemoteAppTask is identical to LocalAppTask - a properly-written Frontier task requires no changes to run in remote mode.
## Running the Application
Now it's time to run the application.
• UNIX or Linux:
cd frontier-sdk-home/demo/remote
./remote.sh launch
./remote.sh listen
• Microsoft® Windows®:
cd frontier-sdk-home\demo\remote
remote.bat launch
remote.bat listen
This will start the tasks and then display the latest task results. We can run the application with the listen flag any number of times. When we're done we destroy the tasks and the job by doing:
• UNIX or Linux:
cd frontier-sdk-home/demo/remote
./remote.sh remove
• Microsoft® Windows®:
cd frontier-sdk-home\demo\remote
remote.bat remove
If we run RemoteApp with the remove flag twice in a row, we'll receive an error indicating the job does not exist.
We can also combine flags to perform multiple modes in sequence -- for instance, "launch listen" will launch a new job and immediately start listening to results.
You'll see a couple of minor differences in the way the application operates when running remotely, compared to the local-mode operation of LocalApp. When the application starts, you'll be prompted for the Frontier user name and password you set up when you ran registered for a Frontier account (see Remote Mode). You may also have to wait a few minutes for your application to receive task status events. Task status events are not available until the Frontier server has scheduled the task on a provider.
You've now run your first application remotely on the Frontier platform. The rest of the examples in the tutorial will run in both local and remote mode, but that option is described in the next lesson.
# Lesson 3: Using Data Elements
## Overview
Applications need data. While we've used hard-coded input values in our examples so far, a real application normally requires data from external sources such as files or databases. In this lesson we'll learn how to use data elements to supply input data from a file to a task.
Data elements are the mechanism provided by the Frontier platform to pass data from files and other data sources to tasks. Because tasks execute in a restricted JVM "sandbox" on remote systems, they cannot access files or other data sources directly. A Frontier application must create data elements for data sources required by its tasks, and submit these data elements to the Frontier server. When a task is scheduled to a remote system for execution, the Frontier runtime will retrieve all required elements from the Frontier server, and make them available to the task as needed.
## Files
The files used in this lesson are:
File Description
data/src/DataApp.java Application code to start the task and collect its results
data/src/DataTask.java The task that actually does the work
data/DataApp.data The input data file used by the application
## Application
The DataApp application computes the mean and standard deviation of a set of integers. The input numbers are contained in the file DataApp.data, one number per line. The application constructs a data element from this file and creates a single task that reads this data, then computes the mean and standard deviation of the values.
Data elements are similar to executable elements, except that instead as being added to the JVM's classpath during execution on an engine, they are instead shipped to the engine as pure data and may be accessed directly by the task. Adding a data element within a client application is very similar to adding an executable element. In the case of DataApp, the launch method creates a data element for the input data file in the job, receiving back a String containing a data element ID with which to identify the new data element.
File dataFile = new File(demoDirectory, DATA_FILENAME);
In addition to File objects, data elements can be constructed from any class that implements the com.parabon.io.DataWrapper interface (such as com.parabon.io.ByteArrayDataWrapper). This lets you create data elements from arbitrary input sources like arrays in memory, or database query results.
There are several ways for the task to access the data element. The most straightforward is to simply pass the data element ID to the task, which will then store it internally and, during task execution, obtain a DataBuffer with which to access the contents of the file (the DataBuffer interface itself will be discussed later) by calling SerializableTaskContext.getDataElement with the data element ID. When using this mechanism, the system must also be notified that a particular data element is required for a task by calling TaskSpec.addRequiredDataElement with the data element ID while the task is being created.
However, a more common and simpler mechanism is available which relies on some behind-the-scenes magic to simplify this process; this is the mechanism used by DataApp. During task creation, a DataElementProxy object is created and passed the data element ID; this object in turn is passed to the task. The DataElementProxy implements DataBuffer, and so -- once the task starts executing -- it can be used to access the data element directly, just as if it had been returned from SerializableTaskContext.getDataElement. This removes the need to store the data element ID in the task, to call SerializableTaskContext.getDataElement, or even -- because the system detects the DataElementProxy during task serialization and treats it specially -- to mark the data element as required via TaskSpec.addRequiredDataElement.
Hence, our DataApp is able to send the data element to its single task during task creation, as in the two lines below, and the data element contents will automatically be sent along with the task. Note that although DataApp creates only one task, it would be equally feasible to create dozens, each with their own (or even a single shared) DataElementProxy referring to the same data element.
DataElementProxy dataElementProxy = new DataElementProxy(dataElementID);
One other difference in DataApp versus the RemoteApp from the previous lesson is the means by which the SessionManager is created. Thus far we have hard-coded the SessionManager implementation used by each particular example, but an alternative to this is to use the static SessionManager.getInstance method, which will use the value of the "modelocal", "simulator", or "remote" -- to determine which type of SessionManager to create. This lets us easily switch between modes in relatively small, command-line applications via a switch sent to the JVM -- e.g. "-Dmode=remote". Larger more complex applications would likely continue to create SessionManager instances with explicit types and decide between which type to use depending on, for instance, application preferences specified via a GUI.
## Listener
Because this application has only one task, the listener functionality is much simpler than the listeners in previous examples: its resultsPosted() method prints the results from the task and then sets a flag to indicate that the job is complete when the task is complete. However, the results themselves are slightly different.
As our task is now computing results slightly more complex than the single Integer returned by the tasks in the previous two lessons -- specifically, our new task will need to return two numbers, a mean and a standard deviation -- we in turn need to change the return type from the task. A task can return any Serializable object, as long as the application and task agree on what is being sent. Hence, we'll make a new custom type and add it as a static nested class within DataTask: DataTask.Result. This contains two float fields representing the mean and standard deviation, respectively. Note that it's very important that this is a static nested class (as opposed to a non-static or "inner" class), because otherwise, attempting to serialize the result would also result in the outer class -- in this case, the task itself, DataTask -- being serialized as well. This is a very common source of bugs in serialization in general, and crops up here not only in results but in tasks themselves; hence, a developer should always be wary of such problems when using nested classes.
Using the results, which in DataApp are only used within the resultsPosted event listener method itself, is simple. The only sticking point is to be sure that the type the application is expecting and the type the task is sending are always the same.
public void resultsPosted(TaskResultEvent event) {
System.out.println("The mean is " + result.getMean()
+ " +/- " + result.getStandardDeviation());
System.out.println();
}
We've seen how the application creates data elements and passes them as parameters to a task. Now let's look at how the task reads data from a data element.
Accessing data via the DataBuffer interface is reasonably straightforward. This interface, intended to provide read- and, in other cases (that is, not including data element access) write-access to a set of bytes. A task can obtain one via several different means, including SerializableTaskContext.getDataElement (as described above) and as a DataElementProxy created on the client side and shipped along with the task. This latter course is how DataTask access its data element: that is, it merely uses its existing data member, set in the constructor. Once a DataBuffer is available, it has a number of different means of access, such as a read method, but perhaps the most common and easiest means to access the data -- and, again, the means used here by DataTask -- is via the getInputStream method, which returns a java.io.InputStream implementation which will read the contents of the DataBuffer . Since this is just like any other InputStream, it can be either used directly or sent to any number of library methods, from XML parsers to image readers. Here, we create a java.io.BufferedReader so that we can read the contents line-by-line.
public Result run(SerializableTaskContext context)
Note: We explicitly specify the character set of the input stream when we create the BufferedReader instead of relying on the JVM's default character set. Since the default character set of the JVM used to run the task may differ from that of the JVM used by the application, explicitly setting the character set ensures we get a consistent interpretation of the input data.
Next, we get to the meat of the task's work: iterating through the values in the data stream and computing their sum and the sum of their squares, which will be used later to determine the mean and standard deviation.
int numValues = 0;
long sum = 0;
long sumOfSquares = 0;
boolean done = false;
while (!done) {
if (context.shouldStop()) {
}
if (inputLine == null) {
done = true;
} else if (inputLine.trim().length() > 0) {
int value = Integer.parseInt(inputLine);
numValues++;
sum += value;
sumOfSquares += value * value;
}
// We don't know how many values there are, so we can't report
// an actual fraction-complete, the best we can do is report that we're
// a certain amount done. Hence, by convention, we start our progress
// at a value over 1.0.
context.reportProgress(1+numValues);
}
It is worth noting four points about the above code. First, the task periodically calls context.shouldStop to determine whether the engine has requested that the task shut down gracefully, and if so, throws a TaskStoppedException out of the run method. This is good practice, even if the task doesn't take advantage of the opportunity to log checkpoints or send intermediate results. Second, we report progress when we can; doing so is never necessary, but can be useful to the user (to whom the information could be presented via a progress bar, for instance), as well as for the system (which might schedule the same task on multiple engines, and could keep track of which had gotten further in the task by comparing their progress values). Third, the task doesn't check NumberFormatExceptions from Integer.parseInt, but rather declares them in the signature of the run method; if any such exception is thrown, it will end up being sent back to the client application as a task exception. It's entirely up to the task developer to decide what to do with such exceptions, but resulting in an exception being thrown from run is often a reasonable choice. Fourth, the task reads the lines of data one-at-a-time, performing processing as it goes, rather than reading all data up-front and acting on it. Such structural decisions are left entirely up to the task developer, so that he can make tradeoffs between memory considerations, efficiency, and flexibility. In this case, the data is read in piecemeal merely for simplicity. If we had instead read it into an array first and hence no longer needed the data at all, we'd probably want to release the resources used by the data element's DataBuffer object, data, by first calling DataBuffer.release and then nulling out our reference to it so that it could be garbage-collected. The decision of how and when to read data element contents is further complicated by checkpoints, which are discussed in the next lesson.
Once the data is read and processed, DataTask.run merely needs to construct final results and send them back, using the type agreed upon with the client application -- in this case, DataTask.Result (which was also used above as the return type from run for safety's sake).
double mean = sum / (double) numValues;
double standardDeviation =
Math.sqrt(sumOfSquares/(double)numValues - mean*mean);
return new Result(mean, standardDeviation);
## Running the Application
DataApp uses the file DataApp.data in the current directory as its input data file. Run the program with the following commands, specifying the -remote flag on the command line if you want to run the application in remote mode:
• UNIX or Linux:
cd frontier-sdk-home/demo/data
./data.sh [-remote] launch
./data.sh [-remote] listen
./data.sh [-remote] remove
• Microsoft® Windows®:
cd frontier-sdk-home\demo\data
data.bat [-remote] launch
data.bat [-remote] listen
data.bat [-remote] remove
In the next lesson we'll extend this application with checkpoints that allow long-running tasks to be halted and then restarted by the Frontier server.
# Lesson 4: Using Checkpoints
## Overview
In this lesson we'll modify the example from Lesson 3 to periodically checkpoint the values being computed by the task, and to resume processing if the task is restarted from a checkpoint.
## Files
The files used in this lesson are:
File Description
checkpoint/src/CheckpointApp.java Application code to start the task and the listener
checkpoint/src/CheckpointTask.java The task code that actually does the work
checkpoint/CheckpointApp.data The input data file used by the application
## Application
As checkpoints are both produced by a task and subsequently used directly on the engine on which they were created in order to restart a task -- and generally are not reported back to the client application -- generally no client-side functionality is involved in using checkpoints; rather, it is merely the responsibility of the task itself to correctly log and recover from checkpoints. As such, CheckpointApp is, aside from class names and the like, identical to its counterpart from the previous lesson, DataApp.
The CheckpointTask class differs from DataTask in three ways. The first is a minor restructuring of the class in order to save some state in a way that it will be stored in checkpoints. The second is its ability to log checkpoints, and the third is the functionality needed to restart from a checkpoint.
First, we should define what a checkpoint is, exactly. Simply put, it is the task itself, serialized into an array of bytes just as it was when first specified. From that perspective, a checkpoint is in fact identical to an original task specification. In fact, from the task's perspective, restarting from a checkpoint is just like starting originally: the class is de-serialized and the run() method is called, which returns results on completion. The only real difference between the two is that in a checkpoint, the work the task has done thus far is encoded in its various fields, using whatever means was most appropriate for the particular task; that is, how to represent a task's state such that it can resume later is left to the task developer, Frontier merely provides the mechanism to save that state. Fortunately, it's also up to the task developer to decide when a task is in a state that can be saved, so we can pick points where it's easy to consolidate the task down to a few essentials -- that is, we needn't checkpoint when the task is in the middle of doing something complex.
With that in mind, we take a look at DataTask in order to determine where its state -- the work it has done thus far -- is stored. We can quickly notice that the three quantities actually being computed for most of the task's life -- sum-so-far, sum-of-squares-so-far, and number-of-values-considered-thus-far -- are simply local variables inside the run method. If we wish to start where we left off at any point, these are the three pieces of information we'd really need to know. So, we simply make these class member variables. Now, when the task is serialized out, these values will be as well -- and when we continue later, they will be available once again.
private int numValues;
private int sum;
private int sumOfSquares;
This is also the point where we want to consider what not to save, in the name of efficiency. Generally, any cached value which can be recreated reasonably easily, at least compared to its size in memory, should be left out of checkpoints. For instance, if we've parsed one of our data elements into a large structure in memory, then generally we wouldn't want to save that in a checkpoint but rather re-create it on our next run. Often, marking such fields as transient is enough to achieve this, but we also need to be sure that the task knows to recreate these caches after restarting.
The next thing we need to do -- the real meat, creating and saving off a checkpoint -- is actually deceptively easy. Simply put, we wait until we're in a state where we can save the state of a task, and call context.logCheckpoint, which will (or rather, might -- the engine usually only saves checkpoints when it thinks it's worthwhile, and just skips the rest), behind the scenes, serialize out our entire task object. This brings up the question of when we shouldn't log a checkpoint. The answer to this, too, is deceptively simple: don't store a checkpoint when you're in the middle of something. In computer science parlance, we might say that we want to ensure that our invariants are valid. For instance, if we're swapping two numbers,
e.g.
double temp = this.a;
this.a = this.b;
this.b = temp;
then we wouldn't want to save a checkpoint in the middle while this.a == this.b, but would want to wait until we're at least done with that swap. Similarly, in CheckpointTask , we wouldn't want to log a checkpoint after we've updated sum but haven't updated sumOfSquares. So, we'll simply log a checkpoint at the end of our main loop. All we need to do is add a call to context.logCheckpoint:
while (!done) {
...
context.logCheckpoint();
}
So, how frequently should we call this? There are two answers. Generally, one should call it often -- anything from several times a second to once a minute or so. The task needn't worry about the overhead incurred by calling this method itself, as the engine will keep track of how long checkpoints are taking to save and how long it's been since the last checkpoint, and will decide based on that whether to take advantage of the opportunity to log a new checkpoint. If a task calls logCheckpoint ten times a second but checkpoints are taking about 5 seconds to save, then the engine will generally return immediately from logCheckpoint without doing anything 99.9% of the time, and only once every few minutes will it actually decide to save a checkpoint during one of these calls. However, sometimes a task must do work to even be able to log a checkpoint; for instance, if a task has 5 independent threads, usually all threads must be in a valid, 'checkpoint-able' state when logCheckpoint is called, lest one of them be in the middle of updating some task fields when the checkpoint is made, resulting in a corrupt checkpoint. So, the task would likely need tell each thread it should pause as soon as it's checkpoint-able, wait until all threads have paused, and then call logCheckpoint. This results in lost performance as threads just sit there waiting for other threads to stop, and so it shouldn't be done when the engine isn't even going to bother saving the checkpoint. Another example might be if a task needed to cull down a large working dataset (for example, a sparse matrix) to determine the few values worth saving, which would be wasted work if no checkpoint was to be saved (though we should note that one could also do this in the java.io.Externalizable.writeObject method, if one was using this interface rather than Serializable ). The solution for such situations is to call context.shouldLogCheckpoint, which uses the engine's checkpoint-throttling logic to report whether it's worth saving a new checkpoint yet, and only when this returns true would the task go to the trouble of pausing its threads or doing other checkpoint-preparation work. Or alternatively, if the task wants to override the engine's throttling logic -- for instance, if a task knows it's in a checkpoint-able state now but won't be in such a state again for ten minutes -- the task can force the checkpoint to be saved via context.logCheckpoint(true).
One special time we should make an effort to call shouldLogCheckpoint, assuming we haven't called it very recently and we're in a valid checkpoint-able state, is after the engine requests that the task stop (e.g. if context.shouldStop returns true and before we throw a TaskStoppedException in response). CheckpointTask doesn't do this simply because it has just called logCheckpoint anyway.
The third thing we need to do to support checkpointing often involves the most code, but it's very specific to each task. Simply put, we must recover from a checkpoint. For some tasks, no such explicit recovery is necessary; they can simply start back where they left off. Other tasks may need to recreate cached values and perform other such housekeeping. Some tasks may want to skip initialization phases or jump past work already performed, in order to avoid doing things twice and causing problems in the process. This last is what needs to be done in CheckpointTask. As CheckpointTask reads data values one-at-a-time, performing computation on each, we need to be sure that when we restart from a checkpoint, we skip past values we've already dealt with, both to avoid performing redundant work and, more importantly, to avoid double-counting values and ending up with incorrect results. We could do this by storing an offset through our input stream and skipping past those bytes as soon as we get a new input stream, but that's difficult to do with a BufferedReader . We could instead save the number of values we've processed as part of our state, and discard that many lines from the input. In fact, we already have this information: the numValues field happens to be the same as the number of values we've considered thus far, so we don't need to store another member variable with this information, we simply need to skip numValues values before dealing with new ones. We add this code before the loop:
int valuesToSkip = 0;
if (numValues != 0) {
// Restarting from a checkpoint, so let's skip through data we
valuesToSkip = numValues;
}
Then in the loop itself, we replace this:
numValues++;
sum += value;
sumOfSquares += value * value;
with this:
if (valuesToSkip > 0) {
valuesToSkip--;
} else {
numValues++;
sum += value;
sumOfSquares += value * value;
}
CheckpointTask is, as its name promises, now fully able to produce and recover from checkpoints.
## Running the Application
CheckpointApp is run like our previous examples:
• UNIX or Linux:
cd frontier-sdk-home/demo/checkpoint
./checkpoint.sh [-remote] launch
./checkpoint.sh [-remote] listen
./checkpoint.sh [-remote] remove
• Microsoft® Windows®®:
cd frontier-sdk-home\demo\checkpoint
checkpoint.bat [-remote] launch
checkpoint.bat [-remote] listen
checkpoint.bat [-remote] remove
You have now completed the tutorial lessons. The next part of this document contains overviews of several additional applications included with the Frontier SDK that provide examples of advanced Frontier programming techniques.
# Examples
The following examples are available in the Frontier SDK demo directory. These examples can run in all three modes and are included to demonstrate best practices when developing Frontier enabled applications.
## Job Listener Application
### Overview
This section provides an example of using the Frontier API in a Swing GUI application. In a GUI application, the Frontier API methods should normally be executed in a separate thread so the GUI remains responsive while the API communicates with the Frontier server over the network. The example program here is derived from the "Job Controller" utility, which is an application included with the Frontier SDK that can be used to monitor and manipulate jobs on the Frontier server.
Since this example is over a thousand lines of code we'll focus on the key points of interest in the program.
### Files
The files used in this example are:
File Description
joblistener/src/JobListener.java Main program and Swing GUI
joblistener/src/JobSearcher.java Connects to the server and gets the list of jobs
joblistener/src/RunMode.java Utility class for decoding the task mode into colors
joblistener/src/SwingWorker.java Utility class for running a "background" thread
joblistener/src/TaskChartPane.java Displays task modes as a colorful chart
joblistener/src/TaskFrame.java GUI code to provide internal frame for job details
joblistener/src/TaskLegendPane.java Displays a legend of the task modes and their color representation
joblistener/src/TaskStatus.java Wrapper class for task mode and progress
joblistener/src/BasicWindowMonitor.java Utility class to handle closing the application
### Application
JobListener connects to the Frontier server, displays the list of all client jobs on the server, and allows you to select a job to display detailed information. This is done by selecting a job from the job list panel, then clicking the Show Task button.
Clicking Show Tasks displays a list of tasks in a job from the Frontier server, with additional tabbed panes displaying task attributes and status:
The code segment shown here from JobListener.java uses the SwingWorker class to run the Show Task operation in a background thread. In the construct() method, a reestablish() call is made on the job to populate the job's task information, then finished() is called to create a TaskFrame to display task details for the selected job.
public void actionPerformed( ActionEvent ev ) {
final int selected[] = _list.getSelectedIndices();
//
// Display the tasks associated with the job in the desktop pane.
//
if ( selected.length > 0 ) {
if ( _jobList.size() > 0 ) {
System.out.println( "Requesting job details from the server..." );
//
// Reestablish the job (this takes a little while).
//
SwingWorker worker = new SwingWorker() {
public Object construct() {
try {
synchronized ( _listResource ) {
Job tmpJob = (Job) _jobList.get( selected[0] );
if ( tmpJob != null ) tmpJob.reestablish();
}
} catch (OperationFailedException ofe ) { ofe.printStackTrace(); };
return null;
}
public void finished() {
String jobName;
Job tmpJob;
synchronized ( _listResource ) {
if ( selected.length > 0 ) {
jobName = (String) _model.get( selected[0] );
tmpJob = (Job ) _jobList.get( selected[0] );
}
else {
jobName = null;
tmpJob = null;
}
}
//
// Display the task frame on the desktop pane.
//
if ( tmpJob != null ) {
f.setVisible( true );
f.moveToFront();
f.setDesk( _desk );
}
}
};
worker.start();
}
}
}
};
The abstract SwingWorker class is extended here as an anonymous class. Anonymous classes allow you to define a class near the place in the code that it is used, and avoid declaring many trivial classes in your application.
The TaskChartPane, contained in the TaskFrame , tracks the individual task progress in a Map using a TaskProgressListener to regularly display the progress in a chart. The TaskProgressListener is a lightweight task listener that only receives the mode and progress information from tasks. This type of listener is useful if you want to monitor a job that produces large results but do not want to incur the communications penalty of receiving large messages. The TaskChartPane implements the interface of the TaskProgressListener by implementing a progressReported() method. The TaskProgressListener is set up by the following code in the TaskChartPane constructor:
if ( _job != null ) {
}
This causes the progressReported() method of TaskChartPane to be invoked whenever there is new progress information. Listening occurs asynchronously on a separate thread from the Swing GUI. The code segment below shows how the application keeps track of the task statuses in a Map. Note the use of the TASK_ID_DEFAULT_ATTRIBUTE_NAME to uniquely identify each task.
//
// TaskProgressListener - this method keeps the
// task status map up to date.
//
public void progressReported( TaskProgressEvent event ) {
.getAttributes()
if (id != null) {
if (status == null) {
}
status.setProgress( event.getProgress() );
status.setRunMode( event.getRunMode() );
}
}
On another thread, the GUI is requested to update the display every 10 seconds. This is triggered by the run() method. We use SwingUtilities.invokeLater() to make sure the repaint occurs on the Swing dispatch thread:
public void run() {
while ( true ) {
try {
} catch ( InterruptedException ie ) { break; }
//
// Cause the pane to be repainted.
//
Runnable doRepaint = new Runnable() {
public void run() {
_this.repaint();
}
};
SwingUtilities.invokeLater( doRepaint );
}
}
### Running the Application
To run this application do:
• UNIX or Linux:
cd frontier-sdk-home/demo/joblistener
./joblistener.sh
• Microsoft® Windows®:
cd frontier-sdk-home\demo\joblistener
joblistener.bat
## Mersenne Prime Application
### Overview
When a number of the form 2p-1 is prime, it is said to be a Mersenne Prime. This example searches a range of numbers for Mersenne primes using the Lucas-Lehmer test for Mersenne primality. The example illustrates how to use time-based checkpointing to trigger checkpoints using a timer rather that performing a checkpoint after a given number of iterations have completed.
### Files
The files used in this example are:
File Description
prime/src/Prime.java Application code to launch the tasks
prime/src/PrimeTaskEventListener.java The listener for task results
prime/src/PrimeTask.java Contains the task code that checks if a number is prime
### Application
The Prime application launches a task for each odd number in a range of numbers (since even numbers greater than 2 are never prime) to check if that number is a Mersenne prime.
The Prime application has launch, listen, and remove modes, similar to the tutorial examples.
### Listener
PrimeTaskEventListener listens for task status and completion events from the PrimeTasks. It prints the results of the prime test when a task completes, and terminates the application after all tasks have finished.
PrimeTask.run uses the Lucas Lehmer test to determine whether a given candidate is in fact a Mersenne Prime. This test functions by looping through p iterations, where p is the candidate number being tested. Aside from the candidate value itself, the state stored in the PrimeTask class includes the last iteration number and the current "residue"; these values are sufficient to simply continue the test where it left off, and are in a valid state after any given iteration of the main loop has completed. Describing the mathematics behind the Lucas Lehmer test or the role of the residue are outside scope of this document, but interested readers can easily find more information on this topic on the Internet.
Each time through the main loop, in addition to performing the test itself, the run method sends back progress and intermediate results (consisting of the current iteration number), checks if the runtime has requested that the task stop, and then logs a checkpoint.
public synchronized Serializable run(SerializableTaskContext context)
BigInteger m = TWO.pow(candidate).subtract(ONE); // 2^p - 1
while (iteration <= candidate) {
residue = residue.pow(2).subtract(TWO).mod(m);
iteration++;
context.postIntermediateResultsObject(iteration / (double)candidate, (Integer)iteration);
if (context.shouldStop()) {
}
context.logCheckpoint();
}
return (Boolean)residue.equals(ZERO);
}
### Running the Application
To run this application execute the following commands:
• UNIX or Linux:
cd frontier-sdk-home/demo/prime
./prime.sh [-remote launch [start-number end-number] ./prime.sh [-remote]listen ./prime.sh [-remote]remove
• Microsoft® Windows®: cd frontier-sdk-home\demo\prime prime.bat [-remote launch [start-number end-number] prime.bat [-remote]listen prime.bat [-remote]remove
Appendices Appendix A: Using Client-Scope and Global Elements Client-Scope Elements Some Frontier users may find that they launch many jobs that use the same executable and data elements. Each job must resend those elements to the server, resulting in duplicate copies of the elements residing on the server when only one set is necessary. Frontier allows you to create client-scope executable and data elements on the Frontier server, which may be used by tasks in any job created by you. Client-scope elements are created using the upload-element script included in the Frontier SDK. A client-scope element is created by the following commands: UNIX or Linux: cd frontier-sdk-home/bin ./upload-element.sh elementName elementType elementFile Microsoft® Windows®: cd frontier-sdk-home\bin upload-element.bat elementName elementType elementFile where: elementName is a unique name for the element. This name is used as an element ID in your Frontier applications to reference the element. elementType is either data to create a data element or executableJar to create an executable element. elementFile is the pathname of the file to be uploaded. The following example creates a client-scope data element named element1 from the file foo.data: upload-element.sh element1 data foo.data And this example creates a client-scope executable element named element2 from the jar task.jar: upload-element.sh element2 executableJar task.jar In order to use client-scope elements submitted this way, applications simply use the element name provided when the element was created using upload-element just like any other element ID. For example, to pass the client-scope data element named element1 to a task your application would do: myTask.setData(new DataElementProxy("element1")); This example adds the executable element named element2 to a task: myTaskSpec.addRequiredExecutableElement("element2"); Tasks access client-scope elements the same way they do other elements. Global Elements Frontier also supports global elements which are available to any client-not just the client that uploaded the elements. To create global elements please contact Parabon support through our email form at http://www.parabon.com/MyFrontier/support.jsp. Appendix B: Glossary A B C D E F G H I J K L M N O P Q R S T U V W X Y Z A attributes A set of data that can be attached to both jobs and tasks to help describe and identify their purpose and contents. Attributes are stored in a HashMap<String,String> as name-value pairs. B C checkpoint The mechanism Frontier uses to capture the state of a task in order to minimize the amount of computational work lost due to interruptions during task execution. client application One of three main components comprising the Frontier platform. The client application is run on a single computer by an organization or individual wishing to utilize Frontier by communicating with the Frontier server. Client Application Programming Interface (API) The portion of the Frontier API with which client applications deal directly when creating, launching, monitoring, and controlling jobs and tasks. The Client API is implemented by the client library and provides a set of functionality used by the client application to communicate with the server. compute engine A software application that turns otherwise wasted CPU cycles into useful computation by working on tasks using a provider node's spare power. compute-intensive Describes a problem or job that involves relatively small amounts of data, but requires a large amount of computation to solve. compute-to-data ratio A ratio of the amount of computation to the amount of data required to solve a problem. Jobs whose tasks are characterized as having a high compute-to-data ratio-i.e., tasks that require heavy computation but contain relatively small amounts of data-tend to process well on Frontier. Examples of compute-intensive jobs include photo-realistic image rendering, exhaustive regression, and sequence comparison. D data element A black-box chunk of data that is required for the running of a task. E element A mechanism used to efficiently transport relatively large chunks of binary data required to execute a task. Frontier uses both data and executable elements, which are sent from a client application to the server and directed to provider nodes as required before task execution is initiated. executable element A type of element that provides the Java bytecode instructions necessary to run a task. F Frontier A massively scalable distributed computing platform that draws on the otherwise unused computational capacity of computers connected to the Internet. Frontier Application Programming Interface (API) The framework behind the Frontier platform that enables developers to create, launch, monitor, and control arbitrary compute-intensive jobs from an average desktop computer. Frontier Enterprise A version of Frontier that draws on the spare computational power of computers connected to an intranet, such as a corporate network enclosed by a firewall. Unlike Frontier, Frontier Enterprise is a fixed-capacity solution whose peak depends on the number and capacity of computers on the network. In addition, Frontier Enterprise supports native code and lower compute-to-data ratios. Frontier server The central hub of the Frontier platform that communicates with both the client application and individual Frontier compute engines. The Frontier server is responsible for coordinating the scheduling and distribution of tasks; maintaining records identifying all provider nodes, client sessions, and tasks; and receiving and storing task progress information and results until the client application retrieves them. G H I idle Describes the state of a computer that is on but whose processing power is not actively engaged in processing tasks. The Windows® version of the Frontier compute engine can be configured to process tasks during a provider node's idle time. intermediate results Results that are returned by the task before the task is considered complete. Internet distributed computing A distributed processing model in which the idle or spare computational power of computers connected to the Internet is aggregated to create a platform with high-performance capability. J Java Virtual Machine (JVM) The Java technology that provides a "sandbox" inside which the Frontier compute engine can safely process tasks on a provider node. job The single, relatively isolated unit of computational work performed on Frontier. A job is divided into a set of tasks, with one or more tasks assigned to and processed on a single provider node. K L Launch and Listen The two-stage methodology client applications typically use for processing on Frontier. Launching a job involves creating tasks and sending them to the server for distribution onto the provider network, while listening involves gathering results and status updates or removing tasks from Frontier. However, both can intermingle in a single session or occur over the course of several sessions with the server. listener A user-created class that implements one of a set of listener interfaces, each of which receives one or more types of events generated by a job or task. local mode A mode of operating a client application in which only resources local to the client machine are used. Local mode provides an efficient virtual session that is useful for testing and debugging purposes. M N O obfuscation Scrambling all symbolic code information such that no problem-domain information remains. P Frontier Compute Engine One of three main components comprising the Frontier platform that utilizes the spare power of a provider node to process tasks. progress An indication of how "complete" a task is. Many tasks have a known run cycle, i.e., the task will run for a known number of iterations. Tasks that exhibit this behavior generally indicate progress on a scale of 0.0 to 1.0. This progress can then be translated into a "percent complete" by the application. Other tasks simply report progressively larger values, starting above 1.0. provider An individual who installs and runs the Frontier compute engine to donate or provide his or her computer's spare power to process tasks retrieved from the Frontier server. Q R reestablishing Obtaining information about the contents and status of released jobs and tasks-most often those created in previous sessions-and attaching them in order to monitor status changes. remote mode A mode of operation that manages a session with the Frontier server, allowing the client application to actually command the resources of the Frontier platform. run mode The phase of execution a task is in at a particular point in time. Possible run modes include unknown, unstarted, running , paused, complete, stopped, and aborted. S sandbox The restricted, self-enclosed environment inside which the Frontier compute engine processes tasks on a provider's computer. The Frontier compute engine's sandbox is a Java technology that prevents task code from accessing files and interacting with programs on the provider node. session An interaction with the Frontier environment, which can be either with the actual Frontier platform or with resources local to the user running the client application. session manager The Frontier API object responsible for creating and maintaining sessions. T task The relatively small, independent unit of computational work the Frontier compute engine executes on a single provider node. A job to be performed on Frontier is divided into a set of tasks. Task Runtime API The portion of the Frontier API used to create tasks to run on a provider node. Through this API, task execution is initiated, task parameters are set, data elements are accessed, status and results are reported, and checkpoints are logged. U V W X Y Z
© 1999-2008 Parabon Computation Inc. All rights reserved. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1719452142715454, "perplexity": 2110.545882060855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249414450.79/warc/CC-MAIN-20190223001001-20190223023001-00386.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=2008_AMC_12B_Problems/Problem_24&diff=prev&oldid=33127 | # Difference between revisions of "2008 AMC 12B Problems/Problem 24"
## Problem 24
Let . Distinct points lie on the -axis, and distinct points lie on the graph of . For every positive integer is an equilateral triangle. What is the least for which the length ?
## Solution
Let . We need to rewrite the recursion into something manageable. The two strange conditions, 's lie on the graph of and is an equilateral triangle, can be compacted as follows: which uses , where is the height of the equilateral triangle and therefore times its base.
The relation above holds for and for , so Or, This implies that each segment of a successive triangle is more than the last triangle. To find , we merely have to plug in into the aforementioned recursion and we have . Knowing that is , we can deduce that .Thus, , so . We want to find so that . is our answer. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638452529907227, "perplexity": 499.9830878608061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570741.21/warc/CC-MAIN-20220808001418-20220808031418-00586.warc.gz"} |
https://www.hepdata.net/record/ins1357991 | • Browse all
Measurement of the correlation between flow harmonics of different order in lead-lead collisions at $\sqrt{s_{NN}}$=2.76 TeV with the ATLAS detector
The collaboration
Phys.Rev.C 92 (2015) 034903, 2015.
Abstract (data abstract)
CERN-LHC. Experimental measurements of the correlations between the elliptic or triangular flow coefficients, $v_m$ ($m=2$ or 3), and other flow harmonics, $v_n$ ($n=2$ to 5) in lead-lead collisions. The data tables are linked to the corresponding figure number in the paper as well as additional plots in reference.
• #### Table 1
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t1
$v_{2}$ data for various $q_2$ bins, Centrality 0-5%.
• #### Table 2
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t2
$v_{3}$ data for various $q_2$ bins, Centrality 0-5%.
• #### Table 3
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t3
$v_{4}$ data for various $q_2$ bins, Centrality 0-5%.
• #### Table 4
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t4
$v_{5}$ data for various $q_2$ bins, Centrality 0-5%.
• #### Table 5
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t5
$v_{2}$ data for various $q_2$ bins, Centrality 5-10%.
• #### Table 6
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t6
$v_{3}$ data for various $q_2$ bins, Centrality 5-10%.
• #### Table 7
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t7
$v_{4}$ data for various $q_2$ bins, Centrality 5-10%.
• #### Table 8
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t8
$v_{5}$ data for various $q_2$ bins, Centrality 5-10%.
• #### Table 9
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t9
$v_{2}$ data for various $q_2$ bins, Centrality 10-15%.
• #### Table 10
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t10
$v_{3}$ data for various $q_2$ bins, Centrality 10-15%.
• #### Table 11
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t11
$v_{4}$ data for various $q_2$ bins, Centrality 10-15%.
• #### Table 12
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t12
$v_{5}$ data for various $q_2$ bins, Centrality 10-15%.
• #### Table 13
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t13
$v_{2}$ data for various $q_2$ bins, Centrality 15-20%.
• #### Table 14
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t14
$v_{3}$ data for various $q_2$ bins, Centrality 15-20%.
• #### Table 15
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t15
$v_{4}$ data for various $q_2$ bins, Centrality 15-20%.
• #### Table 16
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t16
$v_{5}$ data for various $q_2$ bins, Centrality 15-20%.
• #### Table 17
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t17
$v_{2}$ data for various $q_2$ bins, Centrality 20-25%.
• #### Table 18
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t18
$v_{3}$ data for various $q_2$ bins, Centrality 20-25%.
• #### Table 19
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t19
$v_{4}$ data for various $q_2$ bins, Centrality 20-25%.
• #### Table 20
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t20
$v_{5}$ data for various $q_2$ bins, Centrality 20-25%.
• #### Table 21
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t21
$v_{2}$ data for various $q_2$ bins, Centrality 25-30%.
• #### Table 22
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t22
$v_{3}$ data for various $q_2$ bins, Centrality 25-30%.
• #### Table 23
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t23
$v_{4}$ data for various $q_2$ bins, Centrality 25-30%.
• #### Table 24
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t24
$v_{5}$ data for various $q_2$ bins, Centrality 25-30%.
• #### Table 25
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t25
$v_{2}$ data for various $q_2$ bins, Centrality 30-35%.
• #### Table 26
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t26
$v_{3}$ data for various $q_2$ bins, Centrality 30-35%.
• #### Table 27
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t27
$v_{4}$ data for various $q_2$ bins, Centrality 30-35%.
• #### Table 28
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t28
$v_{5}$ data for various $q_2$ bins, Centrality 30-35%.
• #### Table 29
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t29
$v_{2}$ data for various $q_2$ bins, Centrality 35-40%.
• #### Table 30
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t30
$v_{3}$ data for various $q_2$ bins, Centrality 35-40%.
• #### Table 31
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t31
$v_{4}$ data for various $q_2$ bins, Centrality 35-40%.
• #### Table 32
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t32
$v_{5}$ data for various $q_2$ bins, Centrality 35-40%.
• #### Table 33
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t33
$v_{2}$ data for various $q_2$ bins, Centrality 40-45%.
• #### Table 34
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t34
$v_{3}$ data for various $q_2$ bins, Centrality 40-45%.
• #### Table 35
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t35
$v_{4}$ data for various $q_2$ bins, Centrality 40-45%.
• #### Table 36
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t36
$v_{5}$ data for various $q_2$ bins, Centrality 40-45%.
• #### Table 37
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t37
$v_{2}$ data for various $q_2$ bins, Centrality 45-50%.
• #### Table 38
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t38
$v_{3}$ data for various $q_2$ bins, Centrality 45-50%.
• #### Table 39
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t39
$v_{4}$ data for various $q_2$ bins, Centrality 45-50%.
• #### Table 40
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t40
$v_{5}$ data for various $q_2$ bins, Centrality 45-50%.
• #### Table 41
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t41
$v_{2}$ data for various $q_2$ bins, Centrality 50-55%.
• #### Table 42
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t42
$v_{3}$ data for various $q_2$ bins, Centrality 50-55%.
• #### Table 43
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t43
$v_{4}$ data for various $q_2$ bins, Centrality 50-55%.
• #### Table 44
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t44
$v_{5}$ data for various $q_2$ bins, Centrality 50-55%.
• #### Table 45
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t45
$v_{2}$ data for various $q_2$ bins, Centrality 55-60%.
• #### Table 46
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t46
$v_{3}$ data for various $q_2$ bins, Centrality 55-60%.
• #### Table 47
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t47
$v_{4}$ data for various $q_2$ bins, Centrality 55-60%.
• #### Table 48
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t48
$v_{5}$ data for various $q_2$ bins, Centrality 55-60%.
• #### Table 49
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t49
$v_{2}$ data for various $q_2$ bins, Centrality 60-65%.
• #### Table 50
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t50
$v_{3}$ data for various $q_2$ bins, Centrality 60-65%.
• #### Table 51
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t51
$v_{4}$ data for various $q_2$ bins, Centrality 60-65%.
• #### Table 52
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t52
$v_{5}$ data for various $q_2$ bins, Centrality 60-65%.
• #### Table 53
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t53
$v_{2}$ data for various $q_2$ bins, Centrality 65-70%.
• #### Table 54
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t54
$v_{3}$ data for various $q_2$ bins, Centrality 65-70%.
• #### Table 55
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t55
$v_{4}$ data for various $q_2$ bins, Centrality 65-70%.
• #### Table 56
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t56
$v_{5}$ data for various $q_2$ bins, Centrality 65-70%.
• #### Table 57
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t57
$v_{2}$ data for various $q_2$ bins, Centrality 0-10%.
• #### Table 58
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t58
$v_{3}$ data for various $q_2$ bins, Centrality 0-10%.
• #### Table 59
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t59
$v_{4}$ data for various $q_2$ bins, Centrality 0-10%.
• #### Table 60
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t60
$v_{5}$ data for various $q_2$ bins, Centrality 0-10%.
• #### Table 61
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t61
$v_{2}$ data for various $q_2$ bins, Centrality 10-20%.
• #### Table 62
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t62
$v_{3}$ data for various $q_2$ bins, Centrality 10-20%.
• #### Table 63
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t63
$v_{4}$ data for various $q_2$ bins, Centrality 10-20%.
• #### Table 64
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t64
$v_{5}$ data for various $q_2$ bins, Centrality 10-20%.
• #### Table 65
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t65
$v_{2}$ data for various $q_2$ bins, Centrality 20-30%.
• #### Table 66
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t66
$v_{3}$ data for various $q_2$ bins, Centrality 20-30%.
• #### Table 67
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t67
$v_{4}$ data for various $q_2$ bins, Centrality 20-30%.
• #### Table 68
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t68
$v_{5}$ data for various $q_2$ bins, Centrality 20-30%.
• #### Table 69
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t69
$v_{2}$ data for various $q_2$ bins, Centrality 30-40%.
• #### Table 70
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t70
$v_{3}$ data for various $q_2$ bins, Centrality 30-40%.
• #### Table 71
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t71
$v_{4}$ data for various $q_2$ bins, Centrality 30-40%.
• #### Table 72
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t72
$v_{5}$ data for various $q_2$ bins, Centrality 30-40%.
• #### Table 73
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t73
$v_{2}$ data for various $q_2$ bins, Centrality 40-50%.
• #### Table 74
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t74
$v_{3}$ data for various $q_2$ bins, Centrality 40-50%.
• #### Table 75
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t75
$v_{4}$ data for various $q_2$ bins, Centrality 40-50%.
• #### Table 76
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t76
$v_{5}$ data for various $q_2$ bins, Centrality 40-50%.
• #### Table 77
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t77
$v_{2}$ data for various $q_3$ bins, Centrality 0-5%.
• #### Table 78
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t78
$v_{3}$ data for various $q_3$ bins, Centrality 0-5%.
• #### Table 79
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t79
$v_{4}$ data for various $q_3$ bins, Centrality 0-5%.
• #### Table 80
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t80
$v_{5}$ data for various $q_3$ bins, Centrality 0-5%.
• #### Table 81
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t81
$v_{2}$ data for various $q_3$ bins, Centrality 5-10%.
• #### Table 82
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t82
$v_{3}$ data for various $q_3$ bins, Centrality 5-10%.
• #### Table 83
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t83
$v_{4}$ data for various $q_3$ bins, Centrality 5-10%.
• #### Table 84
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t84
$v_{5}$ data for various $q_3$ bins, Centrality 5-10%.
• #### Table 85
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t85
$v_{2}$ data for various $q_3$ bins, Centrality 10-15%.
• #### Table 86
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t86
$v_{3}$ data for various $q_3$ bins, Centrality 10-15%.
• #### Table 87
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t87
$v_{4}$ data for various $q_3$ bins, Centrality 10-15%.
• #### Table 88
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t88
$v_{5}$ data for various $q_3$ bins, Centrality 10-15%.
• #### Table 89
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t89
$v_{2}$ data for various $q_3$ bins, Centrality 15-20%.
• #### Table 90
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t90
$v_{3}$ data for various $q_3$ bins, Centrality 15-20%.
• #### Table 91
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t91
$v_{4}$ data for various $q_3$ bins, Centrality 15-20%.
• #### Table 92
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t92
$v_{5}$ data for various $q_3$ bins, Centrality 15-20%.
• #### Table 93
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t93
$v_{2}$ data for various $q_3$ bins, Centrality 20-25%.
• #### Table 94
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t94
$v_{3}$ data for various $q_3$ bins, Centrality 20-25%.
• #### Table 95
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t95
$v_{4}$ data for various $q_3$ bins, Centrality 20-25%.
• #### Table 96
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t96
$v_{5}$ data for various $q_3$ bins, Centrality 20-25%.
• #### Table 97
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t97
$v_{2}$ data for various $q_3$ bins, Centrality 25-30%.
• #### Table 98
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t98
$v_{3}$ data for various $q_3$ bins, Centrality 25-30%.
• #### Table 99
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t99
$v_{4}$ data for various $q_3$ bins, Centrality 25-30%.
• #### Table 100
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t100
$v_{5}$ data for various $q_3$ bins, Centrality 25-30%.
• #### Table 101
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t101
$v_{2}$ data for various $q_3$ bins, Centrality 30-35%.
• #### Table 102
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t102
$v_{3}$ data for various $q_3$ bins, Centrality 30-35%.
• #### Table 103
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t103
$v_{4}$ data for various $q_3$ bins, Centrality 30-35%.
• #### Table 104
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t104
$v_{5}$ data for various $q_3$ bins, Centrality 30-35%.
• #### Table 105
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t105
$v_{2}$ data for various $q_3$ bins, Centrality 35-40%.
• #### Table 106
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t106
$v_{3}$ data for various $q_3$ bins, Centrality 35-40%.
• #### Table 107
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t107
$v_{4}$ data for various $q_3$ bins, Centrality 35-40%.
• #### Table 108
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t108
$v_{5}$ data for various $q_3$ bins, Centrality 35-40%.
• #### Table 109
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t109
$v_{2}$ data for various $q_3$ bins, Centrality 40-45%.
• #### Table 110
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t110
$v_{3}$ data for various $q_3$ bins, Centrality 40-45%.
• #### Table 111
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t111
$v_{4}$ data for various $q_3$ bins, Centrality 40-45%.
• #### Table 112
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t112
$v_{5}$ data for various $q_3$ bins, Centrality 40-45%.
• #### Table 113
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t113
$v_{2}$ data for various $q_3$ bins, Centrality 45-50%.
• #### Table 114
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t114
$v_{3}$ data for various $q_3$ bins, Centrality 45-50%.
• #### Table 115
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t115
$v_{4}$ data for various $q_3$ bins, Centrality 45-50%.
• #### Table 116
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t116
$v_{5}$ data for various $q_3$ bins, Centrality 45-50%.
• #### Table 117
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t117
$v_{2}$ data for various $q_3$ bins, Centrality 50-55%.
• #### Table 118
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t118
$v_{3}$ data for various $q_3$ bins, Centrality 50-55%.
• #### Table 119
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t119
$v_{4}$ data for various $q_3$ bins, Centrality 50-55%.
• #### Table 120
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t120
$v_{5}$ data for various $q_3$ bins, Centrality 50-55%.
• #### Table 121
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t121
$v_{2}$ data for various $q_3$ bins, Centrality 55-60%.
• #### Table 122
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t122
$v_{3}$ data for various $q_3$ bins, Centrality 55-60%.
• #### Table 123
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t123
$v_{4}$ data for various $q_3$ bins, Centrality 55-60%.
• #### Table 124
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t124
$v_{5}$ data for various $q_3$ bins, Centrality 55-60%.
• #### Table 125
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t125
$v_{2}$ data for various $q_3$ bins, Centrality 60-65%.
• #### Table 126
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t126
$v_{3}$ data for various $q_3$ bins, Centrality 60-65%.
• #### Table 127
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t127
$v_{4}$ data for various $q_3$ bins, Centrality 60-65%.
• #### Table 128
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t128
$v_{5}$ data for various $q_3$ bins, Centrality 60-65%.
• #### Table 129
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t129
$v_{2}$ data for various $q_3$ bins, Centrality 65-70%.
• #### Table 130
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t130
$v_{3}$ data for various $q_3$ bins, Centrality 65-70%.
• #### Table 131
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t131
$v_{4}$ data for various $q_3$ bins, Centrality 65-70%.
• #### Table 132
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t132
$v_{5}$ data for various $q_3$ bins, Centrality 65-70%.
• #### Table 133
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t133
$v_{2}$ data for various $q_3$ bins, Centrality 0-10%.
• #### Table 134
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t134
$v_{3}$ data for various $q_3$ bins, Centrality 0-10%.
• #### Table 135
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t135
$v_{4}$ data for various $q_3$ bins, Centrality 0-10%.
• #### Table 136
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t136
$v_{5}$ data for various $q_3$ bins, Centrality 0-10%.
• #### Table 137
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t137
$v_{2}$ data for various $q_3$ bins, Centrality 10-20%.
• #### Table 138
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t138
$v_{3}$ data for various $q_3$ bins, Centrality 10-20%.
• #### Table 139
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t139
$v_{4}$ data for various $q_3$ bins, Centrality 10-20%.
• #### Table 140
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t140
$v_{5}$ data for various $q_3$ bins, Centrality 10-20%.
• #### Table 141
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t141
$v_{2}$ data for various $q_3$ bins, Centrality 20-30%.
• #### Table 142
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t142
$v_{3}$ data for various $q_3$ bins, Centrality 20-30%.
• #### Table 143
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t143
$v_{4}$ data for various $q_3$ bins, Centrality 20-30%.
• #### Table 144
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t144
$v_{5}$ data for various $q_3$ bins, Centrality 20-30%.
• #### Table 145
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t145
$v_{2}$ data for various $q_3$ bins, Centrality 30-40%.
• #### Table 146
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t146
$v_{3}$ data for various $q_3$ bins, Centrality 30-40%.
• #### Table 147
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t147
$v_{4}$ data for various $q_3$ bins, Centrality 30-40%.
• #### Table 148
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t148
$v_{5}$ data for various $q_3$ bins, Centrality 30-40%.
• #### Table 149
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t149
$v_{2}$ data for various $q_3$ bins, Centrality 40-50%.
• #### Table 150
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t150
$v_{3}$ data for various $q_3$ bins, Centrality 40-50%.
• #### Table 151
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t151
$v_{4}$ data for various $q_3$ bins, Centrality 40-50%.
• #### Table 152
Data from Figure 3 and auxiliary figures 3 and 4
10.17182/hepdata.68950.v1/t152
$v_{5}$ data for various $q_3$ bins, Centrality 40-50%.
• #### Table 153
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t153
$v_{2}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 154
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t154
$v_{2}$ - $v_{2}$ correlation within each centrality.
• #### Table 155
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t155
$v_{2}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 156
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t156
$v_{2}$ - $v_{2}$ correlation within each centrality.
• #### Table 157
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t157
$v_{2}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 158
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t158
$v_{2}$ - $v_{2}$ correlation within each centrality.
• #### Table 159
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t159
$v_{2}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 160
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t160
$v_{2}$ - $v_{2}$ correlation within each centrality.
• #### Table 161
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t161
$v_{2}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 162
Data from Figure 5,6 left panel and auxiliary figures 5
10.17182/hepdata.68950.v1/t162
$v_{2}$ - $v_{2}$ correlation within each centrality.
• #### Table 163
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t163
$v_{3}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 164
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t164
$v_{3}$ - $v_{3}$ correlation within each centrality.
• #### Table 165
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t165
$v_{3}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 166
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t166
$v_{3}$ - $v_{3}$ correlation within each centrality.
• #### Table 167
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t167
$v_{3}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 168
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t168
$v_{3}$ - $v_{3}$ correlation within each centrality.
• #### Table 169
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t169
$v_{3}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 170
Data from Figure 5,6 right panel and auxiliary figures 6
10.17182/hepdata.68950.v1/t170
$v_{3}$ - $v_{3}$ correlation within each centrality.
• #### Table 171
Data from Figure 7,8 and auxiliary figures 9
10.17182/hepdata.68950.v1/t171
$v_{2}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 172
Data from Figure 7,8 left panel and auxiliary figures 9
10.17182/hepdata.68950.v1/t172
$v_{2}$ - $v_{3}$ correlation for various q2 bins within each centrality.
• #### Table 173
Data from Figure 7,8 and auxiliary figures 9
10.17182/hepdata.68950.v1/t173
$v_{2}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 174
Data from Figure 7,8 left panel and auxiliary figures 9
10.17182/hepdata.68950.v1/t174
$v_{2}$ - $v_{3}$ correlation for various q2 bins within each centrality.
• #### Table 175
Data from Figure 7,8 and auxiliary figures 9
10.17182/hepdata.68950.v1/t175
$v_{2}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 176
Data from Figure 7,8 left panel and auxiliary figures 9
10.17182/hepdata.68950.v1/t176
$v_{2}$ - $v_{3}$ correlation for various q2 bins within each centrality.
• #### Table 177
Data from Figure 7,8 and auxiliary figures 9
10.17182/hepdata.68950.v1/t177
$v_{2}$ - $v_{3}$ inclusive correlation in 5% centrality intervals.
• #### Table 178
Data from Figure 7,8 left panel and auxiliary figures 9
10.17182/hepdata.68950.v1/t178
$v_{2}$ - $v_{3}$ correlation for various q2 bins within each centrality.
• #### Table 179
Data from Figure 8 fit summary
10.17182/hepdata.68950.v1/t179
linear fit result of $v_{2}$ - $v_{3}$ correlation within each centrality.
• #### Table 180
Data from Auxiliary figures 10,16
10.17182/hepdata.68950.v1/t180
$v_{3}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 181
Data from Auxiliary figure 10,16
10.17182/hepdata.68950.v1/t181
$v_{3}$ - $v_{2}$ correlation for various q3 bins within each centrality.
• #### Table 182
Data from Auxiliary figures 10,16
10.17182/hepdata.68950.v1/t182
$v_{3}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 183
Data from Auxiliary figure 10,16
10.17182/hepdata.68950.v1/t183
$v_{3}$ - $v_{2}$ correlation for various q3 bins within each centrality.
• #### Table 184
Data from Auxiliary figures 10,16
10.17182/hepdata.68950.v1/t184
$v_{3}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 185
Data from Auxiliary figure 10,16
10.17182/hepdata.68950.v1/t185
$v_{3}$ - $v_{2}$ correlation for various q3 bins within each centrality.
• #### Table 186
Data from Auxiliary figures 10,16
10.17182/hepdata.68950.v1/t186
$v_{3}$ - $v_{2}$ inclusive correlation in 5% centrality intervals.
• #### Table 187
Data from Auxiliary figure 10,16
10.17182/hepdata.68950.v1/t187
$v_{3}$ - $v_{2}$ correlation for various q3 bins within each centrality.
• #### Table 188
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t188
$v_{2}$ - $v_{4}$ inclusive correlation in 5% centrality intervals.
• #### Table 189
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t189
$v_{2}$ - $v_{4}$ correlation for various q2 bins within each centrality.
• #### Table 190
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t190
$v_{2}$ - $v_{4}$ inclusive correlation in 5% centrality intervals.
• #### Table 191
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t191
$v_{2}$ - $v_{4}$ correlation for various q2 bins within each centrality.
• #### Table 192
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t192
$v_{2}$ - $v_{4}$ inclusive correlation in 5% centrality intervals.
• #### Table 193
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t193
$v_{2}$ - $v_{4}$ correlation for various q2 bins within each centrality.
• #### Table 194
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t194
$v_{2}$ - $v_{4}$ inclusive correlation in 5% centrality intervals.
• #### Table 195
Data from Figure 9,10 and auxiliary figures 11
10.17182/hepdata.68950.v1/t195
$v_{2}$ - $v_{4}$ correlation for various q2 bins within each centrality.
• #### Table 196
Data from Auxiliary figures 12,17
10.17182/hepdata.68950.v1/t196
$v_{3}$ - $v_{4}$ inclusive correlation in 5% centrality intervals.
• #### Table 197
Data from Auxiliary figures 12, 17
10.17182/hepdata.68950.v1/t197
$v_{3}$ - $v_{4}$ correlation within each centrality.
• #### Table 198
Data from Auxiliary figures 12,17
10.17182/hepdata.68950.v1/t198
$v_{3}$ - $v_{4}$ inclusive correlation in 5% centrality intervals.
• #### Table 199
Data from Auxiliary figures 12, 17
10.17182/hepdata.68950.v1/t199
$v_{3}$ - $v_{4}$ correlation within each centrality.
• #### Table 200
Data from Figure 11
10.17182/hepdata.68950.v1/t200
$v_4$ decomposed into linear and nonlinear contributions based on q2 event-shape selection.
• #### Table 201
Data from auxiliary Figure 13
10.17182/hepdata.68950.v1/t201
$v_4$ decomposed into linear and nonlinear contributions based on q2 event-shape selection.
• #### Table 202
Data from auxiliary Figure 13
10.17182/hepdata.68950.v1/t202
$v_4$ decomposed into linear and nonlinear contributions based on q2 event-shape selection.
• #### Table 203
Data from auxiliary Figure 13
10.17182/hepdata.68950.v1/t203
$v_4$ decomposed into linear and nonlinear contributions based on q2 event-shape selection.
• #### Table 204
Data from auxiliary Figure 13
10.17182/hepdata.68950.v1/t204
$v_4$ decomposed into linear and nonlinear contributions based on q2 event-shape selection.
• #### Table 205
Data from Figure 15 left panel
10.17182/hepdata.68950.v1/t205
$v_5$ decomposed into linear and nonlinear contributions based on q2 event-shape selection.
• #### Table 206
Data from Figure 15 right panel
10.17182/hepdata.68950.v1/t206
$v_5$ decomposed into linear and nonlinear contributions based on q3 event-shape selection.
• #### Table 207
Data from Figure 15 left panel, Glauber model
10.17182/hepdata.68950.v1/t207
RMS eccentricity scaled v_n.
• #### Table 208
Data from Figure 15 right panel, MC-KLN model
10.17182/hepdata.68950.v1/t208
RMS eccentricity scaled v_n.
• #### Table 209
Data from Auxiliary figure 14 left panel
10.17182/hepdata.68950.v1/t209
$v_{2}$ - $v_{5}$ inclusive correlation in 5% centrality intervals.
• #### Table 210
Data from Auxiliary figure 14 right panel
10.17182/hepdata.68950.v1/t210
$v_{2}$ - $v_{5}$ correlation for various q2 bins within each centrality.
• #### Table 211
Data from Auxiliary figure 15 left panel
10.17182/hepdata.68950.v1/t211
$v_{3}$ - $v_{5}$ inclusive correlation in 5% centrality intervals.
• #### Table 212
Data from Auxiliary figure 15 right panel
10.17182/hepdata.68950.v1/t212
$v_{3}$ - $v_{5}$ correlation for various q2 bins within each centrality. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3313787579536438, "perplexity": 14175.795600967509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00019.warc.gz"} |
http://www.cirosantilli.com/web/standards/ | # Standards
1. W3C
2. WHATWG
3. IETF
## W3C
WWW Consortium.
W3C specifies many of the key web standards including:
One important web standard that they do not specify is Javascript, which is specified by Ecma international http://www.ecma-international.org/
### Maturity levels of standards
W3C has several levels of endorsement for standards. They are described on the “Process” document: http://www.w3.org/2005/10/Process-20051014/tr.html#maturity-levels.
### Versions
Like any decent document, W3C standards are have versions, and it is possible to link either to a branch or specific version of the document.
The initial paragraph of the documents always contains links to:
• latest stable version: http://www.w3.org/TR/webrtc/
• bleeding edge version, AKA “editor’s draft”: http://dev.w3.org/2011/webrtc/editor/webrtc.html
• specific versions: http://dev.w3.org/2011/webrtc/editor/archives/20140617/webrtc.html
Link to the one that makes most sense.
## WHATWG
Specs group, split from W3C because they disagreed: http://wiki.whatwg.org/wiki/FAQ#WHATWG_and_the_W3C_HTML_WG
Started by Mozilla, Apple and Safari. http://wiki.whatwg.org/wiki/FAQ#What_is_the_WHATWG.3F
Claim to be more practical oriented.
W3C tries to keep in touch, e.g. HTML5 has on WHATWG editor.
## IETF
IETF standards are not covered here, since they are “lower level networking stuff”. See instead: https://github.com/cirosantilli/net This includes in particular HTTP. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43994641304016113, "perplexity": 19382.298585469805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323711.85/warc/CC-MAIN-20170628153051-20170628173051-00401.warc.gz"} |
https://revbayes.github.io/documentation/dnScaledDirichlet.html | # Rev Language Reference
## dnScaledDirichlet - Scaled Dirichlet Distribution
Scaled Dirichlet probability distribution on a simplex.
### Usage
dnScaledDirichlet(RealPos[] alpha, RealPos[] beta)
### Arguments
alpha : RealPos[] (pass by const reference) The concentration parameter. beta : RealPos[] (pass by const reference) The rate parameter.
### Details
The scaled Dirichlet probability distribution is the generalization of the dirichlet distribution. A random variable from a scaled Dirichlet distribution is a simplex, i.e., a vector of probabilities that sum to 1. If b[1]=b[2]=...=b[n], then the scaledDirichlet(alpha,beta) collapses to the Dirichlet with the same alphas.
### Example
# lets get a draw from a Dirichlet distribution
a <- [1,1,1,1] # we could also use rep(1,4)
b <- [1,2,3,4] # if these are all equal, the scaled Dirichlet is equivilent to the Dirichlet(a)x ~ dnScaledDirichlet(a,b)
x
# let check if b really sums to 1
sum(x) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682260155677795, "perplexity": 5423.222961081717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531974.7/warc/CC-MAIN-20191211160056-20191211184056-00223.warc.gz"} |
http://publi2-as.oma.be/record/4313?ln=en | 2019
Ref: POSTER-2019-0102
Luminosities and effective temperature for Classical Cepheids in Gaia DR2
Poster presented at EWASS, Lyon (FR) on 2019-06-24
Abstract: Groenewegen (2018) presented an analysis of a sample of about 450 classical Cepheids (CCs) with metallicies available from high-resolution spectroscopy. All except three had a parallax available from Gaia DR2 (not necessarily of high quality) and many had a parallax available from Hipparcos and/or several Hubble space telescope programs. Based on nine CCs with a goodness-of-fit (GOF) statistic smaller than 8 and with an accurate non-Gaia parallax, a parallax zero-point offset of -0.049 $\pm$ 0.018 mas was derived. Imposing some selection criteria and limiting myself to fundamental mode CCs period-luminosity relations in the $V$, $K$ and Wesenheit $VK$ bands were presented that strongly depended on the parallax zero-point offset, for a sample of 200 CCs. One possible issue that arose from the study is the classification as CCs that was taken from the literature. To study the sample of 450 CCs in an entirely different way the spectral energy distributions from the UV to the far infrared have been compiled from the literature and fitted with the 1D dust radiative transfer code "More of DUSTY". For the overall majority of stars no dust is present and the analysis gives the (mean) effective temperature and (mean) luminosity (for an assumed distance). This allows to construct the Hertzsprung-Russell diagram and construct a PL-relation using the bolometric luminosity. Both may help is identifying possible stars misclassified as CCs. Some CCs are known to show some infrared excess and these stars are studied in more detail.
The record appears in these collections:
Royal Observatory of Belgium > Astronomy & Astrophysics
Conference Contributions & Seminars > Posters | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7740049362182617, "perplexity": 3152.2083898186443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145500.90/warc/CC-MAIN-20200221080411-20200221110411-00379.warc.gz"} |
https://tug.org/pipermail/texhax/2015-December/022039.html | # [texhax] Latex question
Walt Burkhard burkhard at cs.ucsd.edu
Wed Dec 16 05:51:52 CET 2015
Hello all,
I am using latex and have a few instances of figures/tables slightly in the
incorrect position. for instance, I am using a table created in MetaPost
as follows --
\begin{table*}[t]
\centerline{\hbox{\epsfxsize6.1in\epsfbox{TableData.1}}}
\label{ttt}
\caption{Hashing: {\it g} passbits per bucket. Successful and unsuccessful
search lengths.}
\end{table*}
The caption is positioned correctly but the TableData.1 table is about 4
inches
above where it should be --- with most of the table positioned above the
page
and not visible.
Many of the other smaller figures, also MetaPost creations, are positioned
correctly with similar Latex instructions.
issue. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897018074989319, "perplexity": 15133.00326017273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00788.warc.gz"} |
http://lists.gnu.org/archive/html/lilypond-user/2006-10/msg00318.html | lilypond-user
[Top][All Lists]
## error in manual?
From: Julian Peterson Subject: error in manual? Date: Thu, 19 Oct 2006 14:36:15 -0400 User-agent: Thunderbird 1.5.0.7 (Windows/20060909)
I think I found an error in the lilypond documentation for 2.8.x, although as I'm still quite new to lilypond there is a good chance that it's my own misunderstanding...
```
Section 4.5 contains this snippet:
```
```
```
Which seems to contain an extra # preceding \$padding. I had to remove it to get this example to work, and further examples seem to be consistent with using only a \$.
```
Thanks,
Julian Peterson
``` | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8418446779251099, "perplexity": 2832.9282685017456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917126538.54/warc/CC-MAIN-20170423031206-00453-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://asmedigitalcollection.asme.org/appliedmechanics/article-abstract/51/3/636/423229/Thermoelastic-Displacements-and-Stresses-Due-to-a?redirectedFrom=PDF | A solution is given for the surface displacement and stresses due to a line heat source that moves at constant speed over the surface of an elastic half plane. The solution is obtained by integration of previous results for the instantaneous point source. The final results are expressed in terms of Bessel functions for which numerically efficient series and asymptotic expressions are given.
This content is only available via PDF. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342133402824402, "perplexity": 203.2964497105308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00580.warc.gz"} |
http://www.wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=1997&number=326 | WIAS Preprint No. 326, (1997)
# Twice degenerate equations in the space of vector-valued functions
Authors
• Kloeden, Peter E.
• Krasnosel´skii, Alexander M.
2010 Mathematics Subject Classification
• 47H11 47H30
Keywords
• index at infinity, asymptotically linear and asymptotically homogeneous vector fields, periodic problems, system of two nonlinear first-order ODE, nonlinear second-order ODE
Abstract
New results are suggested which allow to calculate an index at infinity for asymptotically linear and asymptotically homogeneous vector fields in spaces of vector-valued functions. The case is considered where both linear approximation at infinity and "linear + homogeneous" approximation are degenerate. Applications are given to the 2π-periodic problem for a system of two nonlinear first order ODE's and to the two-point BVP for a system of two nonlinear second order ODE's. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9143341183662415, "perplexity": 1507.6458668558853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00180.warc.gz"} |
https://starburstdata.github.io/latest/security/password-passthrough.html | The password credential pass-through feature guarantees that Starburst Enterprise platform (SEP) uses the same credentials as a user accessing a data source directly. This allows you to authenticate using the CLI or client application with the JDBC or ODBC driver. The supplied credentials are passed through SEP and the connector to the underlying data source.
To use password pass-through, the data source and SEP must use the same authentication backend and use the same credentials. A typical example is an LDAP system such as Active Directory. Not all connectors have this capability; check the documentation for individual connectors to ensure that they are supported.
Warning
DELEGATED-PASSWORD cannot be used with the PASSWORD authentication type and results in runtime exceptions. The functionality of PASSWORD authentication is integrated in the DELEGATED-PASSWORD authentication.
## Configuration#
To enable, include DELEGATED-PASSWORD in the config.properties file:
http-server.authentication.type=DELEGATED-PASSWORD
Typically, multiple Authentication types are used and must be configured as comma-separated values. These are evaluated in a short-circuit fashion. SEP attempts them in order until an authentication type succeeds, or fails the authentication attempt altogether if none succeed.
In the following example, SEP attempts to authenticate using DELEGATED-KERBEROS. If that succeeds, no further authentication attempts are made. If it fails, SEP attempts to authenticate using DELEGATED-PASSWORD, followed by CERTIFICATE. If those fail, the request fails as there are no further authentication methods specified:
http-server.authentication.type=DELEGATED-KERBEROS,DELEGATED-PASSWORD,CERTIFICATE
Update the catalog file, as needed by the connector, to enable password credential pass-through:
<connector_name>.authentication.type=PASSWORD_PASS_THROUGH
## Specifying username via extra credentials#
It is possible to overwrite the username to authenticate with the external data source using extra credentials added to the JDBC URL. The name of the extra credential used to log in must be configured in the catalog properties file:
user-credential-name=arbitrary_username_id
Then add extraCredentials=arbitrary_username_id:external_user_login to the parameters used with the JDBC driver to connect to SEP.
Users of the CLI can use the --extraCredential option.
This feature works only for the PASSWORD_PASS_THROUGH authentication type. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15660598874092102, "perplexity": 5822.515028816771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103669266.42/warc/CC-MAIN-20220630062154-20220630092154-00428.warc.gz"} |
Subsets and Splits