http://jblevins.org/log/log-sum-exp
# Calculating the Log Sum of Exponentials

November 21, 2008

The so-called "log sum of exponentials" is a functional form commonly encountered in dynamic discrete choice models in economics. It arises when the choice-specific error terms have a type 1 extreme value distribution, a very common assumption. In these problems there is typically a vector $v$ of length $J$ which represents the non-stochastic payoff associated with each of $J$ choices. In addition to $v_j$, the payoff associated with choice $j$ has a random component $\epsilon_j$. If these components are independent and identically distributed across $j$ and follow the type 1 extreme value distribution, then the expected payoff from choosing optimally is

$$\mathbb{E}\left[\max\{v_1+\epsilon_1,\dots,v_J+\epsilon_J\}\right] = \ln\left[\exp(v_1)+\dots+\exp(v_J)\right].$$

We need to numerically evaluate the expression on the right-hand side for many different vectors $v$. Special care is required to calculate this expression in compiled languages such as Fortran or C to avoid numerical problems. The function needs to work for $v$ with very large and very small components. A large $v_j$ can cause overflow due to the exponentiation. Similarly, when the $v_j$ are large in absolute value and negative, the exponential terms vanish, and taking the logarithm of a very small number can result in underflow.

A simple transformation can avoid both of these problems. Consider the case where $v = (a, b)$. Note that we can write

$$\exp(a) + \exp(b) = \left[\exp(a-c) + \exp(b-c)\right]\exp(c)$$

for any $c$. Furthermore, we have

$$\ln\left[\left(\exp(a-c)+\exp(b-c)\right)\exp(c)\right] = \ln\left[\exp(a-c)+\exp(b-c)\right] + c.$$

We can choose $c$ in a way that reduces the possibility of overflow. Underflow is also possible when taking the logarithm of a number close to zero, since $\ln(x) \to -\infty$ as $x \to 0$. Thus, we also need to account for large negative elements of $v$.

The following code is a Fortran implementation of a function which makes the adjustments described above. In order to prevent numerical overflow, the function shifts the vector `v` by a constant `c`, applies `exp` and `log`, and then adjusts the result to account for the shift.

```fortran
function log_sum_exp(v) result(e)
  real, dimension(:), intent(in) :: v  ! Input vector
  real :: e                            ! Result is log(sum(exp(v)))
  real :: c                            ! Shift constant

  ! Choose c to be the element of v that is largest in absolute value.
  if ( maxval(abs(v)) > maxval(v) ) then
     c = minval(v)
  else
     c = maxval(v)
  end if
  e = log(sum(exp(v - c))) + c
end function log_sum_exp
```

Note that this function is still not completely robust to very poorly scaled vectors `v`, so over- or underflow is still possible (just much less likely).
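The same shift translates directly to other languages. Here is a Python rendering of the function above (our translation, not part of the original post), mirroring its rule for choosing the shift constant:

```python
import math

def log_sum_exp(v):
    """Compute log(sum(exp(v))) using the shift described above."""
    # Choose c to be the element of v that is largest in absolute value,
    # mirroring the Fortran version.
    if max(abs(x) for x in v) > max(v):
        c = min(v)
    else:
        c = max(v)
    return math.log(sum(math.exp(x - c) for x in v)) + c

# A naive math.log(sum(math.exp(x) for x in v)) overflows for this input;
# the shifted version returns roughly 1000 + ln(2).
print(log_sum_exp([1000.0, 1000.0]))
```

The shifted sum `exp(x - c)` keeps at least one term equal to `exp(0) = 1`, so the argument of `log` never underflows to zero.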
https://stats.stackexchange.com/questions/313389/sample-size-vs-number-of-samples-in-calculating-standard-error
# Sample size vs Number of samples in calculating standard error

Suppose a survey was given to 100 people (each response is just a number between 0 and 1) and only the mean was reported. So 1 sample was taken with n = 100. Now suppose this was repeated 20 times, so I have a list of 20 means. It's my understanding that I can estimate the population mean and variance by the mean and variance of this list. (Right?)

Given that the total # of people available to survey is infinite, but that the maximum # of times we are allowed to take a survey is 200, my questions are:

- Do we calculate standard error as $SE = \frac{\sigma}{\sqrt{n}}$, where n = 100 or n = 20? Do I use the sample size, or number of samples, as $n$?
- How does the SE change when increasing sample size vs. number of samples? For example, what would be the difference of surveying 150 people 20 times vs. surveying 100 people 30 times? (Keeping in mind we only know the mean of each survey.)
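A quick simulation can make the setup concrete (this is our illustration, not part of the original question; the uniform response distribution is an assumption chosen for convenience). The spread of the 20 survey means tracks $\sigma/\sqrt{n}$ with $n$ equal to the per-survey sample size:

```python
import random
import statistics

random.seed(1)

def survey_mean(n):
    # Each response is uniform on [0, 1]; sigma = sqrt(1/12), about 0.289
    return statistics.fmean(random.random() for _ in range(n))

# 20 surveys of 100 people each
means = [survey_mean(100) for _ in range(20)]

# The standard deviation of the survey means estimates
# sigma / sqrt(100), about 0.0289; n here is the survey size, 100.
print(statistics.stdev(means))
```

Increasing the number of surveys does not shrink this spread; it only gives a more precise estimate of it.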
https://math.libretexts.org/Courses/Remixer_University/Username%3A_pseeburger/MTH_098_Elementary_Algebra/3%3A_Math_Models/3.1%3A_Use_a_Problem-Solving_Strategy
# 3.1: Use a Problem-Solving Strategy

Learning Objectives

By the end of this section, you will be able to:

• Approach word problems with a positive attitude
• Use a problem-solving strategy for word problems
• Solve number problems

Note

Before you get started, take this readiness quiz.

1. Translate "6 less than twice x" into an algebraic expression. If you missed this problem, review Exercise 1.3.43.
2. Solve: $$\frac{2}{3}x=24$$. If you missed this problem, review Exercise 2.2.10.
3. Solve: $$3x+8=14$$. If you missed this problem, review Exercise 2.3.1.

## Approach Word Problems with a Positive Attitude

"If you think you can… or think you can't… you're right."—Henry Ford

The world is full of word problems! Will my income qualify me to rent that apartment? How much punch do I need to make for the party? What size diamond can I afford to buy my girlfriend? Should I fly or drive to my family reunion? How much money do I need to fill the car with gas? How much tip should I leave at a restaurant? How many socks should I pack for vacation? What size turkey do I need to buy for Thanksgiving dinner, and then what time do I need to put it in the oven? If my sister and I buy our mother a present, how much does each of us pay?

Now that we can solve equations, we are ready to apply our new skills to word problems. Do you know anyone who has had negative experiences in the past with word problems? Have you ever had thoughts like the student below (Figure $$\PageIndex{1}$$)? When we feel we have no control, and continue repeating negative thoughts, we set up barriers to success. We need to calm our fears and change our negative feelings. Start with a fresh slate and begin to think positive thoughts. If we take control and believe we can be successful, we will be able to master word problems! Read the positive thoughts in Figure $$\PageIndex{2}$$ and say them out loud.

Think of something, outside of school, that you can do now but couldn't do 3 years ago. Is it driving a car?
Snowboarding? Cooking a gourmet meal? Speaking a new language? Your past experiences with word problems happened when you were younger—now you're older and ready to succeed!

## Use a Problem-Solving Strategy for Word Problems

We have reviewed translating English phrases into algebraic expressions, using some basic mathematical vocabulary and symbols. We have also translated English sentences into algebraic equations and solved some word problems. The word problems applied math to everyday situations. We restated the situation in one sentence, assigned a variable, and then wrote an equation to solve the problem. This method works as long as the situation is familiar and the math is not too complicated. Now, we'll expand our strategy so we can use it to successfully solve any word problem. We summarize below an effective strategy for problem solving, and then we'll use it to solve some problems.

USE A PROBLEM-SOLVING STRATEGY TO SOLVE WORD PROBLEMS.

1. Read the problem. Make sure all the words and ideas are understood.
2. Identify what we are looking for.
3. Name what we are looking for. Choose a variable to represent that quantity.
4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Then, translate the English sentence into an algebraic equation.
5. Solve the equation using good algebra techniques.
6. Check the answer in the problem and make sure it makes sense.
7. Answer the question with a complete sentence.

Exercise $$\PageIndex{1}$$

Pilar bought a purse on sale for $18, which is one-half of the original price. What was the original price of the purse?

Step 1. Read the problem. Read the problem two or more times if necessary. Look up any unfamiliar words in a dictionary or on the internet. In this problem, is it clear what is being discussed? Is every word familiar?

Step 2. Identify what you are looking for. Did you ever go into your bedroom to get something and then forget what you were looking for? It's hard to find something if you are not sure what it is! Read the problem again and look for words that tell you what you are looking for! In this problem, the words "what was the original price of the purse" tell us what we need to find.

Step 3. Name what we are looking for. Choose a variable to represent that quantity. We can use any letter for the variable, but choose one that makes it easy to remember what it represents. Let $$p =$$ the original price of the purse.

Step 4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Then translate the English sentence into an algebraic equation. Reread the problem carefully to see how the given information is related. Often, there is one sentence that gives this information, or it may help to write one sentence with all the important information. Look for clue words to help translate the sentence into algebra.

Restate the problem in one sentence with all the important information: 18 is one-half the original price. Translate into an equation. $$18 = \frac{1}{2}\cdot p$$

Step 5. Solve the equation using good algebraic techniques. Even if you know the solution right away, using good algebraic techniques here will better prepare you to solve problems that do not have obvious answers.

Solve the equation. $$18 = \frac{1}{2}p$$ Multiply both sides by 2. $$2\cdot 18 = 2\cdot \frac{1}{2}p$$ Simplify. $$36 = p$$

Step 6. Check the answer in the problem to make sure it makes sense. We solved the equation and found that $$p=36$$, which means "the original price" was $36. Does $36 make sense in the problem?
Yes, because 18 is one-half of 36, and the purse was on sale at half the original price.

Step 7. Answer the question with a complete sentence. The problem asked "What was the original price of the purse?" The answer to the question is: "The original price of the purse was $36."

If this were a homework exercise, our work might look like this:

Pilar bought a purse on sale for $18, which is one-half the original price. What was the original price of the purse?

Let $$p =$$ the original price. 18 is one-half the original price. $$18 = \frac{1}{2}p$$ Multiply both sides by $$2$$. $$2\cdot 18 = 2\cdot \frac{1}{2}p$$ Simplify. $$36 = p$$ Check. Is $36 a reasonable price for a purse? Yes. Is $$18$$ one-half of $$36$$? $$18 \stackrel{?}{=} \frac{1}{2}\cdot 36$$ $$18 = 18\checkmark$$ The original price of the purse was $36.

Exercise $$\PageIndex{2}$$

Joaquin bought a bookcase on sale for $120, which was two-thirds of the original price. What was the original price of the bookcase?

Answer $180

Exercise $$\PageIndex{3}$$

Two-fifths of the songs in Mariel's playlist are country. If there are $$16$$ country songs, what is the total number of songs in the playlist?

Answer $$40$$

Let's try this approach with another example.

Exercise $$\PageIndex{4}$$

Ginny and her classmates formed a study group. The number of girls in the study group was three more than twice the number of boys. There were $$11$$ girls in the study group. How many boys were in the study group?

Step 1. Read the problem.
Step 2. Identify what we are looking for. How many boys were in the study group?
Step 3. Name. Choose a variable to represent the number of boys. Let $$b=$$ the number of boys.
Step 4. Translate. Restate the problem in one sentence with all the important information.
The number of girls (11) was three more than twice the number of boys. Translate into an equation. $$11 = 2b + 3$$

Step 5. Solve the equation. $$11 = 2b + 3$$ Subtract 3 from each side. $$11 - 3 = 2b + 3 - 3$$ Simplify. $$8 = 2b$$ Divide each side by 2. $$\dfrac{8}{2}=\dfrac{2b}{2}$$ Simplify. $$4 = b$$

Step 6. Check. First, is our answer reasonable? Yes, having $$4$$ boys in a study group seems OK. The problem says the number of girls was $$3$$ more than twice the number of boys. If there are four boys, does that make eleven girls? Twice $$4$$ boys is $$8$$. Three more than $$8$$ is $$11$$.

Step 7. Answer the question. There were $$4$$ boys in the study group.

Exercise $$\PageIndex{5}$$

Guillermo bought textbooks and notebooks at the bookstore. The number of textbooks was $$3$$ more than twice the number of notebooks. He bought $$7$$ textbooks. How many notebooks did he buy?

Answer $$2$$

Exercise $$\PageIndex{6}$$

Gerry worked Sudoku puzzles and crossword puzzles this week. The number of Sudoku puzzles he completed is eight more than twice the number of crossword puzzles. He completed $$22$$ Sudoku puzzles. How many crossword puzzles did he do?

Answer $$7$$

## Solve Number Problems

Now that we have a problem-solving strategy, we will use it on several different types of word problems. The first type we will work on is "number problems." Number problems give some clues about one or more numbers. We use these clues to write an equation. Number problems don't usually arise on an everyday basis, but they provide a good introduction to practicing the problem-solving strategy outlined above.
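Every equation in this section has the one-variable linear form $ax + b = c$, so the translate-then-solve pattern can be checked mechanically. A minimal sketch (the helper name is ours, not from the text), using the study-group equation $11 = 2b + 3$:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x: subtract b from each side, divide by a."""
    return (c - b) / a

# Exercise 4: 11 = 2b + 3, i.e. a = 2, b = 3, c = 11
boys = solve_linear(2, 3, 11)
print(boys)  # 4.0
```

The same helper covers the purse problem as $\frac{1}{2}p + 0 = 18$, giving $p = 36$.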
Exercise $$\PageIndex{7}$$

The difference of a number and six is $$13$$. Find the number.

Step 1. Read the problem. Are all the words familiar?
Step 2. Identify what we are looking for. The number.
Step 3. Name. Choose a variable to represent the number. Let $$n=$$ the number.
Step 4. Translate. Remember to look for clue words like "difference… of… and…" Restate the problem as one sentence: the difference of the number and $$6$$ is $$13$$. Translate into an equation. $$n - 6 = 13$$
Step 5. Solve the equation. $$n - 6 = 13$$ Simplify. $$n = 19$$
Step 6. Check. The difference of $$19$$ and $$6$$ is $$13$$. It checks!
Step 7. Answer the question. The number is $$19$$.

Exercise $$\PageIndex{8}$$

The difference of a number and eight is $$17$$. Find the number.

Answer $$25$$

Exercise $$\PageIndex{9}$$

The difference of a number and eleven is $$−7$$. Find the number.

Answer $$4$$

Exercise $$\PageIndex{10}$$

The sum of twice a number and seven is $$15$$. Find the number.

Step 1. Read the problem.
Step 2. Identify what we are looking for. The number.
Step 3. Name. Choose a variable to represent the number. Let $$n =$$ the number.
Step 4. Translate. Restate the problem as one sentence. Translate into an equation. $$2n + 7 = 15$$
Step 5. Solve the equation. Subtract 7 from each side and simplify. $$2n = 8$$ Divide each side by 2 and simplify. $$n = 4$$
Step 6. Check. Is the sum of twice 4 and 7 equal to 15? $$\begin{array} {rrl} {2\cdot 4 + 7} &{\stackrel{?}{=}}& {15} \\ {15} &{=} &{15\checkmark} \end{array}$$
Step 7. Answer the question. The number is $$4$$.

Did you notice that we left out some of the steps as we solved this equation? If you're not yet ready to leave out these steps, write down as many as you need.

Exercise $$\PageIndex{11}$$

The sum of four times a number and two is $$14$$. Find the number.
Answer $$3$$

Exercise $$\PageIndex{12}$$

The sum of three times a number and seven is $$25$$. Find the number.

Answer $$6$$

Some number word problems ask us to find two or more numbers. It may be tempting to name them all with different variables, but so far we have only solved equations with one variable. In order to avoid using more than one variable, we will define the numbers in terms of the same variable. Be sure to read the problem carefully to discover how all the numbers relate to each other.

Exercise $$\PageIndex{13}$$

One number is five more than another. The sum of the numbers is 21. Find the numbers.

Step 1. Read the problem.
Step 2. Identify what we are looking for. We are looking for two numbers.
Step 3. Name. We have two numbers to name and need a name for each. Choose a variable to represent the first number. Let $$n=1^{st}$$ number. What do we know about the second number? One number is five more than another. $$n+5=2^{nd}$$ number
Step 4. Translate. Restate the problem as one sentence with all the important information. The sum of the 1st number and the 2nd number is 21. Translate into an equation and substitute the variable expressions. $$n + (n + 5) = 21$$
Step 5. Solve the equation. Combine like terms. $$2n + 5 = 21$$ Subtract 5 from both sides and simplify. $$2n = 16$$ Divide by 2 and simplify. $$n = 8$$, the 1st number. Find the second number, too. $$n + 5 = 13$$, the 2nd number.
Step 6. Check. Do these numbers check in the problem? Is one number $$5$$ more than the other? $$13\stackrel{?}{=} 8 + 5$$ Is thirteen $$5$$ more than $$8$$? Yes. $$13 = 13\checkmark$$ Is the sum of the two numbers $$21$$? $$8 + 13 \stackrel{?}{=} 21$$ $$21 = 21\checkmark$$
Step 7. Answer the question. The numbers are $$8$$ and $$13$$.

Exercise $$\PageIndex{14}$$

One number is six more than another. The sum of the numbers is twenty-four. Find the numbers.

Answer 9, 15

Exercise $$\PageIndex{15}$$

The sum of two numbers is fifty-eight. One number is four more than the other. Find the numbers.

Answer 27, 31

Exercise $$\PageIndex{16}$$

The sum of two numbers is negative fourteen.
One number is four less than the other. Find the numbers.

Step 1. Read the problem.
Step 2. Identify what we are looking for. We are looking for two numbers.
Step 3. Name. Choose a variable. Let $$n=1^{st}$$ number. One number is 4 less than the other. $$n−4=2^{nd}$$ number
Step 4. Translate. Write as one sentence. The sum of the 2 numbers is negative 14. Translate into an equation. $$n + (n - 4) = -14$$
Step 5. Solve the equation. Combine like terms. $$2n - 4 = -14$$ Add 4 to each side and simplify. $$2n = -10$$ Simplify. $$n = -5$$, the 1st number; $$n - 4 = -9$$, the 2nd number.
Step 6. Check. Is −9 four less than −5? $$-5-4\stackrel{?}{=}-9$$ $$-9 = -9 \checkmark$$ Is their sum −14? $$-5+ (-9)\stackrel{?}{=}-14$$ $$-14 = -14 \checkmark$$
Step 7. Answer the question. The numbers are −5 and −9.

Exercise $$\PageIndex{17}$$

The sum of two numbers is negative twenty-three. One number is seven less than the other. Find the numbers.

Answer −15, −8

Exercise $$\PageIndex{18}$$

The sum of two numbers is $$−18$$. One number is $$40$$ more than the other. Find the numbers.

Answer −29, 11

Exercise $$\PageIndex{19}$$

One number is ten more than twice another. Their sum is one. Find the numbers.

Step 1. Read the problem.
Step 2. Identify what you are looking for. We are looking for two numbers.
Step 3. Name. Choose a variable. Let $$x=1^{st}$$ number. One number is 10 more than twice another. $$2x+10=2^{nd}$$ number
Step 4. Translate. Restate as one sentence. The sum of the two numbers is 1. Translate into an equation. $$x + (2x + 10) = 1$$
Step 5. Solve the equation. Combine like terms. $$3x + 10 = 1$$ Subtract 10 from each side. $$3x = -9$$ Divide each side by 3. $$x = -3$$, the 1st number; $$2x + 10 = 4$$, the 2nd number.
Step 6. Check. Is ten more than twice −3 equal to 4? $$2(-3) + 10 \stackrel{?}{=} 4$$ $$-6 + 10 \stackrel{?}{=} 4$$ $$4 = 4\checkmark$$ Is their sum 1? $$-3 + 4 \stackrel{?}{=} 1$$ $$1 = 1\checkmark$$
Step 7. Answer the question. The numbers are −3 and 4.

Exercise $$\PageIndex{20}$$

One number is eight more than twice another. Their sum is negative four. Find the numbers.

Answer $$-4,\; 0$$

Exercise $$\PageIndex{21}$$

One number is three more than three times another.
Their sum is $$−5$$. Find the numbers.

Answer $$-3,\; -2$$

Some number problems involve consecutive integers. Consecutive integers are integers that immediately follow each other. Examples of consecutive integers are:

$\begin{array}{l}{1,2,3,4} \\ {-10,-9,-8,-7} \\ {150,151,152,153}\end{array}$

Notice that each number is one more than the number preceding it. So if we define the first integer as $$n$$, the next consecutive integer is $$n+1$$. The one after that is one more than $$n+1$$, so it is $$n+1+1$$, which is $$n+2$$.

$\begin{array}{ll}{n} & {1^{\text { st }} \text { integer }} \\ {n+1} & {2^{\text { nd }} \text { consecutive integer }} \\ {n+2} & {3^{\text { rd }} \text { consecutive integer } \ldots \text { etc. }}\end{array}$

Exercise $$\PageIndex{22}$$

The sum of two consecutive integers is $$47$$. Find the numbers.

Step 1. Read the problem.
Step 2. Identify what you are looking for. Two consecutive integers.
Step 3. Name each number. Let $$n=1^{st}$$ integer. $$n+1=$$ next consecutive integer
Step 4. Translate. Restate as one sentence. The sum of the integers is $$47$$. Translate into an equation. $$n + (n + 1) = 47$$
Step 5. Solve the equation. Combine like terms. $$2n + 1 = 47$$ Subtract 1 from each side. $$2n = 46$$ Divide each side by 2. $$n = 23$$
Step 6. Check. $$\begin{array} {lll} {23 + 24} &{\stackrel{?}{=}} &{47} \\ {47} &{=} &{47\checkmark} \end{array}$$
Step 7. Answer the question. The two consecutive integers are 23 and 24.

Exercise $$\PageIndex{23}$$

The sum of two consecutive integers is 95. Find the numbers.

Answer 47, 48

Exercise $$\PageIndex{24}$$

The sum of two consecutive integers is −31. Find the numbers.

Answer −16, −15

Exercise $$\PageIndex{25}$$

Find three consecutive integers whose sum is −42.

Step 1. Read the problem.
Step 2. Identify what we are looking for. Three consecutive integers.
Step 3. Name each of the three numbers. Let $$n=1^{st}$$ integer. $$n+1= 2^{nd}$$ consecutive integer $$n+2= 3^{rd}$$ consecutive integer
Step 4. Translate. Restate as one sentence. The sum of the three integers is $$−42$$.
Translate into an equation. $$n + (n + 1) + (n + 2) = -42$$
Step 5. Solve the equation. Combine like terms. $$3n + 3 = -42$$ Subtract 3 from each side. $$3n = -45$$ Divide each side by 3. $$n = -15$$
Step 6. Check. $$\begin{array}{lll} {-13 + (-14) + (-15)} &{\stackrel{?}{=}} &{-42} \\ {-42} &{=} &{-42\checkmark} \end{array}$$
Step 7. Answer the question. The three consecutive integers are −15, −14, and −13.

Exercise $$\PageIndex{26}$$

Find three consecutive integers whose sum is −96.

Answer −33, −32, −31

Exercise $$\PageIndex{27}$$

Find three consecutive integers whose sum is −36.

Answer −13, −12, −11

Now that we have worked with consecutive integers, we will expand our work to include consecutive even integers and consecutive odd integers. Consecutive even integers are even integers that immediately follow one another. Examples of consecutive even integers are:

$\begin{array}{l}{18,20,22} \\ {64,66,68} \\ {-12,-10,-8}\end{array}$

Notice each integer is $$2$$ more than the number preceding it. If we call the first one $$n$$, then the next one is $$n+2$$. The next one would be $$n+2+2$$ or $$n+4$$.

$\begin{array}{cll}{n} & {1^{\text { st }} \text { even integer }} \\ {n+2} & {2^{\text { nd }} \text { consecutive even integer }} \\ {n+4} & {3^{\text { rd }} \text { consecutive even integer } \ldots \text { etc. }}\end{array}$

Consecutive odd integers are odd integers that immediately follow one another. Consider the consecutive odd integers $$77$$, $$79$$, and $$81$$.

$\begin{array}{l}{77,79,81} \\ {n, n+2, n+4}\end{array}$

$\begin{array}{cll}{n} & {1^{\text { st }} \text { odd integer }} \\ {n+2} & {2^{\text { nd }} \text { consecutive odd integer }} \\ {n+4} & {3^{\text { rd }} \text { consecutive odd integer } \ldots \text { etc. }}\end{array}$

Does it seem strange to add 2 (an even number) to get from one odd integer to the next? Do you get an odd number or an even number when we add 2 to 3? To 11? To 47? Whether the problem asks for consecutive even numbers or odd numbers, you don't have to do anything different.
The pattern is still the same—to get from one odd or one even integer to the next, add 2.

Exercise $$\PageIndex{28}$$

Find three consecutive even integers whose sum is 84.

$\begin{array}{ll} {\textbf{Step 1. Read} \text{ the problem.}} & {} \\ {\textbf{Step 2. Identify} \text{ what we are looking for.}} & {\text{three consecutive even integers}} \\ {\textbf{Step 3. Name} \text{ the integers.}} & {\text{Let } n = 1^{st} \text{ even integer.}} \\ {} &{n + 2 = 2^{nd} \text{ consecutive even integer}} \\ {} &{n + 4 = 3^{rd} \text{ consecutive even integer}} \\ {\textbf{Step 4. Translate.}} &{} \\ {\text{ Restate as one sentence. }} &{\text{The sum of the three even integers is 84.}} \\ {\text{Translate into an equation.}} &{n + n + 2 + n + 4 = 84} \\ {\textbf{Step 5. Solve} \text{ the equation. }} &{} \\ {\text{Combine like terms.}} &{3n + 6 = 84} \\ {\text{Subtract 6 from each side.}} &{3n = 78} \\ {\text{Divide each side by 3.}} &{n = 26 \space 1^{st} \text{ integer}} \\\\ {} &{n + 2\space 2^{nd} \text{ integer}} \\ {} &{26 + 2} \\ {} &{28} \\\\ {} &{n + 4\space 3^{rd} \text{ integer}} \\ {} &{26 + 4} \\ {} &{30} \\ {\textbf{Step 6. Check.}} &{} \\\\ {26 + 28 + 30 \stackrel{?}{=} 84} &{} \\ {84 = 84 \checkmark} & {} \\ {\textbf{Step 7. Answer} \text{ the question.}} &{\text{The three consecutive even integers are 26, 28, and 30.}} \end{array}$

Exercise $$\PageIndex{29}$$

Find three consecutive even integers whose sum is 102.

Answer 32, 34, 36

Exercise $$\PageIndex{30}$$

Find three consecutive even integers whose sum is −24.

Answer −10, −8, −6

Exercise $$\PageIndex{31}$$

A married couple together earns $110,000 a year. The wife earns $16,000 less than twice what her husband earns. What does the husband earn?

Step 1. Read the problem.
Step 2. Identify what we are looking for. How much does the husband earn?
Step 3. Name. Choose a variable to represent the amount the husband earns. Let $$h=$$ the amount the husband earns.
The wife earns $16,000 less than twice that: $$2h−16,000 =$$ the amount the wife earns.

Step 4. Translate. Restate the problem in one sentence with all the important information: together, the husband and wife earn $110,000. Translate into an equation. $$h + (2h − 16,000) = 110,000$$
Step 5. Solve the equation. $$h + 2h − 16,000 = 110,000$$ Combine like terms. $$3h − 16,000 = 110,000$$ Add $$16,000$$ to both sides and simplify. $$3h = 126,000$$ Divide each side by $$3$$. $$h = 42,000$$, the amount the husband earns. The wife earns $$2h − 16,000 = 2(42,000) − 16,000 = 84,000 − 16,000 = 68,000$$.
Step 6. Check. If the wife earns $68,000 and the husband earns $42,000, is the total $110,000? Yes!
Step 7. Answer the question. The husband earns $42,000 a year.

Exercise $$\PageIndex{32}$$

According to the National Automobile Dealers Association, the average cost of a car in 2014 was $28,500. This was $1,500 less than 6 times the cost in 1975. What was the average cost of a car in 1975?

Answer $5,000

Exercise $$\PageIndex{33}$$

U.S. Census data shows that the median price of a new home in the United States in November 2014 was $280,900. This was $10,700 more than 14 times the price in November 1964. What was the median price of a new home in November 1964?

Answer $19,300

## Key Concepts

• Problem-Solving Strategy
  1. Read the problem. Make sure all the words and ideas are understood.
  2. Identify what we are looking for.
  3. Name what we are looking for. Choose a variable to represent that quantity.
  4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Then, translate the English sentence into an algebraic equation.
  5. Solve the equation using good algebra techniques.
  6. Check the answer in the problem and make sure it makes sense.
  7. Answer the question with a complete sentence.
• Consecutive Integers Consecutive integers are integers that immediately follow each other.
$\begin{array}{cc}{n} & {1^{\text { st }} \text { integer }} \\ {n+1} & {2^{\text { nd }} \text { consecutive integer }} \\ {n+2} & {3^{\text { rd }} \text { consecutive integer } \ldots \text { etc. }}\end{array}$

Consecutive even integers are even integers that immediately follow one another.

$\begin{array}{cc}{n} & {1^{\text { st }} \text { even integer }} \\ {n+2} & {2^{\text { nd }} \text { consecutive even integer }} \\ {n+4} & {3^{\text { rd }} \text { consecutive even integer } \ldots \text { etc. }}\end{array}$

Consecutive odd integers are odd integers that immediately follow one another.

$\begin{array}{cc}{n} & {1^{\text { st }} \text { odd integer }} \\ {n+2} & {2^{\text { nd }} \text { consecutive odd integer }} \\ {n+4} & {3^{\text { rd }} \text { consecutive odd integer } \ldots \text { etc. }}\end{array}$
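The three consecutive-integer patterns can be spot-checked with a short script (our own illustration, not from the text; the helper name is invented):

```python
def consecutive(n, k=3, step=1):
    """Return k consecutive integers starting at n.

    Use step=1 for consecutive integers and step=2 for consecutive
    even or odd integers, matching the n, n+2, n+4 pattern above.
    """
    return [n + step * i for i in range(k)]

# Exercise 25: three consecutive integers summing to -42 start at n = -15
assert sum(consecutive(-15)) == -42

# Exercise 28: three consecutive even integers summing to 84 start at n = 26
assert sum(consecutive(26, step=2)) == 84
print(consecutive(26, step=2))  # [26, 28, 30]
```

Whether the starting value is even or odd, `step=2` produces the next even or odd integer, just as the text notes.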
http://www.linas.org/math/chap-rat/chap-rat.html
# Distributions of Rationals on the Unit Interval (or, How to (mis-)Count Rationals)

Linas Vepstas <[email protected]>

12 October 2004 (revised 9 February 2005)

### Abstract:

The distribution of rationals on the unit interval is filled with surprises. As a child, one is told that the rationals are distributed "uniformly" on the unit interval. If one considers the entire set of rationals in the unit interval, then yes, in a certain narrow sense, this is true. But if one considers just subsets, such as the subset of rationals with "small" denominators, then the distribution is far from uniform and full of counter-intuitive surprises, some of which we explore below. This implies that using "intuition" to understand the rationals and, more generally, the real numbers is a dangerous process. Once again, we see the footprints of the set-theoretic representation of the modular group at work. This paper is part of a set of chapters that explore the relationship between the real numbers, the modular group, and fractals.

# Distributions of Rationals on the Unit Interval

The entire field of classical calculus and analysis is based on the notion that the real numbers are smoothly and uniformly distributed on the real number line. When one works with a particular representation of the rational numbers, say the dyadic representation, where each rational is represented by a sequence of binary digits, one gets, "for free", a measure that goes with that representation. In the case of the dyadics, that measure is the idea that all strings of binary digits are uniformly distributed on the unit interval. This statement is so blatantly obvious and taken for granted that it in fact impedes the understanding of measure. But this will be the topic of this chapter.

There are several different ways of representing the rationals (and their closures), and these are (as we will see shortly) inequivalent. One way is to represent them with p-adic, or base-p, expansions of digits.
Another way is to represent them as rationals, that is, as ratios of integers. Each of these representations will result in a uniform distribution of reals on the real number line, when one takes the appropriate limit of allowing p-adic strings with an infinite number of digits, or allowing fractions with arbitrarily large denominators. However, if we work with just finite subsets of p-adic expansions, or finite sets of rationals, one finds that the distributions are far from uniform, and are inequivalent to each other. In particular, this implies that the notion of measure on the real number line has a certain kind of ambiguity associated with it.

The next thing that one finds is that the modular group becomes manifest, being the symmetry group that connects together the different representations of the rationals. However, insofar as there is no such thing as a "real number" except as defined by the closure of the rationals, using a specific representation of the rationals, one has that the real numbers themselves have a modular group symmetry, if only because the underlying representations in terms of p-adic expansions and ratios have this symmetry. We develop the above wild-sounding claim a bit further in later chapters; here, we show one very simple way in which the modular group, and thus Farey fractions, manifest themselves on the real number line. We do this by (incorrectly) counting rationals, and then wildly scrambling to find the correct way of counting.

## Simple Counting

Let's begin by trying to enumerate the rationals, and seeing how they fall on the real number line. Start by listing all of the fractions with denominators from 1 to N, and numerators between 0 and the denominator. Clearly, many of these fractions will be reducible, i.e. the numerator and denominator have common factors, and thus, in this simple-minded enumeration, some rationals are counted multiple times.
In particular, we'll count 0 over and over again: it will be in the list as 0/1, 0/2, 0/3 and so on. Likewise, 1 will appear in this list over and over: as 1/1, 2/2, 3/3, etc. We'll have 1/2 also appearing as 2/4, 3/6 and so on. Although this enumeration of the rationals clearly over-counts, it has the advantage of being extremely simple: it is a subset of the rectangular lattice of integer pairs $(p,q)$. It's the canonical grade-school example of how the rationals are enumerable. How are these rationals distributed on the real number line? In fancy terms, what is the distribution of this lattice on the real number line? Or, what is the measure induced by the projection of the lattice onto the real number line? Unfortunately, using words like "measure" implies the taking of a limit to infinity. Let's stick to the simpler language: we want to make a histogram of the rationals. Let's draw some graphs. The figure shows this enumeration, up to a denominator of K=4000, carved up into N=720 bins, and normalized to unit density. That is, if $\lfloor Np/q \rfloor = n$, then we assign the fraction $p/q$ to the $n$'th bin, and so the graph is a histogram. We might expect this graph to have a huge peak at the bin n=360: after all, this bin will hold 1/2 and 2/4 and 3/6 and in general should have a big surfeit coming from the degeneracy at 1/2. One might expect peaks at 1/3, 1/4, etc., but smaller. The above is a density graph of the rationals that occur in the simple enumeration, binned into 720 bins, up to a denominator of K=4000. The normalization of the bin count is such that the expected value for each bin is 1.0, as explained in the text. Indeed, there is a big upwards spike at 1/2. But there seems to be a big downwards spike just below, at bin 359, seemingly of equal and opposite size. This is the first surprise. Why is there a deficit at bin 359? We also have blips at 1/3, 1/4, 1/5, 1/6, but not at 1/7: something we can hand-wave away by noting that 720 is 6 factorial.
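The enumeration and binning just described are easy to sketch in code. The following is a small Python illustration (not the author's original program), assuming the convention that numerators run from 0 through the denominator inclusive, with the top edge p = q folded into the last bin:

```python
def naive_histogram(max_den, n_bins):
    """Histogram of the simple (over-counting) enumeration of fractions p/q
    with 1 <= q <= max_den and 0 <= p <= q, binned by integer division
    and normalized so each bin's expected value is 1.0."""
    counts = [0] * n_bins
    total = 0
    for q in range(1, max_den + 1):
        for p in range(q + 1):
            n = (n_bins * p) // q   # integer binning: no floating point
            if n == n_bins:         # p == q lands on the top edge; fold it in
                n = n_bins - 1
            counts[n] += 1
            total += 1
    return [n_bins * c / total for c in counts]
```

Running this with max_den=4000 and 720 bins reproduces the experiment described in the text, although the exact total count depends on the endpoint convention chosen.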
(When one attempts 7!=5040 bins, one finds the peak at 1/7 is there, but the one at 1/11 seems to be missing; clearly having the number of bins being divisible by 7 is important.) The other surprising aspect of this picture is the obvious fractal self-similarity of this histogram. The interval between 1/3 and 1/2 seems to reprise the whole. The tallest blip in the middle of this subinterval occurs at 2/5, which is the Farey mediant of 1/2 and 1/3. Why are we getting something that looks like a fractal, when we are just counting rationals? More tantalizingly, why does the fractal involve Farey Fractions? We suspect that something peculiar happens because the over-counting at 1/2, 2/4'ths etc. falls on exactly the boundary between bins 359 and 360. In fact, any fraction with a denominator that is a multiple of 2, 3, 4, 5, or 6 will have this problem; fractions that have a multiple of 7 in the denominator don't seem to have this problem, perhaps because they are not on a bin boundary. We can validate this idea by binning into 719 bins, noting that 719 is prime. Thus, for the most part, almost all fractions will clearly be in the "middle" of a bin. We expect a flatter graph; the up-down blips should cancel. But it shouldn't be too flat: we still expect a lot of overcounting at 1/2. See below: Wow, that's flat! How can this graph possibly be so flat? We should be massively overcounting at 1/2, there should be a big peak there. Maybe it's drowned out by the blips at 0 and 1: we are, after all, histogramming over 8 million fractions, and we expect statistical variations to go as one over the square-root of the sample size. So let's graph the same data, but rescale more appropriately. This is shown below: Hmm. Curious. There is indeed a peak at 1/2. But there are also deficits symmetrically arranged at either side. This is still confusing.
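The bin-boundary hypothesis can be checked directly from the integer binning rule. A small sketch (the helper name is illustrative, not from the original text):

```python
def bin_of(p, q, n_bins):
    """Return (bin index, remainder) of the fraction p/q under integer binning:
    n_bins * p = n * q + r with 0 <= r < q; r == 0 means p/q sits exactly
    on a bin boundary."""
    return divmod(n_bins * p, q)

# 1/2 with 720 bins lands exactly on the boundary between bins 359 and 360
assert bin_of(1, 2, 720) == (360, 0)
# with 719 (prime) bins, 1/2 falls strictly inside bin 359
assert bin_of(1, 2, 719) == (359, 1)
# every fraction 1/q with q dividing 720 sits on a boundary...
assert all(bin_of(1, q, 720)[1] == 0 for q in (2, 3, 4, 5, 6))
# ...but 1/7 does not
assert bin_of(1, 7, 720)[1] != 0
```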
We might have expected peaks, but no deficits, with the baseline pushed down, to say, 0.999, with all peaks going above, so that the total bin count would still average out to 1.0. But the baseline is at 1.0, and not at 0.999, and so this defies simple intuition. Notice also that the fractal nature is still evident. There are also peaks at 1/3, 1/4, 1/5 and 1/6. But not at 1/7'th. Previously, we explained away the lack of a peak at 1/7'th by arguing about the prime factors of 720; this time, 719 has no prime factors other than itself; thus, this naive argument fails. What do we replace this argument with? Well, at any rate, let's compare this to the distribution we "should have been using all along", where we eliminate all fractions that are reducible. That is, we should count each rational only once. This makes a lot more sense, if we are to talk of the distribution of rationals on the real number line. This is graphed below, again, binned into 719 bins, for all irreducible rationals with denominator less than or equal to 4000: Wow! We no longer have a peak at 1/2. In fact, it sure gives the distinct impression that we are undercounting at 1/2! Holy Banach-Tarski, Batman! What does it mean? Note also the graph is considerably noisier. Compare the scales on the left for a relative measure of the noise. Part, but not all, of the noise is due to the smaller sample size: we are counting fewer fractions: 4863602 are irreducible out of the simple list of 8002000. However, matching the sample sizes does not seem to significantly reduce the small-amplitude noise: qualitatively speaking, the binning of irreducible fractions seems much noisier. Let us pause for a moment to notice that this noise is not due to some numerical aberration due to the use of floating-point numbers, IEEE or otherwise. The above bincounts are performed using entirely integer math.
That is, for every pair of integers $(p,q)$, we computed the integer bin number $n$ and the integer remainder $r$ such that $Np = nq + r$ with $0 \le r < q$ holds as an integer equation, where $N$ was the number of bins. This equation does not have 'rounding error' or 'numerical imprecision'. Curiously, binning into a non-prime number of bins does seem to reduce the (small-amplitude) noise. Equally curiously, it also seems to erase the prominent features that were occurring at the Farey Fractions. This is exactly the opposite of the previous experience, where it was binning to a prime that seemed to 'erase' the features. Below is the binning into 720 bins. Following the usual laws of statistics and averages, one expects that increasing the sample size reduces the noise. This is true in an absolute sense, but not a relative sense. The graph below shows 720 bins holding all irreducible rationals with denominators less than 16000. The absolute amplitude has been reduced by over a factor of ten compared to the previous graphs; this is not a surprise. We are counting 77809948 irreducible rationals, as opposed to 4863602 before: our sample size is nearly 16 times larger. What is perhaps surprising is that there is relatively far more power in the higher frequencies. There are also still-visible noise peaks near 1/2, 1/3, and 2/3'rds, as well as at 0 and 1. Let us reiterate that the noise in this figure is not due to floating-point errors or numerical imprecision. It's really there, deeply embedded in the rationals. As we count more and more rationals, and bin them into a fixed number of bins, we expect the mean deviation about the norm of 1.0 to shrink and shrink, as some power law. It is in this sense that we can say that the rationals are uniformly distributed on the real-number line: greater sample sizes seemingly lead to more uniform distributions, albeit with strangely behaved variances. But even this statement is less than conclusive, because it hides a terrible scale invariance.
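The irreducible-only counts used in these comparisons amount to filtering the naive list with a gcd test. A minimal sketch (endpoint convention assumed as before):

```python
from math import gcd

def count_fractions(max_den):
    """Count fractions p/q with 1 <= q <= max_den and 0 <= p <= q:
    returns (total in the naive list, number that are irreducible)."""
    total = irreducible = 0
    for q in range(1, max_den + 1):
        for p in range(q + 1):
            total += 1
            if gcd(p, q) == 1:
                irreducible += 1
    return total, irreducible
```

For large denominators the irreducible fraction of the list tends toward $6/\pi^2 \approx 0.61$, roughly matching the 4863602-out-of-8002000 ratio quoted above (the exact totals depend on the endpoint convention).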
We have one more nasty histogram to demonstrate. This one shows irreducible fractions with denominators less than 16000, which, as we've mentioned, represents a sample size almost 16 times larger than the first sets of graphs. We bin these into four times as many bins: 2880=4x720. Compare the normalized scale on the vertical axis to the corresponding picture for the smaller sample size and smaller number of bins. The vertical scales are identical, and the sizes of the peaks are identical. Each bin, on average, holds four times as many rationals (16 times as many rationals, 4 times as many bins). We've increased our sample size, but the features are not 'washing out': they are staying constant in size, and are becoming more distinct and well-defined. In light of the fact that the above graphs have some surprising features, we take a moment to try to be precise about what we mean when we say "histogram" and "normalize". Let's go back to the first figure. The total number of rationals in the histogram is 8002000, a little over eight million: a decent sample size. Each bin will have some count $c_n$ of these rationals. We want to talk in statistical terms, so we normalize the bin count as $b_n = N c_n / 8002000$, so that the average value or expected value of $b_n$ is 1.0. That is, we have, by definition, $$\frac{1}{N} \sum_{n=1}^{N} b_n = 1. \qquad (1)$$ The act of binning a rational requires a division; that is, in order to determine the bin number $n$ of the fraction $p/q$, a division is unavoidable. However, we can avoid numerical imprecision by sticking to integer division; using floating point here potentially casts a cloud over any results. With integer division, we are looking for integers $n$ and $r$ such that $Np = nq + r$ with $0 \le r < q$; performing this computation requires no rounding or truncation. The largest such integers we are likely to encounter in the previous sections are of order $Np \approx 2880 \times 16000 < 2^{31}$, for which ordinary 32-bit math is perfectly adequate; there is no danger of overflow. If one wanted to go deeper, one could use arbitrary precision libraries; for example, the Gnu Bignum Library, GMP, is freely available.
But the point here is that to see these effects, one does not need to work with numbers so large that arbitrary precision math libraries would be required.

## Some Properties of Rational Numbers

So what is it about the rational numbers that makes them behave like this? Let's review some basic properties. We can envision an arbitrary fraction $p/q$ made out of the integers $p$ and $q$ as corresponding to a point $(p,q)$ on a square lattice. This lattice is generated by the vectors $e_1 = (1,0)$ and $e_2 = (0,1)$: these are the vectors that point along the x and y axes. Every point on the lattice can be represented by the vector $p e_1 + q e_2$ for some integers $p$ and $q$. This grid is a useful way to think about rationals: by looking out onto this grid, we can "see" all of the rationals, all at once.

Theorem: The lattice is a group under addition. We recall the definition of a group: a group is closed under addition: for lattice points $u$ and $v$, one has that $u+v$ is a lattice point. A group has an identity element, which, when added to any other group element, gives that element. For the lattice, the identity is $(0,0)$. Finally, for every element in the group, the inverse is also in the group. In other words, for every $v$ in the lattice, $-v$ is also in the lattice, and $v + (-v) = (0,0)$.

Theorem: The generators $e_1$ and $e_2$ generate the lattice. That is, every lattice point is an integer combination of $e_1$ and $e_2$.

Theorem: A lattice point $(p,q)$ is visible from the origin if and only if $\gcd(p,q) = 1$. By "visible" we mean that if one stood at the origin, and looked out on a field of pegs located at the grid corners, a given peg would not be behind another peg. Here, gcd is the "greatest common divisor", and so the statement is that a peg is visible if and only if the fraction $p/q$ cannot be reduced.

Note that $e_1$ and $e_2$ are not the only possible generators. For example, $e_1$ and $e_1 + e_2$ also generate the lattice. That is, every point in the lattice can be written as $m e_1 + n (e_1 + e_2)$ for some integers $m$ and $n$. That is, given any integers $(p,q)$, there exist some integers $m, n$, such that $(p,q) = m e_1 + n (e_1 + e_2)$. There are an infinite number of such possible generators. The rest of this section attempts to describe this set of generators.
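The visibility criterion is a one-liner to experiment with; for instance, the peg at (1, 2) hides (2, 4), (3, 6), and every other peg along the same ray:

```python
from math import gcd

def visible(p, q):
    """A lattice peg (p, q) is visible from the origin iff gcd(p, q) == 1,
    i.e. iff the fraction p/q cannot be reduced."""
    return gcd(p, q) == 1

assert visible(1, 2)                             # the peg at (1, 2) is visible...
assert not visible(2, 4) and not visible(3, 6)   # ...and hides its multiples
# along the diagonal, only (1, 1) is visible
assert visible(1, 1) and not any(visible(k, k) for k in range(2, 10))
```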
Theorem: (Apostol Thm 1.1) Two vectors $v_1$ and $v_2$ generate the lattice if and only if the parallelogram formed by $0$, $v_1$, $v_1 + v_2$ and $v_2$ does not contain any lattice points in its interior, or on its boundary. Such a parallelogram is called a cell or a fundamental region. The above theorem is not entirely obvious, and it is a good exercise to try to prove it. Note that as a corollary, we have that both $v_1$ and $v_2$ are visible from the origin (there would be lattice points on the boundary, if they weren't). In other words, all generators are visible: all generators can be represented by a pair of irreducible fractions. However, not all pairs of fractions generate the lattice, as the next theorem shows.

Theorem: (Apostol Thm 1.2) Let $v_1 = (a,b)$ and $v_2 = (c,d)$ for some integers $a, b, c, d$. Then $v_1$ and $v_2$ generate the lattice if and only if $ad - bc = \pm 1$. We recognize $ad - bc$ as the determinant of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$. The set of all matrices with determinant equal to $+1$ or $-1$ is called $GL(2,\mathbb{Z})$, the modular group. Thus, the set of generators of the lattice corresponds to elements of the group $GL(2,\mathbb{Z})$.

Theorem: If $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in GL(2,\mathbb{Z})$ then $\gcd(a,b) = \gcd(c,d) = \gcd(a,c) = \gcd(b,d) = 1$. That is, the fractions given by the rows and columns are all visible from the origin. But we knew that already. Note that the matrices in $GL(2,\mathbb{Z})$ act on the lattice by simple multiplication: for any point $v$ in the lattice, the product $Av$ is another point in the lattice.

Theorem: If $v$ is visible, then $Av$ is visible as well, for any $A \in GL(2,\mathbb{Z})$. In other words, the action of the modular group on the lattice never mixes visible points with invisible ones. In other words, if $p/q$ is an irreducible fraction, then so is its image under $A$; and if $p/q$ is reducible, then so is its image.

Theorem: (Topology) Elements of $GL(2,\mathbb{Z})$ can be parameterized by a rational, an integer, and a choice of sign; equivalently, the elements of the modular group can be thought of as a collection of a certain special set of intervals on the real number line. Proof: We start by freely picking any fraction $a/b$ (understanding that we've picked $a$ and $b$ so that $a/b$ is irreducible). For good luck, we pick so that both $a$ and $b$ are positive; we return to negative values later. Then $ad - bc = 1$ implies that $d = (1 + bc)/a$.
But we can't pick $c$ freely; only certain special values of $c$ result in $(1 + bc)/a$ being an integer. Mini-theorem: there exists an integer $c$ such that $(1 + bc)/a$ is an integer. Then another mini-theorem: this $c$, which we'll call $c_0$, may be chosen to belong to the set $\{0, 1, \dots, a-1\}$; write $d_0 = (1 + bc_0)/a$. So we now have $a d_0 - b c_0 = 1$. Next we note that for any integer $n$, the pair $$c = c_0 + na, \qquad d = d_0 + nb \qquad (2)$$ solves $ad - bc = 1$. Thus, we've picked freely a number from the rationals (the fraction $a/b$) and another number from the integers (the integer $n$), and so we've almost proven the parameterization. We have one bit of remaining freedom, and that is to pick $a$ or $b$ to be negative: all other sign changes can be eliminated. Finally, note that the fractions $a/b$ and $c/d$ represent an interval on the real number line. One endpoint of the interval can be picked freely; but the other can only be chosen from a limited (but infinite) set.

What have we learned from this exercise? A new way to visualize rationals. In grade school, one traditionally learns to think of rationals as being somehow laid out evenly on the real number line. Maybe we even realize that there is a grid involved: and the grid is comfortingly square and uniform. But in fact, the irreducible rationals are anything but square and uniform. If we look out onto the grid of pegs, we see some that are very far away, while others are hidden by nearby pegs. If we look off in a given direction, the distance to the first visible peg seems to be a completely unpredictable and indeed a very chaotic function of the direction. Next, we've learned that the symmetries of a square grid are hyperbolic. Of course, everyone knows that square grids have a translational symmetry; we didn't even mention that. Square grids don't have a rotational symmetry, except for rotations by exactly 90 degrees. But only a few seem to know about the "special relativity" of a square lattice. Just like "real" special relativity, there is a strange squashing and shrinking of lengths while a "cell" or "fundamental region" is squashed.
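Apostol's generator criterion (Theorem 1.2 above) is purely arithmetic, and when it holds the basis matrix has an integer inverse, so the coordinates of any lattice point in the new basis come out exact. A sketch with hypothetical helper names:

```python
def generates(v1, v2):
    """Apostol Thm 1.2: (a, b) and (c, d) generate the integer lattice
    iff the determinant ad - bc is +1 or -1."""
    (a, b), (c, d) = v1, v2
    return abs(a * d - b * c) == 1

def coords(point, v1, v2):
    """Express a lattice point as m*v1 + n*v2 (Cramer's rule); the divisions
    below are exact precisely because the determinant is +-1."""
    (a, b), (c, d) = v1, v2
    det = a * d - b * c
    p, q = point
    m = (p * d - q * c) // det
    n = (q * a - p * b) // det
    return m, n

assert generates((1, 0), (1, 1))       # det = 1: a valid generator pair
assert not generates((1, 0), (0, 2))   # det = 2: misses half the lattice
m, n = coords((3, 5), (1, 1), (1, 2))  # det = 1*2 - 1*1 = 1
assert m * 1 + n * 1 == 3 and m * 1 + n * 2 == 5
```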
Worse, this group $GL(2,\mathbb{Z})$, known as the modular group, is implicated in a wide variety of hyperbolic goings-on. It is a symmetry group of surfaces with constant negative curvature (the Poincare upper half-plane). All sorts of interesting chaotic phenomena happen on hyperbolic surfaces: geodesics diverge from each other, and are thus said to have positive Lyapunov exponent, and the like. The Riemann zeta function, and its chaotic layout of zeros (never mind the chaotic layout of the prime numbers) are closely related. In general, whenever one sees something hyperbolic, one sees chaos. And here we are, staring at rational numbers and seeing something hyperbolic. It is also worth noting that the square grid, while being a cross-product of integers, is not a free product. By this we mean that there are multiple paths from the origin to any given point on the grid: thus, to get to $(1,1)$, we can go right first, and then up, or up first, and then right. Thus the grid is actually a quotient space of a free group. (XXX need to expand on this free vs. quotient thing). To conclude, we've learned the following: the set of rationals consists entirely of the set of points on the grid that are visible from the origin. The entire set of rationals can be generated from just a pair of rationals $a/b$ and $c/d$, as long as $ad - bc = \pm 1$. By "generated" we mean that every rational number can be written in the form $$\frac{p}{q} = \frac{ma + nc}{mb + nd} \qquad (3)$$ where $m$, $n$ are integers with $\gcd(m,n) = 1$. Of course, this sounds a little dumb, because if $a/b = 1/0$ and $c/d = 0/1$, then every rational can already be written as $m/n$. The point here is that the last is a special case of the previous, with $a/b = 1/0$ and $c/d = 0/1$. This is the broadest such generalization of this form. One oddity that we should notice is the superficial resemblance to Farey addition: given two rational numbers $a/b$ and $c/d$, we add them not as normal numbers, but instead by combining the numerators and denominators: $(a+c)/(b+d)$. As we will see, Farey fractions and the modular group are intimately intertwined. Homework: prove all of the above theorems.
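The Farey addition just mentioned (the mediant) is exactly what produced the blip at 2/5 between 1/3 and 1/2 noted earlier. A small sketch:

```python
from math import gcd
from fractions import Fraction

def mediant(a, b, c, d):
    """Farey "sum" of a/b and c/d: add numerators and denominators, then reduce."""
    p, q = a + c, b + d
    g = gcd(p, q)
    return p // g, q // g

# the mediant of 1/3 and 1/2 is 2/5: the tallest blip between them
assert mediant(1, 3, 1, 2) == (2, 5)
# the mediant of two distinct fractions lies strictly between them
assert Fraction(1, 3) < Fraction(2, 5) < Fraction(1, 2)
```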
## Orbits of the Modular Group

The symmetries of the histograms are given by the modular group, a fact that we develop in later chapters. (XXX see the other pages on this website for now). Just to provide a taste of what is to come, here's a picture of the orbit of a vector under the action of the group elements of the dyadic representation of the modular group: That is, we consider how the vector transforms under the group elements generated by the two generators of this representation, where a general group element can be written as a word in these generators. Let's avoid some confusion: the dyadic representation is *not* the canonical rep of the modular group; it is a different rep that is isomorphic; we establish this elsewhere. In this representation, the only naturally occurring numbers are of the form $p/2^n$, and so the main sequence of the peaks are rooted at 1/2, 1/4, 1/8 etc. To get to the peaks occurring at the Farey numbers, we need to work through the Minkowski Question Mark function, which provides the isomorphism between the Farey Numbers and the Dyadics. (This is done in the next chapter). (XXXX we really need to re-write this section so it doesn't have to allude to the 'other stuff'). As to the origin of the (white) noise, a better perspective can be gotten in the chapter on continued fraction gaps.

## Conclusion

Write me. Introduce the next chapter. This is kind-of a to-do list. It sure would be nice to develop a generalized theory that can work with these peculiar results, and in particular, giving insight into what's happening near 1/2 and giving a quantitative description of the spectra near 1/3 and 2/3, etc. We want to graph the mean-square distribution as a function of sample size. We want to perform a frequency analysis (Fourier transform) and get the power spectrum. We want to explore to what extent the power spectrum has the approximate scaling relationship of a modular form. (We expect this relationship because the fractal self-similarity should manifest itself in the Fourier spectrum as well, as a scaling relationship.
This is not merely "1/f" noise, it's more than that.) When we deal with a finite number of bins, we cannot, of course, get the full symmetry of the modular group. For a finite number of bins, we expect to see the action of only some finite subgroup (or subset) of the modular group. What is that subgroup (subset)? What are its properties? We also have a deeper question: we will also need to explain why the modular group shows up when one is counting rationals; we will do this in the next chapter, where we discuss the alternate representations of the reals. It's almost impossible to avoid. Linas Vepstas 2005-02-10
https://stacks.math.columbia.edu/tag/0E23
Lemma 36.22.9. Consider a cartesian diagram of schemes $\xymatrix{ Z' \ar[r]_{i'} \ar[d]_ g & X' \ar[d]^ f \\ Z \ar[r]^ i & X }$ where $i$ is a closed immersion. If $Z$ and $X'$ are tor independent over $X$, then $Ri'_* \circ Lg^* = Lf^* \circ Ri_*$ as functors $D(\mathcal{O}_ Z) \to D(\mathcal{O}_{X'})$. Proof. Note that the lemma is supposed to hold for all $K \in D(\mathcal{O}_ Z)$. Observe that $i_*$ and $i'_*$ are exact functors and hence $Ri_*$ and $Ri'_*$ are computed by applying $i_*$ and $i'_*$ to any representatives. Thus the base change map $Lf^*(Ri_*(K)) \longrightarrow Ri'_*(Lg^*(K))$ on stalks at a point $z' \in Z'$ with image $z \in Z$ is given by $K_ z \otimes _{\mathcal{O}_{X, z}}^\mathbf {L} \mathcal{O}_{X', z'} \longrightarrow K_ z \otimes _{\mathcal{O}_{Z, z}}^\mathbf {L} \mathcal{O}_{Z', z'}$ This map is an isomorphism by More on Algebra, Lemma 15.61.2 and the assumed tor independence. $\square$
https://leanprover-community.github.io/mathlib_docs/topology/sheaves/sheaf_condition/unique_gluing.html
# mathlibdocumentation topology.sheaves.sheaf_condition.unique_gluing # The sheaf condition in terms of unique gluings # We provide an alternative formulation of the sheaf condition in terms of unique gluings. We work with sheaves valued in a concrete category C admitting all limits, whose forgetful functor C ⥤ Type preserves limits and reflects isomorphisms. The usual categories of algebraic structures, such as Mon, AddCommGroup, Ring, CommRing etc. are all examples of this kind of category. A presheaf F : presheaf C X satisfies the sheaf condition if and only if, for every compatible family of sections sf : Π i : ι, F.obj (op (U i)), there exists a unique gluing s : F.obj (op (supr U)). Here, the family sf is called compatible, if for all i j : ι, the restrictions of sf i and sf j to U i ⊓ U j agree. A section s : F.obj (op (supr U)) is a gluing for the family sf, if s restricts to sf i on U i for all i : ι We show that the sheaf condition in terms of unique gluings is equivalent to the definition in terms of equalizers. Our approach is as follows: First, we show them to be equivalent for Type-valued presheaves. Then we use that composing a presheaf with a limit-preserving and isomorphism-reflecting functor leaves the sheaf condition invariant, as shown in topology/sheaves/forget.lean. def Top.presheaf.is_compatible {C : Type u} {X : Top} (F : X) {ι : Type v} (U : ι → ) (sf : Π (i : ι), (F.obj (opposite.op (U i)))) : Prop A family of sections sf is compatible, if the restrictions of sf i and sf j to U i ⊓ U j agree, for all i and j Equations def Top.presheaf.is_gluing {C : Type u} {X : Top} (F : X) {ι : Type v} (U : ι → ) (sf : Π (i : ι), (F.obj (opposite.op (U i)))) (s : (F.obj (opposite.op (supr U)))) : Prop A section s is a gluing for a family of sections sf if it restricts to sf i on U i, for all i Equations def Top.presheaf.is_sheaf_unique_gluing {C : Type u} {X : Top} (F : X) : Prop The sheaf condition in terms of unique gluings. 
A presheaf F : presheaf C X satisfies this sheaf condition if and only if, for every compatible family of sections sf : Π i : ι, F.obj (op (U i)), there exists a unique gluing s : F.obj (op (supr U)). We prove this to be equivalent to the usual one below in is_sheaf_iff_is_sheaf_unique_gluing Equations def Top.presheaf.pi_opens_iso_sections_family {X : Top} (F : Top.presheaf (Type v) X) {ι : Type v} (U : ι → ) : Π (i : ι), F.obj (opposite.op (U i)) For presheaves of types, terms of pi_opens F U are just families of sections. Equations theorem Top.presheaf.compatible_iff_left_res_eq_right_res {X : Top} (F : Top.presheaf (Type v) X) {ι : Type v} (U : ι → )  : Under the isomorphism pi_opens_iso_sections_family, compatibility of sections is the same as being equalized by the arrows left_res and right_res of the equalizer diagram. @[simp] theorem Top.presheaf.is_gluing_iff_eq_res {X : Top} (F : Top.presheaf (Type v) X) {ι : Type v} (U : ι → ) (s : F.obj (opposite.op (supr U))) : F.is_gluing U .hom sf) s Under the isomorphism pi_opens_iso_sections_family, being a gluing of a family of sections sf is the same as lying in the preimage of res (the leftmost arrow of the equalizer diagram). The "equalizer" sheaf condition can be obtained from the sheaf condition in terms of unique gluings. The sheaf condition in terms of unique gluings can be obtained from the usual "equalizer" sheaf condition. For type-valued presheaves, the sheaf condition in terms of unique gluings is equivalent to the usual sheaf condition in terms of equalizer diagrams. For presheaves valued in a concrete category, whose forgetful functor reflects isomorphisms and preserves limits, the sheaf condition in terms of unique gluings is equivalent to the usual one in terms of equalizer diagrams. theorem Top.sheaf.exists_unique_gluing {C : Type u} {X : Top} (F : X) {ι : Type v} (U : ι → ) (sf : Π (i : ι), (F.val.obj (opposite.op (U i)))) (h : F.val.is_compatible U sf) : ∃! 
(s : (F.val.obj (opposite.op (supr U)))), F.val.is_gluing U sf s A more convenient way of obtaining a unique gluing of sections for a sheaf. theorem Top.sheaf.exists_unique_gluing' {C : Type u} {X : Top} (F : X) {ι : Type v} (U : ι → ) (V : topological_space.opens X) (iUV : Π (i : ι), U i V) (hcover : V supr U) (sf : Π (i : ι), (F.val.obj (opposite.op (U i)))) (h : F.val.is_compatible U sf) : ∃! (s : (F.val.obj (opposite.op V))), ∀ (i : ι), (F.val.map (iUV i).op) s = sf i In this version of the lemma, the inclusion homs iUV can be specified directly by the user, which can be more convenient in practice. @[ext] theorem Top.sheaf.eq_of_locally_eq {C : Type u} {X : Top} (F : X) {ι : Type v} (U : ι → ) (s t : (F.val.obj (opposite.op (supr U)))) (h : ∀ (i : ι), (F.val.map s = (F.val.map t) : s = t theorem Top.sheaf.eq_of_locally_eq' {C : Type u} {X : Top} (F : X) {ι : Type v} (U : ι → ) (V : topological_space.opens X) (iUV : Π (i : ι), U i V) (hcover : V supr U) (s t : (F.val.obj (opposite.op V))) (h : ∀ (i : ι), (F.val.map (iUV i).op) s = (F.val.map (iUV i).op) t) : s = t In this version of the lemma, the inclusion homs iUV can be specified directly by the user, which can be more convenient in practice.
https://socratic.org/questions/how-many-different-ways-can-you-get-two-heads-and-two-tails-in-four-tosses-of-a-#394283
# How many different ways can you get two heads and two tails in four tosses of a fair coin? 6 #### Explanation: Let's list them out: HHTT HTTH TTHH THHT HTHT THTH which is 6 ways. ~~~~~~~~~~ Another way to do this (and is actually how I caught that I'd missed 2 in the list above) is to work out the problem with a permutation formula (we care about the order of the coin flips, so that HHTT is different from TTHH. If they weren't different, like cards in a poker hand, then we'd be talking about combinations) The general formula for a permutation is P_(n,k)=(n!)/((n-k)!); n="population", k="picks" Here we're saying that we have a population of 4 (taking each coin flip as a member of that population), picking 4 (we're doing a coin flip for all 4 of them). However, we need to adjust the formula a bit because we have only 2 results and each result appears twice. And so we divide by 2! for each of these groups (or in other words divide by 2!2!). So we get: (4!)/((4-4)!2!2!)=(4!)/((0!)(2!)(2!))=(4xx3xxcancel(2!))/((1)(cancel(2!))(2))=12/2=6
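Both the brute-force listing and the adjusted permutation formula can be checked in a few lines (an illustrative sketch, not part of the original answer):

```python
from itertools import product
from math import comb, factorial

# enumerate all 2^4 = 16 outcomes of four coin flips
outcomes = [''.join(seq) for seq in product('HT', repeat=4)]
two_heads = [s for s in outcomes if s.count('H') == 2]

# exactly the six sequences listed above
assert sorted(two_heads) == sorted(
    ['HHTT', 'HTTH', 'TTHH', 'THHT', 'HTHT', 'THTH'])
# the adjusted permutation formula: 4! / (2! * 2!)
assert factorial(4) // (factorial(2) * factorial(2)) == 6
# equivalently: choose which 2 of the 4 flips come up heads
assert comb(4, 2) == 6
```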
https://stacks.math.columbia.edu/tag/0ECJ
Lemma 37.64.2. In Situation 37.64.1 then for $y \in Y$ there exist affine opens $U \subset X$ and $V \subset Y$ with $i^{-1}(U) = j^{-1}(V)$ and $y \in V$. Proof. Let $y \in Y$. Choose an affine open $U \subset X$ such that $j^{-1}(\{ y\} ) \subset i^{-1}(U)$ (possible by assumption). Choose an affine open $V \subset Y$ neighbourhood of $y$ such that $j^{-1}(V) \subset i^{-1}(U)$. This is possible because $j : Z \to Y$ is a closed morphism (Morphisms, Lemma 29.44.7) and $i^{-1}(U)$ contains the fibre over $y$. Since $j$ is integral, the scheme theoretic fibre $Z_ y$ is the spectrum of an algebra integral over a field. By Limits, Lemma 32.11.6 we can find an $\overline{f} \in \Gamma (i^{-1}(U), \mathcal{O}_{i^{-1}(U)})$ such that $Z_ y \subset D(\overline{f}) \subset j^{-1}(V)$. Since $i|_{i^{-1}(U)} : i^{-1}(U) \to U$ is a closed immersion of affines, we can choose an $f \in \Gamma (U, \mathcal{O}_ U)$ whose restriction to $i^{-1}(U)$ is $\overline{f}$. After replacing $U$ by the principal open $D(f) \subset U$ we find affine opens $y \in V \subset Y$ and $U \subset X$ with $j^{-1}(\{ y\} ) \subset i^{-1}(U) \subset j^{-1}(V)$ Now we (in some sense) repeat the argument. Namely, we choose $g \in \Gamma (V, \mathcal{O}_ V)$ such that $y \in D(g)$ and $j^{-1}(D(g)) \subset i^{-1}(U)$ (possible by the same argument as above). Then we can pick $f \in \Gamma (U, \mathcal{O}_ U)$ whose restriction to $i^{-1}(U)$ is the pullback of $g$ by $i^{-1}(U) \to V$ (again possible by the same reason as above). Then we finally have affine opens $y \in V' = D(g) \subset V \subset Y$ and $U' = D(f) \subset U \subset X$ with $j^{-1}(V') = i^{-1}(V')$. $\square$
https://tutorme.com/tutors/766164/interview/
# Tutor profile: Rodrigo C.

Economics BSc and MSc Student, Part-time researcher/deputy teacher/instructor

## Questions

### Subject: Calculus

Question: Find the derivative of $$f(x)=\ln(x^\sigma-\sigma)$$

Answer: For this, we first find the outer derivative, which we, in turn, multiply by the inner derivative due to the chain rule: $$f'(x)= \frac{1}{x^\sigma-\sigma} \times \frac{d}{dx}(x^\sigma-\sigma)$$ $$f'(x)= \frac{1}{x^\sigma-\sigma} \times \sigma x^{\sigma-1}$$ $$f'(x)= \frac{\sigma x^{\sigma-1}}{x^\sigma-\sigma}$$

### Subject: Economics

Question: Assume consumers demand two goods, bananas ($$x_1$$) and coconuts ($$x_2$$) such that their average utility can be characterized as: $$U=x^{\alpha}_{1}x^{\beta}_{2}$$ And have the following budget constraint: $$m=p_1x_1+p_2x_2$$ Where $$m$$ represents the budget of the representative consumer. Derive their demand functions for coconuts and bananas.

Answer: First, we may find the marginal rate of substitution, given by $$\frac{\partial x_2}{\partial x_1}$$ or $$\frac{\partial U/\partial x_1}{\partial U/\partial x_2}$$ In which $$\frac{\partial U}{\partial x_1} = \alpha x^{\alpha-1}_{1}x^{\beta}_{2}$$ and $$\frac{\partial U}{\partial x_2} = \beta x^{\alpha}_{1}x^{\beta-1}_{2}$$ We arrive at the following MRS: $$\frac{\alpha x^{\alpha-1}_{1}x^{\beta}_{2}}{\beta x^{\alpha}_{1}x^{\beta-1}_{2}}$$ Which we may simplify as: $$\frac{\alpha x_2}{\beta x_1}$$ Using the identity $$MRS=\frac{p_1}{p_2}$$, we arrive at: $$\beta p_1x_1=\alpha p_2x_2$$ From which we derive the following two identities: $$p_1x_1=\frac{\alpha p_2x_2}{\beta}$$ $$p_2x_2=\frac{\beta p_1x_1}{\alpha}$$ Which we then are able to plug into the budget constraint to derive the demand for both goods.
Starting with $$x_1$$, we obtain: $$m=p_1x_1+\frac{\beta p_1x_1}{\alpha}$$ $$m=p_1x_1+\frac{\beta}{\alpha} p_1x_1$$ $$m=\left(1+\frac{\beta}{\alpha}\right)p_1x_1$$ $$m=\left(\frac{\alpha+\beta}{\alpha}\right)p_1x_1$$ $$x^{d}_{1}=\frac{\alpha}{\alpha+\beta}\frac{m}{p_1}$$ which is the demand for bananas. Analogously, repeating the same steps we obtain the following demand for coconuts: $$x^{d}_{2}=\frac{\beta}{\alpha+\beta}\frac{m}{p_2}$$ ### Subject: Algebra TutorMe Question: Solve the following equation for $$x$$: $$x^2-4x=-4$$ Rodrigo C.: First, we can rewrite the equation as: $$x^2-4x+4= 0$$ which, in turn, we can factorize as: $$(x-2)^2= 0$$ A square is zero only when its base is zero, so: $$x-2= 0$$ from which we arrive at: $$x= 2$$
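These closed-form demands are easy to sanity-check numerically. Below is a quick sketch (the parameter values and helper names are my own, chosen only for illustration) that compares the formulas against a brute-force search over the budget line:

```python
# Check x1 = a/(a+b) * m/p1 and x2 = b/(a+b) * m/p2 (writing alpha -> a, beta -> b)
# by brute-force maximizing U = x1^a * x2^b subject to m = p1*x1 + p2*x2.
# All parameter values below are illustrative.

def demands(a, b, m, p1, p2):
    """Closed-form Cobb-Douglas demands derived above."""
    return a / (a + b) * m / p1, b / (a + b) * m / p2

def brute_force(a, b, m, p1, p2, n=20000):
    """Spend the whole budget (x2 = (m - p1*x1)/p2) and grid-search over x1."""
    best_u, best_x1 = -1.0, 0.0
    for i in range(1, n):
        x1 = (m / p1) * i / n
        x2 = (m - p1 * x1) / p2
        u = x1 ** a * x2 ** b
        if u > best_u:
            best_u, best_x1 = u, x1
    return best_x1, (m - p1 * best_x1) / p2

a, b, m, p1, p2 = 1.0, 2.0, 90.0, 2.0, 3.0
x1_star, x2_star = demands(a, b, m, p1, p2)   # closed form: 15.0 and 20.0
x1_num, x2_num = brute_force(a, b, m, p1, p2)
assert abs(x1_num - x1_star) < 0.01
assert abs(x2_num - x2_star) < 0.01
```

The grid search lands on the same bundle the formulas predict, which is a cheap way to catch algebra slips in derivations like this one.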
http://www.maa.org/programs/faculty-and-departments/course-communities/the-matrix-exponential
# The Matrix Exponential A set of text modules with MATLAB code on computing and using the matrix exponential. Identifier: http://cnx.org/content/m10677/latest/?collection=col10048/latest Subject: Rating: Creator(s): Steven J. Cox, et al. Cataloger: Publisher: Connexions Rights: Steven J. Cox, et al. Format Other: MATLAB ### emphasizes multiple parallels with scalar exponential function This set of text modules shows how the matrix exponential may be defined in several ways by analogy with various definitions of the function $$e^x$$: as a limit of powers, as a sum of powers, via the Laplace transform, and via eigenvalues and eigenvectors. In each case the method is applied to three matrices. In some cases MATLAB code is provided to confirm the stated results. The set of modules closes with an example of the mass-spring-damper system, and how the associated second-order ODE can be solved using the matrix exponential. ### Nice write-up on the matrix exponential.
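By way of illustration, the "sum of powers" definition is easy to reproduce outside MATLAB. The following Python sketch (mine, not part of the reviewed modules) truncates the series $e^A = \sum_k A^k / k!$ for a small matrix:

```python
import math

def mat_mul(A, B):
    """Naive square-matrix product (lists of lists)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """Truncated power series I + A + A^2/2! + ... (fine for small ||A||)."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]          # holds A^k, starting at A^0 = I
    for k in range(1, terms):
        term = mat_mul(term, A)
        for i in range(n):
            for j in range(n):
                result[i][j] += term[i][j] / math.factorial(k)
    return result

# For a diagonal matrix the series just exponentiates the diagonal entries.
E = mat_exp([[1.0, 0.0], [0.0, 2.0]])
assert abs(E[0][0] - math.e) < 1e-9
assert abs(E[1][1] - math.e ** 2) < 1e-9
assert abs(E[0][1]) < 1e-12 and abs(E[1][0]) < 1e-12
```

A plain truncated series like this is numerically fragile for matrices with large norm, which is precisely why the modules discuss several alternative definitions.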
https://julia.quantecon.org/time_series_models/lu_tricks.html
# Classical Control with Linear Algebra¶ ## Overview¶ In an earlier lecture Linear Quadratic Dynamic Programming Problems we have studied how to solve a special class of dynamic optimization and prediction problems by applying the method of dynamic programming. In this class of problems • the objective function is quadratic in states and controls • the one-step transition function is linear • shocks are i.i.d. Gaussian or martingale differences In this lecture and a companion lecture Classical Filtering with Linear Algebra, we study the classical theory of linear-quadratic (LQ) optimal control problems. The classical approach does not use the two closely related methods – dynamic programming and Kalman filtering – that we describe in other lectures, namely, Linear Quadratic Dynamic Programming Problems and A First Look at the Kalman Filter. Instead, it uses either • $z$-transform and lag operator methods, or • matrix decompositions applied to linear systems of first-order conditions for optimum problems. In this lecture and the sequel Classical Filtering with Linear Algebra, we mostly rely on elementary linear algebra. The main tool from linear algebra we’ll put to work here is LU decomposition. We’ll begin with discrete horizon problems. Then we’ll view infinite horizon problems as appropriate limits of these finite horizon problems. Later, we will examine the close connection between LQ control and least squares prediction and filtering problems. These classes of problems are connected in the sense that to solve each, essentially the same mathematics is used. ### References¶ Useful references include [Whi63], [HS80], [Orf88], [AP91], and [Mut60].
### Setup¶ In [1]: using InstantiateFromURL github_project("QuantEcon/quantecon-notebooks-julia", version = "0.5.0") # github_project("QuantEcon/quantecon-notebooks-julia", version = "0.5.0", instantiate = true) # uncomment to force package installation In [2]: using Polynomials, Plots, Random, Parameters using LinearAlgebra, Statistics ## A Control Problem¶ Let $L$ be the lag operator, so that, for sequence $\{x_t\}$ we have $L x_t = x_{t-1}$. More generally, let $L^k x_t = x_{t-k}$ with $L^0 x_t = x_t$ and $$d(L) = d_0 + d_1 L+ \ldots + d_m L^m$$ where $d_0, d_1, \ldots, d_m$ is a given scalar sequence. Consider the discrete time control problem $$\max_{\{y_t\}} \lim_{N \to \infty} \sum^N_{t=0} \beta^t\, \left\{ a_t y_t - {1 \over 2}\, hy^2_t - {1 \over 2} \, \left[ d(L)y_t \right]^2 \right\}, \tag{1}$$ where • $h$ is a positive parameter and $\beta \in (0,1)$ is a discount factor • $\{a_t\}_{t \geq 0}$ is a sequence of exponential order less than $\beta^{-1/2}$, by which we mean $\lim_{t \rightarrow \infty} \beta^{\frac{t}{2}} a_t = 0$ Maximization in (1) is subject to initial conditions for $y_{-1}, y_{-2} \ldots, y_{-m}$. Maximization is over infinite sequences $\{y_t\}_{t \geq 0}$. ### Example¶ The formulation of the LQ problem given above is broad enough to encompass many useful models. As a simple illustration, recall that in lqcontrol we consider a monopolist facing stochastic demand shocks and adjustment costs. Let’s consider a deterministic version of this problem, where the monopolist maximizes the discounted sum $$\sum_{t=0}^{\infty} \beta^t \pi_t$$ and $$\pi_t = p_t q_t - c q_t - \gamma (q_{t+1} - q_t)^2 \quad \text{with} \quad p_t = \alpha_0 - \alpha_1 q_t + d_t$$ In this expression, $q_t$ is output, $c$ is average cost of production, and $d_t$ is a demand shock. The term $\gamma (q_{t+1} - q_t)^2$ represents adjustment costs. 
You will be able to confirm that the objective function can be rewritten as (1) when • $a_t := \alpha_0 + d_t - c$ • $h := 2 \alpha_1$ • $d(L) := \sqrt{2 \gamma}(I - L)$ Further examples of this problem for factor demand, economic growth, and government policy problems are given in ch. IX of [Sar87]. ## Finite Horizon Theory¶ We first study a finite $N$ version of the problem. Later we will study an infinite horizon problem solution as a limiting version of a finite horizon problem. (This will require being careful because the limits as $N \to \infty$ of the necessary and sufficient conditions for maximizing finite $N$ versions of (1) are not sufficient for maximizing (1)) We begin by 1. fixing $N > m$, 2. differentiating the finite version of (1) with respect to $y_0, y_1, \ldots, y_N$, and 3. setting these derivatives to zero For $t=0, \ldots, N-m$ these first-order necessary conditions are the Euler equations. For $t = N-m + 1, \ldots, N$, the first-order conditions are a set of terminal conditions. 
Consider the term \begin{aligned} J & = \sum^N_{t=0} \beta^t [d(L) y_t] [d(L) y_t] \\ & = \sum^N_{t=0} \beta^t \, (d_0 \, y_t + d_1 \, y_{t-1} + \cdots + d_m \, y_{t-m}) \, (d_0 \, y_t + d_1 \, y_{t-1} + \cdots + d_m\, y_{t-m}) \end{aligned} Differentiating $J$ with respect to $y_t$ for $t=0,\ 1,\ \ldots,\ N-m$ gives \begin{aligned} {\partial {J} \over \partial y_t} & = 2 \beta^t \, d_0 \, d(L)y_t + 2 \beta^{t+1} \, d_1\, d(L)y_{t+1} + \cdots + 2 \beta^{t+m}\, d_m\, d(L) y_{t+m} \\ & = 2\beta^t\, \bigl(d_0 + d_1 \, \beta L^{-1} + d_2 \, \beta^2\, L^{-2} + \cdots + d_m \, \beta^m \, L^{-m}\bigr)\, d (L) y_t\ \end{aligned} We can write this more succinctly as $${\partial {J} \over \partial y_t} = 2 \beta^t \, d(\beta L^{-1}) \, d (L) y_t \tag{2}$$ Differentiating $J$ with respect to $y_t$ for $t = N-m + 1, \ldots, N$ gives \begin{aligned} {\partial J \over \partial y_N} &= 2 \beta^N\, d_0 \, d(L) y_N \cr {\partial J \over \partial y_{N-1}} &= 2\beta^{N-1} \,\bigl[d_0 + \beta \, d_1\, L^{-1}\bigr] \, d(L)y_{N-1} \cr \vdots & \quad \quad \vdots \cr {\partial {J} \over \partial y_{N-m+1}} &= 2 \beta^{N-m+1}\,\bigl[d_0 + \beta L^{-1} \,d_1 + \cdots + \beta^{m-1}\, L^{-m+1}\, d_{m-1}\bigr] d(L)y_{N-m+1} \end{aligned} \tag{3} With these preliminaries under our belts, we are ready to differentiate (1). Differentiating (1) with respect to $y_t$ for $t=0, \ldots, N-m$ gives the Euler equations $$\bigl[h+d\,(\beta L^{-1})\,d(L)\bigr] y_t = a_t, \quad t=0,\, 1,\, \ldots, N-m \tag{4}$$ The system of equations (4) form a $2 \times m$ order linear difference equation that must hold for the values of $t$ indicated. 
Differentiating (1) with respect to $y_t$ for $t = N-m + 1, \ldots, N$ gives the terminal conditions \begin{aligned} \beta^N (a_N - hy_N - d_0\,d(L)y_N) &= 0 \cr \beta^{N-1} \left(a_{N-1}-hy_{N-1}-\Bigl(d_0 + \beta \, d_1\, L^{-1}\Bigr)\, d(L)\, y_{N-1}\right) & = 0 \cr \vdots & \vdots\cr \beta^{N-m+1} \biggl(a_{N-m+1} - h y_{N-m+1} -(d_0+\beta L^{-1} d_1+\cdots\ +\beta^{m-1} L^{-m+1} d_{m-1}) d(L) y_{N-m+1}\biggr) & = 0 \end{aligned} \tag{5} In the finite $N$ problem, we want simultaneously to solve (4) subject to the $m$ initial conditions $y_{-1}, \ldots, y_{-m}$ and the $m$ terminal conditions (5). These conditions uniquely pin down the solution of the finite $N$ problem. That is, for the finite $N$ problem, conditions (4) and (5) are necessary and sufficient for a maximum, by concavity of the objective function. Next we describe how to obtain the solution using matrix methods. ### Matrix Methods¶ Let’s look at how linear algebra can be used to tackle and shed light on the finite horizon LQ control problem. #### A Single Lag Term¶ Let’s begin with the special case in which $m=1$. We want to solve the system of $N+1$ linear equations \begin{aligned} \bigl[h & + d\, (\beta L^{-1})\, d\, (L) ] y_t = a_t, \quad t = 0,\ 1,\ \ldots,\, N-1\cr \beta^N & \bigl[a_N-h\, y_N-d_0\, d\, (L) y_N\bigr] = 0 \end{aligned} \tag{6} where $d(L) = d_0 + d_1 L$. These equations are to be solved for $y_0, y_1, \ldots, y_N$ as functions of $a_0, a_1, \ldots, a_N$ and $y_{-1}$. 
Let $$\phi (L) = \phi_0 + \phi_1 L + \beta \phi_1 L^{-1} = h + d (\beta L^{-1}) d(L) = (h + d_0^2 + \beta d_1^2) + d_1 d_0 L+ d_1 d_0 \beta L^{-1}$$ Then we can represent (6) as the matrix equation $$\left[ \begin{matrix} (\phi_0-\beta d_1^2) & \phi_1 & 0 & 0 & \ldots & \ldots & 0 \cr \beta \phi_1 & \phi_0 & \phi_1 & 0 & \ldots & \dots & 0 \cr 0 & \beta \phi_1 & \phi_0 & \phi_1 & \ldots & \ldots & 0 \cr \vdots &\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \cr 0 & \ldots & \ldots & \ldots & \beta \phi_1 & \phi_0 &\phi_1 \cr 0 & \ldots & \ldots & \ldots & 0 & \beta \phi_1 & \phi_0 \end{matrix} \right] \left [ \begin{matrix} y_N \cr y_{N-1} \cr y_{N-2} \cr \vdots \cr y_1 \cr y_0 \end{matrix} \right ] = \left[ \begin{matrix} a_N \cr a_{N-1} \cr a_{N-2} \cr \vdots \cr a_1 \cr a_0 - \phi_1 y_{-1} \end{matrix} \right] \tag{7}$$ or $$W\bar y = \bar a \tag{8}$$ Notice how we have chosen to arrange the $y_t$’s in reverse time order. The matrix $W$ on the left side of (7) is “almost” a Toeplitz matrix (where each descending diagonal is constant). There are two sources of deviation from the form of a Toeplitz matrix. 1. The first element differs from the remaining diagonal elements, reflecting the terminal condition. 2. The subdiagonal elements equal $\beta$ times the superdiagonal elements. The solution of (8) can be expressed in the form $$\bar y = W^{-1} \bar a \tag{9}$$ which represents each element $y_t$ of $\bar y$ as a function of the entire vector $\bar a$. That is, $y_t$ is a function of past, present, and future values of $a$’s, as well as of the initial condition $y_{-1}$. #### An Alternative Representation¶ An alternative way to express the solution to (7) or (8) is in so called feedback-feedforward form. The idea here is to find a solution expressing $y_t$ as a function of past $y$’s and current and future $a$’s. To achieve this solution, one can use an LU decomposition of $W$.
There always exists a decomposition of $W$ of the form $W= LU$ where • $L$ is an $(N+1) \times (N+1)$ lower triangular matrix • $U$ is an $(N+1) \times (N+1)$ upper triangular matrix. The factorization can be normalized so that the diagonal elements of $U$ are unity. Using the LU representation in (9), we obtain $$U \bar y = L^{-1} \bar a \tag{10}$$ Since $L^{-1}$ is lower triangular, this representation expresses $y_t$ as a function of • lagged $y$’s (via the term $U \bar y$), and • current and future $a$’s (via the term $L^{-1} \bar a$) Because there are zeros everywhere in the matrix on the left of (7) except on the diagonal, superdiagonal, and subdiagonal, the $LU$ decomposition takes • $L$ to be zero except on the diagonal and the leading subdiagonal • $U$ to be zero except on the diagonal and the superdiagonal Thus, (10) has the form $$\left[ \begin{matrix} 1& U_{12} & 0 & 0 & \ldots & 0 & 0 \cr 0 & 1 & U_{23} & 0 & \ldots & 0 & 0 \cr 0 & 0 & 1 & U_{34} & \ldots & 0 & 0 \cr 0 & 0 & 0 & 1 & \ldots & 0 & 0\cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\cr 0 & 0 & 0 & 0 & \ldots & 1 & U_{N,N+1} \cr 0 & 0 & 0 & 0 & \ldots & 0 & 1 \end{matrix} \right] \ \ \ \left[ \begin{matrix} y_N \cr y_{N-1} \cr y_{N-2} \cr y_{N-3} \cr \vdots \cr y_1 \cr y_0 \end{matrix} \right] =$$$$\quad \left[ \begin{matrix} L^{-1}_{11} & 0 & 0 & \ldots & 0 \cr L^{-1}_{21} & L^{-1}_{22} & 0 & \ldots & 0 \cr L^{-1}_{31} & L^{-1}_{32} & L^{-1}_{33}& \ldots & 0 \cr \vdots & \vdots & \vdots & \ddots & \vdots\cr L^{-1}_{N,1} & L^{-1}_{N,2} & L^{-1}_{N,3} & \ldots & 0 \cr L^{-1}_{N+1,1} & L^{-1}_{N+1,2} & L^{-1}_{N+1,3} & \ldots & L^{-1}_{N+1\, N+1} \end{matrix} \right] \left[ \begin{matrix} a_N \cr a_{N-1} \cr a_{N-2} \cr \vdots \cr a_1 \cr a_0 - \phi_1 y_{-1} \end{matrix} \right ]$$ where $L^{-1}_{ij}$ is the $(i,j)$ element of $L^{-1}$ and $U_{ij}$ is the $(i,j)$ element of $U$.
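The structure claimed above (tridiagonal $W$, bidiagonal $L$ and $U$) can be checked directly. The sketch below is mine: it uses illustrative parameter values for the $m=1$ case and a hand-rolled Crout factorization (unit diagonal on $U$) in place of a library routine:

```python
# Build W for m = 1 (illustrative parameter values, not from the lecture's
# code) and verify that its LU factors are bidiagonal, so that U ybar = L^{-1} abar
# links each y_t to a single lagged y and to current/future a's.
h, d0, d1, beta = 1.0, 1.0, 0.5, 0.95
phi0 = h + d0 ** 2 + beta * d1 ** 2
phi1 = d0 * d1
N = 5                       # horizon; W is (N+1) x (N+1)
n = N + 1
W = [[0.0] * n for _ in range(n)]
for t in range(n):
    W[t][t] = phi0
    if t + 1 < n:
        W[t][t + 1] = phi1          # superdiagonal
    if t - 1 >= 0:
        W[t][t - 1] = beta * phi1   # subdiagonal = beta * superdiagonal
W[0][0] = phi0 - beta * d1 ** 2     # terminal condition changes the corner

def crout_lu(A):
    """Crout LU with unit diagonal on U (no pivoting; fine for this W)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for i in range(k, n):
            L[i][k] = A[i][k] - sum(L[i][s] * U[s][k] for s in range(k))
        U[k][k] = 1.0
        for j in range(k + 1, n):
            U[k][j] = (A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))) / L[k][k]
    return L, U

L, U = crout_lu(W)
for i in range(n):
    for j in range(n):
        # Reconstruction: L @ U == W
        prod = sum(L[i][s] * U[s][j] for s in range(n))
        assert abs(prod - W[i][j]) < 1e-10
        # L lower bidiagonal, U upper bidiagonal, as claimed in the text
        if j > i or i - j > 1:
            assert abs(L[i][j]) < 1e-12
        if i > j or j - i > 1:
            assert abs(U[i][j]) < 1e-12
```

Because $W$ is strictly diagonally dominant for these values, the factorization succeeds without pivoting; a production implementation would of course pivot or call a library tridiagonal solver.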
Note how the left side for a given $t$ involves $y_t$ and one lagged value $y_{t-1}$ while the right side involves all future values of the forcing process $a_t, a_{t+1}, \ldots, a_N$. We briefly indicate how this approach extends to the problem with $m > 1$. Assume that $\beta = 1$ and let $D_{m+1}$ be the $(m+1) \times (m+1)$ symmetric matrix whose elements are determined from the following formula: $$D_{jk} = d_0 d_{k-j} + d_1 d_{k-j+1} + \ldots + d_{j-1} d_{k-1}, \qquad k \geq j$$ Let $I_{m+1}$ be the $(m+1) \times (m+1)$ identity matrix. Let $\phi_j$ be the coefficients in the expansion $\phi (L) = h + d (L^{-1}) d (L)$. Then the first order conditions (4) and (5) can be expressed as: $$(D_{m+1} + hI_{m+1})\ \ \left[ \begin{matrix} y_N \cr y_{N-1} \cr \vdots \cr y_{N-m} \end{matrix} \right]\ = \ \left[ \begin{matrix} a_N \cr a_{N-1} \cr \vdots \cr a_{N-m} \end{matrix} \right] + M\ \left[ \begin{matrix} y_{N-m+1}\cr y_{N-m-2}\cr \vdots\cr y_{N-2m} \end{matrix} \right]$$ where $M$ is $(m+1)\times m$ and $$M_{ij} = \begin{cases} D_{i-j,\,m+1} \textrm{ for } i>j \\ 0 \textrm{ for } i\leq j\end{cases}$$ The remaining Euler equations, written out, are \begin{aligned} \phi_m y_{N-1} &+ \phi_{m-1} y_{N-2} + \ldots + \phi_0 y_{N-m-1} + \phi_1 y_{N-m-2} +\cr &\hskip.75in \ldots + \phi_m y_{N-2m-1} = a_{N-m-1} \cr \phi_m y_{N-2} &+ \phi_{m-1} y_{N-3} + \ldots + \phi_0 y_{N-m-2} + \phi_1 y_{N-m-3} +\cr &\hskip.75in \ldots + \phi_m y_{N-2m-2} = a_{N-m-2} \cr &\qquad \vdots \cr \phi_m y_{m+1} &+ \phi_{m-1} y_m + \ldots + \phi_0 y_1 + \phi_1 y_0 + \phi_m y_{-m+1} = a_1 \cr \phi_m y_m + \phi_{m-1}& y_{m-1} + \phi_{m-2} y_{m-2} + \ldots + \phi_0 y_0 + \phi_1 y_{-1} + \ldots + \phi_m y_{-m} = a_0 \end{aligned} As before, we can express this equation as $W \bar y = \bar a$. The matrix on the left of this equation is “almost” Toeplitz, the exception being the leading $m \times m$ submatrix in the upper left hand corner.
We can represent the solution in feedback-feedforward form by obtaining a decomposition $LU = W$, and obtain $$U \bar y = L^{-1} \bar a \tag{11}$$ or, written out element by element, \begin{aligned} \sum^t_{j=0}\, U_{-t+N+1,\,-t+N+j+1}\,y_{t-j} &= \sum^{N-t}_{j=0}\, L^{-1}_{-t+N+1,\, -t+N+1-j}\, \bar a_{t+j}\ ,\cr &\qquad t=0,1,\ldots, N \end{aligned} where $L^{-1}_{t,s}$ is the element in the $(t,s)$ position of $L^{-1}$, and similarly for $U$. The left side of equation (11) is the “feedback” part of the optimal control law for $y_t$, while the right-hand side is the “feedforward” part. We note that there is a different control law for each $t$. Thus, in the finite horizon case, the optimal control law is time dependent. It is natural to suspect that as $N \rightarrow\infty$, (11) becomes equivalent to the solution of our infinite horizon problem, which below we shall show can be expressed as $$c(L) y_t = c (\beta L^{-1})^{-1} a_t\ ,$$ so that as $N \rightarrow \infty$ we expect that for each fixed $t, U^{-1}_{t, t-j} \rightarrow c_j$ and $L_{t,t+j}$ approaches the coefficient on $L^{-j}$ in the expansion of $c(\beta L^{-1})^{-1}$. This suspicion is true under general conditions that we shall study later. For now, we note that by creating the matrix $W$ for large $N$ and factoring it into the $LU$ form, good approximations to $c(L)$ and $c(\beta L^{-1})^{-1}$ can be obtained. ## The Infinite Horizon Limit¶ For the infinite horizon problem, we propose to discover first-order necessary conditions by taking the limits of (4) and (5) as $N \to \infty$. This approach is valid, and the limits of (4) and (5) as $N$ approaches infinity are first-order necessary conditions for a maximum. However, for the infinite horizon problem with $\beta < 1$, the limits of (4) and (5) are, in general, not sufficient for a maximum. That is, the limits of (5) do not provide enough information uniquely to determine the solution of the Euler equation (4) that maximizes (1).
As we shall see below, a side condition on the path of $y_t$ that together with (4) is sufficient for an optimum is $$\sum^\infty_{t=0}\ \beta^t\, hy^2_t < \infty \tag{12}$$ All paths that satisfy the Euler equations, except the one that we shall select below, violate this condition and, therefore, evidently lead to (much) lower values of (1) than does the optimal path selected by the solution procedure below. Consider the characteristic equation associated with the Euler equation $$h+d \, (\beta z^{-1})\, d \, (z) = 0 \tag{13}$$ Notice that if $\tilde z$ is a root of equation (13), then so is $\beta \tilde z^{-1}$. Thus, the roots of (13) come in “$\beta$-reciprocal” pairs. Assume that the roots of (13) are distinct. Let the roots be, in descending order according to their moduli, $z_1, z_2, \ldots, z_{2m}$. From the reciprocal pairs property and the assumption of distinct roots, it follows that $\vert z_j \vert > \sqrt \beta\ \hbox{ for } j\leq m \hbox { and } \vert z_j \vert < \sqrt\beta\ \hbox { for } j > m$. It also follows that $z_{2m-j} = \beta z^{-1}_{j+1}, j=0, 1, \ldots, m-1$. Therefore, the characteristic polynomial on the left side of (13) can be expressed as \begin{aligned} h+d(\beta z^{-1})d(z) &= z^{-m} z_0(z-z_1)\cdots (z-z_m)(z-z_{m+1}) \cdots (z-z_{2m}) \cr &= z^{-m} z_0 (z-z_1)(z-z_2)\cdots (z-z_m)(z-\beta z_m^{-1}) \cdots (z-\beta z^{-1}_2)(z-\beta z_1^{-1}) \end{aligned} \tag{14} where $z_0$ is a constant. 
In (14), we substitute $(z-z_j) = -z_j (1- {1 \over z_j}z)$ and $(z-\beta z_j^{-1}) = z(1 - {\beta \over z_j} z^{-1})$ for $j = 1, \ldots, m$ to get $$h+d(\beta z^{-1})d(z) = (-1)^m(z_0z_1\cdots z_m) (1- {1\over z_1} z) \cdots (1-{1\over z_m} z)(1- {1\over z_1} \beta z^{-1}) \cdots(1-{1\over z_m} \beta z^{-1})$$ Now define $c(z) = \sum^m_{j=0} c_j \, z^j$ as $$c\,(z)=\Bigl[(-1)^m z_0\, z_1 \cdots z_m\Bigr]^{1/2} (1-{z\over z_1}) \, (1-{z\over z_2}) \cdots (1- {z\over z_m}) \tag{15}$$ Notice that (14) can be written $$h + d \ (\beta z^{-1})\ d\ (z) = c\,(\beta z^{-1})\,c\,(z) \tag{16}$$ It is useful to write (15) as $$c(z) = c_0(1-\lambda_1\, z) \ldots (1-\lambda_m z) \tag{17}$$ where $$c_0 = \left[(-1)^m\, z_0\, z_1 \cdots z_m\right]^{1/2}; \quad \lambda_j={1 \over z_j},\,\ j=1, \ldots, m$$ Since $\vert z_j \vert > \sqrt \beta \hbox { for } j = 1, \ldots, m$ it follows that $\vert \lambda_j \vert < 1/\sqrt \beta$ for $j = 1, \ldots, m$. Using (17), we can express the factorization (16) as $$h+d (\beta z^{-1})d(z) = c^2_0 (1-\lambda_1 z) \cdots (1 - \lambda_m z) (1-\lambda_1 \beta z^{-1}) \cdots (1 - \lambda_m \beta z^{-1})$$ In sum, we have constructed a factorization (16) of the characteristic polynomial for the Euler equation in which the zeros of $c(z)$ exceed $\beta^{1/2}$ in modulus, and the zeros of $c\,(\beta z^{-1})$ are less than $\beta^{1/2}$ in modulus. Using (16), we now write the Euler equation as $$c(\beta L^{-1})\,c\,(L)\, y_t = a_t$$ The unique solution of the Euler equation that satisfies condition (12) is $$c(L)\,y_t = c\,(\beta L^{-1})^{-1}a_t \tag{18}$$ This can be established by using an argument paralleling that in chapter IX of [Sar87]. 
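For $m = 1$ the whole construction can be carried out explicitly. The following sketch (mine, with arbitrary illustrative values of $h$, $d_0$, $d_1$, $\beta$) finds the roots of the characteristic equation, confirms the $\beta$-reciprocal pairing, builds $c(z)$, and spot-checks the factorization (16):

```python
# Check the beta-reciprocal root pairing and the factorization
# h + d(beta z^-1) d(z) = c(beta z^-1) c(z) for m = 1 (illustrative values).
import cmath

h, d0, d1, beta = 1.0, 1.0, 0.5, 0.95
phi0 = h + d0 ** 2 + beta * d1 ** 2
phi1 = d0 * d1

def char_poly(z):
    """h + d(beta/z) d(z) with d(z) = d0 + d1 z."""
    return h + (d0 + d1 * beta / z) * (d0 + d1 * z)

# Multiplying char_poly by z gives the quadratic phi1 z^2 + phi0 z + beta phi1 = 0.
disc = cmath.sqrt(phi0 ** 2 - 4 * beta * phi1 ** 2)
z1 = (-phi0 - disc) / (2 * phi1)
z2 = (-phi0 + disc) / (2 * phi1)
if abs(z1) < abs(z2):
    z1, z2 = z2, z1                  # z1 is the larger-modulus root
assert abs(z1 * z2 - beta) < 1e-12   # roots come in beta-reciprocal pairs
assert abs(z1) > beta ** 0.5 > abs(z2)

lam = 1 / z1                         # lambda_1 = 1/z_1
c0_sq = phi0 / (1 + beta * lam ** 2) # c_0^2, from matching constant terms

def c(z):
    return cmath.sqrt(c0_sq) * (1 - lam * z)

for z in [0.7, 1.3, -0.4, 2.1]:      # spot-check the factorization (16)
    assert abs(c(beta / z) * c(z) - char_poly(z)) < 1e-10
```

The same exercise for $m > 1$ only requires a polynomial root finder in place of the quadratic formula.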
To exhibit the solution in a form paralleling that of [Sar87], we use (17) to write (18) as $$(1-\lambda_1 L) \cdots (1 - \lambda_mL)y_t = {c^{-2}_0 a_t \over (1-\beta \lambda_1 L^{-1}) \cdots (1 - \beta \lambda_m L^{-1})} \tag{19}$$ Using partial fractions, we can write the characteristic polynomial on the right side of (19) as $$\sum^m_{j=1} {A_j \over 1 - \lambda_j \, \beta L^{-1}} \quad \text{where} \quad A_j := {c^{-2}_0 \over \prod_{i \not= j}(1-{\lambda_i \over \lambda_j})}$$ Then (19) can be written $$(1-\lambda_1 L) \cdots (1-\lambda_m L) y_t = \sum^m_{j=1} \, {A_j \over 1 - \lambda_j \, \beta L^{-1}} a_t$$ or $$(1 - \lambda_1 L) \cdots (1 - \lambda_m L) y_t = \sum^m_{j=1}\, A_j \sum^\infty_{k=0}\, (\lambda_j\beta)^k\, a_{t+k} \tag{20}$$ Equation (20) expresses the optimum sequence for $y_t$ in terms of $m$ lagged $y$’s, and $m$ weighted infinite geometric sums of future $a_t$’s. Furthermore, (20) is the unique solution of the Euler equation that satisfies the initial conditions and condition (12). In effect, condition (12) compels us to solve the “unstable” roots of $h+d (\beta z^{-1})d(z)$ forward (see [Sar87]). The step of factoring the polynomial $h+d (\beta z^{-1})\, d(z)$ into $c\, (\beta z^{-1})c\,(z)$, where the zeros of $c\,(z)$ all have modulus exceeding $\sqrt\beta$, is central to solving the problem. We note two features of the solution (20) • Since $\vert \lambda_j \vert < 1/\sqrt \beta$ for all $j$, it follows that $(\lambda_j \ \beta) < \sqrt \beta$. • The assumption that $\{ a_t \}$ is of exponential order less than $1 /\sqrt \beta$ is sufficient to guarantee that the geometric sums of future $a_t$’s on the right side of (20) converge. We immediately see that those sums will converge under the weaker condition that $\{ a_t\}$ is of exponential order less than $\phi^{-1}$ where $\phi = \max \, \{\beta \lambda_i, i=1,\ldots,m\}$. 
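The partial-fractions step can likewise be spot-checked numerically. A sketch for $m = 2$ with made-up values of $\lambda_1$, $\lambda_2$, $\beta$, $c_0$ (none taken from the lecture):

```python
# Numerical check of the partial-fractions identity used between (19) and (20):
# c0^-2 / prod_j (1 - beta lam_j z^-1)  ==  sum_j A_j / (1 - beta lam_j z^-1),
# with A_j = c0^-2 / prod_{i != j} (1 - lam_i / lam_j).  Illustrative values.
lam = [0.5, -0.3]
beta, c0 = 0.95, 1.2

def A(j):
    prod = 1.0
    for i in range(len(lam)):
        if i != j:
            prod *= 1 - lam[i] / lam[j]
    return c0 ** -2 / prod

for zinv in [0.4, -0.8, 1.5]:        # sample values of z^{-1}
    lhs = c0 ** -2
    for l in lam:
        lhs /= 1 - beta * l * zinv
    rhs = sum(A(j) / (1 - beta * lam[j] * zinv) for j in range(len(lam)))
    assert abs(lhs - rhs) < 1e-12
```

Evaluating at $z^{-1}=0$ also shows $\sum_j A_j = c_0^{-2}$, a handy consistency check on the $A_j$ formula.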
Note that with $a_t$ identically zero, (20) implies that in general $\vert y_t \vert$ eventually grows exponentially at a rate given by $\max_i \vert \lambda_i \vert$. The condition $\max_i \vert \lambda_i \vert <1 /\sqrt \beta$ guarantees that condition (12) is satisfied. In fact, $\max_i \vert \lambda_i \vert < 1 /\sqrt \beta$ is a necessary condition for (12) to hold. Were (12) not satisfied, the objective function would diverge to $- \infty$, implying that the $y_t$ path could not be optimal. For example, with $a_t = 0$, for all $t$, it is easy to describe a naive (nonoptimal) policy for $\{y_t, t\geq 0\}$ that gives a finite value of (1). We can simply let $y_t = 0 \hbox { for } t\geq 0$. This policy involves at most $m$ nonzero values of $hy^2_t$ and $[d(L)y_t]^2$, and so yields a finite value of (1). Therefore it is easy to dominate a path that violates (12). ## Undiscounted Problems¶ It is worthwhile focusing on a special case of the LQ problems above: the undiscounted problem that emerges when $\beta = 1$. In this case, the Euler equation is $$\Bigl( h + d(L^{-1})d(L) \Bigr)\, y_t = a_t$$ The factorization of the characteristic polynomial (16) becomes $$\Bigl(h+d \, (z^{-1})d(z)\Bigr) = c\,(z^{-1})\, c\,(z)$$ where \begin{aligned} c\,(z) &= c_0 (1 - \lambda_1 z) \ldots (1 - \lambda_m z) \cr c_0 &= \Bigl[(-1)^m z_0 z_1 \ldots z_m\Bigr]^{1/2} \cr \vert \lambda_j \vert &< 1 \, \hbox { for } \, j = 1, \ldots, m\cr \lambda_j &= \frac{1}{z_j} \hbox{ for } j=1,\ldots, m\cr z_0 &= \hbox{ constant} \end{aligned} The solution of the problem becomes $$(1 - \lambda_1 L) \cdots (1 - \lambda_m L) y_t = \sum^m_{j=1} A_j \sum^\infty_{k=0} \lambda^k_j a_{t+k}$$ ### Transforming discounted to undiscounted problem¶ Discounted problems can always be converted into undiscounted problems via a simple transformation. Consider problem (1) with $0 < \beta < 1$.
Define the transformed variables $$\tilde a_t = \beta^{t/2} a_t,\ \tilde y_t = \beta^{t/2} y_t \tag{21}$$ Then notice that $\beta^t\,[d\, (L) y_t ]^2=[\tilde d\,(L)\tilde y_t]^2$ with $\tilde d \,(L)=\sum^m_{j=0} \tilde d_j\, L^j$ and $\tilde d_j = \beta^{j/2} d_j$. Then the original criterion function (1) is equivalent to $$\lim_{N \rightarrow \infty} \sum^N_{t=0} \{\tilde a_t\, \tilde y_t - {1 \over 2} h\,\tilde y^2_t - {1\over 2} [ \tilde d\,(L)\, \tilde y_t]^2 \} \tag{22}$$ which is to be maximized over sequences $\{\tilde y_t,\ t=0, \ldots\}$ subject to $\tilde y_{-1}, \cdots, \tilde y_{-m}$ given and $\{\tilde a_t,\ t=1, \ldots\}$ a known bounded sequence. The Euler equation for this problem is $[h+\tilde d \,(L^{-1}) \, \tilde d\, (L) ]\, \tilde y_t = \tilde a_t$. The solution is $$(1 - \tilde \lambda_1 L) \cdots (1 - \tilde \lambda_m L)\,\tilde y_t = \sum^m_{j=1} \tilde A_j \sum^\infty_{k=0} \tilde \lambda^k_j \, \tilde a_{t+k}$$ or $$\tilde y_t = \tilde f_1 \, \tilde y_{t-1} + \cdots + \tilde f_m\, \tilde y_{t-m} + \sum^m_{j=1} \tilde A_j \sum^\infty_{k=0} \tilde \lambda^k_j \, \tilde a_{t+k}, \tag{23}$$ where $\tilde c \,(z^{-1}) \tilde c\,(z) = h + \tilde d\,(z^{-1}) \tilde d \,(z)$, and where $$\bigl[(-1)^m\, \tilde z_0 \tilde z_1 \ldots \tilde z_m \bigr]^{1/2} (1 - \tilde \lambda_1\, z) \ldots (1 - \tilde \lambda_m\, z) = \tilde c\,(z), \hbox { where } \ \vert \tilde \lambda_j \vert < 1$$ We leave it to the reader to show that (23) implies the equivalent form of the solution $$y_t = f_1\, y_{t-1} + \cdots + f_m\, y_{t-m} + \sum^m_{j=1} A_j \sum^\infty_{k=0} \, (\lambda_j\, \beta)^k \, a_{t+k}$$ where $$f_j = \tilde f_j\, \beta^{-j/2},\ A_j = \tilde A_j,\ \lambda_j = \tilde \lambda_j \, \beta^{-1/2} \tag{24}$$ The transformations (21) and the inverse formulas (24) allow us to solve a discounted problem by first solving a related undiscounted problem. 
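The identity underlying (21), namely $\beta^t[d(L)y_t]^2 = [\tilde d(L)\tilde y_t]^2$, holds path by path and is easy to verify on made-up data (the $d$ coefficients and the $y$ path below are arbitrary):

```python
# Spot-check beta^t [d(L) y_t]^2 = [dtil(L) ytil_t]^2, where
# dtil_j = beta^(j/2) d_j and ytil_t = beta^(t/2) y_t.  Values are illustrative.
beta = 0.9
d = [1.0, -0.4, 0.25]                  # d_0, d_1, d_2 (so m = 2)
y = [0.3, -1.1, 2.0, 0.7, -0.5, 1.4]   # y_{-2}, y_{-1}, y_0, y_1, y_2, y_3

m = len(d) - 1
dtil = [beta ** (j / 2) * d[j] for j in range(m + 1)]
ytil = [beta ** ((s - m) / 2) * y[s] for s in range(len(y))]  # index m is t = 0

for idx in range(m, len(y)):           # positions where all m lags exist
    t = idx - m                        # calendar time
    dLy = sum(d[j] * y[idx - j] for j in range(m + 1))
    dLytil = sum(dtil[j] * ytil[idx - j] for j in range(m + 1))
    assert abs(beta ** t * dLy ** 2 - dLytil ** 2) < 1e-12
```

The algebra behind the check is one line: $\tilde d(L)\tilde y_t = \sum_j \beta^{j/2} d_j \beta^{(t-j)/2} y_{t-j} = \beta^{t/2} d(L) y_t$.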
## Implementation¶ Code that computes solutions to the LQ problem using the methods described above can be found in file control_and_filter.jl. Here’s how it looks In [3]: function LQFilter(d, h, y_m; r = nothing, β = nothing, h_eps = nothing) m = length(d) - 1
http://math.stackexchange.com/questions/69488/is-this-determinant-equal-to-1
# Is this determinant equal to 1 Let $V$ be a finite dimensional vector space over $\mathbf{C}$ with a hermitian inner product. Let $e=(e_1,\ldots,e_n)^t$ and $f=(f_1,\ldots,f_n)^t$ be orthonormal bases for $V$. There is a matrix $A$ such that $e =A f$. Is $\det A = 1$? - Do you mean $e_i=Af_i,\;i=1,2,\dots,n$? –  anon Oct 3 '11 at 8:34 No. $e_i = \sum_{j=1}^n a_{ij} f_j$ and $A= (a_{ij})$. –  shaye Oct 3 '11 at 11:36 Homan: how are they each an orthonormal basis for $V$ (which I assume is $n$-dimensional) if they're each only a single vector? –  anon Oct 3 '11 at 11:40 For real vector spaces and two bases of the same orientation this will be true. –  Mark Oct 3 '11 at 14:38 No, as a counterexample, take the matrix $$A = \left(\begin{array}{cc}1 & 0 \\ 0 & -1\end{array}\right) \; .$$ And take for $f$ the standard basis $$f_1=\left(\begin{array}{c} 1 \\ 0 \end{array}\right) \; , \; f_2=\left(\begin{array}{c} 0 \\ 1 \end{array}\right) \; .$$ Clearly, the determinant of $A$ is $-1$. - It may be worth noting that, however, $|\det A|=1$, that is, $A$ is a unitary matrix. –  joriki Oct 3 '11 at 9:00 Indeed. Basically, the matrices Homan defines are the unitary matrices, i.e. $U(n)$ . –  Raskolnikov Oct 3 '11 at 9:01 So in general the determinant of $A$ is a complex number of modulus $1$, right? –  shaye Oct 3 '11 at 11:18 That's indeed right. –  Raskolnikov Oct 3 '11 at 11:28
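To make the thread's conclusion concrete, here is a small numerical check (the example matrices are mine): a change of basis between orthonormal bases is unitary, so its determinant is a complex number of modulus $1$, though not necessarily $1$ itself:

```python
import cmath
import math

def det2(A):
    """Determinant of a 2x2 matrix given as nested lists."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# The counterexample from the answer: determinant is -1, modulus 1.
A = [[1, 0], [0, -1]]
assert det2(A) == -1 and abs(det2(A)) == 1

# A generic 2x2 unitary: a rotation with phases, then a phase on one column.
t, a = 0.7, 1.1
U = [[cmath.exp(1j * a) * math.cos(t), -math.sin(t)],
     [math.sin(t), cmath.exp(-1j * a) * math.cos(t)]]
p = cmath.exp(0.3j)
U[0][0] *= p
U[1][0] *= p                 # columns stay orthonormal; det picks up the phase p

col0 = (U[0][0], U[1][0])
col1 = (U[0][1], U[1][1])
inner = col0[0] * col1[0].conjugate() + col0[1] * col1[1].conjugate()
assert abs(inner) < 1e-12                                    # columns orthogonal
assert abs(abs(col0[0]) ** 2 + abs(col0[1]) ** 2 - 1) < 1e-12  # unit length
detU = det2(U)
assert abs(detU - p) < 1e-12       # det is the phase e^{0.3 i}, not 1
assert abs(abs(detU) - 1) < 1e-12  # but |det| = 1, as noted in the comments
```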
http://math.stackexchange.com/questions/42697/what-is-the-name-of-this-number-is-it-transcendental
# what is the name of this number? is it transcendental? Consider the number with binary or decimal expansion 0.011010100010100010100... that is, the $n$'th entry is $1$ iff $n$ is prime and zero else. This number is clearly irrational. Is it known whether it is transcendental? - Wiki lists it as a "suspected transcendental" en.wikipedia.org/wiki/List_of_numbers#Suspected_transcendentals – Dan Brumleve Jun 2 '11 at 2:02 @Graham Enos: You mean of a transcendental number, in which case the answer is yes. By tradition, the first incommensurability proof involved $\sqrt{2}$, though some have argued it might have been the so-called golden number. – André Nicolas Jun 2 '11 at 2:23 @Dan, every number not known to be algebraic is suspected transcendental. – Gerry Myerson Jun 2 '11 at 3:10 I nominate Jonas to answer this because he found the name. – Dan Brumleve Jun 2 '11 at 3:30 I initially found it by googling "0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1", which brought me this link to the CRC concise encyclopedia of mathematics with references to OEIS. I could also have entered a similar search on oeis.org. I saw sequence A010051, the characteristic function of the primes. One of the cross references there is sequence A051006, also referenced in the encyclopedia article, which is the decimal expansion of the "prime constant", with that name given. Another Google search with name in hand brings up the Wikipedia article.
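For reference, the constant is easy to compute directly from its definition; a short sketch:

```python
# The "prime constant": binary expansion has a 1 in position n iff n is prime.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# First bits match the expansion quoted in the question.
bits = "".join("1" if is_prime(n) else "0" for n in range(1, 20))
print("0." + bits)        # → 0.0110101000101000101

# Decimal value: sum of 2^-p over primes p (truncation error < 2^-60 here).
value = sum(2.0 ** -n for n in range(1, 60) if is_prime(n))
print(round(value, 6))    # → 0.414683
```

The decimal value agrees with OEIS A051006, the sequence cited in the answer.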
https://questioncove.com/updates/4ea7968ce4b0a7d5142133b2
Mathematics

OpenStudy (anonymous): how do you find the inverse of [5] in $\mathbb{Z}_{10}$?
OpenStudy (anonymous): what is the gcd(5, 10)?
OpenStudy (anonymous): [5] $\in \mathbb{Z}_{10}$, express as [b] where $1 \le b < m$
OpenStudy (anonymous): 5
OpenStudy (anonymous): So, because the gcd(5,10) is not 1, 5 doesn't have an inverse mod 10.
OpenStudy (anonymous): i guess a better way to put it is, there is no solution to the equation $5x \equiv 1 \mod 10$, since the gcd of 5 and 10 is 5, and 5 doesn't divide 1.
jimthompson5910 (jim_thompson5910): you can see that there is no inverse to 5 since there are no solutions to 5x = 1 (mod 10)
jimthompson5910 (jim_thompson5910): 5x will either be equal to 5 mod 10 or it will be equal to 0 mod 10
OpenStudy (anonymous): gcd=1
OpenStudy (anonymous): so there is an inverse. now we gotta find it lol >.<
OpenStudy (anonymous): do you know about Bezout's Identity by any chance? if you have gcd(a,b)=1, then there exist some integers x, y such that $ax+by=1$? this can help you find inverses sometimes.
OpenStudy (anonymous): LOL, ya but how, i've been trying to understand this through the slides, but they are very vague, and a lot of blanks.... so help is needed
OpenStudy (anonymous): ya
jimthompson5910 (jim_thompson5910):
5x = 1 (mod 47)
10*5x = 10*1 (mod 47)
50x = 10 (mod 47)
3x = 10 (mod 47)
16*3x = 16*10 (mod 47)
48x = 160 (mod 47)
1x = 160 (mod 47)
x = 160 (mod 47)
x = 19 (mod 47)
So the inverse of 5 mod 47 is 19 mod 47
OpenStudy (anonymous): So because (5,47)=1, there exists an x and y such that $5x+47y=1$. if we can find those integers x and y, then x would be the inverse. ....oh wow jim's way is so much faster lol
jimthompson5910 (jim_thompson5910): it depends on the values really, but in general I find it to be much faster
OpenStudy (anonymous): jim, i dont understand what you did :( where did all those numbers come from
OpenStudy (anonymous): Sometimes you can eyeball it. Note that 47*2=94, which is one shy of a multiple of 5. 19*5=95, which is congruent to 1 mod 47. so the inverse is 19.
jimthompson5910 (jim_thompson5910): I started with 5x = 1 (mod 47) and then multiplied both sides by 10 because I wanted to get that 5 as close as possible to 47 (since 50 is really close to 47)
jimthompson5910 (jim_thompson5910): after doing that, I got 50x = 10 (mod 47) which reduces to 3x = 10 (mod 47) (since 50 = 3 (mod 47))
jimthompson5910 (jim_thompson5910): I then repeated those last steps to convert 3x = 10 (mod 47) into x = 19 (mod 47)
jimthompson5910 (jim_thompson5910): the beauty of the method is that the left side coefficient will reduce in magnitude since you're getting closer to the modulus with each iteration
jimthompson5910 (jim_thompson5910): after a certain number of steps, the coefficient will be 1 leaving you with 1x = k (mod n)
OpenStudy (anonymous): ahhhh i see now, thx
jimthompson5910 (jim_thompson5910): yeah it's a bit weird and takes some time to get used to
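Both cases from the thread (5 has no inverse mod 10; the inverse of 5 mod 47 is 19) can be checked with the extended Euclidean algorithm behind Bezout's Identity. A minimal Python sketch (the function names are mine):

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b) (Bezout's Identity).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    # a has an inverse mod m iff gcd(a, m) == 1.
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        return None
    return x % m

print(mod_inverse(5, 10))  # None: gcd(5, 10) = 5, so no inverse exists
print(mod_inverse(5, 47))  # 19, matching the thread
```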
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-6-rational-exponents-and-radical-functions-6-3-perform-function-operations-and-composition-6-3-exercises-problem-solving-page-434/46a
## Algebra 2 (1st Edition)

The distance from A to D is $20-x$. Thus the time he needs to cover that distance is the distance divided by the speed, hence $r(x)=\frac{20-x}{6.4}$. The distance from D to B is $\sqrt{12^2+x^2}=\sqrt{144+x^2}$ by the Pythagorean Theorem. Thus the time he needs to cover that distance is the distance divided by the speed, hence $s(x)=\frac{\sqrt{144+x^2}}{0.9}$.
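As a quick numerical sanity check, the two time functions can be evaluated at a sample point (the function names and the value $x = 5$ below are my own choices, not from the textbook):

```python
import math

def time_AD(x):
    # Time from A to D: distance 20 - x at speed 6.4.
    return (20 - x) / 6.4

def time_DB(x):
    # Time from D to B: hypotenuse sqrt(12^2 + x^2) at speed 0.9.
    return math.hypot(12, x) / 0.9

x = 5
print(time_AD(x))  # (20 - 5) / 6.4 = 2.34375
print(time_DB(x))  # sqrt(144 + 25) / 0.9 = 13 / 0.9, about 14.44
print(time_AD(x) + time_DB(x))  # total travel time at this x
```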
https://math.stackexchange.com/questions/996831/product-to-vertices-in-triangle-maximal/1380954#1380954
# Product to vertices in triangle maximal

Suppose we're given a triangle $ABC$. At which interior point $T$ is the product of distances $|AT|\cdot |BT|\cdot |CT|$ maximal? Is it a known point, like the centroid or incenter?

- Distance? You mean the product $|AT| \cdot |BT| \cdot |CT|$? (where $|AB|$ is the length of the segment $AB$) Oct 29 '14 at 15:23
- @Irvan That's right Oct 29 '14 at 15:24

Such a point $T$ doesn't exist! Identify the Euclidean plane $\mathbb{R}^2$ with the complex plane $\mathbb{C}$. Let $t, a, b, c \in \mathbb{C}$ correspond to $T, A, B, C$ respectively. We have $$|AT||BT||CT| = |t-a||t-b||t-c| = |(t-a)(t-b)(t-c)|$$ As a function of $t$, the RHS is the modulus of a non-constant entire function on $\mathbb{C}$. By the maximum modulus principle, it cannot have a local maximum anywhere on $\mathbb{C}$. In the language of geometry on $\mathbb{R}^2$, there is no point $T$ which locally maximizes the expression $|AT||BT||CT|$.

The closest thing one can have is two saddle points (counting multiplicity) corresponding to the two roots of the quadratic polynomial: $$\frac{d}{dt}\left((t-a)(t-b)(t-c)\right) = 3t^2 - 2(a+b+c)t + (ab+bc+ca) = 0$$ Marden's theorem tells us these two roots are the foci of the Steiner inellipse, the unique ellipse tangent to the sides of triangle $ABC$ at their midpoints.

- Does this hold for two points or four points? Jan 16 '20 at 21:32
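The two saddle points in the answer can be computed directly as the roots of the quadratic $p'(t)$. A short Python sketch using the quadratic formula over the complex numbers (the triangle's vertices below are an arbitrary example):

```python
import cmath

# Vertices of the triangle as complex numbers (arbitrary example).
a, b, c = 0 + 0j, 4 + 0j, 1 + 3j

def dp(t):
    # p'(t) for p(t) = (t-a)(t-b)(t-c):
    # p'(t) = 3t^2 - 2(a+b+c)t + (ab+bc+ca)
    return 3 * t**2 - 2 * (a + b + c) * t + (a * b + b * c + c * a)

# Roots of p'(t) = 0: the saddle points of |p|, i.e. the foci of the
# Steiner inellipse by Marden's theorem.
s = a + b + c
q = a * b + b * c + c * a
disc = cmath.sqrt(4 * s * s - 12 * q)
f1, f2 = (2 * s + disc) / 6, (2 * s - disc) / 6

print(f1, f2)
print(abs(dp(f1)), abs(dp(f2)))  # both essentially 0
```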
https://www.lmfdb.org/knowledge/show/mf.siegel.hecke_operator
show · mf.siegel.hecke_operator

We define $G = \GSp^+(2g,\Q)$ by $$G = \{\gamma\in \GL_{2g}(\Q):\gamma^t J\gamma=r(\gamma)J\text{ for some }r(\gamma)\in\Bbb{Q}_+\}.$$ Let $L(\Gamma,G)$ be the free $\C$-module generated by the right cosets $\Gamma\alpha$ where $\alpha\in\Gamma \backslash G$. Note $\Gamma$ acts on $L(\Gamma,G)$ by right multiplication and we set $\mathcal{H}_g(\Gamma,G)=L(\Gamma,G)^\Gamma$. Let $T_1,T_2\in \mathcal{H}_g(\Gamma,G)$ with $T_i = \sum_{\alpha \in \Gamma \backslash G} c_i(\alpha) \Gamma\alpha.$ Then $T_1 T_2 = \sum_{\alpha,\alpha'\in \Gamma \backslash G} c_1(\alpha)c_2(\alpha')\Gamma\alpha\alpha'$.

As in the classical case, we pay most attention to the Hecke operators at a prime $p$. It is known that $\mathcal{H}_g = \bigotimes_{p\text{ prime}} \mathcal{H}_{g,p}$, where the construction of the local Hecke algebra $\mathcal{H}_{g,p}$ is the same as before but with $G$ replaced by $G_p = G\cap \text{GL}_{2g}(\Z[p^{-1}])$. The generators of this local algebra $\mathcal{H}_{g,p}$ are the double cosets $T(p)=\Gamma\text{diag}(I_g;pI_g)\Gamma$ and $T_i(p^2)=\Gamma \text{diag}(I_i,pI_{g-i};p^2I_i,pI_{g-i})\Gamma$ for $1\leq i \leq g$. Some authors also define an operator $T_0(p^2)$. The operator $T(p^2)=\sum_{i=1}^g T_i(p^2)$.

The space $\mathcal{H}_g$ acts on Siegel modular forms of degree $g$ and weight $k$ by $F|_k\left(\sum c_i\Gamma\alpha_i\right)=\sum c_i F|_k\alpha_i$ where $\left(F|_k \alpha\right)(Z)=r(\alpha)^{gk-\frac{g(g+1)}{2}}\det(CZ+D)^{-k}F\left(\alpha\cdot Z \right)$. Some authors use a different normalization in this definition.

A Hecke eigenform is a form in $M_k(\Gamma)$ which is a simultaneous eigenform for all the operators $T(p)$, $T(p^2)$, ..., $T(p^g)$.

Knowl status:
• Review status: beta
• Last edited by John Voight on 2018-06-28 01:01:31
http://mathhelpforum.com/algebra/49660-how-do-i-factor-print.html
# How do i factor this?

• September 18th 2008, 02:30 PM
jarr3d
How do i factor this? $1331x^9 + 343y^6$

• September 18th 2008, 02:43 PM
Plato
Sum of cubes? $\left( {11x^3 + 7y^2 } \right)\left( {121x^6 - 77x^3 y^2 + 49y^4 } \right)$

• September 18th 2008, 02:44 PM
skeeter
sum of two cubes factors as shown ... $a^3 + b^3 = (a+b)(a^2 - ab + b^2)$ now note that ... $1331x^9 + 343y^6 = (11x^3)^3 + (7y^2)^3$

• September 18th 2008, 02:45 PM
jarr3d
Can you show how its done?

• September 18th 2008, 02:45 PM
jarr3d
ok
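Plato's factorization is a polynomial identity, so it can be spot-checked with exact integer arithmetic: both sides must agree at every integer point. A quick Python check:

```python
def lhs(x, y):
    return 1331 * x**9 + 343 * y**6

def rhs(x, y):
    # (11x^3 + 7y^2)(121x^6 - 77x^3 y^2 + 49y^4),
    # the sum-of-cubes factorization with a = 11x^3, b = 7y^2.
    return (11 * x**3 + 7 * y**2) * (121 * x**6 - 77 * x**3 * y**2 + 49 * y**4)

# Exact integer arithmetic, so equality at these sample points is a
# genuine check of the identity there.
for x in range(-3, 4):
    for y in range(-3, 4):
        assert lhs(x, y) == rhs(x, y)
print("factorization verified on sample points")
```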
http://mathhelpforum.com/discrete-math/85537-how-many-ways-there.html
# Math Help - How many ways are there?

1. ## How many ways are there?

Hi, I have balls of $k$ different colors, with an infinite supply of balls of each color. I want to select $n$ balls ($1\le k\le n$). How many ways are there, i.e. how many combinations (not arrangements) are there, such that at least 1 ball of each color is selected? For example, if $n=10$ and $k=10$, the answer is $1$.

Regards, lali

2. Originally Posted by lali
I have balls of k different colors. We have an infinite supply of balls for each color. I want to select n balls(1<=k<=n). How many ways are there i.e how many combinations(not arrangements) are there ? example if n=10 and k=10 , answer =1

I see some trouble with various ways someone could read your question. From your example, we are selecting at least one of each color. If this is correct, then in the case $n=11~\&~k=10$ the answer is $10$. In general, for $n~\&~k,~k\le n$, the count is $\binom{n-1}{n-k} = \frac{(n-1)!}{(n-k)!(k-1)!}$

3. Thank you very much for your reply and thanks for pointing out that my question was incomplete (i have edited it now). Can you please guide me how you came up with that formula? I am really bad at combinatorics, so I would be glad if you could just point me to some learning material on the net for the above problem (or you could just point out what search string to use in Google; right now I am using "combinatorics" only).

Without the restriction, the answer is $\binom{n+k-1}{n}$. But that allows some colors to go unused. If, as in your case, no color can be missing, then the answer is $\binom{n-1}{n-k}$.
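The formula $\binom{n-1}{n-k}$ is easy to check against the examples in the thread: `math.comb` evaluates the binomial coefficient, and a brute-force count over color multiplicities confirms small cases:

```python
from itertools import product
from math import comb

def count_selections(n, k):
    # Multisets of n balls in k colors with at least one ball of each color.
    return comb(n - 1, n - k)

def brute_force(n, k):
    # Count tuples (x_1, ..., x_k) with each x_i >= 1 and x_1 + ... + x_k == n.
    return sum(1 for xs in product(range(1, n + 1), repeat=k) if sum(xs) == n)

print(count_selections(10, 10))  # 1, as in the question
print(count_selections(11, 10))  # 10, as in the reply
assert all(count_selections(n, k) == brute_force(n, k)
           for k in range(1, 5) for n in range(k, 9))
```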
http://mathhelpforum.com/advanced-algebra/223610-groups-finite-order.html
# Math Help - Groups of Finite Order

1. ## Groups of Finite Order

I have two questions, one of which I am pretty sure I have answered, but I would like your opinion on it.

1) Let G be a finite group of order 12; is it possible that the center of this group has order 4? I am not even sure how to approach this. I did have to look up the definition of the center of a group and I know that the center of a group G is defined as: $Z(G) = \{z \in G \mid \forall g \in G,\ zg=gz\}$.

2) Suppose that the order of some finite Abelian group G is divisible by 42. Prove that G has a cyclic subgroup of order 42.

Let G be a finite abelian group of order 42. Remember that the Fundamental Theorem of Finite Abelian Groups states that any finite abelian group can be written as $\mathbb{Z}_{p_1} \oplus \mathbb{Z}_{p_2} \oplus \cdots \oplus \mathbb{Z}_{p_n}$ where the $p_i$ are not necessarily distinct and where $|G| = p_1 \cdot p_2 \cdots p_n$. And a corollary follows stating that if $m$ divides the order of a finite abelian group $G$, then $G$ has a subgroup of order $m$. Now since 42 divides the order of G, and G is abelian, we know that G contains a subgroup of order 42. Let H be such a subgroup. Since G is abelian, H is abelian. Since $H$ has order $42 = 2 \cdot 3 \cdot 7$, we see that $H \simeq \mathbb{Z}_2 \oplus \mathbb{Z}_3 \oplus \mathbb{Z}_7 \simeq \mathbb{Z}_{42}$. Since $\mathbb{Z}_p$ is cyclic for any prime $p$, $H$ is cyclic. $\blacksquare$

Does that work?

2. ## Re: Groups of Finite Order

Originally Posted by vidomagru
1) Let G be a finite group of order 12, is it possible that the center of this group has order 4? I am not even sure how to approach this. I did have to look up the definition of the center of a group and I know that the center of a group G is defined as: $Z(G) = \{z \in G \mid \forall g \in G,\ zg=gz\}$.
First a disclaimer: I have not taught any graduate-level course involving group theory in over twenty years, so I have not done any active work in this area for that period. That said, this may help you. Any Abelian group is its own center. Thus you are looking for a non-Abelian group of order twelve that has a center of order four. There has been quite a lot of work done on finite groups; if I were you, that is where I would start. Now there may very well be a completely clear solution that I don't see.

3. ## Re: Groups of Finite Order

There seem to be very few finite groups of order 12. Do I just have to go through each one, or is it possible to show this arbitrarily?

4. ## Re: Groups of Finite Order

Hi,
1. I leave it to you to prove the following fact: If G is any non-abelian group (finite or infinite), the factor group G/Z(G) is not cyclic. If you have trouble, post your problems. Now for a group G of order 12 with |Z(G)| = 4: is G/Z(G) cyclic?
2. Unless you mean each $p_i$ is a power of a prime, you have misquoted the fundamental theorem. What you say is true about a finite abelian group G having a subgroup of order n for any divisor n of the order of G, but if you haven't proved this, I think it should be part of your solution. Alternatively, you can deduce the truth of the statement directly from the fundamental theorem, knowing only that subgroups of cyclic groups are cyclic.

5. ## Re: Groups of Finite Order

Originally Posted by johng
Hi,
1. I leave it to you to prove the following fact: If G is any non-abelian group (finite or infinite), the factor group G/Z(G) is not cyclic. If you have trouble, post your problems. Now for a group G of order 12 with |Z(G)| = 4. Is G/Z(G) cyclic?

Ok, I think I can prove this: suppose $G/Z(G)$ is cyclic with generator $gZ, g \in G$. Let $a,b \in G$; then $a=g^nz, b=g^mz'$ for $z,z' \in Z(G)$ and $n,m \in \mathbb{Z}$.
Since the center commutes with every element of $G$, we get $ab=g^nz \cdot g^mz' = g^{n+m}zz' = g^mz' \cdot g^nz = ba$, contradicting that G is nonabelian. Hence $G/Z(G)$ is not cyclic.

I would gather from my proof that if $|Z(G)| = 4$, then $G/Z(G)$ is not cyclic. But I am not sure I understand the implications of this.

6. ## Re: Groups of Finite Order

For G of order 12 and Z(G) of order 4, the order of G/Z(G) is 3. What do you know about any group of order 3 (or any prime)? Isn't such a group cyclic?

7. ## Re: Groups of Finite Order

Originally Posted by johng
For G of order 12 and Z(G) of order 4, the order of G/Z(G) is 3. What do you know about any group of order 3 (or any prime)? Isn't such a group cyclic?

Oh I think I understand. Let G be a finite group of order 12. Assume that |Z(G)| = 4. Since |G| = 12 and |Z(G)| = 4, we have |G/Z(G)| = 3, and any group of prime order is cyclic. Hence G/Z(G) is cyclic. However, if G/Z(G) is cyclic then G is abelian. And since G is abelian if and only if Z(G) = G, |Z(G)| must equal 12; therefore our assumption that |Z(G)| = 4 is false, and G cannot have a center of order 4. Is that right?

8. ## Re: Groups of Finite Order

Yes. By the way, good job on proving G/Z(G) is not cyclic unless G is abelian.
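The thread's conclusion can be illustrated on a concrete non-abelian group of order 12. The Python sketch below (not part of the thread; it checks one example, not all groups of order 12) computes the center of the dihedral group $D_6$, the symmetry group of a hexagon, representing $r^a s^b$ as the pair $(a, b)$ with the relation $sr = r^{-1}s$; the center turns out to have order 2, not 4:

```python
# Elements of D6: (a, b) stands for r^a s^b with 0 <= a < 6, b in {0, 1},
# subject to the dihedral relation s r = r^{-1} s.
elements = [(a, b) for a in range(6) for b in range(2)]

def mul(g, h):
    a1, b1 = g
    a2, b2 = h
    # r^{a1} s^{b1} r^{a2} s^{b2} = r^{a1 + (-1)^{b1} a2} s^{b1 + b2}
    return ((a1 + (-1) ** b1 * a2) % 6, (b1 + b2) % 2)

center = [z for z in elements
          if all(mul(z, g) == mul(g, z) for g in elements)]

print(len(elements), len(center))  # 12 2
print(center)  # the identity (0, 0) and the half-turn r^3 = (3, 0)
```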
https://stacks.math.columbia.edu/tag/0G9W
## 51.21 A bit of uniformity, II Let $I$ be an ideal of a Noetherian ring $A$. Let $M$ be a finite $A$-module. Let $i > 0$. By More on Algebra, Lemma 15.27.3 there exists a $c = c(A, I, M, i)$ such that $\text{Tor}^ A_ i(M, A/I^ n) \to \text{Tor}^ A_ i(M, A/I^{n - c})$ is zero for all $n \geq c$. In this section, we discuss some results which show that one sometimes can choose a constant $c$ which works for all $A$-modules $M$ simultaneously (and for a range of indices $i$). This material is related to uniform Artin-Rees as discussed in and [AHS]. In Remark 51.21.9 we will apply this to show that various pro-systems related to derived completion are (or are not) strictly pro-isomorphic. The following lemma can be significantly strengthened. Lemma 51.21.1. Let $I$ be an ideal of a Noetherian ring $A$. For every $m \geq 0$ and $i > 0$ there exist a $c = c(A, I, m, i) \geq 0$ such that for every $A$-module $M$ annihilated by $I^ m$ the map $\text{Tor}^ A_ i(M, A/I^ n) \to \text{Tor}^ A_ i(M, A/I^{n - c})$ is zero for all $n \geq c$. Proof. By induction on $i$. Base case $i = 1$. The short exact sequence $0 \to I^ n \to A \to A/I^ n \to 0$ determines an injection $\text{Tor}_1^ A(M, A/I^ n) \subset I^ n \otimes _ A M$, see Algebra, Remark 10.75.9. As $M$ is annihilated by $I^ m$ we see that the map $I^ n \otimes _ A M \to I^{n - m} \otimes _ A M$ is zero for $n \geq m$. Hence the result holds with $c = m$. Induction step. Let $i > 1$ and assume $c$ works for $i - 1$. By More on Algebra, Lemma 15.27.3 applied to $M = A/I^ m$ we can choose $c' \geq 0$ such that $\text{Tor}_ i(A/I^ m, A/I^ n) \to \text{Tor}_ i(A/I^ m, A/I^{n - c'})$ is zero for $n \geq c'$. Let $M$ be annihilated by $I^ m$. 
Choose a short exact sequence $0 \to S \to \bigoplus \nolimits _{i \in I} A/I^ m \to M \to 0$ The corresponding long exact sequence of tors gives an exact sequence $\text{Tor}_ i^ A(\bigoplus \nolimits _{i \in I} A/I^ m, A/I^ n) \to \text{Tor}_ i^ A(M, A/I^ n) \to \text{Tor}_{i - 1}^ A(S, A/I^ n)$ for all integers $n \geq 0$. If $n \geq c + c'$, then the map $\text{Tor}_{i - 1}^ A(S, A/I^ n) \to \text{Tor}_{i - 1}^ A(S, A/I^{n - c})$ is zero and the map $\text{Tor}_ i^ A(A/I^ m, A/I^{n - c}) \to \text{Tor}_ i^ A(A/I^ m, A/I^{n - c - c'})$ is zero. Combined with the short exact sequences this implies the result holds for $i$ with constant $c + c'$. $\square$ Lemma 51.21.2. Let $I = (a_1, \ldots , a_ t)$ be an ideal of a Noetherian ring $A$. Set $a = a_1$ and denote $B = A[\frac{I}{a}]$ the affine blowup algebra. There exists a $c > 0$ such that $\text{Tor}_ i^ A(B, M)$ is annihilated by $I^ c$ for all $A$-modules $M$ and $i \geq t$. Proof. Recall that $B$ is the quotient of $A[x_2, \ldots , x_ t]/(a_1x_2 - a_2, \ldots , a_1x_ t - a_ t)$ by its $a_1$-torsion, see Algebra, Lemma 10.70.6. Let $B_\bullet = \text{Koszul complex on }a_1x_2 - a_2, \ldots , a_1x_ t - a_ t \text{ over }A[x_2, \ldots , x_ t]$ viewed as a chain complex sitting in degrees $(t - 1), \ldots , 0$. The complex $B_\bullet [1/a_1]$ is isomorphic to the Koszul complex on $x_2 - a_2/a_1, \ldots , x_ t - a_ t/a_1$ which is a regular sequence in $A[1/a_1][x_2, \ldots , x_ t]$. Since regular sequences are Koszul regular, we conclude that the augmentation $\epsilon : B_\bullet \longrightarrow B$ is a quasi-isomorphism after inverting $a_1$. Since the homology modules of the cone $C_\bullet$ on $\epsilon$ are finite $A[x_2, \ldots , x_ n]$-modules and since $C_\bullet$ is bounded, we conclude that there exists a $c \geq 0$ such that $a_1^ c$ annihilates all of these. 
By Derived Categories, Lemma 13.12.5 this implies that, after possibly replacing $c$ by a larger integer, that $a_1^ c$ is zero on $C_\bullet$ in $D(A)$. The proof is finished once the reader contemplates the distinguished triangle $B_\bullet \otimes _ A^\mathbf {L} M \to B \otimes _ A^\mathbf {L} M \to C_\bullet \otimes _ A^\mathbf {L} M$ Namely, the first term is represented by $B_\bullet \otimes _ A M$ which is sitting in homological degrees $(t - 1), \ldots , 0$ in view of the fact that the terms in the Koszul complex $B_\bullet$ are free (and hence flat) $A$-modules. Whence $\text{Tor}_ i^ A(B, M) = H_ i(C_\bullet \otimes _ A^\mathbf {L} M)$ for $i > t - 1$ and this is annihilated by $a_1^ c$. Since $a_1^ cB = I^ cB$ and since the tor module is a module over $B$ we conclude. $\square$ For the rest of the discussion in this section we fix a Noetherian ring $A$ and an ideal $I \subset A$. We denote $p : X \to \mathop{\mathrm{Spec}}(A)$ the blowing up of $\mathop{\mathrm{Spec}}(A)$ in the ideal $I$. In other words, $X$ is the $\text{Proj}$ of the Rees algebra $\bigoplus _{n \geq 0} I^ n$. By Cohomology of Schemes, Lemmas 30.14.2 and 30.14.3 we can choose an integer $q(A, I) \geq 0$ such that for all $q \geq q(A, I)$ we have $H^ i(X, \mathcal{O}_ X(q)) = 0$ for $i > 0$ and $H^0(X, \mathcal{O}_ X(q)) = I^ q$. Lemma 51.21.3. In the situation above, for $q \geq q(A, I)$ and any $A$-module $M$ we have $R\Gamma (X, Lp^*\widetilde{M}(q)) \cong M \otimes _ A^\mathbf {L} I^ q$ in $D(A)$. Proof. Choose a free resolution $F_\bullet \to M$. Then $\widetilde{F}_\bullet$ is a flat resolution of $\widetilde{M}$. Hence $Lp^*\widetilde{M}$ is given by the complex $p^*\widetilde{F}_\bullet$. Thus $Lp^*\widetilde{M}(q)$ is given by the complex $p^*\widetilde{F}_\bullet (q)$. 
Since $p^*\widetilde{F}_ i(q)$ are right acyclic for $\Gamma (X, -)$ by our choice of $q \geq q(A, I)$ and since we have $\Gamma (X, p^*\widetilde{F}_ i(q)) = I^ qF_ i$ by our choice of $q \geq q(A, I)$, we get that $R\Gamma (X, Lp^*\widetilde{M}(q))$ is given by the complex with terms $I^ qF_ i$ by Derived Categories of Schemes, Lemma 36.4.3. The result follows as the complex $I^ qF_\bullet$ computes $M \otimes _ A^\mathbf {L} I^ q$ by definition. $\square$ Lemma 51.21.4. In the situation above, let $t$ be an upper bound on the number of generators for $I$. There exists an integer $c = c(A, I) \geq 0$ such that for any $A$-module $M$ the cohomology sheaves $H^ j(Lp^*\widetilde{M})$ are annihilated by $I^ c$ for $j \leq -t$. Proof. Say $I = (a_1, \ldots , a_ t)$. The question is affine local on $X$. For $1 \leq i \leq t$ let $B_ i = A[\frac{I}{a_ i}]$ be the affine blowup algebra. Then $X$ has an affine open covering by the spectra of the rings $B_ i$, see Divisors, Lemma 31.32.2. By the description of derived pullback given in Derived Categories of Schemes, Lemma 36.3.8 we conclude it suffices to prove that for each $i$ there exists a $c \geq 0$ such that $\text{Tor}_ j^ A(B_ i, M)$ is annihilated by $I^ c$ for $j \geq t$. This is Lemma 51.21.2. $\square$ Lemma 51.21.5. In the situation above, let $t$ be an upper bound on the number of generators for $I$. There exists an integer $c = c(A, I) \geq 0$ such that for any $A$-module $M$ the tor modules $\text{Tor}_ i^ A(M, A/I^ q)$ are annihilated by $I^ c$ for $i > t$ and all $q \geq 0$. Proof. Let $q(A, I)$ be as above. For $q \geq q(A, I)$ we have $R\Gamma (X, Lp^*\widetilde{M}(q)) = M \otimes _ A^\mathbf {L} I^ q$ by Lemma 51.21.3. We have a bounded and convergent spectral sequence $H^ a(X, H^ b(Lp^*\widetilde{M}(q))) \Rightarrow \text{Tor}_{-a - b}^ A(M, I^ q)$ by Derived Categories of Schemes, Lemma 36.4.4. 
Let $d$ be an integer as in Cohomology of Schemes, Lemma 30.4.4 (actually we can take $d = t$, see Cohomology of Schemes, Lemma 30.4.2). Then we see that $H^{-i}(X, Lp^*\widetilde{M}(q)) = \text{Tor}_ i^ A(M, I^ q)$ has a finite filtration with at most $d$ steps whose graded are subquotients of the modules $H^ a(X, H^{- i - a}(Lp^*\widetilde{M})(q)),\quad a = 0, 1, \ldots , d - 1$ If $i \geq t$ then all of these modules are annihilated by $I^ c$ where $c = c(A, I)$ is as in Lemma 51.21.4 because the cohomology sheaves $H^{- i - a}(Lp^*\widetilde{M})$ are all annihilated by $I^ c$ by the lemma. Hence we see that $\text{Tor}_ i^ A(M, I^ q)$ is annihilated by $I^{dc}$ for $q \geq q(A, I)$ and $i \geq t$. Using the short exact sequence $0 \to I^ q \to A \to A/I^ q \to 0$ we find that $\text{Tor}_ i(M, A/I^ q)$ is annihilated by $I^{dc}$ for $q \geq q(A, I)$ and $i > t$. We conclude that $I^ m$ with $m = \max (dc, q(A, I) - 1)$ annihilates $\text{Tor}_ i^ A(M, A/I^ q)$ for all $q \geq 0$ and $i > t$ as desired. $\square$ Lemma 51.21.6. Let $I$ be an ideal of a Noetherian ring $A$. Let $t \geq 0$ be an upper bound on the number of generators of $I$. There exist $N, c \geq 0$ such that the maps $\text{Tor}_{t + 1}^ A(M, A/I^ n) \to \text{Tor}_{t + 1}^ A(M, A/I^{n - c})$ are zero for any $A$-module $M$ and all $n \geq N$. Proof. Let $c_1$ be the constant found in Lemma 51.21.5. Please keep in mind that this constant $c_1$ works for $\text{Tor}_ i$ for all $i > t$ simultaneously. Say $I = (a_1, \ldots , a_ t)$. For an $A$-module $M$ we set $\ell (M) = \# \{ i \mid 1 \leq i \leq t,\ a_ i^{c_1}\text{ is zero on }M\}$ This is an element of $\{ 0, 1, \ldots , t\}$. We will prove by descending induction on $0 \leq s \leq t$ the following statement $H_ s$: there exist $N, c \geq 0$ such that for every module $M$ with $\ell (M) \geq s$ the maps $\text{Tor}_{t + 1 + i}^ A(M, A/I^ n) \to \text{Tor}_{t + 1 + i}^ A(M, A/I^{n - c})$ are zero for $i = 0, \ldots , s$ for all $n \geq N$. 
Base case: $s = t$. If $\ell (M) = t$, then $M$ is annihilated by $(a_1^{c_1}, \ldots , a_ t^{c_1})$ and hence by $I^{t(c_1 - 1) + 1}$. We conclude from Lemma 51.21.1 that $H_ t$ holds by taking $c = N$ to be the maximum of the integers $c(A, I, t(c_1 - 1) + 1, t + 1), \ldots , c(A, I, t(c_1 - 1) + 1, 2t + 1)$ found in the lemma.

Induction step. Say $0 \leq s < t$ and we have $N, c$ as in $H_{s + 1}$. Consider a module $M$ with $\ell (M) = s$. Then we can choose an $i$ such that $a_ i^{c_1}$ is nonzero on $M$. It follows that $\ell (M[a_ i^{c_1}]) \geq s + 1$ and $\ell (M/a_ i^{c_1}M) \geq s + 1$ and the induction hypothesis applies to them. Consider the exact sequence $0 \to M[a_ i^{c_1}] \to M \xrightarrow {a_ i^{c_1}} M \to M/a_ i^{c_1}M \to 0$ Denote $E \subset M$ the image of the middle arrow. Consider the corresponding diagram of Tor modules $\xymatrix{ & & \text{Tor}_{i + 1}(M/a_ i^{c_1}M, A/I^ q) \ar[d] \\ \text{Tor}_ i(M[a_ i^{c_1}], A/I^ q) \ar[r] & \text{Tor}_ i(M, A/I^ q) \ar[r] \ar[rd]^0 & \text{Tor}_ i(E, A/I^ q) \ar[d] \\ & & \text{Tor}_ i(M, A/I^ q) }$ with exact rows and columns (for every $q$). The south-east arrow is zero by our choice of $c_1$. We conclude that the module $\text{Tor}_ i(M, A/I^ q)$ is sandwiched between a quotient module of $\text{Tor}_ i(M[a_ i^{c_1}], A/I^ q)$ and a submodule of $\text{Tor}_{i + 1}(M/a_ i^{c_1}M, A/I^ q)$. Hence we conclude $H_ s$ holds with $N$ replaced by $N + c$ and $c$ replaced by $2c$. Some details omitted. $\square$

Proposition 51.21.7. Let $I$ be an ideal of a Noetherian ring $A$. Let $t \geq 0$ be an upper bound on the number of generators of $I$. There exist $N, c \geq 0$ such that for $n \geq N$ the maps $A/I^ n \to A/I^{n - c}$ satisfy the equivalent conditions of Lemma 51.20.2 with $e = t$. Proof. Immediate consequence of Lemmas 51.21.6 and 51.20.2. $\square$

Remark 51.21.8.
The paper [AHS] shows, besides many other things, that if $A$ is local, then Proposition 51.21.7 also holds with $e = t$ replaced by $e = \dim (A)$. Looking at Lemma 51.20.3 it is natural to ask whether Proposition 51.21.7 holds with $e = t$ replaced with $e = \text{cd}(A, I)$. We don't know. Remark 51.21.9. Let $I$ be an ideal of a Noetherian ring $A$. Say $I = (f_1, \ldots , f_ r)$. Denote $K_ n^\bullet$ the Koszul complex on $f_1^ n, \ldots , f_ r^ n$ as in More on Algebra, Situation 15.91.15 and denote $K_ n \in D(A)$ the corresponding object. Let $M^\bullet$ be a bounded complex of finite $A$-modules and denote $M \in D(A)$ the corresponding object. Consider the following inverse systems in $D(A)$: 1. $M^\bullet /I^ nM^\bullet$, i.e., the complex whose terms are $M^ i/I^ nM^ i$, 2. $M \otimes _ A^\mathbf {L} A/I^ n$, 3. $M \otimes _ A^\mathbf {L} K_ n$, and 4. $M \otimes _ P^\mathbf {L} P/J^ n$ (see below). All of these inverse systems are isomorphic as pro-objects: the isomorphism between (2) and (3) follows from More on Algebra, Lemma 15.94.1. The isomorphism between (1) and (2) is given in More on Algebra, Lemma 15.100.3. For the last one, see below. However, we can ask if these isomorphisms of pro-systems are “strict”; this terminology and question is related to the discussion in [pages 61, 62, quillenhomology]. Namely, given a category $\mathcal{C}$ we can define a “strict pro-category” whose objects are inverse systems $(X_ n)$ and whose morphisms $(X_ n) \to (Y_ n)$ are given by tuples $(c, \varphi _ n)$ consisting of a $c \geq 0$ and morphisms $\varphi _ n : X_ n \to Y_{n - c}$ for all $n \geq c$ satisfying an obvious compatibility condition and up to a certain equivalence (given essentially by increasing $c$). Then we ask whether the above inverse systems are isomorphic in this strict pro-category. This clearly cannot be the case for (1) and (3) even when $M = A[0]$. 
Namely, the system $H^0(K_ n) = A/(f_1^ n, \ldots , f_ r^ n)$ is not strictly pro-isomorphic in the category of modules to the system $A/I^ n$ in general. For example, if we take $A = \mathbf{Z}[x_1, \ldots , x_ r]$ and $f_ i = x_ i$, then $H^0(K_ n)$ is not annihilated by $I^{r(n - 1)}$.[1] It turns out that the results above show that the natural map from (2) to (1) discussed in More on Algebra, Lemma 15.100.3 is a strict pro-isomorphism. We will sketch the proof. Using standard arguments involving stupid truncations, we first reduce to the case where $M^\bullet$ is given by a single finite $A$-module $M$ placed in degree $0$. Pick $N, c \geq 0$ as in Proposition 51.21.7. The proposition implies that for $n \geq N$ we get factorizations $M \otimes _ A^\mathbf {L} A/I^ n \to \tau _{\geq -t}(M \otimes _ A^\mathbf {L} A/I^ n) \to M \otimes _ A^\mathbf {L} A/I^{n - c}$ of the transition maps in the system (2). On the other hand, by More on Algebra, Lemma 15.27.3, we can find another constant $c' = c'(M) \geq 0$ such that the maps $\text{Tor}_ i^ A(M, A/I^{n'}) \to \text{Tor}_ i(M, A/I^{n' - c'})$ are zero for $i = 1, 2, \ldots , t$ and $n' \geq c'$. Then it follows from Derived Categories, Lemma 13.12.5 that the map $\tau _{\geq -t}(M \otimes _ A^\mathbf {L} A/I^{n + tc'}) \to \tau _{\geq -t}(M \otimes _ A^\mathbf {L} A/I^ n)$ factors through $M \otimes _ A^\mathbf {L}A/I^{n + tc'} \to M/I^{n + tc'}M$. Combined with the previous result we obtain a factorization $M \otimes _ A^\mathbf {L}A/I^{n + tc'} \to M/I^{n + tc'}M \to M \otimes _ A^\mathbf {L} A/I^{n - c}$ which gives us what we want. If we ever need this result, we will carefully state it and provide a detailed proof. For number (4) suppose we have a Noetherian ring $P$, a ring homomorphism $P \to A$, and an ideal $J \subset P$ such that $I = JA$.
By More on Algebra, Section 15.60 we get a functor $M \otimes _ P^\mathbf {L} - : D(P) \to D(A)$ and we get an inverse system $M \otimes _ P^\mathbf {L} P/J^ n$ in $D(A)$ as in (4). If $P$ is Noetherian, then the system in (4) is pro-isomorphic to the system in (1) because we can compare with Koszul complexes. If $P \to A$ is finite, then the system (4) is strictly pro-isomorphic to the system (2) because the inverse system $A \otimes _ P^\mathbf {L} P/J^ n$ is strictly pro-isomorphic to the inverse system $A/I^ n$ (by the discussion above) and because we have $M \otimes _ P^\mathbf {L} P/J^ n = M \otimes _ A^\mathbf {L} (A \otimes _ P^\mathbf {L} P/J^ n)$ by More on Algebra, Lemma 15.60.1. A standard example in (4) is to take $P = \mathbf{Z}[x_1, \ldots , x_ r]$, the map $P \to A$ sending $x_ i$ to $f_ i$, and $J = (x_1, \ldots , x_ r)$. In this case one shows that $M \otimes _ P^\mathbf {L} P/J^ n = M \otimes _{A[x_1, \ldots , x_ r]}^\mathbf {L} A[x_1, \ldots , x_ r]/(x_1, \ldots , x_ r)^ n$ and we reduce to one of the cases discussed above (although this case is strictly easier as $A[x_1, \ldots , x_ r]/(x_1, \ldots , x_ r)^ n$ has tor dimension at most $r$ for all $n$ and hence the step using Proposition 51.21.7 can be avoided). This case is discussed in the proof of [Proposition 3.5.1, BS]. [1] Of course, we can ask whether these pro-systems are isomorphic in a category whose objects are inverse systems and where maps are given by tuples $(r, c, \varphi _ n)$ consisting of $r \geq 1$, $c \geq 0$ and maps $\varphi _ n : X_{rn} \to Y_{n - c}$ for $n \geq c$.
2022-05-25T22:40:13
{ "domain": "columbia.edu", "url": "https://stacks.math.columbia.edu/tag/0G9W", "openwebmath_score": 0.979110836982727, "openwebmath_perplexity": 102.89926543173773, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692284751635, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747678388194 }
https://www.shaalaa.com/question-bank-solutions/vector-cartesian-equation-plane-find-vector-equation-plane-passing-through-point-having-position-vector-3i-2j-k-perpendicular-vector-4i-3j-2k_144
# Solution - Find the Vector Equation of the Plane Passing Through a Point Having Position Vector 3i-2j+k and Perpendicular to the Vector 4i+3j+2k - HSC Science (Computer Science) 12th Board Exam - Mathematics and Statistics

Concept: Vector and Cartesian Equation of a Plane

#### Question

Find the vector equation of the plane passing through a point having position vector $3\hat i - 2\hat j + \hat k$ and perpendicular to the vector $4\hat i + 3\hat j + 2\hat k$.

#### Solution

We know that the vector equation of a plane passing through a point $A(\bar a)$ with normal $\bar n$ is $\bar r \cdot \bar n = \bar a \cdot \bar n$.

Here $\bar a = 3\hat i - 2\hat j + \hat k$ and $\bar n = 4\hat i + 3\hat j + 2\hat k$.

The vector equation of the required plane is

$\bar r \cdot \bar n = \bar a \cdot \bar n$

$\bar r \cdot (4\hat i + 3\hat j + 2\hat k) = (3\hat i - 2\hat j + \hat k) \cdot (4\hat i + 3\hat j + 2\hat k)$

$\bar r \cdot (4\hat i + 3\hat j + 2\hat k) = 12 - 6 + 2$

$\bar r \cdot (4\hat i + 3\hat j + 2\hat k) = 8$

The vector equation of the required plane is $\bar r \cdot (4\hat i + 3\hat j + 2\hat k) = 8$.

#### APPEARS IN

2015-2016 (March) (with solutions) Question 1.2.2 | 2 marks
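As a sanity check on the arithmetic, the right-hand side $\bar a \cdot \bar n = 12 - 6 + 2 = 8$ can be computed directly; a minimal sketch, with the vectors taken from the problem statement:

```python
# Position vector of the given point and the normal vector to the plane.
a = (3, -2, 1)
n = (4, 3, 2)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# The plane through a with normal n is r . n = a . n.
rhs = dot(a, n)
print(rhs)   # 3*4 + (-2)*3 + 1*2 = 12 - 6 + 2 = 8
```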
2018-11-18T13:13:07
{ "domain": "shaalaa.com", "url": "https://www.shaalaa.com/question-bank-solutions/vector-cartesian-equation-plane-find-vector-equation-plane-passing-through-point-having-position-vector-3i-2j-k-perpendicular-vector-4i-3j-2k_144", "openwebmath_score": 0.5023326277732849, "openwebmath_perplexity": 4861.460283106436, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9766692277960746, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747673845902 }
https://dsp.stackexchange.com/questions/18966/how-do-we-compute-distrubtions-of-the-value-of-a-random-process-conditional-on-i?noredirect=1
How do we compute distributions of the value of a random process conditional on initial conditions? Suppose I have a stationary process $\phi(t)$ with a known autocorrelation function $$A(\tau) \equiv \langle \phi(0) \phi(\tau) \rangle$$ and suppose I also know that $\phi(t)$ is Gaussian distributed. If I have a particular realization of the process in which $\phi(0)=\phi_0$, what is the conditional distribution of $\phi(t)$ for later times in that realization? "I know that $\phi(t)$ is Gaussian distributed" is not the same as saying "$\{\phi(t)\}$ is a Gaussian process" but I will assume that the latter is meant. With this assumption, and the additional assumption that the process is wide-sense-stationary (the autocorrelation function is listed as having only one argument), the process is strictly stationary and thus $\phi(t)$ and $\phi(0)$ are jointly Gaussian random variables. They have the same mean $\mu$ and variance $\sigma^2$, and their correlation coefficient is $\rho$ where $$\mu = \sqrt{\lim_{t\to \infty} A(t)}, \qquad \sigma^2 = A(0)-\mu^2, \qquad \rho(t) = \frac{A(t)-\mu^2}{\sigma^2}.$$ Consequently the conditional distribution of $\phi(t)$ given that $\phi(0)$ has taken on value $\phi_0$ is Gaussian with mean $\mu + \rho(t)(\phi_0 - \mu) = \rho(t) \phi_0 + (1-\rho(t))\mu$ and variance $\sigma^2(1-\rho(t)^2)$. Note that as $t\to\infty$, the conditional distribution of $\phi(t)$ approaches the unconditional distribution of $\phi(t)$: the distant past affects the present state less and less. • Could you either explain why $\rho = (A(t) - \mu^2) / \sigma^2$ or provide a reference? – DanielSank Jun 14 '15 at 18:48 • $$A(t) = E[\phi(0)\phi(t)] = \operatorname{cov}(\phi(0), \phi(t)) + E[\phi(0)]E[\phi(t)] = \operatorname{cov}(\phi(0), \phi(t))+\mu^2.$$ $$\rho= \frac{\operatorname{cov}(\phi(0), \phi(t))}{\sqrt{\operatorname{var}(\phi(0))\operatorname{var}(\phi(t))}}$$ – Dilip Sarwate Jun 15 '15 at 2:09
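To make the formulas concrete, here is a small numerical sketch. The autocorrelation $A(t) = \mu^2 + \sigma^2 e^{-t}$ below is a made-up Ornstein-Uhlenbeck-style example (the values $\mu = 2$ and $\sigma^2 = 1.5$ are assumptions for illustration); the code recovers $\mu$, $\sigma^2$, and $\rho(t)$ from $A$ alone and evaluates the conditional mean and variance:

```python
import math

# Hypothetical autocorrelation A(t) = mu^2 + sigma^2 * exp(-t); the values
# mu = 2 and sigma^2 = 1.5 are made up for illustration.
mu_true, sigma2_true = 2.0, 1.5

def A(t):
    return mu_true ** 2 + sigma2_true * math.exp(-t)

# Recover mean, variance and correlation from A alone, as in the answer.
mu = math.sqrt(A(1e9))              # sqrt of lim_{t -> inf} A(t)
sigma2 = A(0.0) - mu ** 2
rho = lambda t: (A(t) - mu ** 2) / sigma2

def conditional(t, phi0):
    """Mean and variance of phi(t) given phi(0) = phi0."""
    r = rho(t)
    return mu + r * (phi0 - mu), sigma2 * (1 - r ** 2)

print(conditional(0.0, 5.0))    # (5.0, 0.0): at t = 0 we recover phi0 exactly
print(conditional(50.0, 5.0))   # ~(2.0, 1.5): the unconditional distribution
```

At $t = 0$ the conditional law collapses onto $\phi_0$, and for large $t$ it relaxes to the unconditional $N(\mu, \sigma^2)$, exactly as the answer's closing remark says.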
2020-01-22T05:17:47
{ "domain": "stackexchange.com", "url": "https://dsp.stackexchange.com/questions/18966/how-do-we-compute-distrubtions-of-the-value-of-a-random-process-conditional-on-i?noredirect=1", "openwebmath_score": 0.9393900036811829, "openwebmath_perplexity": 133.10376889299798, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692277960746, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747673845902 }
https://cs.stackexchange.com/questions/76575/algorithm-complexity-calculation-tn-2tn-2-nlogn
algorithm complexity calculation T(n) = 2T(n/2) + n*log(n) I guess I lack understanding of basic things in algorithm calculations, while learning for an exam. • In the result, does one write O(ld n), or just log instead of ld? • Regarding the following calculation (not from me) at the yellow mark - why is it Theta(n)? Why not O(n)? I don't get it.. Let's start at the beginning. Basically, there are 3 very popular notations to express time complexity of algorithms: • $\Theta(g(n))$, • $\mathcal{O}(g(n))$ (this is the well-known Big O notation), • $\Omega(g(n))$. The first thing that is in most cases a little bit confusing (and misused) is that these notations denote sets - sets of functions. For example, the interpretation of $\Theta(g(n))$ is as follows: $$\Theta(g(n)) = \{ f(n) \; | \; \text{There exist } n_0, c_1 \text{ and } c_2 \text{ constants so that } 0 \leq c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n) \text{ for all } n > n_0. \}$$ So, $\Theta(g(n))$ is a set of $f(n)$ functions for which $g(n)$ can be used as both an upper and a lower bound (with the use of the given $c_1$ and $c_2$ constants). In other words, if you plot these 3 functions, $f(n)$ is going to be between $c_1 \cdot g(n)$ and $c_2 \cdot g(n)$ - at least, for all inputs larger than or equal to $n_0$. (It's worth taking a look at figures about these in Google so that you can get a better understanding of it.) Note: because of the use of $n_0$ and all larger inputs than it, these notations are also called asymptotic notations. The interpretation of $\mathcal{O}(g(n))$ and $\Omega(g(n))$ are very similar to the one above, but they only refer to either the upper bound or the lower bound, respectively. (Of course, the existence of only one $c$ constant is enough for these 2 definitions.)
Just in case, see the definition of $\mathcal{O}(g(n))$ below: $$\mathcal{O}(g(n)) = \{ f(n) \; | \; \text{There exist } n_0 \text{ and } c \text{ constants so that } 0 \leq f(n) \leq c \cdot g(n) \text{ for all } n > n_0. \}$$ In other words, $g(n)$ is an upper bound for all the functions in the set $\mathcal{O}(g(n))$. For example, let's denote the time complexity of the insertion sort with $T(n) = \frac{n(n - 1)}{2} = \frac{n^2}{2} - \frac{n}{2}$ (you can derive this easily if you think through the algorithm). Now, here $T(n) \in \mathcal{O}(n^2)$ means that the time complexity of the algorithm is quadratic - so if the size of the input is $n$ (we have an array to be sorted consisting of $n$ elements), the algorithm must perform the most expensive operation at most $c \cdot n^2$ times (where $c$ might be 1). Three notes here: • Typically, we don't care about the "weaker" members of the equations, like $\frac{n}{2}$ in the example above, because of the asymptotic property I mentioned earlier (that wants to say something like "for large enough inputs always the strongest members will only count"). • Typically, we only consider the most expensive operations of an algorithm, i.e., in the case of sorting algorithms, the number of comparisons. • Many times, people use the notations of $T(n) \in \mathcal{O}(g(n))$ and $T(n) = \mathcal{O}(g(n))$ like they were equivalent (basically, they aren't, but it's a common thing since the use of the latter can be advantageous as well). So now, that hopefully we are done with the basics, you can see, that if a function $f(n) \in \Theta(g(n))$, it is also true that: $f(n) \in \mathcal{O}(g(n))$ and $f(n) \in \Omega(g(n))$ (in most textbooks, this is presented as a theorem with its proof, as well). In most cases, people don't care about the lower bounds of an algorithm's time complexity - so, it shouldn't really matter whether you see $\mathcal{O}$ or $\Theta$ (this would be the answer to your second question). 
I think that $\Omega$ is much less frequent than the other two. I haven't ever seen the $ld$ notation you mentioned. However, I would prefer to use $\log_2$, it will be clear for anyone. Of course, you can use the one you would like to, but make sure to put them into either $\mathcal{O}$ or $\Theta$ (I can't recall a case when only the function was indicated). Truly hope that I managed to provide a detailed answer you find useful enough. If you have any further question, please feel free to ask. • I find this answer misleading (the question is not about algorithms) and overly verbose: let's assume, for instance, that the OP has access to a definition of Landau notation. I know you mean well, but reproducing textbook chapters on every other question is not a good use of (y)our time. – Raphael Jun 9 '17 at 20:05 • You probably right, thank you for the advices. Despite I wrote quite a long answer that contains examples for algorithms, I still tried to focus on the notations. Sorry, if it became misleading, will do my best next time. – laszlzso Jun 9 '17 at 20:32 • How does this answer the question? – David Richerby Jun 9 '17 at 20:33 • One of the questions was whether there should be $\Theta$ or $\mathcal{O}$. I could have written only that it doesn't really matter but I thought it a good idea to give a detailed explanation. – laszlzso Jun 9 '17 at 20:37 • @ZsoltLászló thanks for the detailed answer, i really appreciate that !! imagine this: sum (x^i) * O(1), from i=0 to k-1 x can be anything, i'm more interested in the O(1). if k is equal to n, for example, the result of this part would be O(n). But in this case i would then write Theta(n) instead O(n), right ? I guess so, because it will be a harder limit.. – Shoorty Jun 11 '17 at 6:00
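The recurrence in the thread title, $T(n) = 2T(n/2) + n\log n$, can also be probed numerically. The sketch below evaluates it on powers of two (the base case $T(1) = 1$ is an assumption); the ratio $T(n)/(n\log_2^2 n)$ settles toward a constant, consistent with $T(n) \in \Theta(n \log^2 n)$:

```python
import math
from functools import lru_cache

# Evaluate T(n) = 2*T(n//2) + n*log2(n) on powers of two, with the assumed
# base case T(1) = 1.
@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1.0
    return 2 * T(n // 2) + n * math.log2(n)

# If T(n) = Theta(n log^2 n), this ratio should settle to a constant.
for k in (10, 15, 20):
    n = 2 ** k
    print(k, T(n) / (n * math.log2(n) ** 2))

# Closed form on powers of two: T(2^k) = 2^k * (1 + k*(k+1)/2),
# so the ratio above tends to 1/2.
assert T(2 ** 10) == 2 ** 10 * (1 + 10 * 11 / 2)
```

Unrolling the recursion gives $T(2^k)/2^k = 1 + \sum_{j=1}^{k} j = 1 + k(k+1)/2$, which is where the limiting constant $1/2$ comes from.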
2021-06-13T23:39:26
{ "domain": "stackexchange.com", "url": "https://cs.stackexchange.com/questions/76575/algorithm-complexity-calculation-tn-2tn-2-nlogn", "openwebmath_score": 0.8986226320266724, "openwebmath_perplexity": 261.7680127070358, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692277960746, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747673845902 }
http://math.stackexchange.com/questions/239760/given-the-hasse-diagram-tell-if-the-structure-is-a-lattice?answertab=oldest
# Given the Hasse diagram tell if the structure is a lattice Let's consider the following Hasse diagram: I need to tell whether this is a lattice. By lattice definition I can prove the above shown structure $M_5$ to be a lattice if and only if $\forall x,y \in M_5$, $\{x,y\}$ has a supremum and an infimum in $M_5$. Putting all such subsets in a table, not mentioning those subsets where $x=y$: $$\begin{array}{|c || c | c|} \hline Subset & x \wedge y & x \vee y \\ \hline \{a,b\} & b & a \\ \{a,c\} & d & e \\ \{a,d\} & d & a \\ \{a,e\} & a & e \\ \{b,c\} & d & e \\ \{b,d\} & d & b \\ \{b,e\} & b & e \\ \{c,d\} & d & c \\ \{c,e\} & c & e \\ \{d,e\} & d & e \\ \hline \end{array}$$ So $M_5$ is a lattice. Is my reasoning in detecting the supremum and infimum for each given subset correct? Have I come up with the right conclusion? - thanks for asking such a good question, helped me a lot. :D – Hassan May 26 at 15:21 - for {a,d} can we say that $a \vee d$ can also be e? – Hassan May 26 at 15:29 Your calculation of the supremum and infimum is correct and the structure is a lattice. An alternate and shorter way would be to check if the meet operation holds for each element and if the lattice is bounded above, which it is. This is in reference to the fact that every meet-lattice with a greatest element is a join lattice.
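The table can also be checked mechanically. The sketch below reads the order off the table ($d < b < a < e$ and $d < c < e$; the Hasse diagram itself is not shown in the post, so this covering relation is inferred from the listed meets and joins) and tests whether every pair of elements has a meet and a join:

```python
from itertools import combinations

# Covering relation inferred from the table in the question:
# d < b < a < e and d < c < e.
covers = {'d': {'b', 'c'}, 'b': {'a'}, 'a': {'e'}, 'c': {'e'}, 'e': set()}
elems = set(covers)

def up(x):
    """All elements >= x (transitive closure of the covering edges)."""
    seen, stack = {x}, [x]
    while stack:
        for y in covers[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

ups = {x: up(x) for x in elems}
leq = lambda x, y: y in ups[x]

def join(x, y):
    """Least upper bound of {x, y}, or None if it does not exist."""
    ubs = [z for z in elems if leq(x, z) and leq(y, z)]
    least = [z for z in ubs if all(leq(z, w) for w in ubs)]
    return least[0] if least else None

def meet(x, y):
    """Greatest lower bound of {x, y}, or None if it does not exist."""
    lbs = [z for z in elems if leq(z, x) and leq(z, y)]
    greatest = [z for z in lbs if all(leq(w, z) for w in lbs)]
    return greatest[0] if greatest else None

is_lattice = all(meet(x, y) is not None and join(x, y) is not None
                 for x, y in combinations(elems, 2))
print(is_lattice)                        # True
print(meet('a', 'c'), join('a', 'c'))    # d e, matching the table
```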
2015-07-31T12:09:32
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/239760/given-the-hasse-diagram-tell-if-the-structure-is-a-lattice?answertab=oldest", "openwebmath_score": 0.8979319930076599, "openwebmath_perplexity": 567.3617929421031, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692277960746, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747673845902 }
https://www.bartleby.com/solution-answer/chapter-18-problem-45e-precalculus-mathematics-for-calculus-6th-edition-6th-edition/9780840068071/3c65b406-9078-4416-b9cc-be5a9f8238bb
# The coordinates of the point S so that quadrilateral PQRS is a parallelogram.

### Precalculus: Mathematics for Calcu... 6th Edition
Stewart + 5 others
Publisher: Cengage Learning
ISBN: 9780840068071

#### Solutions

Chapter 1.8, Problem 45E

To determine

## To calculate: The coordinates of the point S so that quadrilateral PQRS is a parallelogram.

Expert Solution

The coordinates of point S so that PQRS forms a parallelogram is $S(2,-3)$.

### Explanation of Solution

Given information: The points $P(-1,-4)$, $Q(1,1)$ and $R(4,2)$.

Formula used: A parallelogram is a quadrilateral in which the diagonals bisect each other at the same point. The mid-point formula between two points $X(a_1,b_1)$ and $Y(a_2,b_2)$ is mathematically expressed as $\left(\frac{a_1+a_2}{2},\frac{b_1+b_2}{2}\right)$.

Calculation: Consider the provided vertices $P(-1,-4)$, $Q(1,1)$ and $R(4,2)$. By plotting the given points on the coordinate plane, we get the following figure.

Recall that a parallelogram is a quadrilateral in which the diagonals bisect each other at the same point. So, to find the coordinates of S such that PQRS forms a parallelogram, the mid-points of its diagonals, i.e. PR and QS, must be equal.

Let the coordinates of the point be $(a,b)$.

The midpoint of PR is calculated as $\left(\frac{-1+4}{2},\frac{-4+2}{2}\right)=\left(\frac{3}{2},\frac{-2}{2}\right)=\left(\frac{3}{2},-1\right)$.

The midpoint of QS is calculated as $\left(\frac{1+a}{2},\frac{1+b}{2}\right)$.

Since the diagonals of a parallelogram bisect each other at the same point, the midpoints of PR and QS must be equal: $\left(\frac{1+a}{2},\frac{1+b}{2}\right)=\left(\frac{3}{2},-1\right)$.

Now, equate the x-coordinates and y-coordinates from both sides of the equation:

$\frac{1+a}{2}=\frac{3}{2} \implies 1+a=3 \implies a=3-1 \implies a=2$

$\frac{1+b}{2}=-1 \implies 1+b=-2 \implies b=-2-1 \implies b=-3$

Thus, the coordinates of point S so that PQRS forms a parallelogram is $S(2,-3)$.
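The midpoint condition in the solution is equivalent to $S = P + R - Q$ componentwise, which a few lines of code can confirm. Note the coordinates below use the signs recovered from the algebra in the solution ($P(-1,-4)$, giving $S(2,-3)$); the minus signs appear to have been lost in transcription:

```python
# Diagonals PR and QS of a parallelogram share a midpoint, which gives
# S = P + R - Q componentwise.
P, Q, R = (-1, -4), (1, 1), (4, 2)

S = tuple(p + r - q for p, q, r in zip(P, Q, R))
print(S)   # (2, -3)

mid = lambda X, Y: ((X[0] + Y[0]) / 2, (X[1] + Y[1]) / 2)
assert mid(P, R) == mid(Q, S) == (1.5, -1.0)   # the two midpoints coincide
```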
2021-09-21T16:22:30
{ "domain": "bartleby.com", "url": "https://www.bartleby.com/solution-answer/chapter-18-problem-45e-precalculus-mathematics-for-calculus-6th-edition-6th-edition/9780840068071/3c65b406-9078-4416-b9cc-be5a9f8238bb", "openwebmath_score": 0.8592615127563477, "openwebmath_perplexity": 3296.8512777313076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692277960745, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747673845901 }
https://math.stackexchange.com/questions/1074106/bounds-for-general-character-sums-over-finite-fields/1074497
# Bounds for general character sums over finite fields Let $\mathbb{F}_q$ be a finite field with $q$ elements, let $\chi$ be the canonical additive character of $\mathbb{F}_q$, let $\psi$ be a non-trivial multiplicative character of $\mathbb{F}_q$, and let $f \in \mathbb{F}_q[x]$ be a polynomial. Is an upper bound known for the following general character sum, say in the style of Weil's bound? Thanks! $$\left| \sum_{x \in \mathbb{F}_q^*} \chi(f(x)) \psi(x) \right|$$ • Yes, look at the following Springer Lecture Note: W. M. Schmidt, Equations over finite fields: An elementary approach. Springer, Berlin (1976). (I used to have this book in my library until a few years ago....) – Dilip Sarwate Dec 19 '14 at 4:14 • Schmidt's book is very useful (our library copy is also mysteriously misplaced). Lidl & Niederreiter don't go into details here even though they explain the use of L-functions IMO a bit better. I benefitted from studying Michael Rosen's book in the sense that there is a more number theoretical account there. Unfortunately he doesn't go into details on the character sums. But you will recognize that those mysterious subgroups of the multiplicative group of $\Bbb{F}_q(x)$ are related to ray classes. – Jyrki Lahtonen Dec 19 '14 at 13:27 • Rosen does give Bombieri's version of the Schmidt-Stepanov method of proving the Riemann hypothesis for function fields. (If you are coding-theoretically minded, then a similar account is in Stichtenoth's book). Combining this with what you learn about L-functions from the other listed sources will go a long way. For example, the Shanbhag-Kumar-Helleseth extensions I refer to in my answer become "straightforward" exercises. – Jyrki Lahtonen Dec 19 '14 at 13:30 Yes, there is such a bound. Below I list a few variants. To avoid trivial cases we need to assume that $f(x)$ is not of the form $h(x)^p-h(x)+r$ for any polynomial $h(x)$ and any constant $r$. I also include sums where the multiplicative character also has a polynomial argument.
The non-triviality condition then takes the form that if $\psi$ is of order $d$, the polynomial $g(x)$ should not be of the form $r h(x)^d$, again for any constant $r$ and any polynomial $h(x)$. The definition of a non-trivial multiplicative character is often extended by declaring that $\psi(0)=0$. A general bound for hybrid sums with polynomial arguments is $$\left\vert\sum_{x\in\Bbb{F}_q}\chi(f(x))\psi(g(x))\right\vert\le (\deg f+\deg g-1)\sqrt{q}.$$ The sums you specifically asked about are a special case of this with $g(x)=x$ of degree $1$. Occasionally useful generalizations involve Laurent polynomials and read $$\left\vert\sum_{x\in\Bbb{F}_q}\chi(f_1(x)+f_2(\frac1x))\right\vert\le (\deg f_1+\deg f_2)\sqrt{q}$$ and $$\left\vert\sum_{x\in\Bbb{F}_q}\chi(f_1(x)+f_2(\frac1x))\psi(x)\right\vert\le (\deg f_1+\deg f_2)\sqrt{q}.$$
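The first bound can be tested numerically for a small prime field. The sketch below is purely illustrative: the choices $p = 101$, $f(x) = x^3 + 3x$, and $g(x) = x$ are arbitrary, $\chi$ is the canonical additive character $\chi(t) = e^{2\pi i t/p}$, and $\psi$ is a multiplicative character of order $p - 1$ built from the primitive root $2$ of $\mathbb{F}_{101}$:

```python
import cmath
import math

p = 101            # a small prime, so F_q = F_p = Z/pZ
f = [0, 3, 0, 1]   # f(x) = x^3 + 3x, coefficients by ascending degree
deg_f, deg_g = 3, 1

def f_val(x):
    return sum(c * pow(x, i, p) for i, c in enumerate(f)) % p

# chi: canonical additive character; psi: a multiplicative character of
# order p - 1 built from the primitive root 2 of F_101.
g = 2
dlog = {pow(g, k, p): k for k in range(p - 1)}   # discrete logarithms
chi = lambda t: cmath.exp(2j * cmath.pi * t / p)
psi = lambda x: cmath.exp(2j * cmath.pi * dlog[x] / (p - 1))

S = sum(chi(f_val(x)) * psi(x) for x in range(1, p))
bound = (deg_f + deg_g - 1) * math.sqrt(p)
print(abs(S), "<=", bound)   # well below the trivial bound p - 1 = 100
assert abs(S) <= bound
```

The non-triviality conditions hold here: a cubic over $\mathbb{F}_{101}$ cannot be of the form $h^p - h + r$, and $g(x) = x$ is not a constant times a $100$th power.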
2021-06-23T02:52:45
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1074106/bounds-for-general-character-sums-over-finite-fields/1074497", "openwebmath_score": 0.8001623153686523, "openwebmath_perplexity": 323.46568357530487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692271169855, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747669303609 }
http://mathhelpforum.com/algebra/195099-express-interval-absolute-value.html
# Thread: Express an interval with an absolute value 1. ## Express an interval with an absolute value I am looking at a problem in a review of basic set notation which reads as follows: (a) [1, 5] (b) (1, 4) (c) [-1, 6) (d) [-4, 4] The interval in (d) may be expressed in the form {x | x is a number and |x| ≤ 4}. Two of the other intervals listed can similarly be expressed with the aid of absolute values. Find the two and display the result. I assume the intervals you could do this with would be (a) and (b), but I don't really know where to go from there. Could anyone offer any hints in the right direction? Thanks! 2. ## Re: Express an interval with an absolute value Originally Posted by Ragnarok I am looking at a problem in a review of basic set notation which reads as follows: (a) [1, 5] (b) (1, 4) (c) [-1, 6) (d) [-4, 4] The interval in (d) may be expressed in the form {x | x is a number and |x| ≤ 4}. Two of the other intervals listed can similarly be expressed with the aid of absolute values. Find the two and display the result. I assume the intervals you could do this with would be (a) and (b), but I don't really know where to go from there. Could anyone offer any hints in the right direction? Thanks! The interval in (a) may be expressed in the form {x | x is a number and |x-3| ≤ 2}. The interval in (b) may be expressed in the form {x | x is a number and |x-5/2| < 3/2}. 3. ## Re: Express an interval with an absolute value In general $c \leq x \leq d \equiv |x-a| \leq b \equiv a-b \leq x \leq a+b \implies a= \frac{c+d}{2} , b = \frac{d-c}{2}$ Regards, Kalyan
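Kalyan's formulas $a = \frac{c+d}{2}$, $b = \frac{d-c}{2}$ are easy to check mechanically; a small sketch with exact rational arithmetic:

```python
from fractions import Fraction

# [c, d] = { x : |x - a| <= b } with a = (c + d)/2 and b = (d - c)/2.
def center_radius(c, d):
    return Fraction(c + d, 2), Fraction(d - c, 2)

a, b = center_radius(1, 5)
print(a, b)    # 3 2  ->  |x - 3| <= 2, interval (a)

a, b = center_radius(1, 4)
print(a, b)    # 5/2 3/2  ->  |x - 5/2| < 3/2 (strict, since (1, 4) is open)

a, b = center_radius(-4, 4)
print(a, b)    # 0 4  ->  |x| <= 4, interval (d)

# Spot check for [1, 5] on integer sample points.
assert all((abs(x - 3) <= 2) == (1 <= x <= 5) for x in range(-10, 11))
```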
2016-10-26T23:33:46
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/195099-express-interval-absolute-value.html", "openwebmath_score": 0.5842841267585754, "openwebmath_perplexity": 377.9992169884874, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692271169853, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747669303608 }
http://gate-exam.in/CS/Syllabus/Engineering-Mathematics/Mathematical-Logic/First-Order-Logic
# Questions & Answers of First Order Logic

Question No. 2 Consider the first-order logic sentence $F:\forall x\left(\exists yR\left(x,y\right)\right)$. Assuming non-empty logical domains, which of the sentences below are implied by $F$?

$\mathrm I.\;\exists y\left(\exists xR\left(x,y\right)\right)$

$\mathrm{II}.\;\exists y\left(\forall xR\left(x,y\right)\right)$

$\mathrm{III}.\;\forall y\left(\exists xR\left(x,y\right)\right)$

$\mathrm{IV}.\;\neg\exists x\left(\forall y\neg R\left(x,y\right)\right)$
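For a question like this, a brute-force check over a tiny domain is a useful sanity test. The sketch below enumerates all binary relations $R$ on a two-element domain and records which statements hold whenever $F$ holds (statement IV is read here as $\neg\exists x\left(\forall y\,\neg R(x,y)\right)$, which makes it a closed sentence). A failure is a genuine counterexample; the survivors I and IV do in fact follow in general, since I is immediate from $F$ on a non-empty domain and IV is logically equivalent to $F$:

```python
from itertools import product

D = (0, 1)                      # two-element domain
pairs = list(product(D, D))

def check(R):
    F  = all(any((x, y) in R for y in D) for x in D)           # forall x exists y
    s1 = any(any((x, y) in R for x in D) for y in D)           # I
    s2 = any(all((x, y) in R for x in D) for y in D)           # II
    s3 = all(any((x, y) in R for x in D) for y in D)           # III
    s4 = not any(all((x, y) not in R for y in D) for x in D)   # IV
    return F, (s1, s2, s3, s4)

implied = [True] * 4
for bits in product((0, 1), repeat=len(pairs)):   # all 16 relations on D
    R = {q for q, b in zip(pairs, bits) if b}
    F, stmts = check(R)
    if F:
        implied = [i and s for i, s in zip(implied, stmts)]

print(implied)   # [True, False, False, True]: only I and IV are implied
```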
2018-06-19T23:53:41
{ "domain": "gate-exam.in", "url": "http://gate-exam.in/CS/Syllabus/Engineering-Mathematics/Mathematical-Logic/First-Order-Logic", "openwebmath_score": 0.8834023475646973, "openwebmath_perplexity": 2154.7987792889826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9766692366242306, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.6532747668429643 }
https://math.stackexchange.com/questions/2042152/a-function-that-doesnt-have-directional-derivatives
# A function that doesn't have directional derivatives Let $f(x,y)=\frac{xy}{x^2+y^2}$ and $f(0,0)=0$. Now $f_1$ denotes the partial derivative of $f$ with respect to the 1st coordinate ($x$): $$f_1=\frac{x^2y+y^3-2x^2y}{(x^2+y^2)^2}, \quad f_1(0,0)=0$$ We can see that $f_1$ exists everywhere. Let's get back to the question. The question asks me to show that directional derivatives don't exist at the origin. I couldn't see why they wouldn't exist. I have $f_1(0,0)=0$ and $f_2(0,0)=0$. Define $Df(0,0)=(f_1(0,0),f_2(0,0))$, where $B$ is the direction and $Df$ is the gradient vector. By theorem, $Df_B(x,y)=B*Df(x,y)$ where $*$ is the dot product. Then the directional derivatives in all directions are $0$. Why wouldn't they exist? Can anyone give a counterexample / proof / intuition / hint? • Okay Edit: The proof that $(*)\ B*Df=D_Bf$ depends on assuming that $D_Bf$ exists. So I take my word back. But I still need a rigorous proof – math31 Dec 3 '16 at 21:29 • No, the theorem is that IF $f$ is differentiable at $(0,0)$ THEN the directional derivatives equal that. But here $f$ is not differentiable at $(0,0)$ – zhw. Dec 4 '16 at 0:28 Take a direction $u_\theta=(\cos \theta, \sin \theta)$. You have $$\frac{f((0,0)+ h u_\theta)-f(0,0)}{h}=\frac{\cos \theta \sin \theta}{h}$$ Hence the directional derivative (the limit of the above as $h \to 0$) cannot exist except for $\theta \in \{k\frac{\pi}{2} \ ; \ k \in \mathbb N\}$, i.e. in the $x$ or $y$ direction.
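The difference quotient from the answer can be seen numerically: along a diagonal direction it grows like $(1/2)/h$ and so diverges as $h \to 0$, while along the axes it is identically zero. A small sketch:

```python
import math

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x * y / (x ** 2 + y ** 2)

def diff_quotient(theta, h):
    """(f(h * u_theta) - f(0, 0)) / h for the direction u_theta."""
    return (f(h * math.cos(theta), h * math.sin(theta)) - f(0.0, 0.0)) / h

# Along theta = pi/4 the quotient is (1/2)/h, which diverges as h -> 0:
for h in (1e-1, 1e-3, 1e-5):
    print(diff_quotient(math.pi / 4, h))

# Along the axes (theta = 0, pi/2, ...) it is identically 0, so those
# directional derivatives exist and equal 0.
print(diff_quotient(0.0, 1e-5))   # 0.0
```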
2019-08-20T07:06:43
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2042152/a-function-that-doesnt-have-directional-derivatives", "openwebmath_score": 0.9299975037574768, "openwebmath_perplexity": 136.10093111512296, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692366242304, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.6532747668429643 }
http://mathhelpforum.com/discrete-math/187675-combinations-problem.html
1. ## Combinations problem I've been challenged to come up with the total possible number of combinations for a given set of ingredients. For the purposes of this challenge, AB == BA. In other words, given ingredients 'A', 'B', and 'C', the possible combinations are as follows: A, B, C, AB, AC, BC, ABC BA,CA,CB,ACB,BAC,BCA,CAB, and CBA are not counted because for our purposes order is irrelevant. Also, each item may only be used once. It's been a LONG time since I got out of school, but I can't think of an equation that satisfies this set of rules. Am I missing something obvious? Is there a more complex formula that is required? Any assistance would be greatly appreciated. 2. ## Re: Combinations problem The number of nonempty subsets of a set with n elements is $2^n-1$ (see Wikipedia).
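The answer's count of $2^n - 1$ matches a direct enumeration with the standard library; a quick sketch:

```python
from itertools import combinations

def unordered_selections(items):
    """All non-empty selections where order is irrelevant (AB == BA)."""
    out = []
    for r in range(1, len(items) + 1):
        out.extend(''.join(c) for c in combinations(items, r))
    return out

combos = unordered_selections('ABC')
print(combos)        # ['A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC']
print(len(combos))   # 7 == 2**3 - 1
```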
2017-11-25T02:25:07
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/discrete-math/187675-combinations-problem.html", "openwebmath_score": 0.7111534476280212, "openwebmath_perplexity": 255.46401303022384, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692366242306, "lm_q2_score": 0.6688802537704063, "lm_q1q2_score": 0.6532747668429643 }
https://math.stackexchange.com/questions/426376/the-action-of-a-galois-group-on-a-prime-ideal-in-a-dedekind-domain
# The action of a Galois group on a prime ideal in a Dedekind domain Let $A$ be a commutative Dedekind domain and $K$ its field of fractions. Let $L/K$ be a finite Galois extension with Galois group $G$ and let $B$ be the integral closure of $A$ in $L$. If $\frak{P}$ is a non-zero prime (maximal) ideal of $B$ is it true that as $\sigma$ runs through $G$ the prime ideals $\sigma(\frak{P})$ of $B$ are all distinct? If so, why? Any help would be very much appreciated. • That may happen and it may fail to happen. This is what the arithmetic of $A$, or if you prefer, of the extension $B\supset A$, is all about. Take $A=\mathbb Z$ and $B=\mathbb Z[i]$. Then the primes $\mathfrak P$ above rational primes $\equiv1\pmod4$ occur in conjugate pairs, while all other primes (including $(1+i)$) are selfconjugate. – Lubin Jun 21 '13 at 18:29 • @Lubin: Your comment is a perfectly good answer to the question. For various site-mechanical reasons it is better that questions actually get answered in the formal sense. So could you please leave your comment as an actual answer? – Pete L. Clark Jun 21 '13 at 18:47 • @Lubin: many thanks. – Josh F. Jun 21 '13 at 18:49 • @Lubin: I asked the question because I couldn't quite understand the proof in Neukirch's ANT of the result that (in our set-up) $G$ acts transitively on the set of all prime ideals $\frak{P}$ of $B$ lying above a given prime ideal $\frak{p}$ of $A$. The proof uses the Chinese Remainder Theorem applied to the prime ideals $\frak{P}^{\prime}$ and $\sigma\frak{P}$ for $\sigma \in G$ lying above $\frak{p}$. So let me ask you another question: are the ideals $\sigma\frak{P}$ coprime to each other? – Josh F. Jun 21 '13 at 19:03 • @user68418 Coprime and unequal are the same for non-zero primes in a Dedekind domain. Neukirch probably takes the distinct conjugates of $\mathfrak{P}$. I think you could also prove this via "prime avoidance". – TTS Jun 21 '13 at 19:24
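Lubin's $\mathbb{Z}[i]$ example from the comment can be checked computationally for the primes that are sums of two squares (i.e. $p = 2$ or $p \equiv 1 \pmod 4$): the conjugate primes $a + bi$ and $a - bi$ generate the same ideal exactly when their ratio $(a+bi)^2/p$ is a Gaussian integer, since that ratio always has absolute value $1$ and a Gaussian integer of absolute value $1$ is a unit. A sketch:

```python
def two_squares(p):
    """Return (a, b) with a^2 + b^2 = p, assuming p = 2 or p = 1 (mod 4)."""
    for a in range(1, int(p ** 0.5) + 1):
        b = int((p - a * a) ** 0.5)
        if a * a + b * b == p:
            return a, b
    raise ValueError(f"{p} is not a sum of two squares")

def self_conjugate(p):
    """True iff the Gaussian primes a+bi and a-bi above p give the same ideal."""
    a, b = two_squares(p)
    # (a+bi)/(a-bi) = (a+bi)^2 / p is a Gaussian integer iff p divides
    # both a^2 - b^2 and 2ab.
    return (a * a - b * b) % p == 0 and (2 * a * b) % p == 0

print(self_conjugate(2))    # True: (1+i) and (1-i) generate the same ideal
print(self_conjugate(5))    # False: (2+i) and (2-i) are distinct primes
print(self_conjugate(13))   # False
```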
2019-04-18T16:24:42
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/426376/the-action-of-a-galois-group-on-a-prime-ideal-in-a-dedekind-domain", "openwebmath_score": 0.876933753490448, "openwebmath_perplexity": 149.16837214196866, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378964, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747664761317 }
https://math.stackexchange.com/questions/2981020/why-do-you-set-a-system-of-linear-equations-0/2981039
# Why do you set a system of linear equations = 0? If you have a set of vectors, $V = \{v_1, v_2, v_3\}$, with each vector containing 3 elements $(x,y,z)$, and you want to know if all of $V$ spans a vector space $\mathbb{R}^n$, my understanding is that you want to set up a system of linear equations and set them $= 0$. Is the reason for this because you are checking to see if the zero vector $(0,0,0)$, the smallest subspace of a vector space, is a linear combination of $V$? My thoughts come from the below: If you want to check if one of the vectors, $v_1$, is a linear combination of the other vectors, you would set up a system of linear equations where $v_1 = x_1v_2 + x_2v_3$. Recall that by definition three vectors are linearly dependent if and only if $$x_1v_1+x_2v_2+x_3v_3=0$$ for some $x_i$ not all equal to zero. If we consider only the system $v_1=x_1v_2+x_2v_3$ we can only check that $v_1$ is a linear combination of $v_2$ and $v_3$, but we can't check if $v_2$ and $v_3$ are linearly dependent. • Trivial example: $v_2=v_3=0$, $v_1\ne0$. For a not so trivial example, take $v_2=kv_3$ and $v_1$ not a scalar multiple of either. – amd Nov 2 '18 at 0:21 Once you have three vectors you have a vector space which is the span of your vectors. It is the question of the dimension of that vector space that brings linear dependence or independence onto the scene. Your span is a one-, two-, or three-dimensional space, depending on how many of them are linearly independent, assuming that they are not all zero vectors.
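The "set it equal to zero" test amounts to computing the rank of the matrix whose rows are the vectors: the homogeneous system $x_1v_1 + x_2v_2 + x_3v_3 = 0$ has only the trivial solution, and the three vectors span $\mathbb{R}^3$, exactly when the rank is 3. A self-contained sketch with exact rational arithmetic (the example vectors are made up):

```python
from fractions import Fraction

def rank(vectors):
    """Rank of the matrix whose rows are the given vectors (Gauss-Jordan)."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r, lead = 0, 0
    while r < len(rows) and lead < len(rows[0]):
        piv = next((i for i in range(r, len(rows)) if rows[i][lead]), None)
        if piv is None:
            lead += 1
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][lead] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][lead]:
                factor = rows[i][lead]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r, lead = r + 1, lead + 1
    return r

# v3 = v1 + v2, so these are linearly dependent and do not span R^3.
v1, v2, v3 = (1, 0, 2), (0, 1, 1), (1, 1, 3)
print(rank([v1, v2, v3]))                        # 2
print(rank([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # 3: these do span R^3
```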
2020-06-01T23:47:50
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2981020/why-do-you-set-a-system-of-linear-equations-0/2981039", "openwebmath_score": 0.7705453038215637, "openwebmath_perplexity": 102.5894252258796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378964, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747664761317 }
http://mathhelpforum.com/statistics/124753-very-easy-question.html
Thread: very easy question! 1. very easy question! hello everyone! i have a very easy question for you guys!!!! but very hard for me!! can you tell me what is the percentage that 7 + a random number between 2 and 12 will be greater than 5 + a random number between 2 and 12? if you could tell me how to find this out, if its not too hard, i would really appreciate it!! i am working on a role playing game Francois 2. Originally Posted by frankinthecity hello everyone! i have a very easy question for you guys!!!! but very hard for me!! can you tell me what is the percentage that 7 + a random number between 2 and 12 will be greater than 5 + a random number between 2 and 12? if you could tell me how to find this out, if its not too hard, i would really appreciate it!! i am working on a role playing game Here are a few questions for you first. What do you mean by "a random number between 2 and 12"? Do you mean a whole number, and are the values 2 and 12 included or do you mean an integer strictly between 2 and 12 (in other words not including the end points)? Also, when you say a "random" number, do you mean that each value from 2 to 12 is equally probable, or are you perhaps thinking of throwing two dice, in which case the result is much more likely to be 7 than 2 for example? If you can state the problem a bit more precisely then we should be able to show you how to do the calculation. 3. Hello Opalg! thank you for your help, you got it, its 2 six sided dice! i understand now that if it is dice, its not exactly a random number between 2 and 12!!! you have a lot more chances to get 7 like you say no wonder i am having problems finding the answer, its a lot more complicated than i thought!!! Francois 4. Okay, so this is more complicated than you first thought! In the table below, the top row shows the possible outcomes from 2 to 12 when you throw two dice.
For each outcome, the second row shows the probability of that outcome occurring, and the bottom row shows the probability of the outcome being at least that much. $\begin{array}{c|ccccccccccc}k&2&3&4&5&6&7&8&9&10&11&12\\ \\P(k)& \frac1{36} & \frac2{36} & \frac3{36} & \frac4{36} & \frac5{36} & \frac6{36} & \frac5{36}& \frac4{36}& \frac3{36} & \frac2{36}& \frac1{36}\\ \\ P(\geqslant k)& \frac{36}{36} & \frac{35}{36} & \frac{33}{36} & \frac{30}{36} & \frac{26}{36} & \frac{21}{36} & \frac{15}{36} & \frac{10}{36} & \frac6{36} & \frac3{36} & \frac1{36} \end{array}$ Now, if the first player (the one with a current score of 5) throws the dice and scores k, then the second player (currently with 7) needs to score at least k − 1 in order for their total to remain greater than that of the other player. If the first player rolls the dice and gets 2 or 3, there is no way that they can go ahead. But if they get 4, then the second player must get at least 3. The probability of that happening is 3/36 (= probability of the first player scoring 4) times 35/36 (= probability of the second player scoring at least 3). Adding up the probability for each of these combinations, the total probability of the second player remaining ahead after both players have rolled the dice is $\frac{1\times36 + 2\times 36 + 3\times35 + 4\times 33 + 5\times 30 + 6\times26 + 5\times21 + 4\times 15 + 3\times10+2\times6+ 1\times3}{36\times36}\approx 0.66435...$ If you want it as a percentage then the answer is approx. 66.4%. 5. Opalg, thank you so much ok, i understand the first line, i even understand the second line lol but the third i am not sure, listen, if its really too hard to explain on a forum, (i know forums have their limits hehe) just give me another example, how about 7 and 6? maybe i will be able to figure out by myself..
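The table and the 0.66435 figure can be reproduced by brute force over all dice outcomes; a short sketch:

```python
from fractions import Fraction
from itertools import product

# Distribution of the sum of two six-sided dice.
totals = [a + b for a, b in product(range(1, 7), repeat=2)]
p = {k: Fraction(totals.count(k), 36) for k in range(2, 13)}

# P(sum >= k), the bottom row of the table.
p_at_least = {k: sum(p[j] for j in range(k, 13)) for k in range(2, 13)}

# Probability the player on 7 stays ahead of the player on 5:
# if the first player rolls k, the second needs at least k - 1.
win = sum(p[k] * sum(p[j] for j in range(max(k - 1, 2), 13))
          for k in range(2, 13))
print(win, float(win))  # 287/432 (= 861/1296), about 0.66435
```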
what i need is not complicated at all, i need something like this (2 six sided dice = 2d6):
7+2d6 VERSUS 5+2d6 = 66%
7+2d6 VERSUS 6+2d6 =
7+2d6 VERSUS 7+2d6 = 50% (that one i found out by myself LOL)
7+2d6 VERSUS 8+2d6 =
7+2d6 VERSUS 9+2d6 =
7+2d6 VERSUS 10+2d6 =
8+2d6 VERSUS 5+2d6 =
8+2d6 VERSUS 6+2d6 = 66% (i think we can apply the same formula here??)
8+2d6 VERSUS 7+2d6 =
8+2d6 VERSUS 8+2d6 = 50%
8+2d6 VERSUS 9+2d6 =
8+2d6 VERSUS 10+2d6 =
9+2d6 VERSUS 5+2d6 =
9+2d6 VERSUS 6+2d6 =
9+2d6 VERSUS 7+2d6 = 66%
9+2d6 VERSUS 8+2d6 =
9+2d6 VERSUS 9+2d6 = 50%
9+2d6 VERSUS 10+2d6 =
10+2d6 VERSUS 5+2d6 =
10+2d6 VERSUS 6+2d6 =
10+2d6 VERSUS 7+2d6 =
10+2d6 VERSUS 8+2d6 = 66%
10+2d6 VERSUS 9+2d6 =
10+2d6 VERSUS 10+2d6 = 50%
11+2d6 VERSUS 5+2d6 =
11+2d6 VERSUS 6+2d6 =
11+2d6 VERSUS 7+2d6 =
11+2d6 VERSUS 8+2d6 =
11+2d6 VERSUS 9+2d6 = 66%
11+2d6 VERSUS 10+2d6 =
12+2d6 VERSUS 5+2d6 =
12+2d6 VERSUS 6+2d6 =
12+2d6 VERSUS 7+2d6 =
12+2d6 VERSUS 8+2d6 =
12+2d6 VERSUS 9+2d6 =
12+2d6 VERSUS 10+2d6 = 66%
if you give me the 7 and 6 example, and i can apply it to 8 and 7, 9 and 8, and so forth, there is only a few more i need to find! Francois 6. This "very easy question" gets more and more complicated. The original question was "Can you tell me what is the percentage that 7 + a random number between 2 and 12 will be greater than 5 + a random number between 2 and 12?" The answer to the question depends on what is meant by "greater than". The natural interpretation is that A's score (x) is greater than B's score (y) if $x>y$. A second interpretation would be to allow also the possibility that $x=y$, and to say that A's score is greater if $x\geqslant y$. A third interpretation (which sounds strange but is actually quite natural) is to split the difference between the first two interpretations. Suppose that A and B start out with equal scores. They then each throw a couple of dice.
There is a probability of approximately 11.2% that they will both throw the same total, and a probability of about 44.4% that B will throw a higher score than A. So the probability that B then has the greater score is 44.4% if you use the first interpretation of "greater". It is 55.6% if you use the second interpretation, and it is 50% if you use the third interpretation and split the difference between 44.4% and 55.6%. Originally Posted by frankinthecity (2 six sided dice = 2d6) 7+2d6 VERSUS 5+2d6 = 66% 7+2d6 VERSUS 6+2d6 = 7+2d6 VERSUS 7+2d6 = 50% (that one i found out by myself LOL) In those calculations, I used the first interpretation to get the 66%, but you used the third interpretation to get the 50%. If I had used the third interpretation then my answer would have been 7+2d6 VERSUS 5+2d6 = 71%. 7. if a and b are equal then it doesn't count and i start over, 1 number has to be higher than the other, so its really the natural interpretation that is good. and for the 7+2d6 VS 7+2d6, i didn't do the calculation to find out, i only said 50% because i said to myself "hey 7 and 7, chances are pretty equal, must be 50%..." but again, maybe i am wrong?? so to keep it simple i will just ask: can you tell me what is the percentage that 7+2d6 will be greater than 6+2d6? if you could do the same calculation like you did for the one before maybe i will figure it out and be able to calculate the other stats i need by myself thank you again for your patience Francois 8. i was able to figure it out, someone showed me how to work with PARI thanks a bunch Francois
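The three interpretations of "greater" discussed in this thread can be compared directly by enumerating both players' rolls; a sketch:

```python
from itertools import product

# All 36 equally likely totals of two six-sided dice.
rolls = [a + b for a, b in product(range(1, 7), repeat=2)]

def matchup(bonus_a, bonus_b):
    """Return (P(A strictly ahead), P(tie), P(B strictly ahead))
    when A scores bonus_a + 2d6 and B scores bonus_b + 2d6."""
    a_wins = ties = b_wins = 0
    for ra, rb in product(rolls, repeat=2):
        ta, tb = bonus_a + ra, bonus_b + rb
        if ta > tb:
            a_wins += 1
        elif ta == tb:
            ties += 1
        else:
            b_wins += 1
    n = len(rolls) ** 2
    return a_wins / n, ties / n, b_wins / n

# Equal bonuses: roughly 44.4% strict win each way, 11.3% tie.
print(matchup(7, 7))
# The 7-vs-5 matchup from the earlier post: the player on 7
# stays strictly ahead with probability 861/1296, about 66.4%.
print(matchup(5, 7)[2])
```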
2013-12-11T10:36:38
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/statistics/124753-very-easy-question.html", "openwebmath_score": 0.6911736726760864, "openwebmath_perplexity": 747.2655961105762, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378964, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747664761317 }
https://web2.0calc.com/questions/counting_38418
Counting: Using the digits 1, 2, 3, 4, 5, how many even three-digit numbers less than 500 can be formed if each digit can be used at most once? Apr 24, 2022 #1 There are 4 choices for the left-most digit (1, 2, 3, or 4; 5 can't be chosen), then there are 4 choices for the middle digit and 3 choices for the right-most digit. This means that there are 4 x 4 x 3 = 48 choices. Apr 25, 2022 #2 The number has to be even, so there are 2 cases: the final digit is a 2, or the final digit is a 4. If the final digit is a 2: 3 choices for the first number (can't be 2 or 5), 3 choices for the next number (can't be the number just chosen or a 2), 1 choice for the final number (must be a 2). We can do the same thing for the other case (the final digit is a 4), so there are $$3 \times 3 \times 2 = \color{brown}\boxed{18}$$ numbers that work. Apr 25, 2022
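The second answer's total of 18 is easy to confirm by brute-force enumeration:

```python
from itertools import permutations

# Even three-digit numbers less than 500 built from the digits
# 1-5, each digit used at most once.
count = sum(
    1
    for d in permutations("12345", 3)
    if d[0] in "1234"   # less than 500
    and d[2] in "24"    # even
)
print(count)  # 18
```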
2023-02-08T17:57:14
{ "domain": "0calc.com", "url": "https://web2.0calc.com/questions/counting_38418", "openwebmath_score": 0.6806405782699585, "openwebmath_perplexity": 266.4217716875319, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378964, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747664761317 }
https://www.coursehero.com/file/11192210/Homework-7-Solution-Spring-2015-on-Number-Theory/
# Homework 7 Solution Spring 2015 on Number Theory - MATH 470... MATH 470 Homework 7 Solutions [1] Use the $p-1$ factoring method to compute the factors of $n = 17513$. Depending on how you code this you may have to compute $b = a^{B!} \pmod n$ by using the sequence $b_1 = a \pmod n$, $b_j = b_{j-1}^{\,j} \pmod n$ for $j = 2, \dots, B$. Simply compute $d = \gcd(b - 1, n)$ to see if you get a nontrivial factor. Only a modest, single-digit value of $B$ is needed here, so you should show all the steps of the calculation.
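A short sketch of the computation described above (the base a = 2 and bound B = 7 are choices of mine, not fixed by the excerpt); with B = 7 this already splits n = 17513:

```python
from math import gcd

def p_minus_1(n, B, a=2):
    """Pollard's p-1 method: compute b = a^(B!) mod n via the
    recurrence b_j = b_{j-1}^j (mod n), then return gcd(b - 1, n)."""
    b = a % n
    for j in range(2, B + 1):
        b = pow(b, j, n)
    return gcd(b - 1, n)

d = p_minus_1(17513, 7)
print(d, 17513 // d)  # 211 83 -- works because 211 - 1 = 210 divides 7!
```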
2022-01-25T14:37:00
{ "domain": "coursehero.com", "url": "https://www.coursehero.com/file/11192210/Homework-7-Solution-Spring-2015-on-Number-Theory/", "openwebmath_score": 0.8621098399162292, "openwebmath_perplexity": 592.698152076143, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378964, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747664761317 }
http://mathhelpforum.com/algebra/83947-simplifying-fractions-exponents-print.html
# Simplifying fractions with exponents • Apr 15th 2009, 05:09 PM HeidiHectic Simplifying fractions with exponents I have to simplify this and I'm totally confused; step-by-step help would be so much appreciated! $ \left( \frac {-6x^2y}{2xy^3} \right)^3 $ • Apr 15th 2009, 05:21 PM Reckoner Quote: Originally Posted by HeidiHectic I have to simplify this and I'm totally confused; step-by-step help would be so much appreciated! $ \left( \frac {-6x^2y}{2xy^3} \right)^3 $ Have you made any attempt at the problem? You should be familiar with these properties of exponentiation: $(ab)^n=a^nb^n$ $\left(\frac ab\right)^n=\frac{a^n}{b^n}$ $a^{-n}=\frac1{a^n}$ $a^ma^n=a^{m+n}$ $\frac{a^m}{a^n}=a^{m-n}$ (where $a,\,b,\,m$ and $n$ have values such that the above expressions are defined) Note that a variable without an exponent can be thought of as being raised to the 1st power. • Apr 15th 2009, 05:29 PM HeidiHectic yes, I understand those; I'm just not sure what to do after you distribute the exponent. • Apr 15th 2009, 05:36 PM Reckoner Quote: Originally Posted by HeidiHectic yes, I understand those; I'm just not sure what to do after you distribute the exponent. $\left(\frac{-6x^2y}{2xy^3}\right)^3$ $=\frac{-216x^6y^3}{8x^3y^9}$ $=\frac{-216}{8}\cdot\frac{x^6}{x^3}\cdot\frac{y^3}{y^9}$ $=\frac{-27\cdot8}{8}\cdot\frac{x^6}{x^3}\cdot\frac{y^3}{y^9}$ Now can you see how to continue? • Apr 15th 2009, 05:40 PM e^(i*pi) Quote: Originally Posted by HeidiHectic I have to simplify this and I'm totally confused; step-by-step help would be so much appreciated! $ \left( \frac {-6x^2y}{2xy^3} \right)^3 $ I'd cancel a factor of 2, x and y before cubing: $ \left( \frac {-3x}{y^2} \right)^3 = \frac{-27x^3}{y^6} $ • Apr 15th 2009, 05:40 PM Reckoner Quote: Originally Posted by e^(i*pi) but 6^3 is 216? That's funny. Where did I get 36? It is a mystery. Quote: Originally Posted by e^(i*pi) I'd cancel a factor of 2, x and y before cubing This too.
I went the other way because Heidi said she (or he) cubed first, but this is indeed easier.
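Both routes (cancel first or cube first) end at the same result, $-27x^3/y^6$, which a quick numeric spot-check confirms; the sample points below are arbitrary (just nonzero in x and y):

```python
def original(x, y):
    """The unsimplified expression ((-6x^2 y) / (2x y^3))^3."""
    return ((-6 * x**2 * y) / (2 * x * y**3)) ** 3

def simplified(x, y):
    """The simplified form -27 x^3 / y^6."""
    return -27 * x**3 / y**6

# Spot-check at a few points, avoiding x = 0 or y = 0.
for x, y in [(1.0, 2.0), (-3.0, 0.5), (2.5, -1.5)]:
    assert abs(original(x, y) - simplified(x, y)) < 1e-9 * abs(simplified(x, y))
print("simplification verified")
```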
2017-04-26T11:22:37
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/83947-simplifying-fractions-exponents-print.html", "openwebmath_score": 0.914505124092102, "openwebmath_perplexity": 1340.0731464224225, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9766692264378963, "lm_q2_score": 0.6688802603710086, "lm_q1q2_score": 0.6532747664761316 }
http://internet.churchatchapelhill.com/weather-in-nqlwsi/c9ae4d-how-to-find-the-equation-of-a-parabola
So the simplest thing to start here is: let's just square both sides, so we get rid of the radicals. In simple terms, substitute the vertex's coordinates for h and k in the vertex form; we just have to put the values of h & k in the parabola equation. Finding the Equation of a Parabola Given Focus and Directrix: given the focus and directrix of a parabola, how do we find the equation of the parabola? Notice that here we are working with a parabola with a vertical axis of symmetry, so the x-coordinate of the focus is the same as the x-coordinate of the vertex. Determine the horizontal or vertical axis of symmetry. Find the parabola's vertex, or "turning point", which is found by using the value obtained when finding the axis of symmetry and plugging it into the equation to determine what y equals. Also, let FM be perpendicular to th… Since you know the vertex is at (1,2), you'll substitute in h = 1 and k = 2, which gives you the following: The last thing you have to do is find the value of a. y = k - p. This short tutorial helps you learn how to find the vertex, focus, and directrix of a parabola equation with an example using the formulas. Solution to Example 2: The graph has a vertex at $$(2,3)$$. As a general rule, when you're working with problems in two dimensions, you're done when you have only two variables left. The equation of the parabola is given by $y = 3x^2 - 2x - 2$. Example 4, Graph of parabola given diameter and depth: Find the equation of the parabolic reflector with diameter D = 2.3 meters and depth d = 0.35 meters and the coordinates of its focus.
Another way of expressing the equation of a parabola is in terms of the coordinates of the vertex (h,k) and the focus. This tutorial focuses on how to identify the line of symmetry. In math terms, a parabola is the shape you get when you slice through a solid cone at an angle that's parallel to one of its sides, which is why it's known as one of the "conic sections." Hence the equation of the parabola in vertex form may be written as $$y = a(x - 2)^2 + 3$$. We now use the y-intercept at $$(0,-1)$$ to find the coefficient $$a$$: $$-1 = a(0 - 2)^2 + 3$$. Solve the above for $$a$$ to obtain $$a = -1$$. The equation of the parabola whose graph is shown above is $$y = -(x - 2)^2 + 3$$. Example 3, Graph of parabola given three points: Find the equation of the parabola whose graph is shown below. We saw that $y = a(x - h)^2 + k$. Using Pythagoras's theorem we can prove that the coefficient $a = \frac{1}{4p}$, where p is the distance from the focus to the vertex. These variables are usually written as x and y, especially when you're dealing with "standardized" shapes such as a parabola. When we graphed linear equations, we often used the x- and y-intercepts to help us graph the lines. Finding the coordinates of the intercepts will help us to graph parabolas, too. ⇒ $y^2 = 8x$, which is the required equation of the parabola. If you have the equation of a parabola in vertex form $y = a(x - h)^2 + k$, then the vertex is at (h, k) and the focus is at $(h, k + \frac{1}{4a})$. For example, let the given vertex be (4, 5). Take the derivative of the parabola. Parabolas have equations of the form $ax^2 + bx + c = y$. A little simplification gets you the following: $5 = a(2)^2 + 2$, which can be further simplified to $3 = 4a$, so $a = \frac34$. Now that you've found the value of a, substitute it into your equation to finish the example: $y = \frac34(x - 1)^2 + 2$ is the equation for a parabola with vertex (1,2) and containing the point (3,5). Next, substitute the parabola's vertex coordinates (h, k) into the formula you chose in Step 1.
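The vertex-form fit described here (substitute the vertex, then one more point, and solve for the coefficient) reduces to a single linear equation for a; a sketch using the vertex (1,2) and point (3,5) mentioned in the text:

```python
def fit_vertex_form(vertex, point):
    """Given the vertex (h, k) and another point (x, y) on a vertically
    opening parabola y = a(x - h)^2 + k, solve for the coefficient a."""
    h, k = vertex
    x, y = point
    if x == h:
        raise ValueError("second point must differ from the vertex in x")
    return (y - k) / (x - h) ** 2

a = fit_vertex_form((1, 2), (3, 5))
print(a)  # 0.75, i.e. y = (3/4)(x - 1)^2 + 2
```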
This calculator will find either the equation of the parabola from the given parameters or the axis of symmetry, eccentricity, latus rectum, length of the latus rectum, focus, vertex, directrix, focal parameter, x-intercepts, y-intercepts of the entered parabola. Let F be the focus and l, the directrix. Also, the directrix x = – a. \)The equation of the parabola is given by$$y = 0.26 x^2$$The focus of the parabolic reflector is at the point$$(p , 0) = (0.94 , 0 )$$, Find the equation of the parabola in each of the graphs below, Find The Focus of Parabolic Dish Antennas. When building a parabola always there must be an axis of symmetry. This way we find the parabola equation by 3 points. A tangent to a parabola is a straight line which intersects (touches) the parabola exactly at one point. We know that a quadratic equation will be in the form: y = ax 2 + bx + c Our job is to find the values of a, b and c after first observing the graph. Let m=1/t Hence equation of tangent will be $\frac{y}{m}\,=\,x\,+\,\frac{a}{m^2}$ From the practical side, this approach is not the most pleasant ”, however, it gives a clear result, on the basis of which the curve itself is subsequently built. Remember, if the parabola opens vertically (which can mean the open side of the U faces up or down), you'll use this equation: And if the parabola opens horizontally (which can mean the open side of the U faces right or left), you'll use this equation: Because the example parabola opens vertically, let's use the first equation. The axis of symmetry is the line $$x = -\frac{b}{2a}$$ p = 0.94 Know the equation of a parabola. The standard form of a parabola's equation is generally expressed: $y = ax^2 + bx + c$ The role of 'a' If $$a > 0$$, the parabola opens upwards ; if $$a ; 0$$ it opens downwards. The line of symmetry is always a vertical line of the form x = n, where n is a real number. is it correct? 
The easiest way to find the equation of a parabola is by using your knowledge of a special point, called the vertex, which is located on the parabola itself. To graph a parabola, visit the parabola grapher (choose the "Implicit" option). Example 1: Those. Hence the equation of the parabola may be written as$$y = a(x + 1)(x - 2)$$We now need to find the coefficient $$a$$ using the y intercept at $$(0,-2)$$$$-2 = a(0 + 1)(0 - 2)$$Solve the above equation for $$a$$ to obtain$$a = 1$$The equation of the parabola whose graph is given above is$$y = (x + 1)(x - 2) = x^2 - x - 2$$, Example 2 Graph of parabola given vertex and a pointFind the equation of the parabola whose graph is shown below. When the vertex of a parabola is at the ‘origin’ and the axis of symmetryis along the x or y-axis, then the equation of the parabola is the simplest. SoftSchools.com: Writing the Equation of Parabolas. \)Simplify and rewrite as$$Your very first priority has to be deciding which form of the vertex equation you'll use. If you see a quadratic equation in two variables, of the form ​y = ax2 + bx + c​, where a ≠ 0, then congratulations! The axis of symmetry . Lisa studied mathematics at the University of Alaska, Anchorage, and spent several years tutoring high school and university students through scary -- but fun! To do that choose any point (​x,y​) on the parabola, as long as that point is not the vertex, and substitute it into the equation. -- math subjects like algebra and calculus. Use root factoring to find the equation of each of the parabola shown below. How do you find the equation of a parabola given three points? 
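Root factoring, as in Example 1 above, amounts to writing y = a(x − r1)(x − r2) and pinning down a with one extra point; a sketch using the x-intercepts −1 and 2 and the y-intercept (0, −2) from that example:

```python
def fit_from_roots(r1, r2, point):
    """Parabola with x-intercepts r1 and r2: y = a(x - r1)(x - r2).
    Determine a from one additional point (x0, y0)."""
    x0, y0 = point
    return y0 / ((x0 - r1) * (x0 - r2))

a = fit_from_roots(-1, 2, (0, -2))
print(a)  # 1.0, so y = (x + 1)(x - 2) = x^2 - x - 2
```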
The simplest equation for a parabola is y = x2 Turned on its side it becomes y2 = x(or y = √x for just the top half) A little more generally:y2 = 4axwhere a is the distance from the origin to the focus (and also from the origin to directrix)The equations of parabolas in different orientations are as follows: Steps to Find Vertex Focus and Directrix Of The Parabola Step 1. 0. Here is a quick look at four such possible orientations: Of these, let’s derive the equation for the parabola shown in Fig.2 (a). Use these points to write the system of equations\( Example 1 : Determine the equation of the tangent to the curve defined by f (x) = x3+2x2-7x+1 Examples are presented along with their detailed solutions and exercises. Because the equation of the parabola is . Once you have this information, you can find the equation of the parabola in three steps. Several methods are used to find equations of parabolas given their graphs. Solution to Example 3The equation of a parabola with vertical axis may be written as\( y = a x^2 + b x + c$$Three points on the given graph of the parabola have coordinates $$(-1,3), (0,-2)$$ and $$(2,6)$$. Remember, at the y-intercept the value of $$x$$ is zero. The formula of the axis of symmetry for writing (2) will look like this: (6). \begin{array}{lcl} a - b + c & = & 3 \\ c & = & -2 \\ 4 a + 2 b + c & = & 6 \end{array} But if you're shown a graph of a parabola (or given a little information about the parabola in text or "word problem" format), you're going to want to write your parabola in what's known as vertex form, which looks like this: ​y = a(x - h)2 + k​ (if the parabola opens vertically), ​x = a(y - k)2 + h​ (if the parabola opens horizontally). eval(ez_write_tag([[250,250],'analyzemath_com-medrectangle-3','ezslot_10',320,'0','0']));Solution to Example 1The graph has two x intercepts at $$x = - 1$$ and $$x = 2$$. In real-world terms, a parabola is the arc a ball makes when you throw it, or the distinctive shape of a satellite dish. 
but i have no idea what … Copyright 2021 Leaf Group Ltd. / Leaf Group Media, All Rights Reserved. You've found a parabola. The directrix is given by the equation. In each case, write the parabola's equation in root factored form and in the general y = a … As we know, the Parabola equation and vertex (h,k) are given to us. Equation of a Parabola in Terms of the Coordinates of the Focus. \begin{array}{lcl} a (-1)^2 + b (-1) + c & = & 3 \\ a (0)^2 + b (0) + c & = & -2 \\ a (2)^2 + b (2) + c & = & 6 \end{array} How to find the equation of a parabola given the tangent equations to two points? In either formula, the coordinates (h,k) represent the vertex of the parabola, which is the point where the parabola's axis of symmetry crosses the line of the parabola itself. The parabola can either be in "legs up" or "legs down" orientation. If you are given 3 points, you should substitute each of the points into the equation in turn for the variables x and y, so that you will have 3 equations each with the unknowns a, b, and c. \)Solve the above 3 by 3 system of linear equations to obtain the solution$$a = 3 , b=-2$$ and $$c=-2$$The equation of the parabola is given by$$y = 3 x^2 - 2 x - 2$$, Example 4 Graph of parabola given diameter and depthFind the equation of the parabolic reflector with diameter D = 2.3 meters and depth d = 0.35 meters and the coordinates of its focus. As can be seen in the diagram, the parabola has focus at (a, 0) with a > 0. If we consider only parabolas that open upwards or downwards, then the directrix will be a horizontal line of the form y = c . Each parabola has a line of symmetry. Solution to Example 4The parabolic reflector has a vertex at the origin $$(0,0)$$, hence its equation is given by$$y = \dfrac{1}{4p} x^2$$The diameter and depth given may be interpreted as a point of coordinates $$(D/2 , d) = (1.15 , 0.35)$$ on the graph of the parabolic reflector. 
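Fitting y = ax² + bx + c through three points, as in Example 3, means solving a 3×3 linear system; a pure-Python sketch using Cramer's rule, checked against the example's points (−1,3), (0,−2), (2,6):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def fit_three_points(p1, p2, p3):
    """Coefficients (a, b, c) of y = a x^2 + b x + c through three
    points, via Cramer's rule on the system a x_i^2 + b x_i + c = y_i."""
    pts = (p1, p2, p3)
    A = [[x * x, x, 1] for x, _ in pts]
    y = [v for _, v in pts]
    D = det3(A)
    coeffs = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = y[r]
        coeffs.append(det3(M) / D)
    return tuple(coeffs)

print(fit_three_points((-1, 3), (0, -2), (2, 6)))  # (3.0, -2.0, -2.0)
```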
Hence the equation$$0.35 = \dfrac{1}{4p} (1.15)^2$$Solve the above equation for $$p$$ to find$$Learn how to use either a graph or an equation to find this line. Given that the turning point of this parabola is (-2,-4) and 1 of the roots is (1,0), please find the equation of this parabola. The directrix of the parabola is the horizontal line on the side of the vertex opposite of the focus. You're told that the parabola's vertex is at the point (1,2), that it opens vertically and that another point on the parabola is (3,5). Quickly master how to find the quadratic functions for given parabolas. In this case, you've already been given the coordinates for another point on the vertex: (3,5). Using the slope formula, set the slope of each tangent line from (1, –1) to . i have calculated, that the slope for the line is -1/4. Or to put it another way, if you were to fold the parabola in half right down the middle, the vertex would be the "peak" of the parabola, right where it crossed the fold of paper. If you're being asked to find the equation of a parabola, you'll either be told the vertex of the parabola and at least one other point on it, or you'll be given enough information to figure those out. So, to find the y-intercept, we substitute \(x=0$$ into the equation.. Let’s find the y-intercepts of the two parabolas shown in the figure below. The quadratic equation is sometimes also known as the "standard form" formula of a parabola. find the equation of parabola with given two points B (2, 1) and C (4, 3) and slope of the tangent line to the parabola matches the slope of the line goes through A (0, 1.5) and B (2, 1). we can find the parabola's equation in vertex form following two steps : Step 1: use the (known) coordinates of the vertex, ( h, k), to write the parabola 's equation in the form: y = a ( x − h) 2 + k. the problem now only consists of having to find the value of the coefficient a . 1. 
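The reflector computation above can be sketched the same way: the rim point (D/2, d) on the standard vertical-axis model y = x²/(4p) determines the focal distance p, and the focus then sits on the axis at height p above the vertex, i.e. at (0, p):

```python
def reflector_focal_distance(diameter, depth):
    """For a parabolic reflector y = x^2 / (4p), the rim point
    (diameter/2, depth) gives depth = (diameter/2)^2 / (4p);
    solve for the focal distance p."""
    return (diameter / 2) ** 2 / (4 * depth)

p = reflector_focal_distance(2.3, 0.35)
print(round(p, 2))  # 0.94, focus at (0, 0.94)
```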
So you'll substitute in x = 3 and y = 5, which gives you: Now all you have to do is solve that equation for ​a​. Example 1 Graph of parabola given x and y interceptsFind the equation of the parabola whose graph is shown below. You're gonna get an equation for a parabola that you might recognize, and it's gonna be in terms of a general focus, (a,b), and a gerneral directrix, y equals k, so let's do that. for y. How to solve: Find the equation of a parabola with directrix x = 2 and focus (-2, 0). Find the Roots, or X-Intercepts, by solving the equation and determining the values for x when f(x) = f(0) = y = 0. The easiest way to find the equation of a parabola is by using your knowledge of a special point, called the vertex, which is located on the parabola itself. Also known as the axis of symmetry, this line divides the parabola into mirror images. Standard Form Equation. 3. Recognizing a Parabola Formula If you see a quadratic equation in two variables, of the form y = ax 2 + bx + c , where a ≠ 0, then congratulations! I started off by substituting the given numbers into the turning point form. Hi there, There are already few answers given to this question. $0=a(x+2)^2-4$ but i do not know where to put the roots in and form an equation.Please help thank you. With all those letters and numbers floating around, it can be hard to know when you're "done" finding a formula! If the coefficient a in the equation is positive, the parabola opens upward (in a vertically oriented parabola), like the letter "U", and its vertex is a minimum point. Equation of a (rotated) parabola given two points and two tangency conditions at those points. Imagine that you're given a parabola in graph form. Comparing it with y2 =4ax we get 4a =8 ⇒ a= 48 = 2 ∴ Length of the latus rectum =4a =4×2= 8 Let's do an example problem to see how it works. Find the equation of parabola, when tangent at two points and vertex is given. 
The standard equation of a parabola is: STANDARD EQUATION OF A PARABOLA: Let the vertex be (h, k) and p be the distance between the vertex and the focus and p ≠ 0. I would like to add some more information. you can take a general point on the parabola, (x, y) and substitute. The general equation of a parabola is y = ax 2 + bx + c. It can also be written in the even more general form y = a(x – h)² + k, but we will focus here on the first form of the equation. Equation of tangent to parabola Hence 1/t is the slope of tangent at point P(t). The radicals seen in the vertex: ( 6 ) h, )! Given numbers into the formula of a parabola, visit the parabola is the horizontal line the. So we get rid of the form a x 2 + b how to find the equation of a parabola + c y... Just have to put the values of h & how to find the equation of a parabola in the parabola focus. Side of the parabola into mirror images is the horizontal line on the vertex how to find the equation of a parabola s coordinates for point. ( 2 ) will look like this: ( 3,5 ) focus at ( a, )! Slope of tangent to parabola Hence 1/t is the slope for the line is -1/4 are! Point P ( t ) sometimes also how to find the equation of a parabola as the axis of symmetry when.: as we know, the parabola, visit the parabola grapher ( choose ... Know when you 're given a parabola given three points the standard form formula! P ( t ) this information, you 've already been given the tangent equations to two points take general! To put the values of h & k in the vertex opposite of focus. Do you find the equation of the focus to solve: find the equation of a given! Horizontal line on the parabola in terms of the focus and directrix of the parabola grapher ( the. Two points and two tangency conditions at those points, let the given numbers the. Hence 1/t is the slope formula, set the slope formula, set the slope of tangent parabola! We find the equation of a parabola of parabolas given their graphs n, where n a. 
Several methods are used to find the equation of a parabola, depending on what is given. With all those letters and numbers floating around, it can be hard to know when you're done, so your very first priority has to be deciding which form of the equation you'll use.

If three points on the parabola are given, use the standard form a x^2 + b x + c = y. Substitute each point's coordinates for (x, y) to get three equations in a, b and c, then solve the resulting system. Remember, at the y-intercept the value of x is zero, so a point of the form (0, y) gives c directly.

If the parabola's vertex coordinates (h, k) and one other point are given, use the vertex form y = a(x - h)^2 + k instead. Substitute the given numbers for h and k in the formula you chose, then substitute the other point for (x, y) and solve for a. For example, with vertex (3, 5) and the point (2, 3), the vertex form gives 3 = a(2 - 3)^2 + 5, so a = -2 and the equation is y = -2(x - 3)^2 + 5.

If the focus and directrix are given, remember that a parabola always has an axis of symmetry, and that this line divides the parabola into mirror images. The directrix is a line of the form x = n (where n is a real number) on the side of the vertex opposite the focus. For a parabola with focus at (a, 0), a > 0, and directrix x = -a, take a general point (x, y) on the parabola, set its distance to the focus equal to its distance to the directrix, and square both sides to get rid of the radicals; this yields y^2 = 4ax.

Tangency conditions can also determine a parabola. For y^2 = 4ax, the slope of the tangent at the point P(t) = (at^2, 2at) is 1/t, so the tangent equations at two given points of tangency pin the equation down. To check any result, plot the equation with a parabola grapher (choose the "Implicit" option).
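The three-point case reduces to a 3x3 linear system in a, b and c; a minimal sketch in Python (the sample points below are made up for illustration):

```python
import numpy as np

# Fit y = a x^2 + b x + c through three points: substituting each
# point into the standard form gives one linear equation in (a, b, c).
points = [(0, 1), (1, 0), (2, 1)]          # example points, chosen freely
A = np.array([[x**2, x, 1] for x, _ in points], dtype=float)
y = np.array([y for _, y in points], dtype=float)
a, b, c = np.linalg.solve(A, y)            # here: y = x^2 - 2x + 1
```

Note that the system has a unique solution only when the three x-coordinates are distinct (the matrix is then a nonsingular Vandermonde matrix).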
https://scientiaplusconscientia.wordpress.com/2017/02/03/elementary-analysis-of-optical-satellite-imagery-using-principal-components-transformation/
Principal components analysis (PCA) is one of the oldest and most important transformations of multivariate data analysis. The central idea is to generate linear combinations of the input data variables that are uncorrelated and have maximum variance. This reduces the dimensionality of the data while enhancing the features of interest. In remote sensing this technique can be advantageously used to reduce the number of bands that are necessary for a certain analysis (i.e. classification), and so reduce computing costs while keeping as much as possible of the variability present in the data.

Most GIS and remote sensing software packages in use today have implemented this function in one way or another. In practice, it is enough for an analyst to just press a virtual button to calculate the principal components of an image. This is comfortable but boring. It robs us of the fun of understanding the basic principles and of seeing how this transformation works behind the scenes. Let's have a look!

We here follow the explanation given by Canty (2007), although the method is well explained in many other textbooks (Schowengerdt, 2006, has a nice explanation too). The n bands of our image are the n dimensions of our data. We project these bands onto n new orthogonal bands, such that each of them is uncorrelated with the others and has maximum variance. We recast this as an eigenvalue problem and find the eigenvalues and eigenvectors. We can then create the new bands by applying the resulting linear transformations to our data.

All procedures are done by combining different open-source tools: Perl scripts using the Perl Data Language, Generic Mapping Tools, ImageMagick, GDAL and R. The code used for computing PCA can be found in the figshare repository. Note that this is a very eclectic approach using tools I know. I make no claim to write neat code, nor do I pretend my code to be best practice. I am sure others can do better!
We will use as an example image a subset of a Landsat 7 ETM+ scene, path/row 193/018, acquired 2002-08-04 and depicting the city of Uppsala, Sweden, and its surroundings. Looking at the six non-thermal bands of this image, we see that there is a significant correlation between them, particularly between those that are spectrally close. Using PCA we will find a new set of bands where this correlation is eliminated.

We begin by finding the covariance matrix, treating each band as a random vector with n elements (one per pixel):

$C(i,j) = \frac{\sum_{k=1}^{n} b_i(k)\, b_j(k)}{n - 1}$

Note that we use normalized bands $b_i$, where the band's mean value has been subtracted from each value. In our case we obtain:

Covariance matrix:
[
[  78.506782   88.10313    67.883564   11.007673  122.02949   114.89956 ]
[  88.10313   113.44888    84.660135   48.694444  182.94694   149.97175 ]
[  67.883564   84.660135   73.656183    4.140623  139.43477   121.92883 ]
[  11.007673   48.694444    4.140623  318.58023   222.51396    71.33638 ]
[ 122.02949   182.94694   139.43477   222.51396   514.12654   331.0965  ]
[ 114.89956   149.97175   121.92883    71.33638   331.0965    263.77025 ]
]

We then recast the problem as an eigenvalue problem of the form

$C\, a = \lambda\, a$

which leads us to the principal components $Y$ for a new, transformed image:

$Y = A^{T} G$

where $G$ is the vector of image bands, $A$ is the matrix of eigenvectors, and the fraction of the variance explained by each component is expressed by the relative size of the corresponding eigenvalue.
Solving this in our case, we get:

Eigen values:
[ 2.6671763  60.253522  5.0769476  293.87224  989.43693  10.782041 ]

Eigen vectors:
[
[  0.35415481    0.52674478  -0.69935703  -0.24793912   0.2031875    0.072492029 ]
[ -0.71512896    0.52983818   0.16791355  -0.20184223   0.28867851  -0.23577858  ]
[  0.59003962    0.2254447    0.58621081  -0.26529723   0.21495833  -0.37522676  ]
[  0.11698679    0.34437938   0.12185166   0.85551845   0.31144854   0.15478078  ]
[ -0.03577943   -0.50894229  -0.26075236   0.063341061  0.70744293  -0.40892322  ]
[ -0.0072190021 -0.11561447   0.23711192  -0.30245558   0.48134893   0.77921945  ]
]

and after sorting:

Order of the eigenvalues: [0 2 5 1 3 4]
[ 2.6671763  5.0769476  10.782041  60.253522  293.87224  989.43693 ]

Once we have figured out the relative order of the n eigenvalues, we calculate the ith component by adding up the products of each of the n bands with the n elements of the corresponding eigenvector:

$PCA_{i} = \sum_{j=1}^{n} band_{j}\, a_{i}(j)$

where $a_{i}$ is the eigenvector associated with the ith-largest eigenvalue. And our result looks like this: And we can see how these principal components correlate with each other: As a rule the first principal components contain the largest part of the variability. The last principal component, for example, is mostly (but not only) noise. If we combine the first three principal components in one image, we get: This technique is useful for reducing the number of bands needed for some processes, as it keeps the variability mostly untouched, which is what we actually need. This is handy when using multispectral data, and crucial when using hyperspectral data.

References:

Canty, M.J. (2007). Image analysis, classification and change detection in remote sensing: with algorithms for ENVI/IDL. CRC Press.

Schowengerdt, R.A. (2006). Remote sensing: models and methods for image processing. Academic Press.
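Behind the button, the whole pipeline is only a few lines of linear algebra. Here is a sketch in Python/NumPy rather than the Perl Data Language used in the post, run on synthetic stand-in bands (the array shapes and coefficients are made up; the real post reads the Landsat bands with GDAL):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for six correlated image bands (pixels x bands).
base = rng.normal(size=(10000, 1))
bands = base @ rng.normal(size=(1, 6)) + 0.3 * rng.normal(size=(10000, 6))

centered = bands - bands.mean(axis=0)          # subtract each band's mean
C = centered.T @ centered / (len(bands) - 1)   # covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)           # eigendecomposition (C is symmetric)
order = np.argsort(eigvals)[::-1]              # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pcs = centered @ eigvecs                       # the principal components Y = A^T G
```

The covariance matrix of `pcs` comes out diagonal, with the (sorted) eigenvalues on the diagonal, which is exactly the "uncorrelated bands with maximum variance" property the post describes.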
Notes

The scatterplots were done with an R snippet embedded in a Perl script:

library('sp')
library('rgdal')
library('raster')
s <- stack(c($list))
png('imagcorr.png', width=1400, height=900, units="px", pointsize=32)
pairs(s, maxpixels=10000)
dev.off()

where $list is a Perl variable containing the image list.
http://www.tricki.org/article/How_to_use_tensor_products
Tricki

## How to use tensor products

### Quick description

The tensor product is a way to encapsulate the notion of bilinearity, and can be thought of as a multiplication of two vector spaces.

### Prerequisites

Linear algebra.

Incomplete This article is incomplete. Many more examples needed, demonstrating different ways in which tensor products are used. And more general discussion of the different uses is needed too.

### General discussion

The dimension of a tensor product of two vector spaces is precisely the product of their dimensions, so when one wishes to show that a certain vector space is finite dimensional, one can try to show that it is a subspace of a tensor product (or an image of a tensor product) of two finite dimensional vector spaces.

### Example 1

Fix a field $K$. Some notation: $K[x]$ is the polynomial ring in one variable, $K(x)$ is the field of rational functions, $K[[x]]$ is the ring of formal power series, and $K((x))$ is the field of formal Laurent series.

A power series $f \in K[[x]]$ is said to be D-finite if it satisfies a linear differential equation $p_d(x) f^{(d)}(x) + \cdots + p_1(x) f'(x) + p_0(x) f(x) = 0$ for some polynomials $p_0, \dots, p_d \in K[x]$ with $p_d \neq 0$. Let $V_f$ denote the $K(x)$-subspace of $K((x))$ spanned by the derivatives of $f$. Then the property of being D-finite can be seen to be equivalent to requiring that the subspace $V_f$ is finite dimensional over $K(x)$. From this, it is easy to see that the sum of two D-finite generating functions is also D-finite since $V_{f+g} \subseteq V_f + V_g$. But what about the product of two D-finite generating functions?

We can define a map $V_f \times V_g \to K((x))$ by multiplication: the pair $(u, v)$ simply goes to $uv$. The subspace spanned by the image of this map will contain $V_{fg}$ by the Leibniz rule for taking the derivative of a product. But this map is not linear, so we cannot say much about the dimension of this span. However, it is bilinear, and hence we have an associated linear map $V_f \otimes V_g \to K((x))$ whose image is precisely the span of the image of the bilinear map, and we see then that $\dim V_{fg} \leq \dim V_f \cdot \dim V_g < \infty$, so $fg$ is also D-finite.
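Writing $V_f$ for the $K(x)$-span of $f$ and its derivatives, the bilinear-to-linear step in Example 1 can be spelled out as follows (a sketch of the standard argument):

```latex
The multiplication map
\[
  \mu : V_f \times V_g \longrightarrow K((x)), \qquad \mu(u, v) = uv
\]
is bilinear, so by the universal property of the tensor product it
factors through a linear map
\[
  \tilde{\mu} : V_f \otimes_{K(x)} V_g \longrightarrow K((x)),
  \qquad \tilde{\mu}(u \otimes v) = uv .
\]
The Leibniz rule
$(fg)^{(k)} = \sum_{i=0}^{k} \binom{k}{i} f^{(i)} g^{(k-i)}$
places every derivative of $fg$ inside the image of $\tilde{\mu}$, hence
\[
  \dim_{K(x)} V_{fg}
  \;\le\; \dim_{K(x)} \bigl( V_f \otimes_{K(x)} V_g \bigr)
  \;=\; \dim_{K(x)} V_f \cdot \dim_{K(x)} V_g
  \;<\; \infty .
\]
```

The point of the detour through the tensor product is exactly the dimension identity in the last line, which is unavailable for the (non-linear) span of the image of $\mu$ itself.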
### Example 2

This finite dimensionality argument is used when proving a basic result about affine algebraic groups over fields, namely that they admit a faithful linear representation (and thus are rightfully called linear algebraic groups). An affine algebraic group is of the form $G = \operatorname{Spec}(A)$, where $A$ is an algebra of finite type endowed with a comultiplication $\Delta : A \to A \otimes A$. Constructing a faithful linear representation of $G$ boils down to finding a surjection onto $A$ from a polynomial ring endowed with the usual Hopf algebra structure. This is done by carefully choosing a finite set of generators of the algebra $A$ in such a way that the finite dimensional vector space $V$ they span satisfies $\Delta(V) \subseteq V \otimes A$, and writing down the coefficients. See e.g. Borel's book Linear Algebraic Groups, sections I.1.9 and I.1.10.

### Can the notation be explained?

Can the notation be explained? Is $K[x]$ different from $K(x)$?

### Clarification

Anonymous: Usually, all those are different algebraic structures: $K[x]$ represents the ring of polynomials in one variable ($x$) with coefficients in $K$, i.e., \$K[x]=\{\sum_{i=0}^n k_i
https://www.inchmeal.io/msda/ch-7/ex-51.html
# Mathematical Statistics and Data Analysis - Solutions

### Chapter 7, Survey Sampling

#### Solution 51

To prove that the bias of $\, \hat \theta_J \,$ is of the order of $\, n^{-2} \,$, we need to show that $\, \Exp(\hat \theta_J) \,$ contains one term equal to $\, \theta \,$ and that the remaining terms have $\, n^{2} \,$ or higher powers of $\, n \,$ in the denominator. Given (with $\, n = mp \,$, i.e. $\, p \,$ groups of $\, m \,$ observations each): $$\, \Exp(\hat \theta) = \theta + \frac {b_1} n + \frac {b_2} {n^2} + \frac {b_3} {n^3} + ... \,$$ $$\, \Exp(\hat \theta_j) = \theta + \frac {b_1} {m(p-1)} + \frac {b_2} {(m(p-1))^2} + ... \,$$ $$\, V_j = p{\hat \theta} - (p-1){\hat \theta_j} \,$$ $$\, \hat \theta_J = \frac 1 p \sum_{j=1}^{p} V_j \,$$ Proof: \, \begin{align*} \Exp(\hat \theta_J) &= \Exp \Prn{ \frac 1 p \sum_{j=1}^{p} V_j } \\ &= \Exp \Prn{ \frac 1 p \sum_{j=1}^{p} (p{\hat \theta} - (p-1){\hat \theta_j}) } \\ &= \frac 1 p\sum_{j=1}^{p} \Exp (p{\hat \theta} - (p-1){\hat \theta_j}) \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{p\Exp{\hat \theta} - (p-1)\Exp{\hat \theta_j} } \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{p\Prn{\theta + \frac {b_1} n + \frac {b_2} {n^2} + ...} - (p-1)\Prn{\theta + \frac {b_1} {m(p-1)} + \frac {b_2} {(m(p-1))^2} + ...} } \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{p\Prn{\theta + \frac {b_1} n + \frac {b_2} {n^2} + ...} - \Prn{(p-1)\theta + \frac {b_1} {m} + \frac {b_2} {m^2(p-1)} + ...} } \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{ (p+1-p)\theta + \Prn{\frac p n - \frac 1 m}b_1 + \Prn{\frac p {n^2} - \frac 1 {m^2(p-1)} }b_2 + \Prn{\frac p {n^3} - \frac 1 {m^3(p-1)^2} }b_3 + ...} \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{\theta + \Prn{\frac p {mp} - \frac 1 m}b_1 + \Prn{\frac {p m^2 (p-1) - n^2} {m^2 n^2 (p-1)} }b_2 + ...} \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{\theta + 0 + \Prn{\frac {p m^2 (p-1) - m^2 p^2} {m^2 n^2 (p-1)} }b_2 + ...} \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{\theta - \Prn{\frac {m^2 p} {m^2 n^2 (p-1)} }b_2 + ...} \\ &= \frac 1 p\sum_{j=1}^{p} \Prn{\theta - \Prn{\frac p {n^2 (p-1)} }b_2 + ...} \\ &= \theta - \Prn{\frac p {n^2 (p-1)} }b_2 + ...
\\ \end{align*} \, The summation and the division by $\, p \,$ cancel in the last step because each term occurs $\, p \,$ times in the sum. Thus all the remaining terms have $\, n^{2} \,$ or higher powers of $\, n \,$ in the denominator, so the bias of $\, \hat \theta_J \,$ is of order $\, n^{-2} \,$. $$\tag*{\blacksquare}$$
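We can check numerically that the grouped jackknife knocks the $\, b_1/n \,$ term out of the bias. A sketch with an estimator whose bias is known, $\hat \theta = \bar{X}^2$ for $\theta = \mu^2$, which has bias exactly $\sigma^2/n$ (for this quadratic estimator the expansion stops at the $b_1$ term, so the jackknife removes the bias entirely; all parameter choices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, p, m = 1.0, 10, 4          # p groups of m observations each; n = mp = 40
n = p * m
reps = 20_000

x = rng.normal(mu, 1.0, size=(reps, p, m))

# Plain estimator theta_hat = (sample mean)^2 of theta = mu^2.
theta_hat = x.mean(axis=(1, 2)) ** 2

# theta_hat_j: the same estimator with group j deleted, m(p-1) observations.
loo_mean = (x.sum(axis=(1, 2))[:, None] - x.sum(axis=2)) / (n - m)
theta_j = loo_mean ** 2

# Pseudo-values V_j = p*theta_hat - (p-1)*theta_hat_j, averaged into theta_hat_J.
pseudo = p * theta_hat[:, None] - (p - 1) * theta_j
theta_J = pseudo.mean(axis=1)

bias_plain = theta_hat.mean() - mu ** 2   # about sigma^2 / n = 0.025
bias_jack = theta_J.mean() - mu ** 2      # indistinguishable from 0 here
```

The plain bias shows up clearly against the Monte Carlo noise, while the jackknifed bias does not.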
https://matheusfacure.github.io/python-causality-handbook/05-The-Unreasonable-Effectiveness-of-Linear-Regression.html
# 05 - The Unreasonable Effectiveness of Linear Regression#

## All You Need is Regression#

When dealing with causal inference, we saw how there are two potential outcomes for each individual: $$Y_0$$ is the outcome the individual would have if he or she didn't take the treatment and $$Y_1$$ is the outcome if he or she took the treatment. The act of setting the treatment $$T$$ to 0 or 1 materializes one of the potential outcomes and makes it impossible for us to ever know the other one. This leads to the fact that the individual treatment effect $$\tau_i = Y_{1i} - Y_{0i}$$ is unknowable. The observed outcome is whichever potential outcome the treatment switches on:

$$Y_i = Y_{0i} + T_i(Y_{1i} - Y_{0i}) = Y_{0i}(1-T_i) + T_i Y_{1i}$$

So, for now, let's focus on the simpler task of estimating the average causal effect. With this in mind, we are accepting the fact that some people respond better than others to the treatment, but we are also accepting that we can't know who they are. Instead, we will just try to see if the treatment works, on average.

$$ATE = E[Y_1 - Y_0]$$

This will give us a simplified model, with a constant treatment effect $$Y_{1i} = Y_{0i} + \kappa$$. If $$\kappa$$ is positive, we will say that the treatment has, on average, a positive effect. Even if some people will respond badly to it, on average, the impact will be positive.

Let's also recall that we can't simply estimate $$E[Y_1 - Y_0]$$ with the difference in means $$E[Y|T=1] - E[Y|T=0]$$ due to bias. Bias often arises when the treated and untreated are different for reasons other than the treatment itself. One way to see this is in how they differ in the potential outcome $$Y_0$$

$$E[Y|T=1] - E[Y|T=0] = \underbrace{E[Y_1 - Y_0|T=1]}_{ATET} + \underbrace{\{ E[Y_0|T=1] - E[Y_0|T=0]\}}_{BIAS}$$

Previously, we saw how we can eliminate bias with Random Experiments, or Randomised Controlled Trials (RCT) as they are sometimes called. RCT forces the treated and the untreated to be equal and that's why the bias vanishes.
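Because this decomposition is an algebraic identity, we can verify it exactly on simulated potential outcomes (the distributions and numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Simulated potential outcomes with confounding: units with higher Y0
# are more likely to take the treatment; the true effect is +1 for everyone.
y0 = rng.normal(10, 2, n)
y1 = y0 + 1
t = (y0 + rng.normal(0, 2, n) > 10).astype(int)   # treatment is NOT random
y = np.where(t == 1, y1, y0)                      # observed outcome

diff_in_means = y[t == 1].mean() - y[t == 0].mean()
atet = (y1 - y0)[t == 1].mean()                   # E[Y1 - Y0 | T=1]
bias = y0[t == 1].mean() - y0[t == 0].mean()      # E[Y0|T=1] - E[Y0|T=0]
```

Here `diff_in_means` equals `atet + bias` exactly in the sample, and the bias term is large and positive because the treated would have had higher outcomes anyway.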
We also saw how to place uncertainty levels around our estimates for the treatment effect. Namely, we looked at the case of online versus face-to-face classrooms, where $$T=0$$ represents face-to-face lectures and $$T=1$$ represents online ones. Students were randomly assigned to one of those 2 types of lectures and then their performance on an exam was evaluated. We've built an A/B testing function that could compare both groups, provide the average treatment effect and even place a confidence interval around it.

Now, it's time to see that we can do all of that with the workhorse of causal inference: Linear Regression! Think of it this way. If comparing treated and untreated means was an apple for dessert, linear regression would be cold and creamy tiramisu. Or if comparing treated and untreated is a sad and old loaf of white wonder bread, linear regression would be a crusty, soft crumb country loaf sourdough baked by Chad Robertson himself.

Let's see how this beauty works. In the code below, we want to run the exact same analysis of comparing online vs face-to-face classes. But instead of doing all that confidence-interval math, we just run a regression. More specifically, we estimate the following model:

$$exam_i = \beta_0 + \kappa \ Online_i + u_i$$

This means we are modeling the exam outcome as a baseline $$\beta_0$$ plus $$\kappa$$ if the class is online. Of course the exam result is driven by additional variables (like student's mood on the exam day, hours studied and so on). But we don't really care about understanding those relationships. So, instead, we use that $$u_i$$ term to represent everything else we don't care about. This is called the model error.

Notice that $$Online$$ is our treatment indication and hence, a dummy variable. It is zero when the treatment is face-to-face and 1 if it's online. With that in mind, we can see that linear regression will recover $$E[Y|T=0] = \beta_0$$ and $$E[Y|T=1] = \beta_0 + \kappa$$. $$\kappa$$ will be our ATE.
```python
import warnings
warnings.filterwarnings('ignore')

import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import graphviz as gr

%matplotlib inline

data = pd.read_csv("data/online_classroom.csv").query("format_blended==0")

result = smf.ols('falsexam ~ format_ol', data=data).fit()
result.summary().tables[1]
```

|           | coef    | std err | t      | P>\|t\| | [0.025 | 0.975] |
|-----------|---------|---------|--------|---------|--------|--------|
| Intercept | 78.5475 | 1.113   | 70.563 | 0.000   | 76.353 | 80.742 |
| format_ol | -4.9122 | 1.680   | -2.925 | 0.004   | -8.223 | -1.601 |

That's quite amazing. We are not only able to estimate the ATE, but we also get, for free, confidence intervals and P-Values out of it! More than that, we can see that regression is doing exactly what it is supposed to do: comparing $$E[Y|T=0]$$ and $$E[Y|T=1]$$. The intercept is exactly the sample mean when $$T=0$$, $$E[Y|T=0]$$, and the coefficient of the online format is exactly the sample difference in means $$E[Y|T=1] - E[Y|T=0]$$. Don't trust me? No problem. You can see for yourself:

```python
(data
 .groupby("format_ol")
 ["falsexam"]
 .mean())
```

```
format_ol
0    78.547485
1    73.635263
Name: falsexam, dtype: float64
```

As expected. If you add the ATE, that is, the parameter estimate of the online format, to the intercept, you get the sample mean for the treated: $$78.5475 + (-4.9122) = 73.635263$$.

## Regression Theory#

I don't intend to dive too deep into how linear regression is constructed and estimated. However, a little bit of theory will go a long way in explaining its power in causal inference. First of all, regression solves a theoretical best linear prediction problem. Let $$\beta^*$$ be a vector of parameters:

$$\beta^* =\underset{\beta}{argmin} \ E[(Y_i - X_i'\beta)^2]$$

Linear regression finds the parameters that minimise the mean squared error (MSE). If you differentiate it and set it to zero, you will find that the solution to this problem is given by

$$\beta^* = E[X_i'X_i]^{-1}E[X_i' Y_i]$$

We can estimate this beta using the sample equivalent:

$$\hat{\beta} = (X'X)^{-1}X' Y$$

But don't take my word for it.
If you are one of those that understand code better than formulas, try for yourself:

```python
X = data[["format_ol"]].assign(intercep=1)
y = data["falsexam"]

def regress(y, X):
    return np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))

beta = regress(y, X)
beta
```

```
array([-4.9122215 , 78.54748458])
```

The formulas above are pretty general. However, it pays off to study the case where we only have one regressor. In causal inference, we often want to estimate the causal impact of a variable $$T$$ on an outcome $$y$$. So, we use regression with this single variable to estimate this effect. Even if we include other variables in the model, they are usually just auxiliary. Adding other variables can help us estimate the causal effect of the treatment, but we are not very interested in estimating their parameters.

With a single regressor variable $$T$$, the parameter associated with it will be given by

$$\beta_1 = \dfrac{Cov(Y_i, T_i)}{Var(T_i)}$$

If $$T$$ is randomly assigned, $$\beta_1$$ is the ATE.

```python
kapa = data["falsexam"].cov(data["format_ol"]) / data["format_ol"].var()
kapa
```

```
-4.91222149822695
```

If we have more than one regressor, we can extend the formula to accommodate that. Let's say those other variables are just auxiliary and that we are truly interested only in estimating the parameter $$\kappa$$ associated with $$T$$.

$$y_i = \beta_0 + \kappa T_i + \beta_1 X_{1i} + ... +\beta_k X_{ki} + u_i$$

$$\kappa$$ can be obtained with the following formula

$$\kappa = \dfrac{Cov(Y_i, \tilde{T_i})}{Var(\tilde{T_i})}$$

where $$\tilde{T_i}$$ is the residual from a regression of $$T_i$$ on all other covariates $$X_{1i}, ..., X_{ki}$$.

Now, let's appreciate how cool this is. It means that the coefficient of a multivariate regression is the bivariate coefficient of the same regressor after accounting for the effect of other variables in the model. In causal inference terms, $$\kappa$$ is the bivariate coefficient of $$T$$ after having used all other variables to predict it.
This has a nice intuition behind it. If we can predict $$T$$ using other variables, it means it's not random. However, we can make it so that $$T$$ is as good as random once we control for other available variables. To do so, we use linear regression to predict it from the other variables and then we take the residuals of that regression $$\tilde{T}$$. By definition, $$\tilde{T}$$ cannot be predicted by the other variables $$X$$ that we've already used to predict $$T$$. Quite elegantly, $$\tilde{T}$$ is a version of the treatment that is not associated with any other variable in $$X$$.

By the way, this is also a property of linear regression. The residuals are always orthogonal, or uncorrelated, with any of the variables in the model that created them:

```python
e = y - X.dot(beta)
print("Orthogonality imply that the dot product is zero:", np.dot(e, X))
X[["format_ol"]].assign(e=e).corr()
```

```
Orthogonality imply that the dot product is zero: [7.81597009e-13 4.63984406e-12]
```

|           | format_ol     | e             |
|-----------|---------------|---------------|
| format_ol | 1.000000e+00  | -9.419033e-16 |
| e         | -9.419033e-16 | 1.000000e+00  |

And what is even cooler is that these properties don't depend on anything! They are mathematical truths, regardless of what your data looks like.

## Regression For Non-Random Data#

So far, we worked with random experiment data but, as we know, those are hard to come by. Experiments are very expensive to conduct or simply infeasible. It's very hard to convince McKinsey & Co. to randomly provide their services free of charge so that we can, once and for all, distinguish the value their consulting services bring from the fact that those firms that can afford to pay them are already very well off. For this reason, we shall now delve into non-random, or observational, data.

In the following example, we will try to estimate the impact of an additional year of education on hourly wage. As you might have guessed, it is extremely hard to conduct an experiment with education. You can't simply randomize people to 4, 8 or 12 years of education.
In this case observational data is all we have. First, let's estimate a very simple model. We will regress log hourly wages on years of education. We use logs here so that our parameter estimates have a percentage interpretation (if you never heard about this amazing property of the log and want to know why that is, check out this link). With it, we will be able to say that 1 extra year of education yields a wage increase of x%.

$$log(hwage)_i = \beta_0 + \beta_1 educ_i + u_i$$

```python
wage = pd.read_csv("./data/wage.csv").dropna()
model_1 = smf.ols('np.log(hwage) ~ educ',
                  data=wage.assign(hwage=wage["wage"]/wage["hours"])).fit()
model_1.summary().tables[1]
```

|           | coef   | std err | t      | P>\|t\| | [0.025 | 0.975] |
|-----------|--------|---------|--------|---------|--------|--------|
| Intercept | 2.3071 | 0.104   | 22.089 | 0.000   | 2.102  | 2.512  |
| educ      | 0.0536 | 0.008   | 7.114  | 0.000   | 0.039  | 0.068  |

The estimate of $$\beta_1$$ is 0.0536, with a 95% confidence interval of (0.039, 0.068). This means that this model predicts that wages will increase about 5.3% for every additional year of education. This percentage increase is in line with the belief that education impacts wages in an exponential fashion: we expect going from 11 to 12 years of education (average to graduate high school) to be less rewarding than going from 14 to 16 years (average to graduate college).

```python
from matplotlib import pyplot as plt
from matplotlib import style
style.use("fivethirtyeight")

x = np.array(range(5, 20))
plt.plot(x, np.exp(model_1.params["Intercept"] + model_1.params["educ"] * x))
plt.xlabel("Years of Education")
plt.ylabel("Hourly Wage")
plt.title("Impact of Education on Hourly Wage")
plt.show()
```

Of course, it is not because we can estimate this simple model that it's correct. Notice how I was careful with my words, saying it predicts wage from education. I never said that this prediction was causal. In fact, by now, you probably have very serious reasons to believe this model is biased.
Since our data didn't come from a random experiment, we don't know if those that got more education are comparable to those who got less. Going even further, from our understanding of how the world works, we are very certain that they are not comparable. Namely, we can argue that those with more years of education probably have richer parents, and that the increase we are seeing in wages as we increase education is just a reflection of how family wealth is associated with more years of education. Putting it in math terms, we think that $$E[Y_0|T=0] < E[Y_0|T=1]$$, that is, those with more education would have higher income anyway, even without so many years of education. If you are really grim about education, you can argue that it can even reduce wages by keeping people out of the workforce and lowering their experience.

Fortunately, in our data, we have access to lots of other variables. We can see the parents' education meduc, feduc, the IQ score for that person, the number of years of experience exper and the tenure of the person in his or her current company tenure. We even have some dummy variables for marriage and black ethnicity.

```python
wage.head()
```

|   | wage | hours | lhwage   | IQ  | educ | exper | tenure | age | married | black | south | urban | sibs | brthord | meduc | feduc |
|---|------|-------|----------|-----|------|-------|--------|-----|---------|-------|-------|-------|------|---------|-------|-------|
| 0 | 769  | 40    | 2.956212 | 93  | 12   | 11    | 2      | 31  | 1       | 0     | 0     | 1     | 1    | 2.0     | 8.0   | 8.0   |
| 2 | 825  | 40    | 3.026504 | 108 | 14   | 11    | 9      | 33  | 1       | 0     | 0     | 1     | 1    | 2.0     | 14.0  | 14.0  |
| 3 | 650  | 40    | 2.788093 | 96  | 12   | 13    | 7      | 32  | 1       | 0     | 0     | 1     | 4    | 3.0     | 12.0  | 12.0  |
| 4 | 562  | 40    | 2.642622 | 74  | 11   | 14    | 5      | 34  | 1       | 0     | 0     | 1     | 10   | 6.0     | 6.0   | 11.0  |
| 6 | 600  | 40    | 2.708050 | 91  | 10   | 13    | 0      | 30  | 0       | 0     | 0     | 1     | 1    | 2.0     | 8.0   | 8.0   |

We can include all those extra variables in a model and estimate it:

$$log(hwage)_i = \beta_0 + \kappa \ educ_i + \pmb{\beta}X_i + u_i$$

To understand how this helps with the bias problem, let's recap the bivariate breakdown of multivariate linear regression.

$$\kappa = \dfrac{Cov(Y_i, \tilde{T_i})}{Var(\tilde{T_i})}$$

This formula says that we can predict educ from the parents' education, from IQ, from experience and so on.
After we do that, we'll be left with a version of educ, $$\tilde{educ}$$, which is uncorrelated with all the variables included previously. This will break down arguments such as "people that have more years of education have it because they have higher IQ. It is not the case that education leads to higher wages. It is just the case that it is correlated with IQ, which is what drives wages". Well, if we include IQ in our model, then $$\kappa$$ becomes the return of an additional year of education while keeping IQ fixed. Pause a little bit to understand what this implies. Even if we can't use randomised controlled trials to keep other factors equal between treated and untreated, regression can do this by including those same factors in the model, even if the data is not random!

```python
controls = ['IQ', 'exper', 'tenure', 'age', 'married', 'black',
            'south', 'urban', 'sibs', 'brthord', 'meduc', 'feduc']

X = wage[controls].assign(intercep=1)
t = wage["educ"]
y = wage["lhwage"]

beta_aux = regress(t, X)
t_tilde = t - X.dot(beta_aux)
kappa = t_tilde.cov(y) / t_tilde.var()
kappa
```

```
0.041147191010057635
```

This coefficient we've just estimated tells us that, for people with the same IQ, experience, tenure, age and so on, we should expect an additional year of education to be associated with a 4.11% increase in hourly wage. This confirms our suspicion that the first simple model with only educ was biased. It also confirms that this bias was overestimating the impact of education. Once we controlled for other factors, the estimated impact of education fell. If we are wiser and use software that other people wrote instead of coding everything ourselves, we can even place a confidence interval around this estimate.
```python
model_2 = smf.ols('lhwage ~ educ +' + '+'.join(controls), data=wage).fit()
model_2.summary().tables[1]
```

|           | coef    | std err | t      | P>\|t\| | [0.025 | 0.975] |
|-----------|---------|---------|--------|---------|--------|--------|
| Intercept | 1.1156  | 0.232   | 4.802  | 0.000   | 0.659  | 1.572  |
| educ      | 0.0411  | 0.010   | 4.075  | 0.000   | 0.021  | 0.061  |
| IQ        | 0.0038  | 0.001   | 2.794  | 0.005   | 0.001  | 0.006  |
| exper     | 0.0153  | 0.005   | 3.032  | 0.003   | 0.005  | 0.025  |
| tenure    | 0.0094  | 0.003   | 2.836  | 0.005   | 0.003  | 0.016  |
| age       | 0.0086  | 0.006   | 1.364  | 0.173   | -0.004 | 0.021  |
| married   | 0.1795  | 0.053   | 3.415  | 0.001   | 0.076  | 0.283  |
| black     | -0.0801 | 0.063   | -1.263 | 0.207   | -0.205 | 0.044  |
| south     | -0.0397 | 0.035   | -1.129 | 0.259   | -0.109 | 0.029  |
| urban     | 0.1926  | 0.036   | 5.418  | 0.000   | 0.123  | 0.262  |
| sibs      | 0.0065  | 0.009   | 0.722  | 0.470   | -0.011 | 0.024  |
| brthord   | -0.0080 | 0.013   | -0.604 | 0.546   | -0.034 | 0.018  |
| meduc     | 0.0089  | 0.007   | 1.265  | 0.206   | -0.005 | 0.023  |
| feduc     | 0.0069  | 0.006   | 1.113  | 0.266   | -0.005 | 0.019  |

## Omitted Variable or Confounding Bias#

The question that remains is: is this parameter we've estimated causal? Unfortunately, we can't say for sure. We can argue that the first simple model that regresses wage on education probably isn't. It omits important variables that are correlated both with education and with wages. Without controlling for them, the estimated impact of education is also capturing the impact of those other variables that were not included in the model.

To better understand how this bias works, let's suppose the true model for how education affects wage looks a bit like this

$$Wage_i = \alpha + \kappa \ Educ_i + A_i'\beta + u_i$$

Wage is affected by education, which is measured by the size of $$\kappa$$, and by additional ability factors, denoted as the vector $$A$$. If we omit ability from our model, our estimate for $$\kappa$$ will look like this:

$$\dfrac{Cov(Wage_i, Educ_i)}{Var(Educ_i)} = \kappa + \beta'\delta_{Ability}$$

where $$\delta_{Ability}$$ is the vector of coefficients from the regression of $$A$$ on $$Educ$$. The key point here is that it won't be exactly the $$\kappa$$ that we want. Instead, it comes with this extra annoying term $$\beta'\delta_{Ability}$$: the impact of the omitted $$A$$ on $$Wage$$, $$\beta$$, times the impact of the omitted on the included $$Educ$$.
This is so important for economists that Joshua Angrist made a mantra out of it, so that students can recite it in meditation:

"Short equals long plus the effect of omitted times the regression of omitted on included"

Here, the short regression is the one that omits variables, while the long one includes them. This formula, or mantra, gives us further insight into the nature of bias. First, the bias term will be zero if the omitted variables have no impact on the dependent variable $$Y$$. This makes total sense. I don’t need to control for stuff that is irrelevant for wages when trying to understand the impact of education on it (like how tall the lilies of the field are). Second, the bias term will also be zero if the omitted variables have no impact on the treatment variable. This also makes intuitive sense. If everything that impacts education has been included in the model, there is no way the estimated impact of education gets mixed up with the effect of something else that also impacts wages. To put it more succinctly, we say that there is no OVB if all the confounding variables are accounted for in the model. We can also leverage our knowledge about causal graphs here. A confounding variable is one that causes both the treatment and the outcome. In the wage example, IQ is a confounder. People with high IQ tend to complete more years of education because it’s easier for them, so we can say that IQ causes education. People with high IQ also tend to be naturally more productive and consequently have higher wages, so IQ also causes wage. Since confounders are variables that affect both the treatment and the outcome, we mark them with arrows going into both T and Y. Here, I’ve denoted them with $$W$$. I’ve also marked positive causation in red and negative causation in blue.
g = gr.Digraph()
g.edge("W", "T"), g.edge("W", "Y"), g.edge("T", "Y")
g.edge("IQ", "Educ", color="red"), g.edge("IQ", "Wage", color="red"), g.edge("Educ", "Wage", color="red")
g.edge("Crime", "Police", color="red"), g.edge("Crime", "Violence", color="red"), g.edge("Police", "Violence", color="blue")
g

Causal graphs are excellent for depicting our understanding of the world and for seeing how confounding bias works. In our first example, we have a graph where education causes wage: more education leads to higher wages. However, IQ also causes wage and it also causes education: high IQ causes both more education and higher wages. If we don’t account for IQ in our model, some of its effect on wage will flow through its correlation with education. That will make the impact of education look higher than it actually is. This is an example of positive bias. Just to give another example, but with negative bias, consider the causal graph about the effect of police on city violence. What we usually see in the world is that cities with larger police forces also have more violence. Does this mean that the police are causing the violence? Well, it could be; I don’t think it’s worth getting into that discussion here. But there is also a strong possibility that a confounding variable is causing us to see a biased version of the impact of police on violence. It could be that increasing the police force decreases violence, but a third variable, crime, causes both more violence and a larger police force. If we don’t account for it, the impact of crime on violence will flow through the police force, making it look like police increase violence. This is an example of negative bias. Causal graphs can also show us how both regression and randomized controlled trials correct for confounding bias. RCTs do so by severing the connection between the confounder and the treatment variable: by making $$T$$ random, by definition, nothing can cause it.
g = gr.Digraph()
g.edge("W", "Y"), g.edge("T", "Y")
g.edge("IQ", "Wage", color="red"), g.edge("Educ", "Wage", color="red")
g

Regression, on the other hand, does so by comparing the effect of $$T$$ while keeping the confounder $$W$$ set to a fixed level. With regression, it is not the case that $$W$$ ceases to cause $$T$$ and $$Y$$. It is just that it is held fixed, so it can’t drive variation in $$T$$ and $$Y$$.

g = gr.Digraph()
g.node("W=w"), g.edge("T", "Y")
g.node("IQ=x"), g.edge("Educ", "Wage", color="red")
g

Now, back to our question: is the parameter we’ve estimated for the impact of educ on wage causal? I’m sorry to break it to you, but that depends on our ability to argue for or against the claim that all confounders have been included in the model. Personally, I think they haven’t. For instance, we haven’t included family wealth; even family education can only be seen as a proxy for it. We’ve also not accounted for factors like personal ambition. It could be that ambition is what causes both more years of education and higher wages, making it a confounder. All of this goes to show that causal inference with non-random, observational data should always be taken with a grain of salt. We can never be sure that all confounders were accounted for.

## Key Ideas

We’ve covered a lot of ground with regression. We saw how regression can be used to perform A/B testing and how it conveniently gives us confidence intervals. Then, we moved on to study how regression solves a prediction problem as the best linear approximation to the conditional expectation function (CEF). We’ve also discussed how, in the bivariate case, the regression treatment coefficient is the covariance between the treatment and the outcome divided by the variance of the treatment.
Expanding to the multivariate case, we figured out how regression gives us a partialling-out interpretation of the treatment coefficient: it can be interpreted as how the outcome would change with the treatment while keeping all other included variables constant. This is what economists love to refer to as ceteris paribus. Finally, we took a turn toward understanding bias. We saw how "short equals long plus the effect of omitted times the regression of omitted on included". This sheds some light on how bias comes about. We discovered that the source of omitted variable bias is confounding: a variable that affects both the treatment and the outcome. Lastly, we used causal graphs to see how RCTs and regression fix confounding.

## References

I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020. I’d also like to reference the amazing books from Angrist. They have shown me that Econometrics, or ’Metrics as they call it, is not only extremely useful but also profoundly fun. My final reference is Miguel Hernan and Jamie Robins’ book. It has been my trustworthy companion in the most thorny causal questions I had to answer.

## Contribute

Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software and is written in Python. Its goal is to be accessible monetarily and intellectually. If you found this book valuable and you want to support it, please go to Patreon. If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn’t understand. Just go to the book’s repository and open an issue.
Finally, if you liked this content, please share it with others who might find it useful and give it a star on GitHub.
http://www.livmathssoc.org.uk/cgi-bin/sews_diff.py?LogarithmicIntegral
## Most recent change of LogarithmicIntegral

Edit made on January 13, 2013 by ColinWright at 17:25:13

Deleted text in red / Inserted text in green

The function EQN:\text{Li}(x)=\int_2^x~\frac{dx}{\log~x} EQN:\text{Li}(x)=\int_2^x\frac{dx}{\log~x} is called the "logarithmic integral". According to the Prime Number Theorem, it is approximately equal to the number of primes below x. The function EQN:\text{li}(x)=\int_0^x~\frac{dx}{\log~x} EQN:\text{li}(x)=\int_0^x\frac{dx}{\log~x} is also called the logarithmic integral; the two functions differ by a constant. (Since the definition of EQN:\text{li}(x) involves integrating through a singularity, some care is needed in interpreting the definition.)
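As a quick numerical illustration of the Prime Number Theorem claim (my addition, not part of the wiki page), one can approximate Li(x) with Simpson's rule and compare it with an exact prime count:

```python
from math import log

def Li(x, n=10_000):
    """Li(x) = integral from 2 to x of dt/log(t), by composite Simpson's rule."""
    if n % 2:
        n += 1
    h = (x - 2.0) / n
    total = 1.0 / log(2.0) + 1.0 / log(x)   # the two endpoints
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) / log(2.0 + i * h)
    return total * h / 3.0

def prime_count(x):
    """pi(x), the number of primes up to x, via a simple sieve."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

print(Li(1000), prime_count(1000))  # roughly 176.6 vs 168
```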
https://tutorme.com/tutors/37581/interview/
Shashwath M.
Physics Major at the University of Texas at Dallas

Physics (Newtonian Mechanics)

Question: Suppose you are an engineer who is trying to design a car. At your company, safety is important and you have been tasked with testing the car in the event of a head-on collision at $$112 \, m/s$$ in a car which has a mass of $$750 \, kg$$. How much force does the car experience if it takes 2 seconds to come to rest?

Shashwath M.: Let us first explore the relationship between force and momentum. Force can be defined as the product of mass and acceleration: $$F = ma$$. Momentum is defined as the product of mass and velocity: $$p = mv$$. The only difference between the expressions for force and momentum is the second term ($$a$$ and $$v$$ respectively). We can understand acceleration as the change of velocity over time: $$a =\frac{\Delta v}{\Delta t}$$. By substituting this in the expression for force, $$F = m\frac{\Delta v}{\Delta t}$$, and multiplying both sides by $${\Delta t}$$, we arrive at: $$F {\Delta t}= m{\Delta v}$$. $$F {\Delta t}$$ is known as impulse. Using the definitions we established, we can solve the problem by rearranging $$F {\Delta t}= m{\Delta v}$$ to solve for $$F$$ ( $$F = \frac{m{\Delta v}}{\Delta t}$$ ). From the information provided, we know that $$m = 750 \, kg$$, $${\Delta v} = 112 \, m/s$$, and $${\Delta t} = 2\, s$$. Plugging this into the expression for $$F$$, we get the solution: $$F = 42000 \, N$$.

Chemistry

Question: Why does the addition of one mole of $$CaCl_2$$ increase the boiling point of water more than one mole of $$NaCl$$?

Shashwath M.: First, we must think about why boiling point elevation occurs. When we talk about the temperature of a gas or liquid, it is the average of the kinetic energy of all the particles of the fluid.
This means that even at room temperature, a glass of water can contain $$H_2O$$ molecules that have enough energy to escape into the gas phase. Now let's add some salt to the water. The salt will dissociate into $$Na^+$$ and $$Cl^-$$ ions. These ions will act as obstacles which make it more difficult for water molecules to escape into the gas phase. This results in a higher boiling point. Now, let's compare $$NaCl$$ with $$CaCl_2$$. As we discussed earlier, $$NaCl$$ will dissociate into two ions ( $$Na^+$$ and $$Cl^-$$ ). On the other hand, $$CaCl_2$$ will dissociate into three ions ($$Ca^{2+}$$ and two $$Cl^-$$ ). This means that one mole of $$CaCl_2$$ produces more "obstacles" than one mole of $$NaCl$$, elevating the boiling point even more. The factor that describes how the number of ions produced by dissociation amplifies boiling point elevation, freezing point depression, and vapor pressure reduction is known as the van't Hoff factor ($$i$$); it is equal to the number of ions produced by the dissociation of a compound.

Physics

Question: Suppose Han Solo has some difficulty accelerating his spaceship, the Millennium Falcon, to light speed. Instead, he is forced to travel at 70% the speed of light. While flying, he sees a TIE fighter approaching at a speed 80% the speed of light. Han acts quickly and fires a concussion missile at 50% the speed of light. What is the speed of the missile from the TIE fighter's point of view? ( $$c= 299792458 \, m/s$$ )

Shashwath M.: If we add the TIE fighter's approach speed and the missile's speed in a Newtonian manner, the result would be 1.3 times the speed of light! Since we know that no object with mass can travel at the speed of light (or faster), we need to use another approach. This solution requires the application of relativistic velocity addition.
Let's define $$V$$ as the speed of the Millennium Falcon as seen from the TIE fighter, which by symmetry is the same $$0.8c$$ at which Han sees the TIE fighter approach (Han's own $$0.7c$$ is measured in some third frame and is not needed here), $$U$$ as the speed of the missile from Han's point of view ($$0.5c$$), and $$U'$$ as the speed of the missile from the TIE fighter's point of view. Using the formula: $$U' = \frac{U+V}{1+\frac{UV}{c^2}}$$ and substituting the values from the question, we get the answer: $$U' = \frac{0.5c + 0.8c}{1 + 0.4} \approx 0.92857c$$
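The velocity-addition formula is a one-liner; the sketch below (my addition, with speeds expressed as fractions of c) reproduces the tutor's number and shows that the result never exceeds the speed of light:

```python
def add_velocities(u, v):
    """Relativistic velocity addition; u and v are fractions of c."""
    return (u + v) / (1.0 + u * v)

# Missile at 0.5c in Han's frame, Han at 0.8c in the TIE fighter's frame:
print(add_velocities(0.5, 0.8))   # 0.92857...

# Even two speeds of 0.99c combine to something below c:
print(add_velocities(0.99, 0.99))
```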
https://www.instasolv.com/question/0-24-two-small-balls-of-mass-m-0-1-g-and-charge-q-are-hung-by-the-strings-tklqj8
Question 0.24: Two small balls of mass m = 0.1 g and charge q are hung by strings A and B, each of length l = 50 cm. If the angle θ between the strings A and B is 60°, calculate the charge q on each ball.
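The page's solution did not survive extraction, so here is one standard way to work the problem (my reconstruction, assuming g = 9.8 m/s^2): each string hangs 30 degrees from the vertical, so the Coulomb repulsion on a ball must balance mg tan 30°, and the separation between the balls is 2l sin 30° = l.

```python
from math import tan, sin, radians, sqrt

m = 0.1e-3                 # mass in kg
l = 0.50                   # string length in m
half_angle = radians(30)   # each string is 30 degrees from vertical
g = 9.8                    # m/s^2, assumed
k = 8.99e9                 # Coulomb constant, N m^2 / C^2

r = 2 * l * sin(half_angle)     # separation between the balls (= 0.5 m)
F = m * g * tan(half_angle)     # horizontal force balance on one ball
q = sqrt(F * r**2 / k)          # from F = k q^2 / r^2
print(q)                        # about 1.25e-7 C, i.e. roughly 0.13 microcoulombs
```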
https://ml-compiled.readthedocs.io/en/latest/entropy.html
Information theory and complexity

Akaike Information Criterion (AIC)
A measure of the quality of a model that combines accuracy with the number of parameters. Smaller AIC values mean the model is better. The formula is:

AIC = 2k - 2 ln L(y)

where k is the number of parameters, y is the data and L is the (maximized) likelihood function.

Capacity
The capacity of a machine learning model describes the complexity of the functions it can learn. If the model can learn highly complex functions it is said to have a high capacity. If it can only learn simple functions it has a low capacity.

Entropy
The entropy of a discrete probability distribution p is:

H(p) = - sum_i p_i log p_i

Finite-sample expressivity
The ability of a model to memorize the training set.

Fisher Information Matrix
An n x n matrix of second-order partial derivatives, where n is the number of parameters in a model. The matrix is defined as:

I(theta)_ij = - E[ d^2 log f(X; theta) / (d theta_i d theta_j) ]

The Fisher Information Matrix is equal to the negative expected Hessian of the log likelihood.

Information bottleneck

min I(X; T) - beta * I(T; Y)

where I(X; T) and I(T; Y) represent the mutual information between their respective arguments. X is the input features, Y is the labels and T is a representation of the input such as the activations of a hidden layer in a neural network. When the expression is minimised there is very little mutual information between the compressed representation and the input. There is a lot of mutual information between the representation and the output, meaning it is useful for prediction.

Jensen-Shannon divergence
Symmetric version of the KL-divergence:

JSD(P || Q) = (1/2) KL(P || M) + (1/2) KL(Q || M)

where M is a mixture distribution equal to (P + Q)/2.

Kullback-Leibler divergence
A measure of the difference between two probability distributions. Also known as the relative entropy. In the usual use case one distribution is the true distribution of the data and the other is a model of it. For discrete distributions it is given as:

KL(P || Q) = sum_x P(x) log( P(x) / Q(x) )

Note that if a point is outside the support of Q (Q(x) = 0 while P(x) > 0), the KL-divergence will explode, since P(x)/Q(x) is undefined there. This can be dealt with by adding some random noise to Q.
However, this introduces a degree of error, and a lot of noise is often needed for convergence when using the KL-divergence for MLE. The Wasserstein distance, which also measures the distance between two distributions, does not have this problem. The KL-divergence is not symmetric. A KL-divergence of 0 means the distributions are identical; as the distributions become more different the divergence grows, and it is never negative.

Mutual information
Measures the dependence between two random variables. If the variables are independent, I(X; Y) = 0. If they are completely dependent, I(X; Y) = H(X) = H(Y).
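For concreteness, here is a minimal sketch (my addition, with discrete distributions as plain lists) of the KL and Jensen-Shannon divergences defined above:

```python
import math

def kl(p, q):
    """Discrete KL divergence KL(p || q); assumes q > 0 wherever p > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: symmetrized KL against the mixture."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl(p, q), kl(q, p), js(p, q))  # KL is asymmetric; JS is symmetric
```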
http://math.stackexchange.com/questions/82802/why-to-use-such-a-complex-definition-of-intersection-multiplicity
# Why to use such a complex definition of intersection multiplicity? Let $X$ be a smooth variety and $V, W$ two closed irreducible and reduced subvarieties represented by ideal sheaves $I$ and $J$. Serre defines an intersection multiplicity for an irreducible component $Z$ of $V\cap W$ as $$\mu(Z;V,W)=\sum_{i=0}^\infty (-1)^i \operatorname{length}_{\mathcal{O}_{X,z}} (\operatorname{Tor}_i^{\mathcal{O}_{X,z}}(\mathcal{O}_{X,z}/I,\mathcal{O}_{X,z}/J))$$ where $z$ is the generic point of $Z$. The first summand of this sum is $$\operatorname{length}_{\mathcal{O}_{X,z}} (\mathcal{O}_{X,z}/I \otimes_{\mathcal{O}_{X,z}}\mathcal{O}_{X,z}/J) = \operatorname{length}_{\mathcal{O}_{X,z}}(\mathcal{O}_{Z,z})$$ and this is what has a geometric interpretation as the intersection multiplicity for me. Can someone explain to me with a concrete geometric example why the "naive definition" isn't sufficient? In which often-appearing cases is it sufficient?

- E.g. consider the union of two planes in $\mathbb A^4$ meeting in a point, e.g. $x = y = 0$ and $z = w = 0$. Now intersect them with a third plane which meets them in just this point, e.g. $x = z, y = w$. Then the intersection multiplicity should be $2$; for each of the first two planes separately, the intersection with the third plane is a transverse intersection in a single point, so the multiplicity is one. And the multiplicity should be additive when we take the union of the two planes. But if you compute the tensor product of your question, you will get a length of $3$, not of $2$. It is corrected back to $2$ by the Tor terms in the formula.
https://www.actema.xyz/courses/peano
# Peano arithmetic

In this course, all theorems must be proved in the following context of hypotheses, which corresponds to Peano's axioms of arithmetic (without induction):

∀ x : ℕ. 0 ≠ x⊕1,
∀ x : ℕ. ∀ y : ℕ. x⊕1 = y⊕1 ⇒ x = y,
∀ x : ℕ. x + 0 = x,
∀ x : ℕ. ∀ y : ℕ. x + y⊕1 = (x + y)⊕1,
∀ x : ℕ. x ⋅ 0 = 0,
∀ x : ℕ. ∀ y : ℕ. x ⋅ y⊕1 = (x ⋅ y) + x

### Induction

Proofs by induction can be made by double-clicking on a green item (of type nat). The goal is then duplicated; in the first version you find an additional equation stating that the item is equal to 0; in the second goal, it is a successor and you can find the induction hypothesis.

### Equality

Goals of the form x = x are proved by double-clicking on them. Hypotheses of the form a = b can be dragged onto a goal in order to replace occurrence(s) of b by a. One can first select the occurrences to be rewritten in the goal, or the left (resp. right) hand side of the equation in order to specify a left-to-right (resp. right-to-left) rewrite.

### Pretty-Printing

The syntax x⊕n stands for the n-th successor of x. Closed terms are written as standard numerals. For instance 1 stands for 0⊕1, 2 stands for 1⊕1, 3 for 2⊕1 or 1⊕2, etc.

## Exercises

### Standard of truth

; forall x : nat. ~(Z() = S(x)),
forall x : nat. forall y : nat. S(x) = S(y) -> x = y,
forall x : nat. add(x, Z()) = x,
forall x : nat. forall y : nat. add(x, S(y)) = S(add(x, y)),
forall x : nat. mult(x, Z()) = Z(),
forall x : nat. forall y : nat. mult(x, S(y)) = add(mult(x, y), x)
|- add(S(Z()), S(Z())) = S(S(Z()))
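The exercise's sequent can be checked outside the tool as well. The sketch below is my addition (in Python rather than Actema's language): it encodes numerals as nested S(...) applications and defines add and mult by exactly the recursion equations in the hypotheses.

```python
# Peano numerals as nested tuples: Z is (), S(n) is ('S', n).
Z = ()

def S(n):
    return ('S', n)

def add(x, y):
    """add(x, Z) = x  and  add(x, S(y)) = S(add(x, y))."""
    if y == Z:
        return x
    return S(add(x, y[1]))

def mult(x, y):
    """mult(x, Z) = Z  and  mult(x, S(y)) = add(mult(x, y), x)."""
    if y == Z:
        return Z
    return add(mult(x, y[1]), x)

one = S(Z)
two = S(S(Z))
print(add(one, one) == two)   # the exercise's goal: 1 + 1 = 2
```

Evaluating add(S(Z), S(Z)) unfolds the same two equations that a formal proof of the sequent would rewrite with.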
http://www.variousconsequences.com/2008/10/simple-bayes.html
## Sunday, October 26, 2008

### Simple Bayes

I like Bayes theorem, it's really useful. The most intuitive and accessible explanation I've found of using Bayes theorem to solve a problem is in Russell and Norvig's classic, Chapter 20 (pdf) (I just own the first edition, the second edition looks even better). The initial example they give is about pulling different flavoured candy out of a sack (remember the balls and urn from your basic stats?). They also provide a really good discussion showing how standard least-squares regression is a special case of maximum-likelihood for when the data are generated by a process with Gaussian noise of fixed variance. Their first example is for estimating parameters in a discrete distribution of candy, but we can apply the same math to estimating the variance of a continuous distribution. Estimating variance is important; lots of times in industrial or business settings the variance of a thing matters as much or more than the average, just check out all of the press those Six Sigma guys get. That's because it gives us insight into our risk. It helps us answer questions like, "What's our probability of success?" And maybe, if we're lucky, "What things is that probability sensitive to?" Bayes theorem is a very simple equation:

P(h|d) = P(d|h) P(h) / P(d)

Where P(h) is the prior probability of the hypothesis, P(d|h) is the likelihood of the data given the hypothesis, P(d) is the probability of the data (which acts as a normalizing constant), and P(h|d) is the posterior probability of the hypothesis given the data. Octave has plenty of useful built-in functions that make it easy to play around with some Bayesian estimation. We'll set up a prior distribution for what we believe our variance to be with chi2pdf(x,4), which gives us a Chi-squared distribution with 4 degrees of freedom. We can draw a random sample from a normal distribution with the normrnd() function, and we'll use 5 as our "true" variance. That way we can see how our Bayesian and our standard frequentist estimates of the variance converge on the right answer.
The standard estimate of variance is just var(d), where d is the data vector. The likelihood part of Bayes theorem is:

% likelihood( d | M ) = PI_i likelihood(d_i | M_j)
for j=1:length(x)
  % d(1:i) is the first i samples observed so far
  lklhd(j) = prod( normpdf( d(1:i), 0, sqrt( x(j) ) ) );
endfor
lklhd = lklhd/trapz(x,lklhd); % normalize it

Then the posterior distribution is:

% posterior( M | d ) = prior( M ) * likelihood( d | M )
post_p = prior_var .* lklhd;
post_p = post_p/trapz(x,post_p); % normalize it

Both of the estimates of the variance converge on the true answer as n approaches infinity. If you have a good prior, the Bayesian estimate is especially useful when n is small. It's interesting to watch how the posterior distribution changes as we add more samples from the true distribution. The great thing about Bayes theorem is that it provides a continuous bridge from what we think we know to reality. It allows us to build up evidence and describe our knowledge in a consistent way. It's based on the fundamentals of basic probability and was all set down in a few pages by a nonconformist Presbyterian minister and published after his death in 1763. Octave file for the above calculations: simple_bayes.m
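The same grid calculation ports directly to plain Python; the sketch below is my own (with a uniform grid prior standing in for the post's chi-squared prior). Working in log-likelihoods and shifting by the maximum before exponentiating avoids the underflow that the product of many small densities would otherwise cause, the same trick used to stabilise log-sum-exp computations.

```python
import math
import random

random.seed(0)
true_var = 5.0
d = [random.gauss(0.0, math.sqrt(true_var)) for _ in range(50)]

# Grid over candidate variances and a (here uniform) prior on that grid.
xs = [0.1 + 0.1 * i for i in range(200)]      # variances 0.1 .. 20.0
prior = [1.0 for _ in xs]

def log_normpdf(v, mu, sigma):
    return -0.5 * ((v - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Log-likelihood of the whole sample for each candidate variance.
loglik = [sum(log_normpdf(vi, 0.0, math.sqrt(x)) for vi in d) for x in xs]

# Shift by the maximum before exponentiating, then normalize the posterior.
m = max(loglik)
post = [p * math.exp(ll - m) for p, ll in zip(prior, loglik)]
norm = sum(post)
post = [p / norm for p in post]

post_mean = sum(x * p for x, p in zip(xs, post))
print(post_mean)   # close to the sample variance, and to 5
```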
https://www.studysmarter.us/textbooks/math/precalculus-enhanced-with-graphing-utilities-6th/graphs/q-24-in-problem-17-36-solve-each-equation-algebraically-veri/
Q. 24, found in: Page 29

Precalculus Enhanced with Graphing Utilities
Book edition: 6th
Author(s): Sullivan
Pages: 1200 pages
ISBN: 9780321795465

# In Problems 17-36, solve each equation algebraically. Verify your solution using a graphing utility. $\frac{4}{y}-5=\frac{18}{2y}$

The required solution is $y=-1$.

## Step 1. Given information.

We have: $\frac{4}{y}-5=\frac{18}{2y}$

## Step 2. Solve the equation algebraically.

The equation is $\frac{4}{y}-5=\frac{18}{2y}$.

Simplify: $\frac{4}{y}-5=\frac{9}{y}$

Multiply both sides by $y$: $4-5y=9$

Subtract $4$ from both sides: $-5y=5$

Divide both sides by $-5$: $y=-1$

## Step 3. Verify the solution using a graphing utility.

The solution of the equation is $y=-1$. Draw the graph of the equation $\frac{4}{y}-5=\frac{18}{2y}$ using the graphing utility.
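The algebra can also be double-checked with exact rational arithmetic (my addition, using Python's fractions module):

```python
from fractions import Fraction

def lhs(y):
    return Fraction(4) / y - 5

def rhs(y):
    return Fraction(18) / (2 * y)

y = Fraction(-1)
print(lhs(y), rhs(y))   # both sides evaluate to -9
```

Note that y = 0 is excluded from the domain (both sides divide by y), so y = -1 is the only solution.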
https://handwiki.org/wiki/600_(number)
# 600 (number)

600 (six hundred) is the natural number following 599 and preceding 601.

- Cardinal: six hundred
- Ordinal: 600th (six hundredth)
- Factorization: 2^3 × 3 × 5^2
- Divisors: 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 25, 30, 40, 50, 60, 75, 100, 120, 150, 200, 300, 600
- Greek numeral: Χ´
- Roman numeral: DC
- Binary: 1001011000_2
- Ternary: 211020_3
- Quaternary: 21120_4
- Quinary: 4400_5
- Senary: 2440_6
- Octal: 1130_8
- Duodecimal: 420_12
- Vigesimal: 1A0_20
- Base 36: GO_36

## Mathematical properties

Six hundred is a composite number, an abundant number, a pronic number[1] and a Harshad number.

## Credit and cars

- In the United States, a credit score of 600 or below is considered poor, limiting available credit at a normal interest rate.
- NASCAR runs 600 advertised miles in the Coca-Cola 600, its longest race.
- The Fiat 600 is a car, the SEAT 600 its Spanish version.

## Integers from 601 to 699

### 610s

Main page: 613 (number)

- 613 = prime number, first number of prime triple (p, p + 4, p + 6), middle number of sexy prime triple (p − 6, p, p + 6). Geometrical numbers: centered square number with 18 per side, circular number of 21 with a square grid and 27 using a triangular grid. Also 17-gonal. Hypotenuse of a right triangle with integral sides, these being 35 and 612. Partitioning: 613 partitions of 47 into non-factor primes, 613 non-squashing partitions into distinct parts of the number 54. Squares: sum of squares of two consecutive integers, 17 and 18. Additional properties: a lucky number, index of prime Lucas number.[9]
- In Judaism the number 613 is very significant, as its metaphysics, the Kabbalah, views every complete entity as divisible into 613 parts: 613 parts of every Sefirah; 613 mitzvot, or divine Commandments in the Torah; 613 parts of the human body.
- The number 613 hangs from the rafters at Madison Square Garden in honor of New York Knicks coach Red Holzman's 613 victories.
- 614 = 2 × 307, nontotient, 2-Knödel number.
According to Rabbi Emil Fackenheim, the number of Commandments in Judaism should be 614 rather than the traditional 613.

- 615 = 3 × 5 × 41, sphenic number

Main page: 616 (number)

- 616 = 2^3 × 7 × 11, Padovan number, balanced number,[10] an alternative value for the Number of the Beast (more commonly accepted to be 666).
- 617 = prime number, sum of five consecutive primes (109 + 113 + 127 + 131 + 137), Chen prime, Eisenstein prime with no imaginary part, number of compositions of 17 into distinct parts,[11] prime index prime, index of prime Lucas number[9]
- Area code 617, a telephone area code covering the metropolitan Boston area.
- 618 = 2 × 3 × 103, sphenic number, admirable number.
- 619 = prime number, strobogrammatic prime,[12] alternating factorial[13]

### 620s

- 620 = 2^2 × 5 × 31, sum of four consecutive primes (149 + 151 + 157 + 163), sum of eight consecutive primes (61 + 67 + 71 + 73 + 79 + 83 + 89 + 97). The sum of the first 620 primes is itself prime.[14]
- 621 = 3^3 × 23, Harshad number, the discriminant of a totally real cubic field[15]
- 622 = 2 × 311, nontotient, Fine number. Fine's sequence (or Fine numbers): number of relations of valence >= 1 on an n-set; also number of ordered rooted trees with n edges having root of even degree. It is also the standard diameter of modern road bicycle wheels (622 mm, from hook bead to hook bead).
- 623 = 7 × 89, number of partitions of 23 into an even number of parts[16]
- 624 = 2^4 × 3 × 13 = J_4(5),[17] sum of a twin prime (311 + 313), Harshad number, Zuckerman number
- 625 = 25^2 = 5^4, sum of seven consecutive primes (73 + 79 + 83 + 89 + 97 + 101 + 103), centered octagonal number,[18] 1-automorphic number, Friedman number since 625 = 5^(6−2)[19]
- 626 = 2 × 313, nontotient, 2-Knödel number. Stitch's experiment number.
- 627 = 3 × 11 × 19, sphenic number, number of integer partitions of 20,[20] Smith number[21]
- 628 = 2^2 × 157, nontotient, totient sum for first 45 integers
- 629 = 17 × 37, highly cototient number,[22] Harshad number, number of diagonals in a 37-gon[23]

### 630s

- 630 = 2 × 3^2 × 5 × 7, sum of six consecutive primes (97 + 101 + 103 + 107 + 109 + 113), triangular number, hexagonal number,[24] sparsely totient number,[25] Harshad number, balanced number[26]
- 631 = Cuban prime number, centered triangular number,[27] centered hexagonal number,[28] Chen prime, lazy caterer number (sequence A000124 in the OEIS)
- 632 = 2^3 × 79, number of 13-bead necklaces with 2 colors[29]
- 633 = 3 × 211, sum of three consecutive primes (199 + 211 + 223), Blum integer; also, in the title of the movie 633 Squadron
- 634 = 2 × 317, nontotient, Smith number[21]
- 635 = 5 × 127, sum of nine consecutive primes (53 + 59 + 61 + 67 + 71 + 73 + 79 + 83 + 89), Mertens function(635) = 0, number of compositions of 13 into pairwise relatively prime parts[30]
- "Project 635", the Irtysh River diversion project in China involving a dam and a canal.
• 636 = 2² × 3 × 53, sum of ten consecutive primes (43 + 47 + 53 + 59 + 61 + 67 + 71 + 73 + 79 + 83), Smith number,[21] Mertens function(636) = 0
• 637 = 7² × 13, Mertens function(637) = 0, decagonal number[31]
• 638 = 2 × 11 × 29, sphenic number, sum of four consecutive primes (151 + 157 + 163 + 167), nontotient, centered heptagonal number[32]
• 639 = 3² × 71, sum of the first twenty primes, also ISO 639 is the ISO's standard for codes for the representation of languages

### 640s

• 640 = 2⁷ × 5, Harshad number, hexadecagonal number,[33] number of 1's in all partitions of 24 into odd parts,[34] number of acres in a square mile
• 641 = prime number, Sophie Germain prime,[35] factor of 4294967297 (the smallest nonprime Fermat number), Chen prime, Eisenstein prime with no imaginary part, Proth prime[36]
• 642 = 2 × 3 × 107 = 1⁴ + 2⁴ + 5⁴,[37] sphenic number, admirable number
• 643 = prime number, largest prime factor of 123456
• 644 = 2² × 7 × 23, nontotient, Perrin number,[38] Harshad number, common umask, admirable number
• 645 = 3 × 5 × 43, sphenic number, octagonal number, Smith number,[21] Fermat pseudoprime to base 2,[39] Harshad number
• 646 = 2 × 17 × 19, sphenic number, also ISO 646 is the ISO's standard for international 7-bit variants of ASCII, number of permutations of length 7 without rising or falling successions[40]
• 647 = prime number, sum of five consecutive primes (113 + 127 + 131 + 137 + 139), Chen prime, Eisenstein prime with no imaginary part, 3⁶⁴⁷ − 2⁶⁴⁷ is prime[41]
• 648 = 2³ × 3⁴ = A331452(7, 1),[42] Harshad number, Achilles number, area of a square with diagonal 36[43]
• 649 = 11 × 59, Blum integer

### 650s

• 650 = 2 × 5² × 13, primitive abundant number,[44] square pyramidal number,[45] pronic number,[1] nontotient, totient sum for first 46 integers; (other fields) the number of seats in the House of Commons of the United Kingdom, admirable number
• 651 = 3 × 7 × 31, sphenic number, pentagonal number,[46] nonagonal number[47]
• 652 = 2² ×
163, maximal number of regions by drawing 26 circles[48]
• 653 = prime number, Sophie Germain prime,[35] balanced prime,[3] Chen prime, Eisenstein prime with no imaginary part
• 654 = 2 × 3 × 109, sphenic number, nontotient, Smith number,[21] admirable number
• 655 = 5 × 131, number of toothpicks after 20 stages in a three-dimensional grid[49]
• 656 = 2⁴ × 41 = $\displaystyle{ \lfloor \frac{3^{16}}{2^{16}} \rfloor }$.[50] In Judaism, 656 is the number of times that Jerusalem is mentioned in the Hebrew Bible or Old Testament.
• 657 = 3² × 73, the largest known number not of the form a² + s with s a semiprime
• 658 = 2 × 7 × 47, sphenic number, untouchable number
• 659 = prime number, Sophie Germain prime,[35] sum of seven consecutive primes (79 + 83 + 89 + 97 + 101 + 103 + 107), Chen prime, Mertens function sets new low of −10 which stands until 661, highly cototient number,[22] Eisenstein prime with no imaginary part, strictly non-palindromic number[4]

### 660s

• 660 = 2² × 3 × 5 × 11
  • Sum of four consecutive primes (157 + 163 + 167 + 173).
  • Sum of six consecutive primes (101 + 103 + 107 + 109 + 113 + 127).
  • Sum of eight consecutive primes (67 + 71 + 73 + 79 + 83 + 89 + 97 + 101).
  • Sparsely totient number.[25]
  • Sum of 11th row when writing the natural numbers as a triangle.[51]
• 661 = prime number
  • Sum of three consecutive primes (211 + 223 + 227).
  • Mertens function sets new low of −11 which stands until 665.
  • Pentagram number of the form $\displaystyle{ 5n^{2}-5n+1 }$.
  • Hexagram number of the form $\displaystyle{ 6n^{2}-6n+1 }$, i.e. a star number.
• 662 = 2 × 331, nontotient, member of Mian–Chowla sequence[52]
• 663 = 3 × 13 × 17, sphenic number, Smith number[21]
• 664 = 2³ × 83, number of knapsack partitions of 33[53]
  • Telephone area code for Montserrat.
  • Area code for Tijuana within Mexico.
  • Model number for the Amstrad CPC664 home computer.
• 665 = 5 × 7 × 19, sphenic number, Mertens function sets new low of −12 which stands until 1105, number of diagonals in a 38-gon[23]
• 666 = 2 × 3² × 37, repdigit
• 667 = 23 × 29, lazy caterer number (sequence A000124 in the OEIS)
• 668 = 2² × 167, nontotient
• 669 = 3 × 223, Blum integer

### 670s

• 670 = 2 × 5 × 67, sphenic number, octahedral number,[54] nontotient
• 671 = 11 × 61. This number is the magic constant of an n×n normal magic square and of the n-queens problem for n = 11.
• 672 = 2⁵ × 3 × 7, harmonic divisor number,[55] Zuckerman number, admirable number
• 673 = prime number, Proth prime[36]
• 674 = 2 × 337, nontotient, 2-Knödel number
• 675 = 3³ × 5², Achilles number
• 676 = 2² × 13² = 26², palindromic square
• 677 = prime number, Chen prime, Eisenstein prime with no imaginary part, number of non-isomorphic self-dual multiset partitions of weight 10[56]
• 678 = 2 × 3 × 113, sphenic number, nontotient, number of surface points of an octahedron with side length 13,[57] admirable number
• 679 = 7 × 97, sum of three consecutive primes (223 + 227 + 229), sum of nine consecutive primes (59 + 61 + 67 + 71 + 73 + 79 + 83 + 89 + 97), smallest number of multiplicative persistence 5[58]

### 680s

• 680 = 2³ × 5 × 17, tetrahedral number,[59] nontotient
• 681 = 3 × 227, centered pentagonal number[2]
• 682 = 2 × 11 × 31, sphenic number, sum of four consecutive primes (163 + 167 + 173 + 179), sum of ten consecutive primes (47 + 53 + 59 + 61 + 67 + 71 + 73 + 79 + 83 + 89), number of moves to solve the Norwegian puzzle strikketoy.[60]
• 683 = prime number, Sophie Germain prime,[35] sum of five consecutive primes (127 + 131 + 137 + 139 + 149), Chen prime, Eisenstein prime with no imaginary part, Wagstaff prime[61]
• 684 = 2² × 3² × 19, Harshad number, number of graphical forest partitions of 32[62]
• 685 = 5 × 137, centered square number[63]
• 686 = 2 × 7³, nontotient, number of multigraphs on infinite set of nodes with 7 edges[64]
• 687 = 3 × 229, 687 days to orbit the sun
(Mars), D-number[65]
• 688 = 2⁴ × 43, Friedman number since 688 = 8 × 86,[19] 2-automorphic number[66]
• 689 = 13 × 53, sum of three consecutive primes (227 + 229 + 233), sum of seven consecutive primes (83 + 89 + 97 + 101 + 103 + 107 + 109). Strobogrammatic number[67]

### 690s

• 690 = 2 × 3 × 5 × 23, sum of six consecutive primes (103 + 107 + 109 + 113 + 127 + 131), sparsely totient number,[25] Smith number,[21] Harshad number
• ISO 690 is the ISO's standard for bibliographic references
• 691 = prime number, (negative) numerator of the Bernoulli number B₁₂ = −691/2730. Ramanujan's tau function τ and the divisor function σ₁₁ are related by the remarkable congruence τ(n) ≡ σ₁₁(n) (mod 691).
• In number theory, 691 is a "marker" (similar to the radioactive markers in biology): whenever it appears in a computation, one can be sure that Bernoulli numbers are involved.
• 692 = 2² × 173, number of partitions of 48 into powers of 2[68]
• 693 = 3² × 7 × 11, triangular matchstick number,[69] the number of the "non-existing" Alabama State Constitution amendment, the number of sections in Ludwig Wittgenstein's Philosophical Investigations.
• 694 = 2 × 347, centered triangular number,[27] nontotient
• 695 = 5 × 139, 695!! + 2 is prime.[70]
• 696 = 2³ × 3 × 29, sum of eight consecutive primes (71 + 73 + 79 + 83 + 89 + 97 + 101 + 103), totient sum for first 47 integers, trails of length 9 on honeycomb lattice[71]
• 697 = 17 × 41, cake number; the number of sides of Colorado[72]
• 698 = 2 × 349, nontotient, sum of squares of two primes[73]
• 699 = 3 × 233, D-number[65]

## References

1. "Sloane's A005891 : Centered pentagonal numbers". OEIS Foundation. 2. "Sloane's A006562 : Balanced primes". OEIS Foundation. 3. "Sloane's A016038 : Strictly non-palindromic numbers". OEIS Foundation. 4. Sloane, N. J. A., ed. "Sequence A331452". OEIS Foundation. Retrieved 2022-05-09. 5. Sloane, N. J. A., ed. "Sequence A000787 (Strobogrammatic numbers)". OEIS Foundation. Retrieved 2022-05-07. 6.
"Sloane's A000045 : Fibonacci numbers". OEIS Foundation. 7. "Sloane's A002559 : Markoff (or Markov) numbers". OEIS Foundation. 8. Sloane, N. J. A., ed. "Sequence A001606 (Indices of prime Lucas numbers)". OEIS Foundation. 9. Sloane, N. J. A., ed. "Sequence A032020 (Number of compositions (ordered partitions) of n into distinct parts)". OEIS Foundation. Retrieved 2022-05-24. 10. "Sloane's A007597 : Strobogrammatic primes". OEIS Foundation. 11. "Sloane's A005165 : Alternating factorials". OEIS Foundation. 12. Sloane, N. J. A., ed. "Sequence A006832 (Discriminants of totally real cubic fields)". OEIS Foundation. Retrieved 2022-05-31. 13. Sloane, N. J. A., ed. "Sequence A027187 (Number of partitions of n into an even number of parts)". OEIS Foundation. Retrieved 2022-05-31. 14. Sloane, N. J. A., ed. "Sequence A059377 (Jordan function J_4(n))". OEIS Foundation. Retrieved 2022-05-24. 15. "Sloane's A036057 : Friedman numbers". OEIS Foundation. 16. "Sloane's A000041 : a(n) = number of partitions of n". OEIS Foundation. 17. "Sloane's A006753 : Smith numbers". OEIS Foundation. 18. "Sloane's A100827 : Highly cototient numbers". OEIS Foundation. 19. Sloane, N. J. A., ed. "Sequence A000096 (a(n) = n*(n+3)/2)". OEIS Foundation. Retrieved 2022-05-31. 20. "Sloane's A000384 : Hexagonal numbers". OEIS Foundation. 21. "Sloane's A036913 : Sparsely totient numbers". OEIS Foundation. 22. "Sloane's A005448 : Centered triangular numbers". OEIS Foundation. 23. "Sloane's A003215 : Hex (or centered hexagonal) numbers". OEIS Foundation. 24. Sloane, N. J. A., ed. "Sequence A101268 (Number of compositions of n into pairwise relatively prime parts)". OEIS Foundation. Retrieved 2022-05-31. 25. "Sloane's A001107 : 10-gonal (or decagonal) numbers". OEIS Foundation. 26. "Sloane's A069099 : Centered heptagonal numbers". OEIS Foundation. 27. Sloane, N. J. A., ed. "Sequence A051868 (16-gonal (or hexadecagonal) numbers: a(n) = n*(7*n-6))". OEIS Foundation. Retrieved 2022-05-31. 28. 
"Sloane's A005384 : Sophie Germain primes". OEIS Foundation. 29. "Sloane's A080076 : Proth primes". OEIS Foundation. 30. Sloane, N. J. A., ed. "Sequence A074501 (a(n) = 1^n + 2^n + 5^n)". OEIS Foundation. Retrieved 2022-05-31. 31. "Sloane's A001608 : Perrin sequence". OEIS Foundation. 32. "Sloane's A001567 : Fermat pseudoprimes to base 2". OEIS Foundation. 33. Sloane, N. J. A., ed. "Sequence A057468 (Numbers k such that 3^k - 2^k is prime)". OEIS Foundation. Retrieved 2022-05-31. 34. "Sloane's A331452". OEIS Foundation. 35. Sloane, N. J. A., ed. "Sequence A001105 (a(n) = 2*n^2)". OEIS Foundation. 36. "Sloane's A071395 : Primitive abundant numbers". OEIS Foundation. 37. "Sloane's A000330 : Square pyramidal numbers". OEIS Foundation. 38. "Sloane's A000326 : Pentagonal numbers". OEIS Foundation. 39. Sloane, N. J. A., ed. "Sequence A014206 (a(n) = n^2 + n + 2)". OEIS Foundation. Retrieved 2022-05-31. 40. Sloane, N. J. A., ed. "Sequence A160160 (Toothpick sequence in the three-dimensional grid)". OEIS Foundation. Retrieved 2022-05-31. 41. Sloane, N. J. A., ed. "Sequence A002379 (a(n) = floor(3^n / 2^n))". OEIS Foundation. Retrieved 2022-05-31. 42. Sloane, N. J. A., ed. "Sequence A027480 (a(n) = n*(n+1)*(n+2)/2)". OEIS Foundation. Retrieved 2022-05-31. 43. "Sloane's A005282 : Mian-Chowla sequence". OEIS Foundation. 44. Sloane, N. J. A., ed. "Sequence A108917 (Number of knapsack partitions of n)". OEIS Foundation. Retrieved 2022-05-31. 45. "Sloane's A005900 : Octahedral numbers". OEIS Foundation. 46. "Sloane's A001599 : Harmonic or Ore numbers". OEIS Foundation. 47. Sloane, N. J. A., ed. "Sequence A316983 (Number of non-isomorphic self-dual multiset partitions of weight n)". OEIS Foundation. Retrieved 2022-05-31. 48. Sloane, N. J. A., ed. "Sequence A005899 (Number of points on surface of octahedron with side n)". OEIS Foundation. Retrieved 2022-05-31. 49. Sloane, N. J. A., ed. "Sequence A003001 (Smallest number of multiplicative persistence n)". OEIS Foundation. 
Retrieved 2022-05-31. 50. "Sloane's A000292 : Tetrahedral numbers". OEIS Foundation. 51. Sloane, N. J. A., ed. "Sequence A000975 (Lichtenberg sequence)". OEIS Foundation. Retrieved 2022-05-31. 52. "Sloane's A000979 : Wagstaff primes". OEIS Foundation. 53. Sloane, N. J. A., ed. "Sequence A000070 (a(n) = Sum_{k=0..n} p(k) where p(k) = number of partitions of k (A000041))". OEIS Foundation. Retrieved 2022-05-31. 54. "Sloane's A001844 : Centered square numbers". OEIS Foundation. 55. Sloane, N. J. A., ed. "Sequence A050535 (Number of multigraphs on infinite set of nodes with n edges)". OEIS Foundation. Retrieved 2022-05-31. 56. Sloane, N. J. A., ed. "Sequence A033553 (3-Knödel numbers or D-numbers: numbers n > 3 such that n divides k^(n-2)-k for all k with gcd(k, n) = 1)". OEIS Foundation. Retrieved 2022-05-31. 57. Sloane, N. J. A., ed. "Sequence A030984 (2-automorphic numbers)". OEIS Foundation. Retrieved 2021-09-01. 58. "Sloane's A000787 : Strobogrammatic numbers". OEIS Foundation. 59. Sloane, N. J. A., ed. "Sequence A000123 (Number of binary partitions: number of partitions of 2n into powers of 2)". OEIS Foundation. Retrieved 2022-05-31. 60. Sloane, N. J. A., ed. "Sequence A045943 (Triangular matchstick numbers: a(n) = 3*n*(n+1)/2)". OEIS Foundation. Retrieved 2022-05-31. 61. Sloane, N. J. A., ed. "Sequence A076185 (Numbers n such that n!! + 2 is prime)". OEIS Foundation. Retrieved 2022-05-31. 62. Sloane, N. J. A., ed. "Sequence A006851 (Trails of length n on honeycomb lattice)". OEIS Foundation. Retrieved 2022-05-18. 63. Sloane, N. J. A., ed. "Sequence A045636 (Numbers of the form p^2 + q^2, with p and q primes)". OEIS Foundation. Retrieved 2022-05-31.
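Several of the arithmetic claims in the list above (prime factorizations and sums of consecutive primes) are easy to spot-check. The sketch below is illustrative only; the function names are mine, not from any library:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def sum_of_k_consecutive_primes(target, k, primes):
    """Return the k consecutive primes summing to `target`, or None."""
    for i in range(len(primes) - k + 1):
        if primes[i] > target:
            break
        if sum(primes[i : i + k]) == target:
            return primes[i : i + k]
    return None

primes = primes_up_to(1000)

# 617 is a sum of five consecutive primes:
print(sum_of_k_consecutive_primes(617, 5, primes))  # [109, 113, 127, 131, 137]
# 639 is the sum of the first twenty primes:
print(sum(primes[:20]))  # 639
# 616 = 2^3 * 7 * 11:
print(2 ** 3 * 7 * 11)  # 616
```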
https://en.wikipedia.org/wiki/Hierarchical_clustering
# Hierarchical clustering

In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:[1]

• Agglomerative: This is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
• Divisive: This is a "top-down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram.

The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of ${\displaystyle {\mathcal {O}}(n^{3})}$ and requires ${\displaystyle {\mathcal {O}}(n^{2})}$ memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity ${\displaystyle {\mathcal {O}}(n^{2})}$) are known: SLINK[2] for single-linkage and CLINK[3] for complete-linkage clustering. With a heap the runtime of the general case can be reduced to ${\displaystyle {\mathcal {O}}(n^{2}\log n)}$ at the cost of further increasing the memory requirements. In many programming languages, the memory overheads of this approach are too large to make it practically usable.

Except for the special case of single-linkage, none of the algorithms (except exhaustive search in ${\displaystyle {\mathcal {O}}(2^{n})}$) can be guaranteed to find the optimum solution. Divisive clustering with an exhaustive search is ${\displaystyle {\mathcal {O}}(2^{n})}$, but it is common to use faster heuristics to choose splits, such as k-means.
## Cluster dissimilarity

In order to decide which clusters should be combined (for agglomerative), or where a cluster should be split (for divisive), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by use of an appropriate metric (a measure of distance between pairs of observations), and a linkage criterion which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets.

### Metric

The choice of an appropriate metric will influence the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another. For example, in a 2-dimensional space, the distance between the point (1,0) and the origin (0,0) is always 1 according to the usual norms, but the distance between the point (1,1) and the origin (0,0) can be 2 under Manhattan distance, ${\displaystyle \scriptstyle {\sqrt {2}}}$ under Euclidean distance, or 1 under maximum distance. Some commonly used metrics for hierarchical clustering are:[4]

• Euclidean distance: ${\displaystyle \|a-b\|_{2}={\sqrt {\sum _{i}(a_{i}-b_{i})^{2}}}}$
• Squared Euclidean distance: ${\displaystyle \|a-b\|_{2}^{2}=\sum _{i}(a_{i}-b_{i})^{2}}$
• Manhattan distance: ${\displaystyle \|a-b\|_{1}=\sum _{i}|a_{i}-b_{i}|}$
• Maximum distance: ${\displaystyle \|a-b\|_{\infty }=\max _{i}|a_{i}-b_{i}|}$
• Mahalanobis distance: ${\displaystyle {\sqrt {(a-b)^{\top }S^{-1}(a-b)}}}$ where S is the covariance matrix

For text or other non-numeric data, metrics such as the Hamming distance or Levenshtein distance are often used.
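The (1,1)-versus-origin example above can be reproduced directly. This is a minimal sketch; the function names are mine, not from any library:

```python
import math

def manhattan(a, b):
    """L1 distance: sum of coordinate-wise absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """L2 distance: square root of the sum of squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def maximum(a, b):
    """L-infinity distance: largest coordinate-wise absolute difference."""
    return max(abs(x - y) for x, y in zip(a, b))

origin, p = (0.0, 0.0), (1.0, 1.0)
print(manhattan(p, origin))  # 2.0
print(euclidean(p, origin))  # 1.4142135623730951 (sqrt(2))
print(maximum(p, origin))    # 1.0
```

The same point is at three different distances from the origin depending on the metric, which is why the metric choice shapes the resulting clusters.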
A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.[citation needed]

### Linkage criteria

The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations. Some commonly used linkage criteria between two sets of observations A and B are:[5][6]

• Maximum or complete-linkage clustering: ${\displaystyle \max \,\{\,d(a,b):a\in A,\,b\in B\,\}.}$
• Minimum or single-linkage clustering: ${\displaystyle \min \,\{\,d(a,b):a\in A,\,b\in B\,\}.}$
• Unweighted average linkage clustering (or UPGMA): ${\displaystyle {\frac {1}{|A|\cdot |B|}}\sum _{a\in A}\sum _{b\in B}d(a,b).}$
• Weighted average linkage clustering (or WPGMA): ${\displaystyle d(i\cup j,k)={\frac {d(i,k)+d(j,k)}{2}}.}$
• Centroid linkage clustering, or UPGMC: ${\displaystyle \|c_{s}-c_{t}\|}$ where ${\displaystyle c_{s}}$ and ${\displaystyle c_{t}}$ are the centroids of clusters s and t, respectively.
• Minimum energy clustering: ${\displaystyle {\frac {2}{nm}}\sum _{i,j=1}^{n,m}\|a_{i}-b_{j}\|_{2}-{\frac {1}{n^{2}}}\sum _{i,j=1}^{n}\|a_{i}-a_{j}\|_{2}-{\frac {1}{m^{2}}}\sum _{i,j=1}^{m}\|b_{i}-b_{j}\|_{2}}$

Here d is the chosen metric. Other linkage criteria include:

• The sum of all intra-cluster variance.
• The increase in variance for the cluster being merged (Ward's criterion).[7]
• The probability that candidate clusters spawn from the same distribution function (V-linkage).
• The product of in-degree and out-degree on a k-nearest-neighbour graph (graph degree linkage).[8]
• The increment of some cluster descriptor (i.e., a quantity defined for measuring the quality of a cluster) after merging two clusters.[9][10][11]

## Discussion

Hierarchical clustering has the distinct advantage that any valid measure of distance can be used.
In fact, the observations themselves are not required: all that is used is a matrix of distances.

## Agglomerative clustering example

Raw data

For example, suppose this data is to be clustered, and the Euclidean distance is the distance metric. The hierarchical clustering dendrogram would be as such:

Cutting the tree at a given height will give a partitioning clustering at a selected precision. In this example, cutting after the second row (from the top) of the dendrogram will yield clusters {a} {b c} {d e} {f}. Cutting after the third row will yield clusters {a} {b c} {d e f}, which is a coarser clustering, with a smaller number of larger clusters.

This method builds the hierarchy from the individual elements by progressively merging clusters. In our example, we have six elements {a} {b} {c} {d} {e} and {f}. The first step is to determine which elements to merge in a cluster. Usually, we want to take the two closest elements, according to the chosen distance.

Optionally, one can also construct a distance matrix at this stage, where the number in the i-th row and j-th column is the distance between the i-th and j-th elements. Then, as clustering progresses, rows and columns are merged as the clusters are merged and the distances updated. This is a common way to implement this type of clustering, and has the benefit of caching distances between clusters. A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below).

Suppose we have merged the two closest elements b and c; we now have the following clusters {a}, {b, c}, {d}, {e} and {f}, and want to merge them further. To do that, we need to take the distance between {a} and {b c}, and therefore define the distance between two clusters.
Usually the distance between two clusters ${\displaystyle {\mathcal {A}}}$ and ${\displaystyle {\mathcal {B}}}$ is one of the following:

• The maximum distance between elements of each cluster (complete-linkage clustering): ${\displaystyle \max\{\,d(x,y):x\in {\mathcal {A}},\,y\in {\mathcal {B}}\,\}.}$
• The minimum distance between elements of each cluster (single-linkage clustering): ${\displaystyle \min\{\,d(x,y):x\in {\mathcal {A}},\,y\in {\mathcal {B}}\,\}.}$
• The mean distance between elements of each cluster (also called average linkage clustering, used e.g. in UPGMA): ${\displaystyle {1 \over {|{\mathcal {A}}|\cdot |{\mathcal {B}}|}}\sum _{x\in {\mathcal {A}}}\sum _{y\in {\mathcal {B}}}d(x,y).}$
• The sum of all intra-cluster variance.
• The increase in variance for the cluster being merged (Ward's method[7]).
• The probability that candidate clusters spawn from the same distribution function (V-linkage).

In case of tied minimum distances, a pair is randomly chosen, thus being able to generate several structurally different dendrograms. Alternatively, all tied pairs may be joined at the same time, generating a unique dendrogram.[12]

One can always decide to stop clustering when there is a sufficiently small number of clusters (number criterion). Some linkages may also guarantee that agglomeration occurs at a greater distance between clusters than the previous agglomeration, and then one can stop clustering when the clusters are too far apart to be merged (distance criterion). However, this is not the case for, e.g., centroid linkage, where so-called reversals[13] (inversions, departures from ultrametricity) may occur.

## Divisive clustering

The basic principle of divisive clustering was published as the DIANA (DIvisive ANAlysis Clustering) algorithm.[14] Initially, all data is in the same cluster, and the largest cluster is split until every object is separate. Because there exist ${\displaystyle O(2^{n})}$ ways of splitting each cluster, heuristics are needed.
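The greedy merging procedure and the single-linkage cluster distance described above can be sketched in a few lines. The coordinates below are hypothetical, chosen so that b,c and d,e are the closest pairs as in the worked example; this is a naive O(n³)-style illustration, not the efficient SLINK algorithm:

```python
import math

def single_linkage_cluster(points, target_k):
    """Naive agglomerative clustering with single linkage.

    `points` maps labels to coordinates; clusters are merged greedily
    until `target_k` clusters remain. Returns (clusters, merge history).
    """
    clusters = [{lbl} for lbl in points]
    history = []

    def dist(c1, c2):
        # Single linkage: minimum pairwise distance between the clusters.
        return min(math.dist(points[a], points[b]) for a in c1 for b in c2)

    while len(clusters) > target_k:
        # Greedy step: find and merge the closest pair of clusters.
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        merged = clusters[i] | clusters[j]
        history.append(sorted(merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters, history

# Made-up one-dimensional layout mirroring the {a..f} example.
pts = {"a": (0, 0), "b": (4, 0), "c": (4.5, 0), "d": (9, 0), "e": (9.6, 0), "f": (13, 0)}
clusters, history = single_linkage_cluster(pts, 4)
print(sorted(sorted(c) for c in clusters))  # [['a'], ['b', 'c'], ['d', 'e'], ['f']]
```

With this layout, b and c merge first, then d and e, reproducing the partition {a} {b c} {d e} {f} from the dendrogram example.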
DIANA chooses the object with the maximum average dissimilarity and then moves all objects to this cluster that are more similar to the new cluster than to the remainder.

## Software

### Open source implementations

Hierarchical clustering dendrogram of the Iris dataset (using R).

Hierarchical clustering and interactive dendrogram visualization in Orange data mining suite.

• ALGLIB implements several hierarchical clustering algorithms (single-link, complete-link, Ward) in C++ and C# with O(n²) memory and O(n³) run time.
• ELKI includes multiple hierarchical clustering algorithms, various linkage strategies and also includes the efficient SLINK,[2] CLINK[3] and Anderberg algorithms, flexible cluster extraction from dendrograms and various other cluster analysis algorithms.
• Octave, the GNU analog to MATLAB, implements hierarchical clustering in function "linkage".
• Orange, a data mining software suite, includes hierarchical clustering with interactive dendrogram visualisation.
• R has many packages that provide functions for hierarchical clustering.
• SciPy implements hierarchical clustering in Python, including the efficient SLINK algorithm.
• scikit-learn also implements hierarchical clustering in Python.
• Weka includes hierarchical cluster analysis.

### Commercial implementations

• MATLAB includes hierarchical cluster analysis.
• SAS includes hierarchical cluster analysis in PROC CLUSTER.
• Mathematica includes a Hierarchical Clustering Package.
• NCSS includes hierarchical cluster analysis.
• SPSS includes hierarchical cluster analysis.
• Qlucore Omics Explorer includes hierarchical cluster analysis.
• Stata includes hierarchical cluster analysis.
• CrimeStat includes a nearest neighbor hierarchical cluster algorithm with a graphical output for a Geographic Information System.

## References

1. ^ Rokach, Lior, and Oded Maimon. "Clustering methods." Data mining and knowledge discovery handbook. Springer US, 2005. 321-352. 2.
^ "The DISTANCE Procedure: Proximity Measures". SAS/STAT 9.2 Users Guide. SAS Institute. Retrieved 2009-04-26. 3. ^ "The CLUSTER Procedure: Clustering Methods". SAS/STAT 9.2 Users Guide. SAS Institute. Retrieved 2009-04-26. 4. ^ Székely, G. J.; Rizzo, M. L. (2005). "Hierarchical clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method". Journal of Classification. 22 (2): 151–183. doi:10.1007/s00357-005-0012-9. 5. ^ a b Ward, Joe H. (1963). "Hierarchical Grouping to Optimize an Objective Function". Journal of the American Statistical Association. 58 (301): 236–244. doi:10.2307/2282967. JSTOR 2282967. MR 0148188. 6. ^ Zhang, et al. "Graph degree linkage: Agglomerative clustering on a directed graph." 12th European Conference on Computer Vision, Florence, Italy, October 7–13, 2012. https://arxiv.org/abs/1208.5092 7. ^ Zhang, et al. "Agglomerative clustering via maximum incremental path integral." Pattern Recognition (2013). 8. ^ Zhao, and Tang. "Cyclizing clusters via zeta function of a graph."Advances in Neural Information Processing Systems. 2008. 9. ^ Ma, et al. "Segmentation of multivariate mixed data via lossy data coding and compression." IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9) (2007): 1546-1562. 10. ^ Fernández, Alberto; Gómez, Sergio (2008). "Solving Non-uniqueness in Agglomerative Hierarchical Clustering Using Multidendrograms". Journal of Classification. 25 (1): 43–65. arXiv:cs/0608049. doi:10.1007/s00357-008-9004-x. 11. ^ Legendre, P.; Legendre, L. (2003). Numerical Ecology. Elsevier Science BV. 12. ^ Kaufman, L., & Roussew, P. J. (1990). Finding Groups in Data - An Introduction to Cluster Analysis. A Wiley-Science Publication John Wiley & Sons.
http://clay6.com/qa/52437/a-soccer-ball-is-kicked-horizontally-off-a-22-0-meter-high-hill-and-lands-a
A soccer ball is kicked horizontally off a 22.0 meter high hill and lands a distance of 35.0 meters from the edge of the hill. Determine the initial horizontal velocity of the soccer ball.

$\begin{array}{1 1}16.5m/s\\17.5m/s\\18.5m/s\\19.5m/s\end{array}$

Given: $x=35.0m$, $a_x=0m/s^2$, $y=-22.0m$, $v_{iy}=0m/s$, $a_y=-9.8m/s^2$

Use $y=v_{iy}\times t+0.5\times a_y\times t^2$ to solve for time $\rightarrow$ the time of flight is 2.12 seconds.

Now use $x=v_{ix}\times t+0.5\times a_x\times t^2$ to solve for $v_{ix}$. Note that $a_x$ is $0m/s^2$, so the last term on the right side of the equation cancels. By substituting 35.0m for x and 2.12s for t, the $v_{ix}$ can be found to be 16.5m/s.

answered Aug 13, 2014, edited Aug 13, 2014
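The two kinematic steps in this answer can be checked numerically. This is a sketch of the same arithmetic, not code from the original page:

```python
import math

h = 22.0  # height of the hill in meters (magnitude of the vertical drop)
x = 35.0  # horizontal landing distance in meters
g = 9.8   # magnitude of gravitational acceleration, m/s^2

# Vertical motion: h = 0.5 * g * t^2 (initial vertical velocity is zero),
# so the time of flight is t = sqrt(2h/g).
t = math.sqrt(2 * h / g)

# Horizontal motion: x = v_ix * t (no horizontal acceleration).
v_ix = x / t

print(round(t, 2))     # 2.12
print(round(v_ix, 1))  # 16.5
```

This confirms the first answer choice, 16.5 m/s.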
https://math.stackexchange.com/questions/750485/can-we-prove-that-a-bounded-closed-subset-of-mathbb-rn-is-compact-without-ax?noredirect=1
# Can we prove that a bounded closed subset of $\mathbb R^n$ is compact without Axiom of Choice?

Can we prove that a bounded closed subset of $\mathbb R^n(n \ge 1)$ is compact without using Axiom of Choice? This is a related question which was closed.

• @AsafKaragila [Many of these questions have been asked before. Please search the site before posting them.] Where is the answer to my question? – Makoto Kato Apr 12 '14 at 8:08
• Where is the answer to my question? Below, in case you haven't noticed. – Asaf Karagila Apr 12 '14 at 8:15
• An older question: math.stackexchange.com/questions/176646/… – Martin Sleziak Apr 12 '14 at 11:14
• The answer follows directly from Shoenfield's absoluteness theorem. – Carl Mummert Apr 13 '14 at 0:18
• @CarlMummert Would you please elaborate on your claim as an answer? – Makoto Kato Apr 14 '14 at 5:34

First, closed and bounded intervals in $\Bbb R$ are compact: It suffices to prove for $[0,1]$. Let $\mathcal B$ be an arbitrary open cover of $[0,1]$, simply consider $$x=\sup\{y\in[0,1]\mid [0,y]\text{ has a finite subcover in }\mathcal B\},$$ deduce that $[0,x]$ is finitely covered as well, and then argue that we have to have $x=1$ (by the same reason).

Next, show that the product of finitely many compact sets is compact (done by induction, and the only interesting case is the case for product of two compact sets; the argument is quite straightforward by considering an open cover of the product and finding a finite subcover).

Therefore closed and bounded boxes in $\Bbb R^n$ are compact, as products of closed intervals. Finally, closed subsets of a compact space are compact. The proof is the same as with the axiom of choice.

Now we have that every closed and bounded set in $\Bbb R^n$ can be bounded by a product of closed intervals. So it is a closed subset of a compact set, so it is compact.

• Could you explain how you prove that the product of two compact sets is compact without using Axiom of Choice?
– Makoto Kato Apr 12 '14 at 8:24
• The same way you prove it in $\sf ZFC$. Pick an open cover, consider its projections, find a finite subcover for the first coordinate, then a finite subcover for the second coordinate. Certainly someone who spent the last two years working on very nontrivial questions can come up with this on their own. – Asaf Karagila Apr 12 '14 at 8:26
• Your question doesn't show any effort, why should my answer show effort? Please remember that this is not your answer, and I think that if someone reads an answer and they have to work out some of the details on their own, it's not a big deal. – Asaf Karagila Apr 12 '14 at 8:39
• @egreg: Uh, no, it doesn't. You're confusing between the existence of $\sup$ and the fact that it is the limit of a sequence from the set. – Asaf Karagila Apr 12 '14 at 9:43
• @AsafKaragila Thanks; that was just a little doubt. On the other hand, it's well known that the proof of Tychonov's theorem for the finite case doesn't require the axiom of choice, although many proofs that can be found around use it (mostly because the same argument can be extended to the infinite case). – egreg Apr 12 '14 at 10:12
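The sup argument in the accepted answer can be spelled out as follows. This completion is mine, not part of the original post, though it is the standard choice-free argument:

```latex
Let $S=\{y\in[0,1] : [0,y]\text{ has a finite subcover in }\mathcal B\}$ and $x=\sup S$
(note $0\in S$, so $S\neq\varnothing$). Pick $B_0\in\mathcal B$ with $x\in B_0$; since
$B_0$ is open, $(x-\varepsilon,x+\varepsilon)\cap[0,1]\subseteq B_0$ for some
$\varepsilon>0$. As $x=\sup S$, there is $y\in S$ with $y>x-\varepsilon$, together with a
finite $\mathcal F\subseteq\mathcal B$ covering $[0,y]$. Then $\mathcal F\cup\{B_0\}$ is a
finite cover of $[0,\min(1,\,x+\varepsilon/2)]$, so if $x<1$ this would place a point
larger than $x$ in $S$, a contradiction. Hence $x=1$, and $\mathcal F\cup\{B_0\}$ is a
finite subcover of $[0,1]$. Only finitely many picks ($B_0$, $y$, $\mathcal F$) are made,
so no choice principle is needed.
```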
https://www.jobilize.com/physics1/course/11-1-rolling-motion-angular-momentum-by-openstax?qcr=www.quizover.com&page=4
# 11.1 Rolling motion  (Page 5/6) Page 5 / 6 ${v}_{\text{CM}}=\sqrt{\left(3.71\phantom{\rule{0.2em}{0ex}}\text{m}\text{/}{\text{s}}^{2}\right)25.0\phantom{\rule{0.2em}{0ex}}\text{m}}=9.63\phantom{\rule{0.2em}{0ex}}\text{m}\text{/}\text{s}\text{.}$ ## Significance This is a fairly accurate result considering that Mars has very little atmosphere, and the loss of energy due to air resistance would be minimal. The result also assumes that the terrain is smooth, such that the wheel wouldn’t encounter rocks and bumps along the way. Also, in this example, the kinetic energy, or energy of motion, is equally shared between linear and rotational motion. If we look at the moments of inertia in [link] , we see that the hollow cylinder has the largest moment of inertia for a given radius and mass. If the wheels of the rover were solid and approximated by solid cylinders, for example, there would be more kinetic energy in linear motion than in rotational motion. This would give the wheel a larger linear velocity than the hollow cylinder approximation. Thus, the solid cylinder would reach the bottom of the basin faster than the hollow cylinder. ## Summary • In rolling motion without slipping, a static friction force is present between the rolling object and the surface. The relations ${v}_{\text{CM}}=R\omega ,{a}_{\text{CM}}=R\alpha ,\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{d}_{\text{CM}}=R\theta$ all apply, such that the linear velocity, acceleration, and distance of the center of mass are the angular variables multiplied by the radius of the object. • In rolling motion with slipping, a kinetic friction force arises between the rolling object and the surface. In this case, ${v}_{\text{CM}}\ne R\omega ,{a}_{\text{CM}}\ne R\alpha ,\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{d}_{\text{CM}}\ne R\theta$ . • Energy conservation can be used to analyze rolling motion. Energy is conserved in rolling motion without slipping. 
Energy is not conserved in rolling motion with slipping due to the heat generated by kinetic friction. ## Conceptual questions Can a round object released from rest at the top of a frictionless incline undergo rolling motion? No, the static friction force is zero. A cylindrical can of radius R is rolling across a horizontal surface without slipping. (a) After one complete revolution of the can, what is the distance that its center of mass has moved? (b) Would this distance be greater or smaller if slipping occurred? A wheel is released from the top of an incline. Is the wheel most likely to slip if the incline is steep or gently sloped? The wheel is more likely to slip on a steep incline since the coefficient of static friction must increase with the angle to keep rolling motion without slipping. Which rolls down an inclined plane faster, a hollow cylinder or a solid sphere? Both have the same mass and radius. A hollow sphere and a hollow cylinder of the same radius and mass roll up an incline without slipping and have the same initial center of mass velocity. Which object reaches a greater height before stopping? The cylinder reaches a greater height. By [link], its acceleration in the direction down the incline would be less. ## Problems What is the angular velocity of a 75.0-cm-diameter tire on an automobile traveling at 90.0 km/h? ${v}_{\text{CM}}=R\omega \phantom{\rule{0.2em}{0ex}}⇒\omega =66.7\phantom{\rule{0.2em}{0ex}}\text{rad/s}$ A boy rides his bicycle 2.00 km. The wheels have radius 30.0 cm. What is the total angle the tires rotate through during his trip? If the boy on the bicycle in the preceding problem accelerates from rest to a speed of 10.0 m/s in 10.0 s, what is the angular acceleration of the tires? $\alpha =3.3\phantom{\rule{0.2em}{0ex}}\text{rad}\text{/}{\text{s}}^{2}$ Formula One race cars have 66-cm-diameter tires.
If a Formula One averages a speed of 300 km/h during a race, what is the angular displacement in revolutions of the wheels if the race car maintains this speed for 1.5 hours? A marble rolls down an incline at $30\text{°}$ from rest. (a) What is its acceleration? (b) How far does it go in 3.0 s? ${I}_{\text{CM}}=\frac{2}{5}m{r}^{2},\phantom{\rule{0.2em}{0ex}}{a}_{\text{CM}}=3.5\phantom{\rule{0.2em}{0ex}}\text{m}\text{/}{\text{s}}^{2};\phantom{\rule{0.2em}{0ex}}x=15.75\phantom{\rule{0.2em}{0ex}}\text{m}$ Repeat the preceding problem replacing the marble with a solid cylinder. Explain the new result. A rigid body with a cylindrical cross-section is released from the top of a $30\text{°}$ incline. It rolls 10.0 m to the bottom in 2.60 s. Find the moment of inertia of the body in terms of its mass m and radius r. positive is down the incline plane; ${a}_{\text{CM}}=\frac{mg\phantom{\rule{0.2em}{0ex}}\text{sin}\phantom{\rule{0.2em}{0ex}}\theta }{m+\left({I}_{\text{CM}}\text{/}{r}^{2}\right)}⇒{I}_{\text{CM}}={r}^{2}\left[\frac{mg\phantom{\rule{0.2em}{0ex}}\text{sin}30}{{a}_{\text{CM}}}-m\right]$ , $x-{x}_{0}={v}_{0}t-\frac{1}{2}{a}_{\text{CM}}{t}^{2}⇒{a}_{\text{CM}}=2.96\phantom{\rule{0.2em}{0ex}}{\text{m/s}}^{2},$ ${I}_{\text{CM}}=0.66\phantom{\rule{0.2em}{0ex}}m{r}^{2}$ A yo-yo can be thought of a solid cylinder of mass m and radius r that has a light string wrapped around its circumference (see below). One end of the string is held fixed in space. If the cylinder falls as the string unwinds without slipping, what is the acceleration of the cylinder? A solid cylinder of radius 10.0 cm rolls down an incline with slipping. The angle of the incline is $30\text{°}.$ The coefficient of kinetic friction on the surface is 0.400. What is the angular acceleration of the solid cylinder? What is the linear acceleration? 
$\alpha =67.9\phantom{\rule{0.2em}{0ex}}\text{rad}\text{/}{\text{s}}^{2}$ , ${\left({a}_{\text{CM}}\right)}_{x}=1.5\phantom{\rule{0.2em}{0ex}}\text{m}\text{/}{\text{s}}^{2}$ A bowling ball rolls up a ramp 0.5 m high without slipping to storage. It has an initial velocity of its center of mass of 3.0 m/s. (a) What is its velocity at the top of the ramp? (b) If the ramp is 1 m high does it make it to the top? A 40.0-kg solid cylinder is rolling across a horizontal surface at a speed of 6.0 m/s. How much work is required to stop it? $W=-1080.0\phantom{\rule{0.2em}{0ex}}\text{J}$ A 40.0-kg solid sphere is rolling across a horizontal surface with a speed of 6.0 m/s. How much work is required to stop it? Compare results with the preceding problem. A solid cylinder rolls up an incline at an angle of $20\text{°}.$ If it starts at the bottom with a speed of 10 m/s, how far up the incline does it travel? Mechanical energy at the bottom equals mechanical energy at the top; $\frac{1}{2}m{v}_{0}^{2}+\frac{1}{2}\left(\frac{1}{2}m{r}^{2}\right){\left(\frac{{v}_{0}}{r}\right)}^{2}=mgh⇒h=\frac{1}{g}\left(\frac{1}{2}+\frac{1}{4}\right){v}_{0}^{2}$ , $h=7.7\phantom{\rule{0.2em}{0ex}}\text{m,}$ so the distance up the incline is $22.5\phantom{\rule{0.2em}{0ex}}\text{m}$ . A solid cylindrical wheel of mass M and radius R is pulled by a force $\stackrel{\to }{F}$ applied to the center of the wheel at $37\text{°}$ to the horizontal (see the following figure). If the wheel is to roll without slipping, what is the maximum value of $|\stackrel{\to }{F}|?$ The coefficients of static and kinetic friction are ${\mu }_{\text{S}}=0.40\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{\mu }_{\text{k}}=0.30.$ A hollow cylinder is given a velocity of 5.0 m/s and rolls up an incline to a height of 1.0 m. If a hollow sphere of the same mass and radius is given the same initial velocity, how high does it roll up the incline? 
Use energy conservation $\frac{1}{2}m{v}_{0}^{2}+\frac{1}{2}{I}_{\text{Cyl}}{\omega }_{0}^{2}=mg{h}_{\text{Cyl}}$ , $\frac{1}{2}m{v}_{0}^{2}+\frac{1}{2}{I}_{\text{Sph}}{\omega }_{0}^{2}=mg{h}_{\text{Sph}}$ . Subtracting the two equations, eliminating the initial translational energy, we have $\frac{1}{2}{I}_{\text{Cyl}}{\omega }_{0}^{2}-\frac{1}{2}{I}_{\text{Sph}}{\omega }_{0}^{2}=mg\left({h}_{\text{Cyl}}-{h}_{\text{Sph}}\right)$ , $\frac{1}{2}m{r}^{2}{\left(\frac{{v}_{0}}{r}\right)}^{2}-\frac{1}{2}\frac{2}{3}m{r}^{2}{\left(\frac{{v}_{0}}{r}\right)}^{2}=mg\left({h}_{\text{Cyl}}-{h}_{\text{Sph}}\right)$ , $\frac{1}{2}{v}_{0}^{2}-\frac{1}{2}\frac{2}{3}{v}_{0}^{2}=g\left({h}_{\text{Cyl}}-{h}_{\text{Sph}}\right)$ , ${h}_{\text{Cyl}}-{h}_{\text{Sph}}=\frac{1}{g}\left(\frac{1}{2}-\frac{1}{3}\right){v}_{0}^{2}=\frac{1}{9.8\phantom{\rule{0.2em}{0ex}}\text{m}\text{/}{\text{s}}^{2}}\left(\frac{1}{6}\right){\left(5.0\phantom{\rule{0.2em}{0ex}}\text{m/s}\right)}^{2}=0.43\phantom{\rule{0.2em}{0ex}}\text{m}$ . Thus, the hollow sphere, with the smaller moment of inertia, rolls up to a lower height of $1.0-0.43=0.57\phantom{\rule{0.2em}{0ex}}\text{m}\text{.}$
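Several of the worked answers in this section reduce to the rolling-without-slipping result $a_{\text{CM}} = g\sin\theta / (1 + I_{\text{CM}}/(mr^2))$. The sketch below (plain Python, standard shape factors $c = I_{\text{CM}}/(mr^2)$) numerically checks the marble problem and the hollow-cylinder/hollow-sphere height difference:

```python
import math

def rolling_acceleration(theta_deg, c, g=9.8):
    """Acceleration of a body rolling without slipping down an incline.

    c = I_CM / (m r^2): 2/5 for a solid sphere, 1/2 for a solid cylinder,
    2/3 for a hollow sphere, 1 for a hollow cylinder.
    """
    return g * math.sin(math.radians(theta_deg)) / (1 + c)

# Marble (solid sphere) released from rest on a 30-degree incline.
a = rolling_acceleration(30, 2 / 5)   # ≈ 3.5 m/s^2
x = 0.5 * a * 3.0**2                  # distance after 3.0 s, ≈ 15.75 m

# Hollow cylinder vs. hollow sphere launched up an incline at v0 = 5.0 m/s:
# energy conservation gives h_cyl - h_sph = (1/g) * (1/2 - 1/3) * v0^2.
v0, g = 5.0, 9.8
dh = (1 / g) * (1 / 2 - 1 / 3) * v0**2  # ≈ 0.43 m

print(a, x, dh)
```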
http://math.stackexchange.com/questions/168808/finding-a-primitive-element-for-the-field-extension-mathbbq-sqrtp-1-sq?answertab=votes
# Finding a primitive element for the field extension $\mathbb{Q}(\sqrt{p_{1}},\sqrt{p_{2}},\ldots,\sqrt{p_{n}})/\mathbb{Q}$ Let $p_1,\ldots,p_n\in\mathbb{N}$ be different prime numbers, it can be shown that $[\mathbb{Q}(\sqrt{p_1},\ldots,\sqrt{p_n}):\mathbb{Q}]=2^n$ and in any case it is clearly finite since $[\mathbb{Q}(\sqrt{p_1},\ldots,\sqrt{p_n}):\mathbb{Q}]\leq2^n$. Since $char(\mathbb{Q})=0$ then $\mathbb{Q}$ is perfect hence every field extension is separable, in particular $\mathbb{Q}(\sqrt{p_{1}},\sqrt{p_{2}},\ldots,\sqrt{p_{n}})/\mathbb{Q}$ is separable. Since $\mathbb{Q}(\sqrt{p_{1}},\sqrt{p_{2}},\ldots,\sqrt{p_{n}})/\mathbb{Q}$ is a finite and separable field extension, by the primitive element theorem, it holds that there exist $\alpha\in\mathbb{Q}(\sqrt{p_{1}},\sqrt{p_{2}},\ldots,\sqrt{p_{n}})$ s.t $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{p_{1}},\sqrt{p_{2}},\ldots,\sqrt{p_{n}})$. I wish to find such element $\alpha$ (i.e. a primitive element, that we know exist). I know how to do this in the case $n=2$, I tried to generalize and prove this claim by induction, in the induction step I need to prove: 1. $\sqrt{p_{1}}+\cdots+\sqrt{p_{n-1}}\in\mathbb{Q}(\sqrt{p_{1}}+\sqrt{p_{2}}+\sqrt{p_{n}})$ 2. 
$\sqrt{p_{n}}\in\mathbb{Q}(\sqrt{p_{1}}+\sqrt{p_{2}}+\sqrt{p_{n}})$ What I tried to do is to look at : \begin{align} & (\sqrt{p_{1}}+\sqrt{p_{2}}+\sqrt{p_{n}})(\sqrt{p_{1}}+\sqrt{p_{2}}-\sqrt{p_{n}}) \\ & =((\sqrt{p_{1}}+\sqrt{p_{2}}+\sqrt{p_{n-1}})+\sqrt{p_{n}})((\sqrt{p_{1}}+\sqrt{p_{2}}+\sqrt{p_{n-1}})-\sqrt{p_{n}}) \\ & =(\sqrt{p_{1}}+\cdots+\sqrt{p_{n-1}})^{2}-p_{n} \end{align} If $n=2$ then this product is in $\mathbb{Q}$ hence in $\mathbb{Q}(\sqrt{p_{1}}+\sqrt{p_{2}})$ hence $\sqrt{p_{1}}-\sqrt{p_{2}}\in\mathbb{Q}(\sqrt{p_{1}}+\sqrt{p_{2}})$ so adding we get $\sqrt{p_1}\in\mathbb{Q}(\sqrt{p_1}+\sqrt{p_2})$ hence $\sqrt{p_2}\in\mathbb{Q}(\sqrt{p_1}+\sqrt{p_2})$ and we have proven $(2)$ So the reason I fail here is that I can't manage to show $$\sqrt{p_{1}}+\sqrt{p_{2}}-\sqrt{p_{n}}\in\mathbb{Q}(\sqrt{p_{1}}+\sqrt{p_{2}}+\sqrt{p_{n}}).$$ Can someone please help me find a primitive element, or help complete the proof I am trying to do here ? help is very much appriciated! - It seems easier to just guess the answer and show that it has $2^n$ Galois conjugates. [I haven't worked out the details—this is just something I've seen used on this site for similar questions.] –  Dylan Moreland Jul 9 '12 at 21:31 @DylanMoreland - what do you mean by "something with $2^n$ Galois conjugates" ? –  Belgi Jul 9 '12 at 21:32 This question has been answered here before, or something very similar:math.stackexchange.com/questions/93453 –  Geoff Robinson Jul 9 '12 at 21:36 @GeoffRobinson - I tried reading the solution suggested in the link you gave and I could not understand them. I am hoping for a more elementary solution –  Belgi Jul 9 '12 at 21:44 OK. I think you need some Galois theory, at least implicitly, as also sugested by Dylan Moreland. –  Geoff Robinson Jul 9 '12 at 21:54 I believe this is the solution Dylan was hinting at. 
To show that $\alpha=\sum \sqrt{p_i}$ generates $E=\mathbb Q(\{\sqrt{p_i}\})$ it suffices to show that $\alpha$ is not fixed by any automorphism of $E$. Notice that any automorphism of $E$ maps $\sqrt{p_i}$ to $\pm \sqrt{p_i}$. So it suffices to demonstrate that $$\sum \sqrt{p_i} \neq \sum s_i\sqrt{p_i}$$ for any choice of $s_i$ such that at least one $s_i$ is $-1$. But this is immediate because by cancelling the positive $s_i$ we would have $$\sum \sqrt{p_j}=-\sum \sqrt{p_j}.$$ It follows that $\alpha$ is not fixed by any automorphism of $E$ and so $E=\mathbb Q(\alpha)$. This is pretty much the same argument as Geoffs, but maybe it's a little bit clearer. - Can you please explian why it suffices to show that $\alpha$ is not fixed by any automorphism of $E$ ? I am tring to justify it to myself and I say that if there was an automorphism of $E$ that would of fixed $\alpha$ than there is a proper subgroup of the Galois group of $E$ that fixed $\alpha$. this subgroup corresponds to a proper subfield of $E$ (but I don't know if/why it is $\mathbb{Q}(\alpha)$ and I think that what I do need is the other direction of what I am writing here...) thanks for the help! –  Belgi Jul 9 '12 at 22:13 @Belgi If $\mathbf Q(\alpha)\subsetneq E$, then there is a nontrivial automorphism of $E$ fixing $Q(\alpha)$. It's a simple fact, no need to bring the full strength of Galois theorem here. –  tomasz Jul 9 '12 at 22:20 OK, I understand. Can we also use Galois theory to give a simple argument about why the degree of the extension is $2^n$ ? In the spirit of this answer I wanted to show that every such map ($p_i$ maps to plus or minus itself) is an automorphism, but it seems that we can't prove that this map is even $1-1$ without knowing that $\sqrt{p_i}$'s are independent over $\mathbb{Q}$ –  Belgi Jul 9 '12 at 22:48 @Belgi I don't think so. You really have to show that the primes are independent, I think Bill Dubuque's argument is as good as it gets. –  JSchlather Jul 10 '12 at 2:25
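The answer above can be illustrated numerically for $n=2$: with $\alpha=\sqrt2+\sqrt3$, both $\sqrt2$ and $\sqrt3$ are rational polynomials in $\alpha$ (expand $\alpha^3 = 11\sqrt2 + 9\sqrt3$ to derive the identities). This is only a floating-point illustration of $\mathbb Q(\alpha)=\mathbb Q(\sqrt2,\sqrt3)$, not a replacement for the Galois argument:

```python
import math

alpha = math.sqrt(2) + math.sqrt(3)

# From alpha^3 = 11*sqrt(2) + 9*sqrt(3):
#   sqrt(2) = (alpha^3 - 9*alpha) / 2
#   sqrt(3) = (11*alpha - alpha^3) / 2
sqrt2 = (alpha**3 - 9 * alpha) / 2
sqrt3 = (11 * alpha - alpha**3) / 2

print(sqrt2, sqrt3)  # ≈ 1.41421..., 1.73205...
```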
https://testbook.com/question-answer/according-to-rayleighs-theorem-it-becomes--5e5e7098f60d5d1808367663
# According to Rayleigh’s theorem, it becomes possible to determine the energy of a signal by ______ This question was previously asked in VIZAG MT Electrical: 2017 Official Paper 1. Estimating the area under the square root of its amplitude spectrum 2. Estimating the area under the square of its amplitude spectrum 3. Estimating the area under the one-fourth power of its amplitude spectrum 4. Estimating the area exactly half as that of its amplitude spectrum Option 2 : Estimating the area under the square of its amplitude spectrum ## Detailed Solution Rayleigh’s theorem states that “if the energy of a signal is finite, then it can be evaluated from its spectrum” as: $$\mathop \smallint \limits_{ - \infty }^\infty {\left| {x\left( t \right)} \right|^2}dt = \mathop \smallint \limits_{ - \infty }^\infty {\left| {X\left( f \right)} \right|^2}df$$ Where the term on the LHS represents the energy of the signal x(t) and the term on the RHS represents the net area under the square of the amplitude spectrum. X(f) is the Fourier transform of x(t), defined as: $$X\left( f \right) = \mathop \smallint \limits_{ - \infty }^\infty x\left( t \right){e^{ - j2\pi ft}}dt$$     ----(1) Note: The finite-energy assumption is important to assure that X(f) is properly defined, i.e., the integral in equation (1) exists or doesn’t diverge.
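Rayleigh’s (Parseval’s) theorem has a discrete counterpart that is easy to verify: for the DFT, Σ|x[n]|² = (1/N)·Σ|X[k]|². The sketch below uses a hand-rolled O(N²) DFT so it needs nothing beyond the standard library:

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (O(N^2); fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

random.seed(0)
x = [random.gauss(0, 1) for _ in range(64)]
X = dft(x)

energy_time = sum(abs(v) ** 2 for v in x)          # energy in the time domain
energy_freq = sum(abs(V) ** 2 for V in X) / len(x)  # area under |X|^2, scaled

print(energy_time, energy_freq)  # equal up to rounding error
```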
https://mc-stan.org/docs/2_22/stan-users-guide/multi-logit-section.html
This is an old version, view current version. 1.6 Multi-Logit Regression Multiple outcome forms of logistic regression can be coded directly in Stan. For instance, suppose there are $$K$$ possible outcomes for each output variable $$y_n$$. Also suppose that there is a $$D$$-dimensional vector $$x_n$$ of predictors for $$y_n$$. The multi-logit model with $$\textsf{normal}(0,5)$$ priors on the coefficients is coded as follows. data { int K; int N; int D; int y[N]; matrix[N, D] x; } parameters { matrix[D, K] beta; } model { matrix[N, K] x_beta = x * beta; to_vector(beta) ~ normal(0, 5); for (n in 1:N) y[n] ~ categorical_logit(x_beta[n]'); } The prior on beta is coded in vectorized form. As of Stan 2.18, the categorical-logit distribution is not vectorized for parameter arguments, so the loop is required. The matrix multiplication is pulled out to define a local variable for all of the predictors for efficiency. Like the Bernoulli-logit, the categorical-logit distribution applies softmax internally to convert an arbitrary vector to a simplex, $\texttt{categorical}\mathtt{\_}\texttt{logit}\left(y \mid \alpha\right) = \texttt{categorical}\left(y \mid \texttt{softmax}(\alpha)\right),$ where $\texttt{softmax}(u) = \exp(u) / \operatorname{sum}\left(\exp(u)\right).$ The categorical distribution with log-odds (logit) scaled parameters used above is equivalent to writing y[n] ~ categorical(softmax(x[n] * beta)); Constraints on Data Declarations The data block in the above model is defined without constraints on sizes K, N, and D or on the outcome array y. Constraints on data declarations provide error checking at the point data are read (or transformed data are defined), which is before sampling begins. Constraints on data declarations also make the model author’s intentions more explicit, which can help with readability. 
The above model’s declarations could be tightened to int<lower = 2> K; int<lower = 0> N; int<lower = 1> D; int<lower = 1, upper = K> y[N]; These constraints arise because the number of categories, K, must be at least two in order for a categorical model to be useful. The number of data items, N, can be zero, but not negative; unlike R, Stan’s for-loops always move forward, so that a loop extent of 1:N when N is equal to zero ensures the loop’s body will not be executed. The number of predictors, D, must be at least one in order for beta * x[n] to produce an appropriate argument for softmax(). The categorical outcomes y[n] must be between 1 and K in order for the discrete sampling to be well defined. Constraints on data declarations are optional. Constraints on parameters declared in the parameters block, on the other hand, are not optional—they are required to ensure support for all parameter values satisfying their constraints. Constraints on transformed data, transformed parameters, and generated quantities are also optional. Identifiability Because softmax is invariant under adding a constant to each component of its input, the model is typically only identified if there is a suitable prior on the coefficients. An alternative is to use $$(K-1)$$-vectors by fixing one of them to be zero. The partially known parameters section discusses how to mix constants and parameters in a vector. In the multi-logit case, the parameter block would be redefined to use $$(K - 1)$$-vectors parameters { matrix[K - 1, D] beta_raw; } and then these are transformed to parameters to use in the model. 
First, a transformed data block is added before the parameters block to define a row vector of zero values, transformed data { row_vector[D] zeros = rep_row_vector(0, D); } which can then be appended to beta_raw to produce the coefficient matrix beta, transformed parameters { matrix[K, D] beta; beta = append_row(beta_raw, zeros); } The rep_row_vector(0, D) call creates a row vector of size D with all entries set to zero. The derived matrix beta is then defined to be the result of appending the row-vector zeros as a new row at the end of beta_raw; the row vector zeros is defined as transformed data so that it doesn’t need to be constructed from scratch each time it is used. This is not the same model as using $$K$$-vectors as parameters, because now the prior only applies to $$(K-1)$$-vectors. In practice, this will cause the maximum likelihood solutions to be different and also the posteriors to be slightly different when taking priors centered around zero, as is typical for regression coefficients.
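The identifiability remark — softmax is invariant under adding a constant to every component of its input — can be demonstrated in a few lines outside of Stan. A minimal Python sketch (the shift-by-max inside the function is the usual numerical-stability trick and is separate from the invariance claim):

```python
import math

def softmax(u):
    # Subtracting the max is the standard stability trick; by the same
    # invariance being demonstrated, it does not change the result.
    m = max(u)
    exps = [math.exp(v - m) for v in u]
    s = sum(exps)
    return [e / s for e in exps]

u = [1.0, 2.0, 3.0]
shifted = [v + 100.0 for v in u]

print(softmax(u))
print(softmax(shifted))  # identical: the likelihood cannot tell them apart
```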
https://keisan.casio.com/exec/system/1180573427
# Sine integral Si(x) Calculator ## Calculates the sine integral Si(x). $Si(x)={\large\int_{\small 0}^{\hspace{25}\small x}\frac{sin(t)}{t}}dt$ $\normal Sine\ integral\ Si(x)\\\hspace{60} and\ cosine\ integral\ Ci(x)\\[10](1)\ Si(x)={\large\int_{\small 0}^{\hspace{25}\small x}\frac{sin(t)}{t}}dt\\(2)\ Ci(x)={\large\int_{\small 0}^{\hspace{25}\small x}\frac{cos(t)-1}{t}}dt+ln(x)+\gamma\\$ Sine integral Si(x) [1]  2018/08/19 23:29   Male / 60 years old level or over / A retired person / Useful / Purpose of use Retired physicist looking for code to generate table of dipole antenna complex impedance as function of wavelength and antenna length. This was a step along the way. Comment/Request Would like code to incorporate in more complicated function evaluation. Fortran or Basic preferred, C, C++ or Java OK too. [2]  2018/02/28 04:11   Male / 60 years old level or over / An office worker / A public employee / Very / Purpose of use Normalize laser beam quality [3]  2013/12/16 05:28   Male / 60 years old level or over / High-school/ University/ Grad student / Very / Purpose of use GENERAL INTEREST [4]  2012/10/26 00:50   Male / 50 years old level / An office worker / A public employee / Very / Purpose of use Spectrum Analysis Comment/Request Very nice work. Domo arigoto goziamashita. [5]  2011/07/05 02:28   Male / 20 years old level / A student / Very / Purpose of use [6]  2010/09/16 11:38   Male / More than 60 / A teacher / Very / Comment/Request good [7]  2010/07/04 03:50   Male / 30 level / A teacher / Very / Purpose of use engineering Comment/Request this is a good site. i need it [8]  2009/11/19 06:47   Male / 30 level / A researcher / Very / Purpose of use Needed a calculation of the Sine integral Comment/Request This site is very useful and convenient!
[9]  2009/10/13 14:21   Male / 20 level / A specialized student / Very / Purpose of use to compare the exact value of the integral with the asymptotic expansion approach Comment/Request impressive website! easy to use and all useful function integrals are accessible [10]  2009/09/04 02:25   Male / 30 level / A researcher / Very / Comment/Request The algorithm could have been provided, researchers can use for scientific computing.
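For readers who want to reproduce the calculator's values offline, Si(x) can be approximated with composite Simpson's rule; the only subtlety is the removable singularity at t = 0, where sin(t)/t → 1. A stdlib-only Python sketch:

```python
import math

def sinc(t):
    """The integrand sin(t)/t, with its removable singularity filled in."""
    return 1.0 if t == 0.0 else math.sin(t) / t

def si(x, n=1000):
    """Si(x) = integral of sin(t)/t from 0 to x, composite Simpson (n even)."""
    h = x / n
    total = sinc(0.0) + sinc(x)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * sinc(k * h)
    return total * h / 3

print(si(math.pi))  # ≈ 1.851937, the first maximum of Si
```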
https://proofwiki.org/wiki/Category:Definitions/Algebras_of_Sets
# Category:Definitions/Algebras of Sets This category contains definitions related to Algebras of Sets. Related results can be found in Category:Algebras of Sets. Let $X$ be a set. Let $\powerset X$ be the power set of $X$. Let $\RR \subseteq \powerset X$ be a set of subsets of $X$. $\RR$ is an algebra of sets over $X$ if and only if $\RR$ satisfies the algebra of sets axioms: $(\text {AS} 1)$ $:$ Unit: $\ds X \in \RR$ $(\text {AS} 2)$ $:$ Closure under Union: $\ds \forall A, B \in \RR:$ $\ds A \cup B \in \RR$ $(\text {AS} 3)$ $:$ Closure under Complement Relative to $X$: $\ds \forall A \in \RR:$ $\ds \relcomp X A \in \RR$ ## Subcategories This category has the following 2 subcategories, out of 2 total. ## Pages in category "Definitions/Algebras of Sets" The following 5 pages are in this category, out of 5 total.
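For small finite examples the three axioms can be checked mechanically. A Python sketch (the function name is illustrative) verifying that the power set of X = {1, 2, 3} and the sub-family {∅, {1}, {2, 3}, X} are algebras of sets, while {∅, {1}, X} is not:

```python
from itertools import combinations

def is_algebra_of_sets(X, R):
    """Check axioms (AS 1)-(AS 3) for a family R of subsets of X."""
    X, R = frozenset(X), {frozenset(A) for A in R}
    if X not in R:                       # (AS 1) unit
        return False
    for A in R:
        if X - A not in R:               # (AS 3) closure under complement
            return False
        for B in R:
            if A | B not in R:           # (AS 2) closure under union
                return False
    return True

X = {1, 2, 3}
powerset = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

print(is_algebra_of_sets(X, powerset))                   # True
print(is_algebra_of_sets(X, [set(), {1}, {2, 3}, X]))    # True
print(is_algebra_of_sets(X, [set(), {1}, X]))            # False: {2,3} missing
```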
http://mathhelpforum.com/algebra/50166-tv.html
# Math Help - TV

1. ## TV

A television screen is 40 cm high and 60 cm wide. The picture is compressed to 62.5 % of its original area, leaving a uniform dark strip around the outside. What are the dimensions of the reduced picture?

2. Would it not be 30cm by 50cm?

3. Hello, Andrew!

A television screen is 40 cm high and 60 cm wide. The picture is compressed to 62.5 % of its original area, leaving a uniform dark strip around the outside. What are the dimensions of the reduced picture?

Let $x$ = width of the dark strip.

Code:
      : - - - 60 - - - - :
    - *-------------------* -
    : |                   | x
    : |   *-----------*   | -
    : |   |           |   | :
    40|   |           |   | 40-2x
    : |   |           |   | :
    : |   *-----------*   | -
    : |                   | x
    - *-------------------* -
      : x :   60-2x   : x :

The area of the screen is: . $40 \times 60 \:=\:2400$ cm².

The area of the picture is: . $(40-2x)(60-2x) \:=\:4x^2 - 200x + 2400$ cm².

The area of the picture is $\frac{5}{8}$ the area of the screen:

. . $4x^2 - 200x + 2400 \:=\:\frac{5}{8}(2400) \;=\;1500 \quad\Rightarrow\quad 4x^2 - 200x + 900 \:=\:0$

Factor: . $4(x-5)(x-45) \:=\:0 \quad\Rightarrow\quad x \;=\;5,\:45$

The dark strip is 5 cm wide.

Therefore, the dimensions of the picture are: . $30\text{ cm } \times 50\text{ cm.}$

You're absolutely correct, Lucky!
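The quadratic in Soroban's reply can be double-checked in a few lines (plain Python; the variable names are mine, not from the thread). The root x = 45 is discarded because the strip cannot be wider than the screen:

```python
import math

# 4x^2 - 200x + 900 = 0  (picture area = 5/8 of the 40 cm x 60 cm screen)
a, b, c = 4, -200, 900
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
print(roots)  # [5.0, 45.0]; only x = 5 fits on a 40 cm x 60 cm screen

x = roots[0]
w, h = 60 - 2 * x, 40 - 2 * x
print(w, h, w * h)  # 50.0 30.0 1500.0, and 0.625 * 2400 = 1500
```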
http://www.maths.kisogo.com/index.php?title=Linear_combination
Linear combination This page is a stub, so it contains little or minimal information and is on a to-do list for being expanded. Definition Let [ilmath](V,\mathcal{K})[/ilmath] be a vector space and let [ilmath]v_1,v_2,\ldots,v_n\in V[/ilmath] be given. A linear combination of [ilmath]v_1,\ldots,v_n[/ilmath] is any vector of the form[1]: • $\sum_{i=1}^na_iv_i$ for some scalars, [ilmath]a_1,a_2,\ldots,a_n\in\mathcal{K} [/ilmath][Note 1]. Note: A linear combination is always a finite sum[1][Note 2]
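As a concrete illustration of the definition (plain Python lists standing in for vectors in $\mathbb R^3$; this example is mine, not part of the source page):

```python
def linear_combination(scalars, vectors):
    """Return sum_i a_i * v_i for scalars a_i and equal-length vectors v_i."""
    n = len(vectors[0])
    return [sum(a * v[j] for a, v in zip(scalars, vectors)) for j in range(n)]

v1, v2, v3 = [1, 0, 0], [0, 1, 0], [1, 1, 1]
print(linear_combination([2, -1, 3], [v1, v2, v3]))  # [5, 2, 3]
```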
https://math.stackexchange.com/questions/2021898/show-that-sum-n-1-infty-a-n-converges-iff-sum-k-1-infty-ka-k2
# Show that $\sum_{n =1}^\infty a_n$ converges iff $\sum_{k = 1}^\infty ka_{k^2}$ converges.

Let $a_1 \geq a_2 \geq a_3 \geq \cdots \geq 0$. Prove that the series $\sum_{n =1}^\infty a_n$ converges if and only if the series $\sum_{k = 1}^\infty ka_{k^2}$ converges.

I have the following: Let $S_n = \sum_{k = 1}^na_k$ and $t_k = \sum_{j = 1}^k ja_{j^2}$. Let $n \geq k(k+1)/2$. Then
\begin{align*} t_k &= a_1 + 2a_4 + 3a_9 + \cdots + ka_{k^2}\\ &\leq a_1 + 2a_3 + 3a_6 + \cdots + ka_{k(k+1)/2}\\ &\leq a_1 + a_2 + a_3 + \cdots + a_{k(k+1)/2}\\ &\leq S_n \end{align*}

So if $\sum_{n =1}^\infty a_n$ converges, then $S_n$ is bounded above, so $t_k$ is bounded above, so $\sum_{k = 1}^\infty ka_{k^2}$ converges since it is a nonnegative series with partial sums bounded above.

How do I prove the other direction?

• $(2k+1)a_{k^2} \geqslant a_{k^2} + a_{k^2+1} + \dotsc + a_{k^2 + 2k}$ – Daniel Fischer Nov 19 '16 at 22:46
• This is a variant of the (Cauchy) condensation test. en.wikipedia.org/wiki/Cauchy_condensation_test, see under "Generalizations". – LutzL Nov 19 '16 at 23:12
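The chain of inequalities in the forward direction can be checked numerically for a concrete decreasing sequence; a hypothetical sketch with $a_n = 1/n^2$:

```python
# Check t_k <= S_n (with n = k(k+1)/2) for the decreasing sequence a_n = 1/n^2.
def a(n):
    return 1.0 / n**2

for k in range(1, 30):
    n = k * (k + 1) // 2
    t_k = sum(j * a(j * j) for j in range(1, k + 1))
    S_n = sum(a(m) for m in range(1, n + 1))
    assert t_k <= S_n
print("inequality holds for k = 1..29")
```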
https://quant.stackexchange.com/questions/20834/reconciling-forecasted-growth-of-components-and-sum
# Reconciling forecasted growth of components and sum

I'm working with a very basic forecast model using Compound Annual Growth Rate (CAGR) and I need to reconcile the forecasts at different levels of detail.

Suppose I have two business lines with initial values $X_0,Y_0$ and terminal values $X_T,Y_T$. Then the sum of the initial values is $X_0+Y_0$ and the sum of the terminal values is $X_T+Y_T$. Let their projected $T+1$ values be $X_{T+1},Y_{T+1},(X+Y)_{T+1}$.

I find the Compound Annual Growth Rate $R$ for each line and for the sum of all lines:
\begin{align} R_X &= \left(\frac{X_T}{X_0}\right)^{1/T} - 1 \\ R_Y &= \left(\frac{Y_T}{Y_0}\right)^{1/T} - 1 \\ R_{X+Y} &= \left(\frac{X_T + Y_T}{X_0 + Y_0}\right)^{1/T} - 1 \\ \end{align}

To project forward one period, I multiply the terminal value of each line and the sum of all lines by their respective $1+R$ values:
\begin{align} X_{T+1} &= X_T(1+R_X)\\ Y_{T+1} &= Y_T(1+R_Y)\\ (X+Y)_{T+1} &= (X_T + Y_T) (1 + R_{X+Y}) \\ \end{align}

However, I find that $X_{T+1}+Y_{T+1}\neq(X+Y)_{T+1}$ because the rate computation is not linear in the values argument. I need to work with $(X+Y)_{T+1}$ as is, but I also need to discuss how $X_{T+1}$ and $Y_{T+1}$ individually contribute to the total. Is there a function $f$ such that:
$(X_T + Y_T) (1 + R_{X+Y})=X_T(1+f(R_X))+Y_T(1+f(R_Y))$
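A hypothetical numerical sketch (made-up values) of the non-additivity described above:

```python
# Projecting the sum is not the same as summing the projections.
X0, XT = 100.0, 200.0
Y0, YT = 100.0, 400.0
T = 5

R_X = (XT / X0) ** (1 / T) - 1
R_Y = (YT / Y0) ** (1 / T) - 1
R_XY = ((XT + YT) / (X0 + Y0)) ** (1 / T) - 1

X_next = XT * (1 + R_X)
Y_next = YT * (1 + R_Y)
total_next = (XT + YT) * (1 + R_XY)
print(X_next + Y_next, total_next)  # the two totals disagree
```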
http://math.stackexchange.com/questions/214259/how-to-define-a-perspective-circle-in-xy
How to define a perspective circle in xy?

You can see a perspective view of a square (FCED) and a circle on a 2D screen. O is the center of the circle. How can I define the equation of the perspective circle, shown in red in the picture? Thanks a lot.

-

The red curve is an ellipse, but the point $O$ will not be the center of this ellipse! The values $a$, $c$, $d$ seem to be given, and the figure is symmetric with respect to the $y$-axis. Intersecting $A\vee D$ with $B\vee C$ gives you the point $F=(-f,-g)$. It follows that $E=(f, -g)$ and $O=(0,-g)$. Intersecting $A\vee D$ with $B\vee O$ then furnishes $H$, and intersecting $B\vee C$ with $A\vee O$ furnishes $G$. By symmetry the ellipse has an equation of the form $${x^2\over p^2} +{(y+m)^2\over q^2}=1$$ with unknown constants $p$, $q$, $m$. The coordinates of $G$ and $H$, through which the ellipse passes, provide more than enough information to determine these constants.

-

One possibility is to find the projective transformation itself, then apply it to the circle. Were these objects originally really a square and a circle somewhere? I mean, can you use those data or only $a,c,d$? Put the whole picture in 3d, on the $z=1$ plane (that is, $A=[-a,0,1]$, $B=[a,0,1]$, $C=[0,-c,1]$ and $D=[0,-d,1]$ will be). Then consider the matrix $$M:=[A|B|C]=\begin{bmatrix} -a & a & 0 \\ 0 & 0 & -c \\ 1 & 1 & 1 \end{bmatrix}$$ Then $M\cdot\begin{bmatrix} 1\\0\\0 \end{bmatrix} = A$, $\ M\cdot\begin{bmatrix} 0\\1\\0 \end{bmatrix} = B$, $\ M\cdot\begin{bmatrix} 0\\0\\1 \end{bmatrix} = C$, so $M^{-1}$ takes $A$ and $B$ to the ideal points $i_x:=\begin{bmatrix} 1\\0\\0 \end{bmatrix}$ and $i_y:=\begin{bmatrix} 0\\1\\0 \end{bmatrix}$, and $C$ to the origin $\begin{bmatrix} 0\\0\\1 \end{bmatrix}$.
So, for this, you need to calculate $M^{-1}$, then $D':=M^{-1}D$ (divided by its 3rd coordinate) will give the opposite corner of the square. Suppose $D'=\begin{bmatrix} u\\v\\w \end{bmatrix}$; then $u=v$ should hold (because it wants to be a square), and $F'=(u/w,0)$ and $E'=(0,v/w)$ will be [projected back to the $z=1$ plane]. Then form the circle (centered at $(u/2w,u/2w)$ with radius $u/2w$) in homogeneous coordinates [-- so we could also call it a cone]: $$\left(\frac xz-\frac u{2w}\right)^2 + \left(\frac yz-\frac u{2w}\right)^2 = \left(\frac u{2w}\right)^2$$ and transform it back by $M$: $$(x,y)\ \text{ in the red ellipse} \iff M^{-1}\begin{bmatrix} x\\y\\1 \end{bmatrix}\ \text{in the circle}$$
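The construction of $M$ above can be checked directly; a hypothetical sketch (made-up values for $a$ and $c$) verifying that $M$ sends the standard basis vectors to $A$, $B$, $C$:

```python
# Check that M = [A|B|C] maps e1, e2, e3 to A, B, C in homogeneous coordinates.
a, c = 2.0, 3.0
A = [-a, 0.0, 1.0]
B = [a, 0.0, 1.0]
C = [0.0, -c, 1.0]
M = [[A[i], B[i], C[i]] for i in range(3)]  # columns are A, B, C

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

assert matvec(M, [1, 0, 0]) == A
assert matvec(M, [0, 1, 0]) == B
assert matvec(M, [0, 0, 1]) == C
print("M maps e1, e2, e3 to A, B, C")
```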
http://math.stackexchange.com/questions/214560/calculating-pab-where-a-and-b-are-normal-distribution/214566
# Calculating P(A>B), where A and B are normal distributions

In the problem we have that $A \sim N(7, 11/60)$ and $B \sim N(7.3, 7/20)$, and the question is: what is the probability that $A$ gives a higher value than $B$?

Since the textbook we have for the course doesn't include information about this type of question, I figured that it might just require some manipulation of the normal density formula. So I came up with this:
\begin{equation*} \int_{-\infty}^{\infty}\Big(\frac{1}{\frac{11}{60}\sqrt{2\pi}} \exp\Big\{-\tfrac12 \Big(\frac{t-7}{11/60}\Big)^2 \Big\}\Big) \Big(\int_{-\infty}^{t}\frac{1}{\frac{7}{20}\sqrt{2\pi}}\exp\Big\{ -\tfrac12\Big(\frac{x-7.3}{7/20}\Big)^2 \Big\}dx\Big) dt \end{equation*}
What I thought could be a way to solve this was to multiply the probability of getting a lower value of $B$ (the inner integral) by the density of $A$ (the outer factor) for each value of $A.$ In theory this might work, but I'm unable to actually calculate this with either my calculator or Wolfram Alpha. Is there something I'm overlooking in this problem?

-

An easy way of solving this problem for the case when $A$ and $B$ are jointly normal is to use the fact that $A-B$ is also a normal random variable with mean equal to $E[A] -E[B]$ and variance equal to $\text{var}(A) + \text{var}(B) - 2\cdot\text{cov}(A,B)$. So then the question simplifies to

What is the probability that a normal random variable with given mean and variance is greater than $0$?

the answer to which can be obtained from tables of the standard normal distribution function.

You don't say anything about whether $A$ and $B$ are jointly normal or not, though your approach seems to indicate that you are treating them as independent normal random variables (which happen to be jointly normal). Note that this approach cannot be used when $A$ and $B$ are normal but not jointly normal random variables, because in this case $A-B$ is not a normal random variable.

@Alex Excerpt from an answer I wrote on stats.SE.
Consider two standard normal random variables. If they are jointly normal, then $p(x,y)$ is the bivariate normal density. As a special case, if they are independent, then $p(x,y) = p(x)p(y)$. But they could be marginally normal random variables that are not jointly normal with joint density $$p(x,y)=\begin{cases}2p(x)p(y),&\text{if}~x\geq 0, y\geq 0,\text{or}~x<0,y<0,\\0,&\text{otherwise.}\end{cases}$$ For even more possibilities, see here. –  Dilip Sarwate Oct 23 '12 at 1:49
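The reduction to $P(A-B>0)$ can be evaluated directly with the error function. A hypothetical sketch (it assumes independence and reads $11/60$ and $7/20$ as standard deviations; if they are variances, adjust accordingly):

```python
from math import erf, sqrt

# P(A > B) = P(A - B > 0), where A - B ~ N(mu_A - mu_B, var_A + var_B)
# for independent normals.
mu = 7.0 - 7.3                              # mean of A - B
sd = sqrt((11 / 60) ** 2 + (7 / 20) ** 2)   # std dev of A - B
z = (0.0 - mu) / sd
p = 1.0 - 0.5 * (1.0 + erf(z / sqrt(2)))    # 1 - Phi(z)
print(round(p, 2))  # 0.22
```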
http://mathhelpforum.com/advanced-statistics/214945-find-expectation-r-v.html
# Thread: find expectation of this r.v.

1. ## find expectation of this r.v.

Toss balls one at a time into $n$ bins; each ball will always land in one of the $n$ bins, and the tosses are independent of each other. Stop tossing once some bin ends up with 2 balls. Let $X$ be the number of tosses needed (so $X$ is between 2 and $n+1$). Find $E(X)$.

I find linearity hard to apply here, and the naive definition resulted in a messy sum that I cannot reduce to a simple form. Is this a well known distribution somewhere? Thanks for any insights.

2. ## Re: find expectation of this r.v.

Hi pv3633! This is not a well known distribution as far as I know. The formula for $EX$ is:

$EX=\sum_{k=2}^{n+1} k \cdot \frac {n!(k-1)!} {(n-k+1)!n^k}$

Wolfram|Alpha could not solve this (within its timeout). But I found that a close numerical approximation is:

$EX \approx \frac 5 4 \sqrt n + \frac 3 4$

3. ## Re: find expectation of this r.v.

Thank you. That's the sum I got also. How did you get the approximation, by the way?

4. ## Re: find expectation of this r.v.

I calculated a couple of values with Wolfram|Alpha:

Code:
    n      EX
    1      2
    10     4.6
    100    13.2
    1000   40.3
    10000  125.66

Then I made a log-log plot in Excel, which showed a straight line. I used Excel's solver to find the coefficients. The resulting approximation is:

Code:
    n      EX      Approx
    1      2       2
    10     4.6     4.702847075
    100    13.2    13.25
    1000   40.3    40.27847075
    10000  125.66  125.75

Thank you
https://kerodon.net/tag/0014
# Kerodon

Proposition 1.1.3.4. Let $\sigma : \Delta ^{n} \rightarrow S_{\bullet }$ be a map of simplicial sets. Then $\sigma$ can be factored as a composition $\Delta ^{n} \xrightarrow {\alpha } \Delta ^{m} \xrightarrow { \tau } S_{\bullet },$ where $\alpha$ corresponds to a surjective map of linearly ordered sets $[n] \rightarrow [m]$ and $\tau$ is a nondegenerate $m$-simplex of $S_{\bullet }$. Moreover, this factorization is unique.

Proof. Let $m$ be the smallest nonnegative integer for which $\sigma$ can be factored as a composition $\Delta ^{n} \xrightarrow {\alpha } \Delta ^{m} \xrightarrow {\tau } S_{\bullet }$. It follows from the minimality of $m$ that $\alpha$ must induce a surjection of linearly ordered sets $[n] \rightarrow [m]$ (otherwise, we could replace $[m]$ by the image of $\alpha$) and that the $m$-simplex $\tau$ is nondegenerate. This proves the existence of the desired factorization.

To establish uniqueness, let us suppose we are given another factorization of $\sigma$ as a composition $\Delta ^{n} \xrightarrow {\alpha '} \Delta ^{m'} \xrightarrow {\tau '} S_{\bullet }$. By assumption, $\alpha$ and $\alpha '$ determine surjections of linearly ordered sets $[n] \rightarrow [m]$ and $[n] \rightarrow [m']$, and therefore admit sections which we will denote by $\beta$ and $\beta '$, respectively. The equality $\sigma = \tau \circ \alpha$ then gives $\tau = \sigma \circ \beta = \tau ' \circ \alpha ' \circ \beta .$ Our assumption that $\tau$ is nondegenerate then guarantees that the map $\alpha ' \circ \beta : [m] \rightarrow [m']$ is injective, so that $m \leq m'$. The same argument shows that $m' \leq m$, so we must have $m = m'$. Since the only nondecreasing injection from $[m]$ to itself is the identity map, we conclude that $\alpha ' \circ \beta = \operatorname{id}_{[m]}$.
The desired uniqueness now follows from the calculations $\tau = \tau ' \circ \alpha ' \circ \beta = \tau ' \quad \quad \alpha = \alpha ' \circ \beta \circ \alpha = \alpha '.$ $\square$
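The combinatorial heart of the proof, factoring a nondecreasing map through its image as a surjection followed by an injection, can be illustrated concretely. A hypothetical Python sketch (the names `factor`, `alpha`, `tau` are my own, not Kerodon's):

```python
# Factor a nondecreasing map f: [n] -> [N] (given as its list of values)
# as a surjection [n] -> [m] followed by an injection [m] -> [N],
# where [m] enumerates the image of f.
def factor(f):
    image = sorted(set(f))                     # the image, as a linear order [m]
    index = {v: i for i, v in enumerate(image)}
    alpha = [index[v] for v in f]              # surjective, nondecreasing
    tau = image                                # injective, nondecreasing
    return alpha, tau

alpha, tau = factor([0, 0, 2, 2, 5])
print(alpha)  # [0, 0, 1, 1, 2]
print(tau)    # [0, 2, 5]
assert [tau[a] for a in alpha] == [0, 0, 2, 2, 5]  # the composite recovers f
```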
https://math.stackexchange.com/questions/860201/f-in-l10-infty-monotone-show-lim-x-rightarrow-infty-xfx-0?noredirect=1
# $f\in L^1(0,\infty)$ monotone, show $\lim_{x\rightarrow \infty} xf(x) = 0$ [duplicate]

Here is the solution: First, $f$ is monotone and integrable on $(0,\infty)$; wlog we can assume that $f>0$ and that $f$ approaches $0$ as $x$ goes to infinity. Observe that $$xf(2x) \leq \int_x^{2x} f(t)\,dt$$ since when $t\in [x,2x]$ we have $f(t) \geq f(2x)$ because $f$ is decreasing. And since $f$ is integrable, we have $$\lim_{x\rightarrow \infty} \int_x^{2x} f(t)\,dt = 0,$$ thus $$\lim_{x\rightarrow \infty} 2xf(2x) = 0.$$

• The negation of "$\lim_{x\rightarrow\infty} xf(x)=0$" is not "$\lim_{x\rightarrow\infty} xf(x)>0$". – David Mitra Jul 8 '14 at 15:28
• Your proof looks OK. – TZakrevskiy Jul 8 '14 at 15:28
• $x$ is monotone increasing and $f$ will have to be monotone decreasing. The product $xf(x)$ needn't be monotone. You can't claim that for all $x\ge x_0$, $xf(x) \ge\epsilon$. One can say at best that there's a sequence $\{x_n\}$ such that $x_nf(x_n) \ge\epsilon$ as $n\to\infty$. – InTransit Jul 8 '14 at 15:29
• See this, though. – David Mitra Jul 8 '14 at 15:30
• This argument is by contradiction, not contraposition. – Alex Schiff Jul 8 '14 at 15:37
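The key inequality $xf(2x)\le\int_x^{2x}f(t)\,dt$ can be checked numerically for a concrete decreasing integrable function; a hypothetical sketch with $f(t)=1/t^2$:

```python
# Check x*f(2x) <= integral_x^{2x} f(t) dt for the decreasing, integrable
# f(t) = 1/t^2, using a simple midpoint Riemann sum.
def f(t):
    return 1.0 / t**2

def integral(lo, hi, steps=10000):
    h = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * h) for i in range(steps)) * h

for x in [1.0, 10.0, 100.0, 1000.0]:
    lhs = x * f(2 * x)
    rhs = integral(x, 2 * x)
    assert lhs <= rhs
    print(x, lhs, rhs)  # both sides tend to 0 as x grows
```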
http://planetmath.org/SigmaAlgebra
# $\sigma$-algebra

Defines: generated by
Synonyms: sigma-algebra, sigma algebra, $\sigma$ algebra, Borel structure, $\sigma$-field, sigma-field, sigma field, $\sigma$ field
Type of Math Object: Definition
Major Section: Reference

## Mathematics Subject Classification

28A60 Measures on Boolean rings, measure algebras

### question about \sigma-algebra

In relation to Djao's entry $\sigma$-algebra, $\emptyset \in \mathcal{B}(E)$ appears like a premise. As far as I know, $\emptyset \in A\ \forall A$. So, why is this one not redundant?

BTW, in my browser IE6 the entry title looks like [red cross]$\sigma$-algebra, missing the greek letter sigma.

### Re: question about \sigma-algebra

You seem to be confusing membership with inclusion (being a subset of). The empty set is a subset of every set, but it is not, for example, a member of itself.

### Re: question about \sigma-algebra

Hi ratboy,

Yes! Now it is clearer for me why $E \in \mathcal{B}(E)$. Thank you!

Pedro
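The reply's distinction between membership and inclusion can be demonstrated directly (a hypothetical sketch using Python's `frozenset`):

```python
# Membership vs. inclusion: the empty set is a *subset* of every set,
# but not necessarily an *element* of it.
A = frozenset({1, 2})
empty = frozenset()

assert empty <= A          # inclusion: {} is a subset of A
assert empty not in A      # membership: {} is not an element of A

B = frozenset({empty, A})  # a family of sets that *does* contain {}
assert empty in B
print("subset-of is not element-of")
```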
http://math.stackexchange.com/questions/337626/please-help-me-group-theory-prove-b33-e/337638
# Please help me, Group Theory. Prove $b^{33}=e$.

Let $G$ be a group and $a,b \in G$. Prove that if $a^{2}=e$ and $ab^{4}a=b^{7}$, then $b^{33}=e$, where $e$ is the identity of the group $G$.

-

More generally, $a^2=e$ and $a b^p a = b^q$ implies $b^{p^2}=b^{q^2}$. You can use anon's proof for this. – Martin Brandenburg Oct 3 '13 at 12:58

-

So $\rm ab^4a=b^7$. Apply $\rm a$ on left and right to get $\rm ab^7a=b^4$. Then, using symmetry in calculation, $$\rm ab^{4\times7}a=\begin{cases}(ab^4a)^7=(b^7)^7=b^{49} \\[3pt] \rm(ab^7a)^4=(b^4)^4=b^{16} \end{cases}$$ and so $\rm b^{49}=b^{16}\iff e=b^{49-16}=b^{33}$.

Note that since $\rm a^2=e\iff a=a^{-1}$, the map $\rm\Phi_a: x\mapsto axa=axa^{-1}$ is conjugation, which is a special type of automorphism. In particular $\rm\Phi_a(xy)=\Phi_a(x)\Phi_a(y)$ and thus $\rm\Phi_a(x^{n})=\Phi_a(x)^n$ for any elements $\rm x,y$ and integer ${\rm n}\in{\bf Z}$. We employed the latter exponential distributivity above.

• thank you so much – nameless Mar 22 '13 at 8:23
• As an exercise, you could generalize this for $a^2 = e$, $a b^n a = b^m$. – Mikko Korhonen Mar 22 '13 at 13:34

-

$$b^{49} = (b^7)^7 = (ab^4a)^7 = a (b^7)^4 a = a (a b^4 a)^4 a = b^{16}.$$

-

Since $a$ is its own inverse, $b^4 = ab^7a = ab^8b^{-1}a = ab^4a\,ab^4a\,ab^{-1}a = b^{14} ab^{-1}a$. Therefore, $ab^{-1}a = b^{-10}$. Inverting both sides, $aba = b^{10}$. Now, $b^7 = ab^4a = abaabaabaaba = b^{40}$ gives the result.

• Nice solution! – nameless Mar 22 '13 at 6:17
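As a sanity check, the relations can be realized in a concrete model (hypothetical, not from the thread): take $b$ to be $x \mapsto x+1$ on $\mathbb{Z}_{33}$ and $a$ the automorphism $x \mapsto 10x \bmod 33$, matching $aba = b^{10}$ from the last answer:

```python
# Model: b = "add 1" on Z_33, a = the map x -> 10*x mod 33.
# Then a b^m a^{-1} = b^(10m), so the defining relations become congruences mod 33.
n = 33
assert (10 * 10) % n == 1     # a^2 = e, since a acts by 10 and 10^2 = 100 = 1 (mod 33)
assert (10 * 4) % n == 7      # a b^4 a = b^7, since 40 = 7 (mod 33)
assert (49 - 16) % n == 0     # b^49 = b^16 forces b^33 = e
print("relations consistent with b^33 = e")
```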
https://stats.stackexchange.com/questions/510263/bayes-information-criterion-what-does-log-mean
# Bayes Information Criterion — what does log mean?

Super basic question about the BIC — is it defined in terms of log base ten or the natural logarithm? I see the latter on Wikipedia, but I see 'log', not 'ln', in the original paper (though I am aware that 'log' can mean 'ln'...).

Note that $$\log_b(x)=\frac{\log_a(x)}{\log_a(b)} =: c \log_a(x).$$ In that sense, all logs are proportional, so they all have the same order. In statistics, the logarithms are often not distinguished for this reason. I have seen people use "log" for cases where they don't care about the base (as it just leads to a different constant).

In this specific case, using a different base will change the result. You're minimizing something $+ \, c \ln(d)$, where $c$ is a constant determined by the base of choice. So, if you choose a different base, the penalty becomes larger (or smaller).
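A hypothetical numerical sketch of the base-change scaling (Schwarz's criterion is conventionally stated with the natural log, $\mathrm{BIC} = k\ln n - 2\ln\hat L$):

```python
from math import log, log10

# The BIC penalty term k*log(n): switching bases rescales it by a constant factor.
k, n = 3, 1000
pen_ln = k * log(n)        # natural log
pen_log10 = k * log10(n)   # base 10
print(pen_ln, pen_log10)   # the base-10 penalty is smaller by the factor ln(10)
```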
https://www.physicsforums.com/threads/squared-fractions.330647/
# Squared fractions.

1. Aug 14, 2009 — icystrike

**1. The problem statement, all variables and given/known data**

Show that $1/72$ cannot be written as the sum of the reciprocals of the squares of two different positive integers.

**3. The attempt at a solution**

Available solutions: $1/8^2 - 1/24^2$ and $1/9^2 + 1/648$. Therefore proven.

2. Aug 14, 2009 — Дьявол

$$\frac{1}{a^2}+\frac{1}{b^2}=\frac{1}{72}$$
$$\frac{b^2+a^2}{a^2b^2}=\frac{1}{72}$$
$$72(b^2+a^2)=a^2b^2$$
Solve the equation and write the solution here.

3. Aug 14, 2009 — icystrike

How do I solve that equation?

4. Aug 14, 2009 — Дьявол

Move the terms from the left to the right side of the equation:
$$a^2b^2-72b^2-72a^2=0$$
Now factor out $b^2$ or $a^2$ and tell me what you get.

5. Aug 14, 2009 — icystrike

$$a^2=\frac{72b^2}{b^2-72}$$

6. Aug 14, 2009 — Дьявол

Ok. Now what is "$a$" equal to? What can you conclude from the final solution?

7. Aug 14, 2009 — icystrike

Thanks for your help. But the actual problem is to derive the available solution from the question given.

8. Aug 14, 2009 — HallsofIvy

Yes, that is what he is trying to show you how to do! However, the problem, as you stated it, was "Show that $1/72$ **cannot** be written as the sum of the reciprocals of the squares of two **different** positive integers." (my emphasis) You can't do that because, as you showed, it is not true.

9. Aug 14, 2009 — icystrike

I got it. I'm sorry. Now for part 2: how can I write $1/72$ as a sum of reciprocals of the squares of three different positive integers?

10. Aug 14, 2009 — Staff: Mentor

Start by writing an equation that expresses this relationship.

11. Aug 14, 2009 — icystrike

Okay. $$\frac{1}{a^2}+\frac{1}{b^2}+\frac{1}{c^2}=\frac{1}{72}$$ By studying the relationship of their factors, the equation can be translated into: $$\frac{1}{x^2}+\frac{1}{b^2x^2}+\frac{1}{c^2x^2}=\frac{1}{72}$$ Moreover, $$b^2+c^2+1=x$$
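Дьявол's equation $72(a^2+b^2)=a^2b^2$ can be searched by brute force (a hypothetical sketch): in a generous range the only positive solution has $a=b=12$, so no two *different* integers work.

```python
# Search 72(a^2 + b^2) = a^2 b^2 over positive integers a <= b.
solutions = [
    (a, b)
    for a in range(1, 1000)
    for b in range(a, 1000)
    if 72 * (a * a + b * b) == a * a * b * b
]
print(solutions)  # [(12, 12)]
assert all(a == b for a, b in solutions)  # no solution with distinct integers
```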
http://pilot.cnxproject.org/content/collection/col10064/latest/module/m10088/latest
# Introduction

We have already shown the important role that continuous time convolution plays in signal processing. This section provides discussion and proof of some of the important properties of continuous time convolution. Analogous properties can be shown for continuous time circular convolution with trivial modification of the proofs provided except where explicitly noted otherwise.

# Associativity

The operation of convolution is associative. That is, for all continuous time signals $f_1, f_2, f_3$ the following relationship holds.

$$f_1 * (f_2 * f_3) = (f_1 * f_2) * f_3$$

In order to show this, note that

$$\begin{aligned} (f_1 * (f_2 * f_3))(t) &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_1(\tau_1) f_2(\tau_2) f_3((t - \tau_1) - \tau_2) \, d\tau_2 \, d\tau_1 \\ &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_1(\tau_1) f_2((\tau_1 + \tau_2) - \tau_1) f_3(t - (\tau_1 + \tau_2)) \, d\tau_2 \, d\tau_1 \\ &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_1(\tau_1) f_2(\tau_3 - \tau_1) f_3(t - \tau_3) \, d\tau_1 \, d\tau_3 \\ &= ((f_1 * f_2) * f_3)(t) \end{aligned}$$

proving the relationship as desired through the substitution $\tau_3 = \tau_1 + \tau_2$.

# Commutativity

The operation of convolution is commutative. That is, for all continuous time signals $f_1, f_2$ the following relationship holds.

$$f_1 * f_2 = f_2 * f_1$$

In order to show this, note that

$$\begin{aligned} (f_1 * f_2)(t) &= \int_{-\infty}^{\infty} f_1(\tau_1) f_2(t - \tau_1) \, d\tau_1 \\ &= \int_{-\infty}^{\infty} f_1(t - \tau_2) f_2(\tau_2) \, d\tau_2 \\ &= (f_2 * f_1)(t) \end{aligned}$$

proving the relationship as desired through the substitution $\tau_2 = t - \tau_1$.
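A discrete sanity check of these two properties (a hypothetical sketch; finite convolution sums stand in for the integrals):

```python
# Finite convolution sum: (f * g)[n] = sum_k f[k] g[n-k].
def conv(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f1 = [1.0, 2.0, 3.0]
f2 = [0.5, -1.0]
f3 = [2.0, 0.0, 1.0]

assert conv(f1, f2) == conv(f2, f1)                       # commutativity
assert conv(f1, conv(f2, f3)) == conv(conv(f1, f2), f3)   # associativity
print("commutativity and associativity hold")
```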
# Distributivity

The operation of convolution is distributive over the operation of addition. That is, for all continuous time signals $f_1, f_2, f_3$ the following relationship holds.

$$f_1 * (f_2 + f_3) = f_1 * f_2 + f_1 * f_3$$

In order to show this, note that

$$\begin{aligned} (f_1 * (f_2 + f_3))(t) &= \int_{-\infty}^{\infty} f_1(\tau) \left( f_2(t - \tau) + f_3(t - \tau) \right) d\tau \\ &= \int_{-\infty}^{\infty} f_1(\tau) f_2(t - \tau) \, d\tau + \int_{-\infty}^{\infty} f_1(\tau) f_3(t - \tau) \, d\tau \\ &= (f_1 * f_2 + f_1 * f_3)(t) \end{aligned}$$

proving the relationship as desired.

# Multilinearity

The operation of convolution is linear in each of the two function variables. Additivity in each variable results from distributivity of convolution over addition. Homogeneity of order one in each variable results from the fact that for all continuous time signals $f_1, f_2$ and scalars $a$ the following relationship holds.

$$a(f_1 * f_2) = (a f_1) * f_2 = f_1 * (a f_2)$$

In order to show this, note that

$$\begin{aligned} (a(f_1 * f_2))(t) &= a \int_{-\infty}^{\infty} f_1(\tau) f_2(t - \tau) \, d\tau \\ &= \int_{-\infty}^{\infty} (a f_1(\tau)) f_2(t - \tau) \, d\tau = ((a f_1) * f_2)(t) \\ &= \int_{-\infty}^{\infty} f_1(\tau) (a f_2(t - \tau)) \, d\tau = (f_1 * (a f_2))(t) \end{aligned}$$

proving the relationship as desired.

# Conjugation

The operation of convolution has the following property for all continuous time signals $f_1, f_2$.
$$\overline{f_1 * f_2} = \overline{f_1} * \overline{f_2}$$

In order to show this, note that

$$\begin{aligned} \left( \overline{f_1 * f_2} \right)(t) &= \overline{\int_{-\infty}^{\infty} f_1(\tau) f_2(t - \tau) \, d\tau} \\ &= \int_{-\infty}^{\infty} \overline{f_1(\tau) f_2(t - \tau)} \, d\tau \\ &= \int_{-\infty}^{\infty} \overline{f_1}(\tau) \, \overline{f_2}(t - \tau) \, d\tau \\ &= \left( \overline{f_1} * \overline{f_2} \right)(t) \end{aligned}$$

proving the relationship as desired.

# Time Shift

The operation of convolution has the following property for all continuous time signals $f_1, f_2$ where $S_T$ is the time shift operator.

$$S_T(f_1 * f_2) = (S_T f_1) * f_2 = f_1 * (S_T f_2)$$

In order to show this, note that

$$\begin{aligned} S_T(f_1 * f_2)(t) &= \int_{-\infty}^{\infty} f_2(\tau) f_1((t - T) - \tau) \, d\tau = \int_{-\infty}^{\infty} f_2(\tau) \, S_T f_1(t - \tau) \, d\tau = ((S_T f_1) * f_2)(t) \\ &= \int_{-\infty}^{\infty} f_1(\tau) f_2((t - T) - \tau) \, d\tau = \int_{-\infty}^{\infty} f_1(\tau) \, S_T f_2(t - \tau) \, d\tau = (f_1 * (S_T f_2))(t) \end{aligned}$$

proving the relationship as desired.

# Differentiation

The operation of convolution has the following property for all continuous time signals $f_1, f_2$.
$$\frac{d}{dt}(f_1 * f_2)(t) = \left( \frac{df_1}{dt} * f_2 \right)(t) = \left( f_1 * \frac{df_2}{dt} \right)(t)$$

In order to show this, note that

$$\begin{aligned} \frac{d}{dt}(f_1 * f_2)(t) &= \int_{-\infty}^{\infty} f_2(\tau) \frac{d}{dt} f_1(t - \tau) \, d\tau = \left( \frac{df_1}{dt} * f_2 \right)(t) \\ &= \int_{-\infty}^{\infty} f_1(\tau) \frac{d}{dt} f_2(t - \tau) \, d\tau = \left( f_1 * \frac{df_2}{dt} \right)(t) \end{aligned}$$

proving the relationship as desired.

# Impulse Convolution

The operation of convolution has the following property for all continuous time signals $f$ where $\delta$ is the Dirac delta function.

$$f * \delta = f$$

In order to show this, note that

$$(f * \delta)(t) = \int_{-\infty}^{\infty} f(\tau) \delta(t - \tau) \, d\tau = f(t) \int_{-\infty}^{\infty} \delta(t - \tau) \, d\tau = f(t)$$

proving the relationship as desired.

# Width

The operation of convolution has the following property for all continuous time signals $f_1, f_2$ where $\mathrm{Duration}(f)$ gives the duration of a signal $f$.

$$\mathrm{Duration}(f_1 * f_2) = \mathrm{Duration}(f_1) + \mathrm{Duration}(f_2)$$

In order to show this informally, note that $(f_1 * f_2)(t)$ is nonzero for all $t$ for which there is a $\tau$ such that $f_1(\tau) f_2(t - \tau)$ is nonzero. When viewing one function as reversed and sliding past the other, it is easy to see that such a $\tau$ exists for all $t$ on an interval of length $\mathrm{Duration}(f_1) + \mathrm{Duration}(f_2)$. Note that this is not always true of circular convolution of finite length and periodic signals as there is then a maximum possible duration within a period.

# Convolution Properties Summary

As can be seen the operation of continuous time convolution has several important properties that have been listed and proven in this module.
With slight modifications to proofs, most of these also extend to continuous time circular convolution as well and the cases in which exceptions occur have been noted above. These identities will be useful to keep in mind as the reader continues to study signals and systems.
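The width property has a familiar discrete analogue (a hypothetical sketch): a length-$m$ sequence convolved with a length-$n$ sequence has support of length $m+n-1$.

```python
# Discrete analogue of the width property.
def conv(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f = [1.0] * 4   # "duration" 4
g = [1.0] * 6   # "duration" 6
print(len(conv(f, g)))  # 9 = 4 + 6 - 1
```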
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-1-section-1-1-four-ways-to-represent-a-function-1-1-exercises-page-23/76
## Calculus: Early Transcendentals 8th Edition

When $x\ge0$:
$f(x)=x|x|=x\times x=x^{2}$
$f(-x)=-x|-x|=-x\times x=-x^{2}$
so $f(-x)=-f(x)$.

When $x\lt0$:
$f(x)=x|x|=x\times(-x)=-x^{2}$
$f(-x)=-x|-x|=-x\times(-x)=x^{2}$
so $f(-x)=-f(x)$.

$\therefore f(x)$ is odd.
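A quick numerical check of the odd symmetry (a hypothetical sketch):

```python
# Check that f(x) = x*|x| satisfies f(-x) = -f(x) (odd symmetry).
def f(x):
    return x * abs(x)

for x in [-3.0, -0.5, 0.0, 1.0, 7.25]:
    assert f(-x) == -f(x)
print("f is odd")
```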
https://stats.stackexchange.com/questions/421754/why-does-the-intercept-of-principal-component-regression-differ-from-the-orginal
# Why does the intercept of Principal Component Regression differ from the original regression? I am comparing two regressions: $$y= \beta_0+ \beta_1 x_1 + \beta_2 x_2 + \epsilon$$ and $$y = \gamma_0 + \gamma_1 PC_1 + \gamma_2 PC_2 +\eta$$. The regressors in the second regression, $$PC_1, PC_2$$, are principal components generated by a PCA of $$x_1,x_2$$. Presumably, if the PCA just linearly transforms the original regressors, the intercepts should be identical. However, I find them very different. What is the reason behind this difference? The estimate for the intercept, $$\hat{\beta_0}$$, can be computed as $$\hat{\beta_0} = \bar{y} - \hat{\beta_1}\bar{x_1} - \hat{\beta_2}\bar{x_2} - \dots,$$ where $$\bar{y}$$ denotes the sample mean of $$y$$, $$\hat{\beta_j}$$ is the sample estimate of $$\beta_j$$, and $$\bar{x_j}$$ is the sample mean of $$x_j$$. So, for a simple counterexample, consider a linear transformation of $$x_1$$ where we just add 1 to every value. This would increase $$\bar{x_1}$$ by 1 and thus change the intercept. In particular, PCA typically centers the regressors first, so the principal components have mean zero and the PCR intercept is simply $$\bar{y}$$.
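The effect is easy to reproduce numerically. The sketch below is illustrative (the data, variable names, and helper function are my own); it fits both regressions with plain NumPy, computing the principal components from the centered regressors as is standard:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Regressors with nonzero means, so the two intercepts visibly differ.
X = rng.normal(size=(n, 2)) + np.array([5.0, -3.0])
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

def ols_intercept(Z, y):
    # OLS with an explicit constant column; return the intercept estimate.
    A = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[0]

# PCA: center X, then project onto the right singular vectors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
PC = Xc @ Vt.T

b0_original = ols_intercept(X, y)
b0_pcr = ols_intercept(PC, y)

# Because the PCs are centered (mean zero), the PCR intercept is just the
# mean of y, which generally differs from the original intercept.
assert np.isclose(b0_pcr, y.mean())
assert not np.isclose(b0_original, b0_pcr)
```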
Paul's Online Notes: Calculus II / Parametric Equations and Polar Coordinates / Parametric Equations and Curves

### Section 9.1 : Parametric Equations and Curves

3. Eliminate the parameter for the following set of parametric equations, sketch the graph of the parametric curve and give any limits that might exist on $$x$$ and $$y$$.

$x = \sqrt {t + 1} \hspace{0.5in}y = \frac{1}{{t + 1}}\hspace{0.5in}\hspace{0.5in}\,\,\,t > - 1$

Step 1

First, we’ll eliminate the parameter from this set of parametric equations. For this particular set of parametric equations that is actually really easy to do if we notice the following.

$x = \sqrt {t + 1} \,\,\hspace{0.25in}\hspace{0.25in} \Rightarrow \hspace{0.25in}\hspace{0.25in}{x^2} = t + 1$

With this we can quickly convert the $$y$$ equation to,

$y = \frac{1}{{{x^2}}}$

Step 2

At this point we can get limits on $$x$$ and $$y$$ pretty quickly so let’s do that. First, we know that square roots always return positive values (or zero of course) and so from the $$x$$ equation we see that we must have $$x > 0$$. Note as well that this must be a strict inequality because the inequality restricting the range of $$t$$’s is also a strict inequality. In other words, because we aren’t allowing $$t = - 1$$ we will never get $$x = 0$$. Speaking of which, you do see why we’ve restricted the $$t$$’s, don’t you?
Now, from our restriction on $$t$$ we know that $$t + 1 > 0$$ and so from the $$y$$ parametric equation we can see that we also must have $$y > 0$$. This matches what we see from the equation without the parameter we found in Step 1. So, putting all this together, here are the limits on $$x$$ and $$y$$.

$x > 0\hspace{0.25in}\hspace{0.25in}y > 0$

Note that for this problem these limits are important (or at least the $$x$$ limits are important). Because of the $$x$$ limit we get from the parametric equation we can see that we won’t have the full graph of the equation we found in the first step. All we will have is the portion that corresponds to $$x > 0$$.

Step 3

Before we sketch the graph of the parametric curve recall that all parametric curves have a direction of motion, i.e. the direction indicating increasing values of the parameter, $$t$$ in this case. There are several ways to get the direction of motion for the curve. One is to plug in values of $$t$$ into the parametric equations to get some points that we can use to identify the direction of motion. Here is a table of values for this set of parametric equations.

| $$t$$ | $$x$$ | $$y$$ |
|---|---|---|
| -0.95 | 0.2236 | 20 |
| -0.75 | 0.5 | 4 |
| 0 | 1 | 1 |
| 2 | $$\sqrt 3$$ | $$\frac{1}{3}$$ |

Note that there is an easier way (probably – it will depend on you of course) to determine direction of motion. Take a quick look at the $$x$$ equation.

$x = \sqrt {t + 1} \,$

Increasing the value of $$t$$ will also cause $$t + 1$$ to increase and the square root will also increase (we can verify with a quick derivative/Calculus I analysis if we want to). This means that the graph must be tracing out from left to right as the table of values above supports. Likewise, we could use the $$y$$ equation.

$y = \frac{1}{{t + 1}}$

Again, we know that as $$t$$ increases so does $$t + 1$$. Because the $$t + 1$$ is in the denominator we can further see that increasing this will cause the fraction, and hence $$y$$, to decrease.
This means that the graph must be tracing out from top to bottom as both the $$x$$ equation and the table of values support. Using a quick Calculus analysis of one, or both, of the parametric equations is often a better and easier method for determining the direction of motion for a parametric curve. For “simple” parametric equations we can often get the direction based on a quick glance at the parametric equations, and it avoids having to pick “nice” values of $$t$$ for a table.

Step 4

Finally, here is a sketch of the parametric curve for this set of parametric equations. For this sketch we included the points from our table because we had them, but we won’t always include them as we are often only interested in the sketch itself and the direction of motion.
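As a quick numerical companion (my own sketch, not part of the notes), the parametric equations can be evaluated directly to reproduce the table of values and confirm the direction of motion:

```python
import math

def x_of(t):
    # x = sqrt(t + 1), defined for t > -1
    return math.sqrt(t + 1)

def y_of(t):
    # y = 1 / (t + 1), defined for t > -1
    return 1.0 / (t + 1)

# Reproduce the table of values from Step 3.
for t in (-0.95, -0.75, 0, 2):
    print(f"t={t:6}: x={x_of(t):.4f}, y={y_of(t):.4f}")

# Direction of motion: x increases and y decreases as t increases.
ts = [-0.95, -0.75, 0, 2]
assert all(x_of(a) < x_of(b) for a, b in zip(ts, ts[1:]))
assert all(y_of(a) > y_of(b) for a, b in zip(ts, ts[1:]))
```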
# How do you solve and check for extraneous solutions in sqrt(x+1) + 5 = x?

Aug 1, 2015

The only valid solution is $x = 8$. A second candidate solution, $x = 3$, can be eliminated by checking it in the given equation (and noting that it does not hold).

#### Explanation:

Given $\sqrt{x + 1} + 5 = x$

Subtract $5$ from both sides:
$\sqrt{x + 1} = x - 5$

Square both sides (possibly generating an extraneous root at this point):
$x + 1 = {x}^{2} - 10 x + 25$

Subtract $\left(x + 1\right)$ from both sides (and flip sides):
${x}^{2} - 11 x + 24 = 0$

Factor:
$\left(x - 3\right) \left(x - 8\right) = 0$
$\Rightarrow x = 3 \quad \text{or} \quad x = 8$

Substituting $3$ for $x$ in the left side of the original equation:
$\sqrt{3 + 1} + 5 = 2 + 5 = 7 \ne 3$
The solution $x = 3$ is extraneous.

Substituting $8$ for $x$ in the left side of the original equation:
$\sqrt{8 + 1} + 5 = 3 + 5 = 8 = x$
The solution $x = 8$ is valid.
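The check-for-extraneous-roots step can be automated; here is a small sketch (mine, not part of the original answer):

```python
import math

# Candidate roots of x^2 - 11x + 24 = 0, obtained after squaring.
candidates = [3, 8]

# Squaring can introduce extraneous roots, so keep only candidates
# that satisfy the ORIGINAL equation sqrt(x + 1) + 5 == x.
valid = [x for x in candidates if math.isclose(math.sqrt(x + 1) + 5, x)]
assert valid == [8]
print(valid)  # [8]
```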
# ABCD is a rhombus. If angle ADB = 50°, find all the angles of the rhombus?

Dec 11, 2016

see explanation

#### Explanation:

A rhombus has the following properties:

1) The sides of a rhombus are all congruent (the same length): $A B = B C = C D = D A$
2) $A C$ and $B D$ are perpendicular.
3) $A O = O C$ and $B O = O D$, where $O$ is the intersection of the diagonals.
4) $\Delta A O B , \Delta C O B , \Delta C O D$ and $\Delta A O D$ are congruent.

Now back to our question. Given that $\angle A D B = {50}^{\circ} = \angle A D O$:

$\angle O A D = 90 - 50 = {40}^{\circ}$
$\implies \angle O B A = \angle O B C = \angle O D C = \angle O D A = {50}^{\circ}$
$\implies \angle O A B = \angle O C B = \angle O C D = \angle O A D = {40}^{\circ}$

Hence the angles of the rhombus are $\angle A D C = \angle A B C = {100}^{\circ}$ and $\angle D A B = \angle B C D = {80}^{\circ}$.
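The arithmetic behind the full rhombus angles follows from the half-angles, since each diagonal bisects the vertex angles it joins. A small sketch of my own:

```python
adb = 50  # given: angle ADB in degrees

# Diagonal BD bisects angle ADC, so:
angle_ADC = 2 * adb          # angle ADC = angle ABC (opposite angles are equal)
angle_DAB = 180 - angle_ADC  # consecutive angles are supplementary
angle_BCD = angle_DAB

assert (angle_ADC, angle_DAB) == (100, 80)
```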
# 5. Combinational Logic

A combinational circuit is a digital circuit whose outputs depend on the input values only.[1] In particular, combinational circuits do not contain memory elements and are commonly acyclic. Example 5.1 illustrates the type of problem that a combinational circuit may solve, and how we approach the solution.

Example 5.1: Fibonacci Number Recognizer

Leonardo da Pisa (1170-1241)

Perhaps the most famous number sequence of all time is the sequence of Fibonacci numbers, named after Leonardo da Pisa, called Fibonacci because he was the son of Bonaccio, filius Bonaccii in Latin. The Fibonacci sequence

$0,\ 1,\ 1,\ 2,\ 3,\ 5,\ 8,\ 13,\ 21,\ 34,\ 55,\ \ldots$

models the growth of rabbit populations, and can be defined by a recurrence such that $$Fib(n)$$ is the $$n^{th}$$ Fibonacci number in the sequence:

$\begin{split}Fib(n) = \begin{cases} 0\,, & \text{if}\ n = 0\,, \\ 1\,, & \text{if}\ n = 1\,, \\ Fib(n-1) + Fib(n-2)\,, & \text{if}\ n > 1\,. \end{cases}\end{split}$

We wish to design a combinational circuit that recognizes the Fibonacci numbers among 4-bit binary numbers. More specifically, the circuit shall have as input a 4-bit binary number $$A = A_3 A_2 A_1 A_0,$$ and output $$Y$$ such that $$Y(A)=1$$ if $$A$$ is a Fibonacci number, and 0 otherwise. Our first step is to translate the problem statement into a truth table. For each 4-bit binary number in range $$[0,15],$$ we specify whether $$Y$$ is 0 or 1.
| $$A$$ | $$A_3$$ | $$A_2$$ | $$A_1$$ | $$A_0$$ | $$Y$$ |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 1 |
| 1 | 0 | 0 | 0 | 1 | 1 |
| 2 | 0 | 0 | 1 | 0 | 1 |
| 3 | 0 | 0 | 1 | 1 | 1 |
| 4 | 0 | 1 | 0 | 0 | 0 |
| 5 | 0 | 1 | 0 | 1 | 1 |
| 6 | 0 | 1 | 1 | 0 | 0 |
| 7 | 0 | 1 | 1 | 1 | 0 |
| 8 | 1 | 0 | 0 | 0 | 1 |
| 9 | 1 | 0 | 0 | 1 | 0 |
| 10 | 1 | 0 | 1 | 0 | 0 |
| 11 | 1 | 0 | 1 | 1 | 0 |
| 12 | 1 | 1 | 0 | 0 | 0 |
| 13 | 1 | 1 | 0 | 1 | 1 |
| 14 | 1 | 1 | 1 | 0 | 0 |
| 15 | 1 | 1 | 1 | 1 | 0 |

As the second step, we transform the truth table into a Boolean expression for $$Y(A).$$ A simple, systematic transformation is based on the SOP normal form, which is the disjunction of minterms in those rows of the truth table where $$Y = 1$$:

$Y(A) = \sum (0,1,2,3,5,8,13) = \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,\overline{A}_3 + A_0\,\overline{A}_1\,\overline{A}_2\,\overline{A}_3 + \overline{A}_0\,A_1\,\overline{A}_2\,\overline{A}_3 + A_0\,A_1\,\overline{A}_2\,\overline{A}_3 + A_0\,\overline{A}_1\,A_2\,\overline{A}_3 + \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,A_3 + A_0\,\overline{A}_1\,A_2\,A_3\,.$

The third step is the synthesis of a digital circuit. Here, we transform the Boolean expression into the corresponding AND-OR gate circuit. For comparison, we also show a series-parallel switch network, where each series composition corresponds to an AND gate and the parallel composition to the OR gate. The black-box symbol of the Fibonacci circuit is shown on the left, the gate-level circuit with NOT, AND, and OR gates in the middle, and the switch network on the right. The next step towards realizing the Fibonacci number recognizer could be the design of a fast CMOS circuit with the method of logical effort.

We build large combinational circuits from smaller ones. To facilitate reuse of a circuit as part of a hierarchical design methodology, we characterize each circuit module as a black box with:

- one or more binary input terminals,
- one or more discrete (binary or Z) output terminals,
- a functional specification that relates the outputs logically to the inputs,
- a timing specification of the delay before the outputs respond to a change of the inputs.
A black box module does not expose the internal implementation, whereas a glass box module does. Logic gates are often considered elementary circuits in combinational logic design, and the basic circuits tend to be used as black box modules, assuming every designer knows about their specifications. The primary challenges of combinational circuit design are (1) finding the minimal logic expression for each output function, (2) mapping the function to a target technology for physical implementation, and (3) minimizing the circuit delay. These tasks are not independent of each other, and may require several design iterations before arriving at a satisfying solution. The tools for the design of a combinational circuit are Boolean algebra and the method of logical effort. Boolean algebra is useful to derive minimal logic functions for each circuit output, and to provide the inspiration for inventing functionally equivalent circuits with different topologies that serve as candidates for a high-speed CMOS design by means of the method of logical effort. Logic minimization rarely yields the fastest CMOS circuit, but the problem is too hard to crack in general without logic minimization as a yardstick.

## 5.1. Logic Minimization

The Fibonacci number recognizer in Example 5.1 implements the desired functionality but requires more logic gates than necessary. For instance, the Boolean expression $$Y'(A_0, A_1, A_2, A_3) = \overline{A}_2\,\overline{A}_3 + A_0\,\overline{A}_1\,A_2 + \overline{A}_0\,\overline{A}_1\,\overline{A}_2$$ is logically equivalent to the SOP normal form $$Y(A)$$ derived in Example 5.1, yet smaller.
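Both the SOP normal form $$Y$$ and the smaller expression $$Y'$$ can be verified exhaustively against the specification, since there are only 16 input combinations. The following Python sketch (not part of the text; function names are my own) evaluates both expressions for every 4-bit input:

```python
def is_fib(n):
    # Membership test based on the Fibonacci recurrence.
    a, b = 0, 1
    while a < n:
        a, b = b, a + b
    return a == n

def Y(a0, a1, a2, a3):
    # SOP normal form: sum of minterms (0, 1, 2, 3, 5, 8, 13).
    n0, n1, n2, n3 = 1 - a0, 1 - a1, 1 - a2, 1 - a3
    return (n0 & n1 & n2 & n3) | (a0 & n1 & n2 & n3) | (n0 & a1 & n2 & n3) \
         | (a0 & a1 & n2 & n3) | (a0 & n1 & a2 & n3) | (n0 & n1 & n2 & a3) \
         | (a0 & n1 & a2 & a3)

def Y_min(a0, a1, a2, a3):
    # Smaller equivalent expression: ~A2~A3 + A0~A1A2 + ~A0~A1~A2.
    n0, n1, n2 = 1 - a0, 1 - a1, 1 - a2
    n3 = 1 - a3
    return (n2 & n3) | (a0 & n1 & a2) | (n0 & n1 & n2)

for A in range(16):
    bits = [(A >> i) & 1 for i in range(4)]  # A0, A1, A2, A3
    expected = 1 if is_fib(A) else 0
    assert Y(*bits) == Y_min(*bits) == expected
```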
Therefore, we could have designed a gate-level circuit with one 2-input AND gate, two 3-input AND gates, and a 3-input OR gate to realize expression $$Y'$$ rather than seven 4-input AND gates and a 7-input OR gate for expression $$Y.$$ In general, it pays off to find a smaller Boolean expression representing a given Boolean function, because the resulting circuit is smaller and likely to be faster. The question is whether a smaller expression, once we have found it, is the “smallest” expression or whether there exists an even smaller expression. If we had a criterion to identify the smallest expression, we could terminate our search once we have found an expression that fulfills the minimization criterion. Even better are constructive methods that enable us to derive the smallest expression systematically rather than finding it by sheer luck. Logic minimization encompasses the most satisfactory methods for finding a smallest Boolean expression known to date. However, the logic minimization problem that we know how to solve systematically is formulated in a rather narrow technical sense which limits the expressions to a sum-of-products structure reminiscent of the AND-OR topology of compound gates. Because these circuits have two levels of gates from input to output, the logic minimization problem is also called the two-level minimization problem: Given a Boolean function $$f(x_0, x_1, \ldots, x_{n-1})$$ find an equivalent sum-of-products expression $$g(x_0, x_1, \ldots, x_{n-1})$$ that minimizes the number of products and the number of inputs of all products. The logic minimization problem may be viewed as a cost minimization problem. The cost of a Boolean expression can be defined in various ways.
Throughout this chapter, we define the cost to reflect the area requirements of a CMOS circuit in terms of number of wires and transistors: The cost $$\mathcal{C}(g)$$ of Boolean expression $$g$$ equals the number of all gate inputs in the corresponding combinational circuit, not counting inverters. For example, the cost $$\mathcal{C}(g)$$ of sum-of-products expression $$g$$ equals its number of products plus the number of inputs of all product terms. The inputs of a product term are a subset of the variables $$x_0, x_1, \ldots, x_{n-1}$$ either in complemented or uncomplemented form, called literals. We count literals rather than variables and, thereby, ignore the cost of the inverters required to generate the complemented variables. This is sensible, because the polarity of the inputs is independent of the intrinsic cost of an SOP expression. SOP expressions do not have negations except for complemented variables. Renaming variable $$x_i$$ to $$y_i = \overline{x}_i$$ does not change the SOP, but the variable name only. In Example 5.1, the literals of function $$Y$$ are $$A_0,$$ $$\overline{A}_0,$$ $$A_1,$$ $$\overline{A}_1,$$ $$A_2,$$ $$\overline{A}_2,$$ $$A_3,$$ $$\overline{A}_3,$$ and the cost of SOP normal form $$Y(A)$$ is $$\mathcal{C}(Y) = 7 + (7 \cdot 4) = 35.$$ The cost of SOP expression $$Y' = \overline{A}_2\,\overline{A}_3 + A_0\,\overline{A}_1\,A_2 + \overline{A}_0\,\overline{A}_1\,\overline{A}_2$$ is significantly smaller, $$\mathcal{C}(Y') = 3 + (2 + 3 + 3) = 11,$$ and is indeed the minimum cost. Note that counting literals and products of an SOP expression to determine its cost is equivalent to counting in the two-level combinational circuit the number of inputs of the AND gates plus the number of AND gates, which equals the number of inputs of the OR gate. 
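As a quick sanity check on these numbers, the cost rule can be written as a one-liner. This is an illustrative sketch (the list-of-literals encoding is my own, not the book's notation):

```python
def sop_cost(products):
    # Cost of an SOP expression: number of products (the OR-gate inputs)
    # plus the number of literals of all products (the AND-gate inputs).
    # A product is a list of literal names, e.g. ["~A2", "~A3"].
    return len(products) + sum(len(p) for p in products)

# Y: the SOP normal form, seven minterms of four literals each.
Y = [["~A0", "~A1", "~A2", "~A3"], ["A0", "~A1", "~A2", "~A3"],
     ["~A0", "A1", "~A2", "~A3"], ["A0", "A1", "~A2", "~A3"],
     ["A0", "~A1", "A2", "~A3"], ["~A0", "~A1", "~A2", "A3"],
     ["A0", "~A1", "A2", "A3"]]
# Y': the smaller equivalent expression from the text.
Y_min = [["~A2", "~A3"], ["A0", "~A1", "A2"], ["~A0", "~A1", "~A2"]]

assert sop_cost(Y) == 35
assert sop_cost(Y_min) == 11
```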
5.1 Derive the cost of the SOP normal forms and POS normal forms of the XNOR function $$f_9$$ and implication $$f_{11}.$$

| $$x$$ | $$y$$ | $$f_9$$ | $$f_{11}$$ |
|---|---|---|---|
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 |

The normal forms of $$f_9$$ are

$\begin{eqnarray*} \text{sop}(f_9) &=& \overline{x}\,\overline{y} + x\,y\,, \\ \text{pos}(f_9) &=& (x + \overline{y}) \cdot (\overline{x} + y)\,. \end{eqnarray*}$

We could derive the cost of each normal form by inspecting their associated two-level circuits, see Exercise 2.5. However, it is as easy to deduce the cost from the Boolean expressions directly. The cost of an SOP equals its number of products plus the number of literals of all products. The SOP normal form of $$f_9$$ has two products each with two literals, product $$\overline{x}\,\overline{y}$$ with literals $$\overline{x}$$ and $$\overline{y}$$ and product $$x\,y$$ with literals $$x$$ and $$y$$:

$\mathcal{C}(\text{sop}(f_9)) = 2 + 2 \cdot 2 = 6\,.$

Analogously, the cost of a POS equals its number of sums plus the number of literals of all sums. The POS normal form of $$f_9$$ has two sums each with two literals, sum $$x + \overline{y}$$ with literals $$x$$ and $$\overline{y}$$ and sum $$\overline{x} + y$$ with literals $$\overline{x}$$ and $$y$$:

$\mathcal{C}(\text{pos}(f_9)) = 2 + 2 \cdot 2 = 6\,.$

We find that the SOP and POS normal forms of the XNOR function have equal cost. This is not the common case, though. The normal forms of $$f_{11}$$ are

$\begin{eqnarray*} \text{sop}(f_{11}) &=& \overline{x}\,\overline{y} + \overline{x}\,y + x\,y\,, \\ \text{pos}(f_{11}) &=& \overline{x} + y\,. \end{eqnarray*}$

The SOP normal form has three products, each with two literals, so its cost is

$\mathcal{C}(\text{sop}(f_{11})) = 3 + 3 \cdot 2 = 9\,.$

The POS normal form has no product, because it consists of just one sum with two literals.
Its cost is

$\mathcal{C}(\text{pos}(f_{11})) = 0 + 1 \cdot 2 = 2\,.$

We conclude that the POS normal form of the implication has a much lower cost than the SOP normal form, indicating that the POS yields a smaller and probably faster implementation of this function.

### 5.1.1. Geometry of Boolean Functions

To understand the logic minimization problem, it is helpful to interpret a Boolean function geometrically. The Boolean points $$\mathcal{B}^n$$ occupy an $$n$$-dimensional space. Point $$P = (x_0, x_1, \ldots, x_{n-1})$$ is uniquely specified in an $$n$$-dimensional Cartesian coordinate system, where each coordinate $$x_i$$ of point $$P$$ is confined to the Boolean values 0 and 1, i.e. $$x_i \in \{0, 1\}.$$ Figure 5.1 shows the points of $$\mathcal{B}^1,$$ $$\mathcal{B}^2,$$ and $$\mathcal{B}^3.$$ In three dimensions, the eight points form the corners of a cube. Therefore, an $$n$$-dimensional Boolean space is also called n-cube or hypercube. A 0-cube is a single point representing a constant Boolean value.

Figure 5.1: Boolean spaces in Cartesian coordinate systems: 1-dimensional (left), 2-dimensional (middle), and 3-dimensional (right).

An $$n$$-cube can be constructed from two $$(n-1)$$-cubes, and a set of new edges connecting the vertices with equal coordinates in the $$(n-1)$$-dimensional space. For example, the 3-cube in Figure 5.1 consists of two 2-cubes, the front square and the rear square in the $$(x,y)$$-space. Four new edges connect the corresponding corners of the squares in the $$z$$-dimension. Thus, an $$n$$-cube contains subcubes of smaller dimensions. The 3-cube in Figure 5.1 has six 2-dimensional subcubes, which are the squares on its faces. Furthermore, it has twelve 1-dimensional subcubes, each represented by an edge and its incident vertices. Since each point of an $$n$$-cube represents a minterm, we can characterize a Boolean function $$f(x_0, x_1, \ldots, x_{n-1})$$ by means of a partitioning of the points into the on-set and the off-set.
The on-set contains all 1-points for which $$f=1,$$ i.e. all minterms of the SOP normal form of $$f.$$ The off-set contains all 0-points where $$f = 0,$$ i.e. all maxterms of the POS normal form of $$f.$$

Example 5.2: Geometric Interpretation

Function $$f(x, y, z)$$ in three variables with SOP normal form

$f(x, y, z) = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + x\,\overline{y}\,z + x\,y\,z$

and truth table

| $$x$$ | $$y$$ | $$z$$ | $$f$$ | minterm |
|---|---|---|---|---|
| 0 | 0 | 0 | 1 | $$\overline{x}\,\overline{y}\,\overline{z}$$ |
| 0 | 0 | 1 | 1 | $$\overline{x}\,\overline{y}\,z$$ |
| 0 | 1 | 0 | 0 | $$\overline{x}\,y\,\overline{z}$$ |
| 0 | 1 | 1 | 0 | $$\overline{x}\,y\,z$$ |
| 1 | 0 | 0 | 0 | $$x\,\overline{y}\,\overline{z}$$ |
| 1 | 0 | 1 | 1 | $$x\,\overline{y}\,z$$ |
| 1 | 1 | 0 | 0 | $$x\,y\,\overline{z}$$ |
| 1 | 1 | 1 | 1 | $$x\,y\,z$$ |

has on-set $$= \{\overline{x}\,\overline{y}\,\overline{z},\ \overline{x}\,\overline{y}\,z,\ x\,\overline{y}\,z,\ x\,y\,z\}$$ and off-set $$= \{\overline{x}\,y\,\overline{z},\ \overline{x}\,y\,z,\ x\,\overline{y}\,\overline{z},\ x\,y\,\overline{z}\}\,.$$ The 3-cube representation of $$f$$ is shown on the right. The 1-points of the on-set are drawn black and the 0-points of the off-set as circles.

Maurice Karnaugh discovered a projection of $$n$$-cubes into the two dimensions of a sheet of paper. Figure 5.2 illustrates the projection of the 3-cube into two dimensions. The four vertices of the front square are mapped into the top row, and the four vertices of the rear square into the bottom row. The edges of the squares form a torus in the 2-dimensional projection. The Karnaugh map, or K-map for short, draws the vertices of the cube as cells and omits the edges. The crucial feature of the K-map is the arrangement of the cells such that neighboring cells in the K-map correspond to adjacent vertices in the cube, keeping in mind that the rows of the K-map wrap around the torus. The coordinates of the cells of the K-map are annotated as point coordinates.
For example, at the intersection of column $$x\,y = 01$$ and row $$z=1$$ we find the cell of point $$(x,y,z) = (0,1,1)$$ corresponding to minterm $$\overline{x}\,y\,z.$$ Note that the column coordinates are not in the order of binary numbers but form a Gray code, where exactly one bit flips between neighboring cells. This property holds also for the wrap-around between $$x\,y = 00$$ and $$x\,y = 10,$$ where the $$x$$-coordinate changes and the $$y$$-coordinate remains constant. In fact, every Hamiltonian cycle [2] through the 3-cube and the corresponding K-map visits the vertices in an order such that exactly one bit of the point coordinate flips on each step.

Figure 5.2: 3-cube (left) mapped into two dimensions (middle), and drawn as K-map (right).

The K-maps for two and four variables are shown in Figure 5.3. They correspond to a 2-cube, i.e. a 2-dimensional square, and a 4-cube, which is easier to draw as a K-map than as a 4-dimensional cube.

Figure 5.3: K-maps for two variables (left) and four variables (right).

We represent a Boolean function $$f$$ in a K-map by marking each cell of the on-set of $$f$$ with a 1 and each cell of the off-set of $$f$$ with a 0. Figure 5.4 shows the K-maps for the Boolean functions of Example 5.1 and Example 5.2.

Figure 5.4: K-maps for the Boolean function in Example 5.2 (left) and the Fibonacci function of Example 5.1 (right).

The 4-variable K-map has implicit wrap-arounds in both the horizontal and the vertical direction. K-maps for more than four variables exist, but identifying all neighboring cells that correspond to adjacent vertices in the $$n$$-cube becomes less intuitive as the number of variables grows. Therefore, we will use K-maps for Boolean functions with up to four variables only.

### 5.1.2. K-Map Minimization

In this section, we solve the two-level minimization problem graphically in a K-map. The key is the geometric interpretation of the combining theorem.
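The Gray-code column ordering noted above is easy to generate programmatically. A common construction (a sketch of mine, not from the text) maps index $$i$$ to $$i \oplus (i \gg 1)$$:

```python
def gray(n):
    # n-bit Gray code sequence: consecutive codes differ in exactly one bit.
    return [i ^ (i >> 1) for i in range(2 ** n)]

cols = gray(2)
assert [format(c, "02b") for c in cols] == ["00", "01", "11", "10"]

# Exactly one bit flips between neighbours, including the wrap-around
# from the last column back to the first (the K-map torus).
for a, b in zip(cols, cols[1:] + cols[:1]):
    assert bin(a ^ b).count("1") == 1
```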
On the right, we show the 3-cube of function $$f(x,y,z)=\overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + x\,\overline{y}\,z + x\,y\,z$$ in Example 5.2, with black vertices marking the 1-points of the on-set. Two adjacent 1-points constitute a 1-dimensional subcube of the 3-cube, for example the encircled 1-points $$\overline{x}\,\overline{y}\,z$$ and $$x\,\overline{y}\,z.$$ Observe that we can apply the combining theorem to the disjunction of the minterms of the 1-cube:

$\overline{x}\,\overline{y}\,z + x\,\overline{y}\,z = \overline{y}\,z\,,$

such that literal $$x$$ vanishes on the rhs. If we substitute $$\overline{y}\,z$$ for the two minterms in $$f$$:

$f(x,y,z)= \overline{x}\,\overline{y}\,\overline{z} + \overline{y}\,z + x\,y\,z\,,$

then $$f$$ remains in SOP form, although it is not an SOP normal form any longer. More importantly, the SOP form is smaller than the SOP normal form, because the substitution replaces two minterms with one product, which has one literal less than each of the minterms. Merging two minterms into one product term removes the variable that appears in both complemented and uncomplemented form and retains the common literals in the product term of the merged subcube. In the example, we merge the minterms in the $$x$$-dimension, so that the $$x$$-literals disappear and product $$\overline{y}\,z$$ with the common literals represents the merged subcube. This example generalizes to higher dimensional subcubes. From the geometric point of view, the key observation is that merging two adjacent $$(n-1)$$-cubes into an $$n$$-cube removes the literals of the merged dimension. The larger the dimensionality of the cube, the fewer literals the associated product has.

#### K-map Minimization Method

We assume that a Boolean function $$f$$ is given in SOP normal form. Then, the K-map minimization method consists of three steps:

1. Mark the on-set of $$f$$ in the K-map with 1’s. Leave the cells of the off-set unmarked.
2. Identify the maximal subcubes of the on-set.
3. Identify the minimal cover of the on-set as a subset of the maximal subcubes.

We explain the K-map minimization method by means of examples. In Experiment 5.1, you can practice minimization with an interactive K-map.

Example 5.3: 2-Variable K-map Minimization

Consider function

$f(x,y) = x\,\overline{y} + \overline{x}\,y + x\,y$

in two variables, given in SOP normal form. The corresponding 2-cube below marks each 1-point associated with a minterm as a black vertex. In the K-map, we mark the cells of the minterms with value 1. This is step 1 of the K-map minimization method. In a 2-variable K-map no wrap-around is required to identify neighboring cells. We mark each application of the combining theorem in the K-map by encircling the adjacent cells corresponding to a 1-cube, as shown in both the 2-cube representation and the K-map. The combining theorem yields:

$\begin{eqnarray*} \overline{x}\,y + x\,y &=& y \\ x\,\overline{y} + x\,y &=& x\,. \end{eqnarray*}$

The first application merges the 1-cube in the $$x$$-dimension, resulting in a degenerate product $$y$$ that consists of a single literal only. The second application merges the $$y$$-dimension, resulting in the degenerate product $$x.$$ Note that we use minterm $$x\,y$$ in both applications of the combining theorem. In terms of Boolean algebra, we may express the minimization procedure as the transformation:

$\begin{eqnarray*} f(x,y) &= &x\,\overline{y} + \overline{x}\,y + x\,y & \\ &= &(x\,\overline{y} + x\,y) + (\overline{x}\,y + x\,y)\qquad & \text{by idempotence}\\ &= &x + y & \text{by combining}\,. \end{eqnarray*}$

This algebraic transformation has a simple graphical equivalent that is easy to remember by itself. Merge neighboring 1-cells in the K-map, such that the resulting encircled cells form a subcube. A subcube in the K-map is a rectangle, perhaps a square of cells, and the number of encircled cells is a power of 2.
The subcube represents a product term that we read off the K-map by including those coordinates that are unchanged. For example, the 1-cube in column $$x=1$$ covers both rows of the K-map, the top row where $$y=0$$ and the bottom row, where $$y=1.$$ Since $$x$$ is unchanged, the subcube represents the degenerate product $$x.$$ Analogously, the 1-cube in row $$y=1$$ covers both columns of the K-map and represents the degenerate product $$y.$$ These two 1-cubes are the maximal subcubes, because extending the circles to cover the next larger cube, a 2-cube with four cells, would include the 0-point of cell $$\overline{x}\,\overline{y}.$$ We have found two maximal subcubes, as required in step 2 of the K-map minimization method. The algebraic transformation shows that we need to form the sum (disjunction) of the products representing the maximal subcubes to obtain the minimal SOP form $$f(x,y) = x + y.$$ In this particular example, the two subcubes together cover all 1-cells of the on-set. In general, we wish to find the smallest subset of maximal subcubes that covers all 1-cells of the K-map. This is step 3 of the K-map minimization method. Note that $$f(x,y) = x + y$$ can be implemented with a single OR gate. Since no AND gates are required, we view the circuit as a degenerate two-level circuit that requires one level of logic gates only.

Example 5.4: 3-Variable K-map Minimization

We minimize the Boolean function of Example 5.2:

$f(x, y, z) = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + x\,\overline{y}\,z + x\,y\,z\,.$

The corresponding 3-cube and the K-map are shown below. The K-map has three maximal subcubes, each of which is a 1-cube that covers two neighboring 1-cells.
Reading the products associated with the 1-cubes off the K-map, we find that the blue 1-cube changes in the $$x$$-dimension along the toroidal wrap-around and represents product $$\overline{y}\,z.$$ The red 1-cube changes in the $$y$$-dimension and represents product $$xz.$$ The green 1-cube changes in the $$z$$-dimension and represents product $$\overline{x}\,\overline{y}.$$ Step 3 of the K-map minimization method turns out to be slightly trickier in this example. If we form the sum of the products representing the maximal subcubes, we obtain the SOP form $f(x,y,z) = \overline{x}\,\overline{y} + x z + \overline{y}\,z\,.$ This SOP form has cost $$\mathcal{C}(f) = 3 + (2 + 2 + 2) = 9.$$ Notice in the K-map, however, that the blue subcube covers two 1-cells that are also covered by the other two subcubes. In particular, both the blue and the green subcubes cover the 1-cell of minterm $$\overline{x}\,\overline{y}\,z,$$ and the blue and the red subcubes both cover the 1-cell of minterm $$x\,\overline{y}\,z.$$ We conclude that the blue subcube is redundant, because the red and green subcubes include all 1-cells that the blue subcube covers. Therefore, a smaller SOP form for $$f$$ is $f(x,y,z) = \overline{x}\,\overline{y} + x z$ with cost $$\mathcal{C}(f) = 2 + (2 + 2) = 6.$$ Since the red subcube is the only maximal subcube to cover the 1-cell of minterm $$xyz$$ and the green subcube is the only maximal subcube to cover the 1-cell of minterm $$\overline{x}\,\overline{y}\,\overline{z},$$ these two subcubes cannot be removed without changing the logical function. We say that a maximal subcube is essential if it is the only one to cover a 1-cell in a K-map. The minimal cover must include all essential maximal subcubes. Since the green subcube $$\overline{x}\,\overline{y}$$ and the red subcube $$x z$$ are essential maximal subcubes, we conclude that $$f(x,y,z) = \overline{x}\,\overline{y} + x z$$ is the minimal SOP form. 
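Step 2 of the method, finding the maximal subcubes, can also be automated. The following Python sketch (an illustration of the idea, not the textbook's procedure) enumerates all candidate products over $$\{x, y, z\}$$ and keeps those whose subcube lies inside the on-set and cannot be enlarged; for the function of Example 5.4 it reports exactly the three maximal subcubes $$\overline{x}\,\overline{y},$$ $$x z,$$ and $$\overline{y}\,z$$:

```python
from itertools import product

# On-set of Example 5.4, minterms as (x, y, z) bit tuples:
# x'y'z', x'y'z, x y'z, x y z
on_set = {(0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 1)}

def covers(pattern, point):
    """A pattern over {0, 1, None} covers a point if every fixed bit matches;
    None marks a variable eliminated by the combining theorem."""
    return all(p is None or p == b for p, b in zip(pattern, point))

def is_implicant(pattern):
    """The subcube of an implicant lies entirely inside the on-set."""
    return all(pt in on_set
               for pt in product((0, 1), repeat=3) if covers(pattern, pt))

def primes():
    """Maximal subcubes: implicants that cannot grow by freeing a variable."""
    for pat in product((0, 1, None), repeat=3):
        if not is_implicant(pat):
            continue
        grown = (pat[:i] + (None,) + pat[i+1:]
                 for i in range(3) if pat[i] is not None)
        if not any(is_implicant(g) for g in grown):
            yield pat

def to_product(pat, names="xyz"):
    return "".join(n + ("'" if v == 0 else "")
                   for n, v in zip(names, pat) if v is not None)

print(sorted(to_product(p) for p in primes()))  # ["x'y'", 'xz', "y'z"]
```

Selecting the minimal cover (step 3) from these candidates is the part that still needs judgment, or a covering algorithm, as the example shows.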
The implementation is a two-level circuit with two AND gates and one OR gate. Note that $$\overline{y}\,z$$ is the consensus of $$\overline{x}\,\overline{y}$$ and $$x z.$$ Example 5.5: 3-Variable K-map Minimization We wish to minimize Boolean function $f(x, y, z) = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + \overline{x}\,y\,z + x\,\overline{y}\,\overline{z} + x\,\overline{y}\,z + x\,y\,z\,.$ The corresponding 3-cube and K-map are shown below. Identifying the maximal subcubes in the K-map yields seven 1-cubes that we can extend into two 2-cubes. The red subcube covers the four 1-cells in the bottom row, and the blue subcube is the square of four 1-cells wrapping around horizontally. In the 3-cube, the two 2-cubes correspond to the bottom and rear faces. The red subcube changes in the $$x$$ and $$y$$-dimensions but not in the $$z$$-dimension, and represents the degenerate product $$z.$$ The blue subcube does not change in the $$y$$-dimension, and represents the degenerate product $$\overline{y}.$$ Both subcubes are maximal and essential. Therefore, the minimal SOP form is $$f(x,y,z) = \overline{y} + z.$$ If we insist on minimizing a Boolean function algebraically rather than using a K-map, consider the following Gedankenexperiment.
We can derive the minimization steps due to the combining theorem in the reverse order starting with the minimal SOP form, which is equivalent to performing the Shannon expansion, plus subsequent removal of duplicate minterms: $\begin{eqnarray*} f(x,y,z) &=& \overline{y} + z & \\ &=& (\overline{x}\,\overline{y} + x\,\overline{y}) + (\overline{x}\,z + x z) & \text{by combining with}\ x \\ &=& \bigl((\overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z) + (x\,\overline{y}\,\overline{z} + x\,\overline{y}\,z)\bigr) + \bigl((\overline{x}\,\overline{y}\,z + \overline{x}\,y\,z) + (x\,\overline{y}\,z + x\,y\,z)\bigr)\qquad & \text{by combining with}\ z\ \text{and}\ y \\ &=& \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + \overline{x}\,y\,z + x\,\overline{y}\,\overline{z} + x\,\overline{y}\,z + x\,y\,z & \text{by idempotence}\,. \end{eqnarray*}$ For logic minimization by means of Boolean algebra, we start with the last equality, the SOP normal form, and proceed in the opposite direction. First, we would duplicate minterms $$\overline{x}\,\overline{y}\,z$$ and $$x\,\overline{y}\,z$$ by idempotence. Then, we would merge minterms into 1-cubes by applying the combining theorem four times. At last, we would merge 1-cubes into 2-cubes by applying the combining theorem twice. What appears like divine foresight in the first step reduces to craftsmanship in the K-map method. We conclude that the K-map method suits digital circuit designers who have not acquired the powers of an oracle yet. Example 5.6: 3-Variable K-map Minimization We wish to minimize Boolean function $f(x, y, z) = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + \overline{x}\,y\,\overline{z} + x\,\overline{y}\,z + x\,y\,\overline{z} + x\,y\,z\,.$ The K-map of $$f$$ is shown below. 
We find that $$f$$ consists of six 1-cubes $$\overline{x}\,\overline{y},$$ $$\overline{x}\,\overline{z},$$ $$y\,\overline{z},$$ $$x y,$$ $$x z,$$ and $$\overline{y} z.$$ All 1-cubes are maximal subcubes. Step 3 of the K-map minimization method, finding the minimal cover, is even trickier than in Example 5.4, because none of the maximal subcubes is essential. Every 1-cube covers 1-cells that are also covered by other 1-cubes. Thus, there is no obvious choice for the minimal cover. Instead, this function has two minimal covers $\begin{eqnarray*} f_1(x,y,z) &=& \overline{x}\,\overline{z} + x y + \overline{y} z\,, \\ f_2(x,y,z) &=& \overline{x}\,\overline{y} + y\,\overline{z} + x z\,, \end{eqnarray*}$ where $$f_1$$ contains the blue subcubes and $$f_2$$ the red subcubes. There exists no smaller cover, because the red and blue subsets of the set of maximal subcubes cover each 1-cell of the K-map exactly once. Both covers have minimal cost $$\mathcal{C}(f_1) = \mathcal{C}(f_2) = 3 + (2 + 2 + 2) = 9,$$ and can be implemented with two-level circuits using three AND gates and one OR gate. If the minimal cover is not unique, the cost function provides no further guidance other than choosing one of them arbitrarily. Example 5.7: 4-Variable K-map Minimization We wish to minimize 4-variable function $f(w,x,y,z) = \overline{w}\,\overline{x}\,\overline{y}\,\overline{z} + \overline{w}\,\overline{x}\,y\,\overline{z} + \overline{w}\,\overline{x}\,y\,z + \overline{w}\,x\,\overline{y}\,\overline{z} + \overline{w}\,x\,y\,\overline{z} + \overline{w}\,x\,y\,z + w\,\overline{x}\,\overline{y}\,\overline{z} + w\,\overline{x}\,y\,\overline{z} + w\,\overline{x}\,y\,z + w\,x\,y\,\overline{z} + w\,x\,y\,z\,.$ The 4-variable K-map is shown below. A 4-variable K-map wraps around horizontally and vertically. The blue subcube $$y$$ is a 3-cube. The green subcube $$\overline{w}\,\overline{z}$$ is a 2-cube that wraps around vertically. 
The red 2-cube $$\overline{x}\,\overline{z}$$ is the only subcube of a 4-variable K-map that wraps around horizontally and vertically. All three subcubes are maximal and essential. Therefore, the minimal cover is unique and yields SOP form: $f(w,x,y,z) = y + \overline{w}\,\overline{z} + \overline{x}\,\overline{z}\,.$ Overlooking the red subcube of the four corner cells is a common beginner’s mistake. Experiment 5.1: Interactive K-map You may use this K-map to minimize Boolean functions with three or four variables in three steps: (1) define the function by selecting the 1-cells of the on-set, (2) group the 1-cells into prime implicants (maximal subcubes), and (3) choose the prime implicants for the minimal cover. 1. Define function 2. Specify prime implicants 3. Determine minimal cover 5.2 Consider 3-variable function $$f(x, y, z) = \overline{x}\,y\,\overline{z} + \overline{(x + y)} + x\,z.$$ 1. Identify all maximal subcubes (also called prime implicants). 2. Identify the essential maximal subcubes (also called essential prime implicants). 3. Determine all minimal covers of $$f$$. 1. We wish to represent $$f$$ in a K-map to identify all maximal subcubes. To that end, we need to determine all minterms of the on-set of $$f$$. First, we apply De Morgan’s theorem to transform $$f$$ into sum-of-products form: $\begin{eqnarray*} f(x, y, z) &=& \overline{x}\,y\,\overline{z} + \overline{(x + y)} + x\,z \\ &=& \overline{x}\,y\,\overline{z} + \overline{x}\,\overline{y} + x\,z\,. \end{eqnarray*}$ Then, we apply Shannon expansions to transform each product into minterms. Note that the first product is a minterm already. We expand the second product about $$z$$ and the third product about $$y$$: $\begin{eqnarray*} f(x, y, z) &=& \overline{x}\,y\,\overline{z} + \overline{x}\,\overline{y} + x\,z \\ &=& \overline{x}\,y\,\overline{z} + (\overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z) + (x\,\overline{y}\,z + x\,y\,z)\,.
\end{eqnarray*}$ The K-map with the five minterms of the on-set and all maximal subcubes is: The maximal subcubes are the orange subcube $$\overline{x}\,\overline{z},$$ the green subcube $$\overline{x}\,\overline{y},$$ the blue subcube $$\overline{y}\,z,$$ and the red subcube $$x\,z.$$ 2. The essential maximal subcubes are the orange and the red subcubes $$\overline{x}\,\overline{z}$$ and $$x\,z.$$ The green and blue subcubes are not essential. They are redundant prime implicants. 3. The minimal cover of $$f$$ must include all essential subcubes, here the orange and red subcubes. These subcubes leave a single 1-cell uncovered, the 1-cell in the bottom left corner. To cover this 1-cell, we may use either the green or the blue subcube. Since they incur the same cost, function $$f$$ has two minimal covers: $\begin{eqnarray*} f(x, y, z) &=& \overline{x}\,\overline{z} + x\,z + \overline{x}\,\overline{y} \\ &=& \overline{x}\,\overline{z} + x\,z + \overline{y}\,z\,. \end{eqnarray*}$ Both minimal covers have cost 9. 5.3 Consider the SOP normal forms: $\begin{eqnarray*} f(A,B) &=& \overline{A}\,\overline{B} + \overline{A}\,B + A\,B \\ g(A,B,C) &=& \overline{A}\,\overline{B}\,C + \overline{A}\,B\,C + A\,B\,\overline{C} + A\,B\,C \\ h(A,B,C,D) &=& \overline{A}\,\overline{B}\,\overline{C}\,\overline{D} + A\,B\,\overline{C}\,\overline{D} + A\,\overline{B}\,\overline{C}\,\overline{D} + \overline{A}\,\overline{B}\,C\,\overline{D} + A\,B\,C\,\overline{D} + A\,\overline{B}\,C\,\overline{D} + A\,\overline{B}\,C\,D + A\,B\,C\,D \end{eqnarray*}$ 1. Identify the on-sets in the associated $$n$$-cubes. 2. Represent the functions by means of K-maps. 3. Minimize the functions: 1. identify all maximal subcubes (also called prime implicants), 2. identify the essential maximal subcubes (also called essential prime implicants), 3. find a minimal cover and its associated SOP expression. 4. Determine the cost reductions due to two-level minimization. 1. 
Each minterm of an SOP normal form represents an element of the on-set of a function. The on-set of function $$f(A,B)$$ is $\text{on-set}(f) = \{ \overline{A}\,\overline{B}\,, \overline{A}\,B\,, A\,B \}\,.$ Since $$f$$ is a function in two variables, $$A$$ and $$B,$$ the on-set of $$f$$ is the set of 1-points of $$f,$$ which is a subset of the vertices of a 2-cube. In the figure below, we draw the three 1-points of $$f(A,B)$$ as black vertices, and the 0-point as a circle. The on-sets of $$g(A,B,C)$$ on the 3-cube and $$h(A,B,C,D)$$ on the 4-cube are drawn analogously. 2. The $$n$$-cubes in (a) have the corresponding K-maps shown below. We have marked the 1-cells associated with the minterms of the functions only. Blank cells represent 0-points. 3. We apply the K-map method to minimize the functions, starting with $$f(A,B).$$ The K-map of function $$f$$ on the right contains two prime implicants. They are the largest subcubes of the 2-cube that contains $$f.$$ The subcubes of a 2-cube are 1-cubes represented by two adjacent cells in the K-map, and 0-cubes represented by a single cell of the K-map. The blue prime implicant (1-cube) consists of the column where $$A = 0,$$ and the red prime implicant (1-cube) consists of the row where $$B = 1.$$ Therefore, the prime implicants of $$f$$ are $$\overline{A}$$ and $$B.$$ Both prime implicants are essential, because the blue prime implicant $$\overline{A}$$ is the only one to cover 1-cell $$\overline{A}\,\overline{B}$$ and the red prime implicant $$B$$ is the only one to cover 1-cell $$A\,B.$$ Therefore, both prime implicants must be part of the minimal cover. Since together both prime implicants cover all 1-cells of $$f,$$ we have found the minimal cover: $f(A,B) = \overline{A} + B\,.$ As a 3-variable function, $$g(A,B,C)$$ occupies a 3-cube. The K-map on the right shows three prime implicants, each of which covers two adjacent 1-cells and hence corresponds to a 1-cube.
The next larger subcube would be a 2-cube of four adjacent 1-cells that correspond to the four corners of one of the six faces of the 3-cube shown in (a). The four 1-cells of $$g$$ do not form a 2-cube, though. The blue prime implicant covers 1-cells where $$C=1$$ and $$A=0.$$ Variable $$B$$ changes, and can be eliminated by the combining theorem. Thus, the blue prime implicant is $$\overline{A}\,C.$$ The red prime implicant is $$A\,B,$$ because it covers 1-cells where $$A=1$$ and $$B=1,$$ while $$C$$ changes. Similarly, the green prime implicant is $$B\,C,$$ because it covers two 1-cells where $$B=1$$ and $$C=1$$ while $$A$$ changes. The blue prime implicant is essential, because it is the only one to cover 1-cell $$\overline{A}\,\overline{B}\,C.$$ The red prime implicant is essential too, because it is the only prime implicant to cover 1-cell $$A\,B\,\overline{C}.$$ In contrast, the green prime implicant is not essential because both 1-cells it covers are covered by another prime implicant as well. Therefore, the blue and red essential prime implicants must be part of the minimal cover. Since the blue and red prime implicants cover all 1-cells of $$g,$$ they are also sufficient for the minimal cover: $g(A,B,C) = A\,B + \overline{A}\,C\,.$ Function $$h(A,B,C,D)$$ is a 4-variable function. The K-map on the right identifies three prime implicants. Each prime implicant covers four 1-cells, and corresponds to a 2-cube. The blue prime implicant covers 1-cells where $$A = 1$$ and $$C = 1,$$ while $$B$$ and $$D$$ change. Two applications of the combining theorem are necessary to eliminate variables $$B$$ and $$D$$ from the sum of minterms, and arrive at prime implicant $$A\,C.$$ The red prime implicant wraps around vertically, and covers 1-cells where $$A=1$$ and $$D=0$$ while $$B$$ and $$C$$ change. Therefore, the red prime implicant is $$A\,\overline{D}.$$ The green prime implicant wraps around in both directions, vertically and horizontally.
It covers 1-cells where $$B=0$$ and $$D=0$$ while $$A$$ and $$C$$ change. Thus, the green prime implicant is $$\overline{B}\,\overline{D}.$$ We notice that all three prime implicants are essential, because each of them covers at least one 1-cell that no other prime implicant covers. Therefore, the minimal cover of $$h$$ is $h(A,B,C,D) = A\,C + A\,\overline{D} + \overline{B}\,\overline{D}\,.$ 4. The cost reduction of the minimization is the difference of the costs of the SOP normal form and the minimal cover. The cost of an SOP form, normal or not, is the number of products plus the number of literals of all products. For function $$f,$$ we save 5 units of cost: $\begin{eqnarray*} \mathcal{C}(\text{sop}(f)) &=& 3 + 3 \cdot 2 = 9 \\ \mathcal{C}(\text{mc}(f)) &=& 2 + 2 \cdot 1 = 4 \\ \Delta \mathcal{C}(f) &=& 9 - 4\ =\ 5\,. \end{eqnarray*}$ Minimization of function $$g$$ saves 10 units of cost: $\begin{eqnarray*} \mathcal{C}(\text{sop}(g)) &=& 4 + 4 \cdot 3 = 16 \\ \mathcal{C}(\text{mc}(g)) &=& 2 + 2 \cdot 2 = 6 \\ \Delta \mathcal{C}(g) &=& 16 - 6\ =\ 10\,. \end{eqnarray*}$ The largest savings occur for function $$h$$: $\begin{eqnarray*} \mathcal{C}(\text{sop}(h)) &=& 8 + 8 \cdot 4 = 40 \\ \mathcal{C}(\text{mc}(h)) &=& 3 + 3 \cdot 2 = 9 \\ \Delta \mathcal{C}(h) &=& 40 - 9\ =\ 31\,.
\end{eqnarray*}$ 5.4 Derive all minimal covers of Boolean function: $f(A,B,C,D) = \overline{A}\,\overline{B}\,\overline{C}\,\overline{D} + \overline{A}\,\overline{B}\,\overline{C}\,D + \overline{A}\,\overline{B}\,C\,\overline{D} + \overline{A}\,B\,\overline{C}\,\overline{D} + A\,\overline{B}\,\overline{C}\,D + A\,\overline{B}\,C\,\overline{D} + A\,\overline{B}\,C\,D + A\,B\,\overline{C}\,\overline{D} + A\,B\,\overline{C}\,D + A\,B\,C\,\overline{D}\,.$ Function $$f$$ has 12 prime implicants (maximal subcubes): $\overline{A}\,\overline{B}\,\overline{C},\ A\,\overline{B}\,C,\ A\,B\,\overline{C},\ \overline{A}\,\overline{B}\,\overline{D},\ A\,\overline{B}\,D,\ A\,B\,\overline{D},\ \overline{A}\,\overline{C}\,\overline{D},\ A\,\overline{C}\,D,\ A\,C\,\overline{D},\ \overline{B}\,\overline{C}\,D,\ \overline{B}\,C\,\overline{D},\ B\,\overline{C}\,\overline{D}\,,$ none of which is essential. There are 32 minimal covers with cost $$\mathcal{C}(f) = 24$$: $\begin{eqnarray*} f(A,B,C,D) &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,C + \overline{A}\,\overline{B}\,\overline{D} + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,\overline{C}\,D \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,C + \overline{A}\,\overline{B}\,\overline{D} + A\,B\,\overline{D} + A\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,C + \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{C}\,D + A\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,C + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,C + A\,B\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,C + 
A\,\overline{C}\,D + A\,C\,\overline{D} + \overline{B}\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{B}\,D + \overline{A}\,\overline{C}\,\overline{D} + A\,C\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{B}\,D + A\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,B\,\overline{C} + A\,\overline{B}\,D + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + \overline{B}\,C\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,B\,\overline{C} + A\,\overline{B}\,D + A\,B\,\overline{D} + \overline{B}\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,B\,\overline{C} + A\,\overline{B}\,D + \overline{A}\,\overline{C}\,\overline{D} + A\,C\,\overline{D} + \overline{B}\,C\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,B\,\overline{C} + A\,\overline{B}\,D + A\,C\,\overline{D} + \overline{B}\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{B}\,D + A\,\overline{C}\,D + A\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,D + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,D + A\,B\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{C} + A\,\overline{B}\,D + A\,\overline{C}\,D + A\,C\,\overline{D} + \overline{B}\,C\,\overline{D} + B\,\overline{C}\,\overline{D} \\ &=& A\,\overline{B}\,C + A\,B\,\overline{C} + 
\overline{A}\,\overline{B}\,\overline{D} + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + \overline{B}\,\overline{C}\,D \\ &=& A\,\overline{B}\,C + A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + A\,B\,\overline{D} + \overline{B}\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& A\,\overline{B}\,C + A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D \\ &=& A\,\overline{B}\,C + A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& A\,\overline{B}\,C + A\,B\,\overline{C} + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + \overline{B}\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& A\,\overline{B}\,C + A\,B\,\overline{C} + \overline{A}\,\overline{C}\,\overline{D} + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& A\,\overline{B}\,C + \overline{A}\,\overline{B}\,\overline{D} + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,\overline{C}\,D \\ &=& A\,\overline{B}\,C + \overline{A}\,\overline{B}\,\overline{D} + A\,B\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& A\,\overline{B}\,C + \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{C}\,D + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& A\,\overline{B}\,C + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{B}\,D + \overline{A}\,\overline{C}\,\overline{D} + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D \\ &=& A\,B\,\overline{C} + \overline{A}\,\overline{B}\,\overline{D} + 
A\,\overline{B}\,D + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& A\,B\,\overline{C} + A\,\overline{B}\,D + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + \overline{B}\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& A\,B\,\overline{C} + A\,\overline{B}\,D + \overline{A}\,\overline{C}\,\overline{D} + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D + \overline{B}\,C\,\overline{D} \\ &=& \overline{A}\,\overline{B}\,\overline{D} + A\,\overline{B}\,D + A\,\overline{C}\,D + A\,C\,\overline{D} + \overline{B}\,\overline{C}\,D + B\,\overline{C}\,\overline{D} \\ &=& A\,\overline{B}\,D + A\,B\,\overline{D} + \overline{A}\,\overline{C}\,\overline{D} + A\,\overline{C}\,D + \overline{B}\,\overline{C}\,D + \overline{B}\,C\,\overline{D}\,. \end{eqnarray*}$ #### Dual K-map Minimization Method The K-map minimization method can be used not only to determine a minimal SOP form but also a minimal POS form. The procedure is based on the dual combining theorem. From the geometric perspective, we combine subcubes of the off-set rather than subcubes of the on-set of a function. Recall function $$f(x, y, z)$$ of Example 5.2 with SOP normal form $f(x, y, z) = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + x\,\overline{y}\,z + x\,y\,z\,.$ The truth table below specifies $$f,$$ and lists both minterms and maxterms associated with the combination of input values in each row.
| $$x$$ | $$y$$ | $$z$$ | $$f$$ | minterm | maxterm |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | $$\overline{x}\,\overline{y}\,\overline{z}$$ | $$x + y + z$$ |
| 0 | 0 | 1 | 1 | $$\overline{x}\,\overline{y}\,z$$ | $$x + y + \overline{z}$$ |
| 0 | 1 | 0 | 0 | $$\overline{x}\,y\,\overline{z}$$ | $$x + \overline{y} + z$$ |
| 0 | 1 | 1 | 0 | $$\overline{x}\,y\,z$$ | $$x + \overline{y} + \overline{z}$$ |
| 1 | 0 | 0 | 0 | $$x\,\overline{y}\,\overline{z}$$ | $$\overline{x} + y + z$$ |
| 1 | 0 | 1 | 1 | $$x\,\overline{y}\,z$$ | $$\overline{x} + y + \overline{z}$$ |
| 1 | 1 | 0 | 0 | $$x\,y\,\overline{z}$$ | $$\overline{x} + \overline{y} + z$$ |
| 1 | 1 | 1 | 1 | $$x\,y\,z$$ | $$\overline{x} + \overline{y} + \overline{z}$$ |

The POS normal form is the product of those maxterms where $$f(x,y,z) = 0$$: $f(x, y, z) = (x + \overline{y} + z)\,(x + \overline{y} + \overline{z})\,(\overline{x} + y + z)\,(\overline{x} + \overline{y} + z)\,.$ Function $$f$$ assumes value 0 if one of its maxterms is 0. Since the on-set of a function is the set of 1-points, the on-set of $$f$$ is the set of maxterms that do not appear in the POS normal form: on-set $$= \{x + y + z,\ x + y + \overline{z},\ \overline{x} + y + \overline{z},\ \overline{x} + \overline{y} + \overline{z}\}\,,$$ and the off-set is the set of maxterms that do appear in the POS normal form: off-set $$= \{x + \overline{y} + z,\ x + \overline{y} + \overline{z},\ \overline{x} + y + z,\ \overline{x} + \overline{y} + z\}\,.$$ From the geometric point of view, the maxterms are the 0-points in the 3-cube representation of $$f.$$ The dual combining theorem applies to adjacent 0-points, for instance $(x + \overline{y} + z)\,(\overline{x} + \overline{y} + z) = \overline{y} + z\,.$ We interpret the application of the combining theorem as merging two adjacent 0-points in the $$x$$-dimension into 1-cube $$\overline{y} + z$$ by eliminating the $$x$$-literals in complemented and uncomplemented form and retaining the common literals.
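As with the combining theorem for products, the dual merge can be verified exhaustively. A small Python sketch (ours, for illustration) checks the identity above on all eight input combinations:

```python
from itertools import product

# Dual combining theorem applied to two adjacent maxterms:
# (x + y' + z)(x' + y' + z) = y' + z
lhs = lambda x, y, z: (x or not y or z) and (not x or not y or z)
rhs = lambda x, y, z: (not y) or z

# Both sides agree on every vertex of the 3-cube.
assert all(lhs(*p) == rhs(*p) for p in product((False, True), repeat=3))
print("(x + y' + z)(x' + y' + z) == y' + z")
```

The product of the two sums is 0 exactly where $$y=1$$ and $$z=0,$$ regardless of $$x,$$ which is why the $$x$$-literals drop out.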
The geometric interpretation is identical to the interpretation in the K-map minimization method, except that we perform the minimization on the off-set of $$f$$ rather than the on-set. Since the K-map minimization method is independent of the interpretation of its cells as minterms or maxterms, we can use the K-map minimization method as is to derive the minimal SOP or the minimal POS form. Example 5.8: Dual 3-Variable K-map Minimization In Example 5.4 we derive the minimal SOP form of Boolean function $f(x, y, z) = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z + x\,\overline{y}\,z + x\,y\,z\,.$ Here, we derive the minimal POS form. The off-set of $$f$$ consists of its 0-points. In the 3-cube, the 0-points are the vertices marked with circles. Since $$f$$ is given in SOP normal form, we may populate the K-map by first marking each cell associated with a minterm of $$f$$ with a 1, and then all remaining cells with a 0. After erasing the 1’s, we obtain the 0-cells of the off-set shown below. Note that K-map coordinates of a maxterm correspond to complemented literals. For example, maxterm $$x + \overline{y} + z$$ has coordinates $$x=0,$$ $$y=1,$$ and $$z=0.$$ According to step 2 of the K-map minimization method we find the maximal subcubes of the off-set. We read the sums associated with the three 1-cubes off the K-map. The blue 1-cube changes in the $$x$$-dimension. Since the unchanged coordinates are $$y=1$$ and $$z=0,$$ the 1-cube represents sum $$\overline{y} + z.$$ The red 1-cube changes in the $$y$$-dimension, and represents sum $$\overline{x} + z.$$ The green 1-cube changes in the $$z$$-dimension, and represents sum $$x + \overline{y}.$$ Following step 3 of the K-map minimization method we determine the minimal cover by observing that the red and green subcubes are essential, because they are the only subcubes covering one 0-cell each. 
In contrast, the blue subcube is redundant, because it covers no 0-cell that is not also covered by another subcube. Since the essential red and green subcubes cover the entire off-set, they form the unique minimal cover with corresponding minimal POS form $f(x,y,z) = (x + \overline{y})\,(\overline{x} + z)\,.$ This POS form minimizes the cost among all POS forms. The cost of a POS expression is the number of literals plus the number of sums. Thus, the cost of the minimal POS form of $$f$$ is $$\mathcal{C}(f) = 2 + (2 + 2) = 6.$$ The resulting circuit is a two-level OR-AND circuit with two OR gates and one AND gate. Note that the minimal SOP form in Example 5.4 has the same cost. In general, the costs of the minimal forms differ as the next example demonstrates. Example 5.9: Dual 4-Variable K-map Minimization We derive the minimal POS form for Boolean function $f(w,x,y,z) = \overline{w}\,\overline{x}\,\overline{y}\,\overline{z} + \overline{w}\,\overline{x}\,y\,\overline{z} + \overline{w}\,\overline{x}\,y\,z + \overline{w}\,x\,\overline{y}\,\overline{z} + \overline{w}\,x\,y\,\overline{z} + \overline{w}\,x\,y\,z + w\,\overline{x}\,\overline{y}\,\overline{z} + w\,\overline{x}\,y\,\overline{z} + w\,\overline{x}\,y\,z + w\,x\,y\,\overline{z} + w\,x\,y\,z$ of Example 5.7, whose unique minimal SOP form has cost $$\mathcal{C}(f) = 3 + (1 + 2 + 2) = 8.$$ The 4-variable K-map below shows the maximal subcubes of the off-set of $$f.$$ The blue subcube is a 2-cube that is unchanged in the $$y$$-dimension and the $$z$$-dimension. The corresponding sum is $$y + \overline{z}.$$ The red 1-cube changes in the $$z$$-dimension, and represents sum $$\overline{w} + \overline{x} + y.$$ Both subcubes are maximal and essential.
Therefore, the minimal cover is unique, and the minimal POS form is $f(w,x,y,z) = (y + \overline{z})\,(\overline{w} + \overline{x} + y)\,.$ The cost of the minimal POS form is $$\mathcal{C}(f) = 2 + (2 + 3) = 7,$$ which is 1 less than the cost of the minimal SOP form. We conclude that the minimal POS form is superior to the minimal SOP form with respect to the cost metric of two-level logic minimization. The corresponding OR-AND circuit requires three gates, and is smaller than the AND-OR circuit with four gates. Two-level logic minimization is a powerful tool for digital circuit designers. However, Example 5.9 emphasizes that the K-map minimization method solves a rather narrowly posed problem. If we wish to find the smallest two-level circuit of a Boolean function, we have the choice between an OR-AND and an AND-OR form. To find the smaller one, we need to solve two logic minimization problems, one to derive the minimal SOP form and the other to derive the minimal POS form, and then compare the costs of the minimal forms. There exist Boolean functions where logic minimization is powerless. The XOR and XNOR functions are examples, where the minimal SOP equals the SOP normal form, the minimal POS equals the POS normal form, and the costs of both forms are equal. The 3-input XOR or parity function $$P(x,y,z) = x \oplus y \oplus z,$$ for instance, has these K-maps for the on-set shown on the left and for the off-set on the right: All maximal subcubes are 0-cubes covering a single minterm or maxterm only. Both minimal two-level forms, SOP and POS, have cost $$\mathcal{C}(P) = 16,$$ which is the maximum cost of the minimal two-level forms among all 3-variable functions. This is a sobering indication of why small and fast XOR gates are hard to design. #### Incompletely Specified Functions An incompletely specified function is unspecified for a subset of its input combinations. There are two common design scenarios that lead to incompletely specified functions.
First, we may define a function assuming that its inputs are constrained. For example, we may define a 3-variable function $$f(x,y,z)$$ assuming that $$y = \overline{x},$$ because we do not want to include the negation of $$x$$ inside the circuit module for $$f.$$ Thus, we assume that input combinations $$x=y=0$$ and $$x=y=1$$ never occur, so we do not need to define the corresponding function values. In fact, since we do not care whether the unspecified function values are 0 or 1, we introduce symbol $$X$$ to denote a don’t care value. Interpret $$X$$ as a Boolean wildcard character that we may replace with constant 0 or 1 as we please. The second common design scenario enables us to exclude certain input combinations from a function, because we can guarantee that the driver circuits never output these input combinations. We illustrate this scenario by designing a seven-segment display decoder below. Incompletely specified functions introduce flexibility into the circuit design process that we exploit when performing logic minimization. More precisely, an incompletely specified function with $$k$$ don’t cares can be viewed as a set of $$2^k$$ distinct completely specified Boolean functions. Among the minimal two-level forms of all these functions we wish to find the minimal one. K-maps support this extended minimization problem almost effortlessly. During step 1 of the K-map minimization method we mark all unspecified cells with don’t care symbol $$X.$$ The decisive difference occurs in step 2. When maximizing the subcubes, we interpret the $$X$$‘s either as 0 or 1, so as to increase the size of the subcubes where possible. Covering a don’t care cell with a subcube binds its value in the minimal cover to 0 or 1, so that the resulting two-level form is a completely specified Boolean function. We illustrate the logic minimization of incompletely specified functions with the K-map minimization method by designing a decoder for a seven-segment display.
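The view of an incompletely specified function as a set of completions is easy to make concrete. The sketch below uses a hypothetical 3-variable specification with two don't cares (the `spec` table is our own example, not from the text) and enumerates its $$2^2 = 4$$ completely specified functions:

```python
from itertools import product

# Hypothetical incompletely specified function: map each input (x, y, z)
# to 0, 1, or None, where None stands for the don't care symbol X.
spec = {
    (0, 0, 0): 1, (0, 0, 1): None, (0, 1, 0): 0,    (0, 1, 1): 0,
    (1, 0, 0): 0, (1, 0, 1): 1,    (1, 1, 0): None, (1, 1, 1): 1,
}

dont_cares = sorted(p for p, v in spec.items() if v is None)

# Replacing each X by 0 or 1 yields one completely specified function,
# so k don't cares give 2**k candidate functions for minimization.
completions = []
for choice in product((0, 1), repeat=len(dont_cares)):
    g = dict(spec)
    g.update(zip(dont_cares, choice))
    completions.append(g)

print(len(completions))  # 2**2 = 4
```

Two-level minimization over an incompletely specified function amounts to picking, among these completions, one whose minimal cover is cheapest; the K-map does this implicitly by letting each $$X$$ join a subcube only when that enlarges it.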
If you have a digital wristwatch or alarm clock, you probably have a seven-segment display. Each of the ten decimal digits is displayed by illuminating the corresponding subset of its seven segments $$S_0, S_1, \ldots, S_6$$: The input of a seven-segment decoder is a 4-bit BCD code word, short for binary coded decimal, and the outputs are seven signals $$S_0, S_1, \ldots, S_6,$$ one for each segment of the display. If a segment signal is 1, the segment illuminates. The BCD code represents the decimal digits with their binary number. Since 10 digits require $$\lceil\lg 10\rceil = 4$$ bits in binary representation, the BCD code uses the first ten of the sixteen 4-bit binary numbers only:

| decimal | BCD |
|---|---|
| 0 | 0 0 0 0 |
| 1 | 0 0 0 1 |
| 2 | 0 0 1 0 |
| 3 | 0 0 1 1 |
| 4 | 0 1 0 0 |
| 5 | 0 1 0 1 |
| 6 | 0 1 1 0 |
| 7 | 0 1 1 1 |
| 8 | 1 0 0 0 |
| 9 | 1 0 0 1 |

Our first step in the design of a combinational seven-segment decoder is the specification of a truth table. For each 4-bit input $$A = A_3\,A_2\,A_1\,A_0$$ in range $$[0,9],$$ we specify whether each of the outputs $$S_0, S_1, \ldots, S_6$$ is 0 or 1. For the remaining inputs, where $$A$$ is in range $$[10,15],$$ we don’t care whether the outputs are 0 or 1, silently assuming that the inputs are guaranteed to be legal BCD code words. Thus, we assign don’t cares to the corresponding outputs. We could also assign arbitrary 0 or 1 values, but that would restrict the logic minimization potential unnecessarily.

| decimal | $$A_3$$ | $$A_2$$ | $$A_1$$ | $$A_0$$ | $$S_0$$ | $$S_1$$ | $$S_2$$ | $$S_3$$ | $$S_4$$ | $$S_5$$ | $$S_6$$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 2 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 |
| 3 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 1 |
| 4 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 |
| 5 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 |
| 6 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 |
| 7 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 |
| 8 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 9 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 |
| 10 | 1 | 0 | 1 | 0 | X | X | X | X | X | X | X |
| 11 | 1 | 0 | 1 | 1 | X | X | X | X | X | X | X |
| 12 | 1 | 1 | 0 | 0 | X | X | X | X | X | X | X |
| 13 | 1 | 1 | 0 | 1 | X | X | X | X | X | X | X |
| 14 | 1 | 1 | 1 | 0 | X | X | X | X | X | X | X |
| 15 | 1 | 1 | 1 | 1 | X | X | X | X | X | X | X |

The second step of the design process is logic minimization.
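The specification is also easy to capture in code. As a sanity check (our own sketch, not part of the text's design flow), the snippet below encodes the $$S_0$$ column of the truth table and tests it against the four-term cover $$A_3 + A_2\,A_0 + \overline{A}_2\,\overline{A}_0 + A_1\,A_0$$ that the minimization in the following discussion arrives at; don't-care inputs 10–15 are simply skipped:

```python
# S0 column of the decoder truth table: digits 0-9; inputs 10-15 are don't cares
S0_spec = {0: 1, 1: 0, 2: 1, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1, 8: 1, 9: 1}

def s0_cover(a):
    """Candidate cover S0 = A3 + A2*A0 + ~A2*~A0 + A1*A0 for 4-bit input a."""
    a3, a2, a1, a0 = (a >> 3) & 1, (a >> 2) & 1, (a >> 1) & 1, a & 1
    return a3 | (a2 & a0) | ((1 - a2) & (1 - a0)) | (a1 & a0)

# The cover must agree with the specification on all legal BCD inputs.
print(all(s0_cover(a) == v for a, v in S0_spec.items()))  # True
```

The check passes because the don't cares give the cover complete freedom on inputs 10–15.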
The decoder has seven output functions, each of which we minimize independently of the others. Below, the 4-variable K-map on the left shows the on-set and the don’t cares of function $$S_0(A_0, A_1, A_2, A_3).$$ The K-map in the middle demonstrates the lack of minimization potential if we exclude the don’t care cells from the minimization process. In contrast, the K-map on the right exploits the don’t cares to minimize the cost of the SOP form. The K-map in the middle shows the maximal subcubes assuming that the don’t cares are not included in the on-set, effectively treating all don’t cares as 0-cells. We find two essential subcubes $$\overline{A}_3\,A_2\,A_0$$ and $$A_3\,\overline{A}_2\,\overline{A}_1,$$ and one of the three minimal covers results in minimal SOP form $$S_0 = \overline{A}_3\,A_2\,A_0 + A_3\,\overline{A}_2\,\overline{A}_1 + \overline{A}_3\,\overline{A}_2\,A_1 + \overline{A}_3\,\overline{A}_2\,\overline{A}_0$$ with cost $$\mathcal{C}(S_0) = 16.$$ If we take the liberty of replacing a don’t care with a 1 whenever it permits increasing the size of a subcube, then we obtain the maximal subcubes in the K-map shown on the right. We interpret don’t care cell $$A_3\,\overline{A}_2\,A_1\,\overline{A}_0$$ in the bottom right corner as a 1-cell, which allows us to grow 1-cube $$\overline{A}_3\,\overline{A}_2\,\overline{A}_0$$ into 2-cube $$\overline{A}_2\,\overline{A}_0$$ covering the four corner cells. Likewise, interpreting don’t care cell $$A_3\,\overline{A}_2\,A_1\,A_0$$ as a 1-cell permits growing 1-cube $$\overline{A}_3\,\overline{A}_2\,A_1$$ into 2-cube $$\overline{A}_2\,A_1$$ by wrapping around vertically.
Analogously, we grow 1-cube $$\overline{A}_3\,A_1\,A_0$$ into 2-cube $$A_1\,A_0,$$ 1-cube $$\overline{A}_3\,A_2\,A_0$$ into 2-cube $$A_2\,A_0,$$ and 1-cube $$A_3\,\overline{A}_2\,\overline{A}_1$$ into 3-cube $$A_3.$$ There are two minimal covers, one of which corresponds to minimal SOP form $$S_0 = A_3 + A_2\,A_0 + \overline{A}_2\,\overline{A}_0 + A_1\,A_0$$ with cost $$\mathcal{C}(S_0) = 11.$$ The cost reduction is due to the implicit replacement of all don’t cares with value 1. The K-maps for the other six output functions are shown below with a minimal cover representing a minimal SOP form. Note that in the K-maps of functions $$S_2,$$ $$S_4,$$ $$S_5,$$ and $$S_6$$ we interpret only a subset of the don’t care cells as 1-cells to obtain a minimal cover. The remaining don’t care cells are implicitly interpreted as 0-cells. We leave it as an exercise to determine the minimal POS forms for each output function of the seven-segment decoder.

5.5 We are given Boolean function $f(A,B,C,D) = A\,\overline{B}\,C\,\overline{D} + \overline{A}\,\overline{B}\,C\,\overline{D} + A\,\overline{B}\,\overline{C}\,\overline{D}$ with don’t care conditions: $A\,\overline{B}\,C\,D,\ \overline{A}\,\overline{B}\,C\,D,\ A\,\overline{B}\,\overline{C}\,D,\ \overline{A}\,\overline{B}\,\overline{C}\,D,\ \overline{A}\,B\,C\,D,\ \overline{A}\,B\,C\,\overline{D},\ A\,B\,\overline{C}\,\overline{D},\ \overline{A}\,B\,\overline{C}\,\overline{D}\,.$

1. Use a K-map to minimize $$f.$$
2. Determine the cost reduction due to minimization.

1. We begin by translating the minimization problem into a K-map. We note that $$f$$ is a 4-variable function, which requires the K-map of a 4-cube: Function $$f$$ is given in SOP normal form. Therefore, each of the three minterms specifies a 1-cell in the K-map. In addition, we are given don’t care conditions in the form of minterms. We mark the corresponding cells in the K-map with don’t care symbol X. Next, we determine the prime implicants (maximal subcubes).
We exploit the don’t care cells by interpreting them as 1-cells where convenient to enlarge a prime implicant. Our strategy is to start with a single 1-cell, and expand this 0-cube into the highest dimensional cube possible covering 1’s or X’s. For example, start with 1-cell $$\overline{A}\,\overline{B}\,C\,\overline{D}$$ in the top-right corner. We can expand this 0-cube into a 1-cube by including 1-cell $$A\,\overline{B}\,C\,\overline{D}$$ in the bottom-right corner. Then, we include the X-cells to the left to obtain the red prime implicant, which is a 2-cube. Alternatively, we may start with 1-cell $$\overline{A}\,\overline{B}\,C\,\overline{D},$$ and expand it into the blue prime implicant. The only other prime implicant is the green prime implicant that covers the bottom row, if we start the expansion from the 1-cell in the bottom-left or the bottom-right corner. Thus, the prime implicants of function $$f$$ are: $\overline{B}\,C\ \text{(red)},\ \overline{A}\,C\ \text{(blue)},\ A\,\overline{B}\ \text{(green)}\,.$ The green prime implicant is essential, because it is the only one to cover 1-cell $$A\,\overline{B}\,\overline{C}\,\overline{D}.$$ The blue prime implicant is not essential, because the only 1-cell it covers is also covered by the red prime implicant. Similarly, the red prime implicant is not essential, because the two 1-cells it covers are also covered by the blue and green prime implicants. Therefore, we find two minimal covers for function $$f$$: $\begin{eqnarray*} f(A,B,C,D) &=& A\,\overline{B} + \overline{B}\,C \\ &=& A\,\overline{B} + \overline{A}\,C\,. \end{eqnarray*}$ Note that the minimal covers fix all X’s to be either 1’s or 0’s. For example, minimal cover $$A\,\overline{B} + \overline{B}\,C$$ interprets the X-cells covered by the green and red prime implicants as 1-cells, i.e. 
forces don’t care conditions $$A\,\overline{B}\,\overline{C}\,D,$$ $$A\,\overline{B}\,C\,D,$$ and $$\overline{A}\,\overline{B}\,C\,D$$ into 1-cells, and interprets all other X-cells as 0-cells. In this interpretation the blue subcube would not be a prime implicant of $$f,$$ because it would cover two 0-cells.

2. The cost of function $$f$$ in SOP normal form is $\mathcal{C}(\text{sop}(f)) = 3 + 3 \cdot 4 = 15\,.$ Both minimal covers are SOP forms with cost $\mathcal{C}(\text{mc}(f)) = 2 + 2 \cdot 2 = 6\,.$ The cost reduction due to minimization is the difference $\Delta \mathcal{C}(f) = 15 - 6 = 9$ cost units.

#### Truth Table Compaction¶

When circuit designers don’t care about some of the output values of a combinational circuit, they exploit the don’t care outputs of the incompletely specified function to minimize the corresponding two-level circuits. There are also situations where circuit designers don’t care about some of the input values of a combinational circuit. In the following we show how to exploit don’t care inputs to reduce the number of rows of a truth table of a function. The more compact the truth table, the higher our chances to deduce the minimal two-level logic function without even drawing a K-map. We illustrate the use of truth table compaction by means of examples.

Example 5.10: Truth Table Compaction

We wish to design a combinational circuit with two data inputs $$A$$ and $$B,$$ plus an enable input $$EN,$$ such that output $$Y$$ computes the implication of $$A$$ and $$B$$ if the enable input is 1, and output $$Y$$ is constant 0 if the circuit is disabled, i.e. the enable input is 0. As a first step towards a combinational circuit, we specify the truth table to formalize the problem statement. Observe that we require that output $$Y=0$$ when the circuit is disabled.
Thus, we don’t care about inputs $$A$$ and $$B$$ when $$EN = 0.$$ Only when $$EN=1,$$ do we compute the implication of $$A$$ and $$B.$$ We use an asterisk $$*$$ to mark don’t care inputs $$A$$ and $$B$$ when $$EN=0$$ in the compact truth table:

| $$EN$$ | $$A$$ | $$B$$ | $$Y$$ |
|---|---|---|---|
| 0 | $$*$$ | $$*$$ | 0 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |

The $$*$$ symbol signifies that the input combination covers all Boolean values, 0 and 1, whereas don’t care output symbol $$X$$ denotes the choice of one of 0 or 1. The compact truth table enables us to conclude that the minimal SOP form for output $$Y$$ is $$Y = EN\cdot\overline{A} + EN\cdot B,$$ because $$Y$$ is 1 if $$EN=1$$ and if implication $$\overline{A} + B$$ is 1, i.e. if $$Y = EN \cdot (\overline{A} + B).$$ The latter expression is a minimal POS form that is even less costly than the minimal SOP form that we obtain by applying the distributivity theorem to distribute $$EN$$ over the disjunction. We double check this argument by expanding the compact truth table into a complete truth table and using the K-map minimization method to derive the minimal SOP form. To that end we translate the specification into a complete truth table without don’t care inputs. In all rows where $$EN=0$$ we set $$Y=0.$$

| $$EN$$ | $$A$$ | $$B$$ | $$Y$$ |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |

The 3-variable K-map of the on-set of $$Y$$ reveals two essential maximal subcubes. The unique minimal SOP form is $$Y = EN\cdot\overline{A} + EN\cdot B,$$ in accordance with our derivation from the compact truth table.

Example 5.11: Priority Encoder Minimization

We wish to design an 8-bit priority encoder. Recall that a priority encoder is a basic combinational circuit with $$n$$ inputs $$A_i$$ and $$n$$ outputs $$Y_i,$$ where $$0 \le i < n,$$ such that $\begin{split}Y_i = \begin{cases} 1\,, & \mbox{if}\ A_0 = \ldots = A_{i-1} = 0\ \mbox{and}\ A_i = 1\,, \\ 0\,, & \mbox{otherwise}.
\end{cases}\end{split}$ Designing a combinational circuit with $$n=8$$ inputs exceeds the applicability of the K-map minimization method. However, the minimization problem becomes tractable with a compact truth table. The key insight is that an input pattern with leading zeros, i.e. $$A_j = 0$$ for $$j < i$$ and $$A_i=1,$$ determines all outputs independent of the remaining input values $$A_j$$ for $$j > i.$$ In other words, we don’t care about the input values beyond the first 1. This insight translates into a compact truth table for the 8-bit priority encoder:

| $$A_0$$ | $$A_1$$ | $$A_2$$ | $$A_3$$ | $$A_4$$ | $$A_5$$ | $$A_6$$ | $$A_7$$ | $$Y_0$$ | $$Y_1$$ | $$Y_2$$ | $$Y_3$$ | $$Y_4$$ | $$Y_5$$ | $$Y_6$$ | $$Y_7$$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 0 | 0 | 0 | 0 | 0 | 0 | 1 | $$*$$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 0 | 0 | 0 | 0 | 0 | 1 | $$*$$ | $$*$$ | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 0 | 1 | $$*$$ | $$*$$ | $$*$$ | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 0 | 0 | 0 | 1 | $$*$$ | $$*$$ | $$*$$ | $$*$$ | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | $$*$$ | $$*$$ | $$*$$ | $$*$$ | $$*$$ | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | $$*$$ | $$*$$ | $$*$$ | $$*$$ | $$*$$ | $$*$$ | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | $$*$$ | $$*$$ | $$*$$ | $$*$$ | $$*$$ | $$*$$ | $$*$$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

The next step is to deduce the minimal output functions $$Y_i$$ for $$0 \le i < 8$$ from the compact truth table. We note that the column of output $$Y_0$$ is identical to column $$A_0,$$ and conclude that $$Y_0 = A_0.$$ This function corresponds to a degenerate two-level circuit without any logic gates. For output $$Y_1,$$ we notice that the function values are equal to $$A_1$$ where input $$A_0 = 0,$$ and are 0 where $$A_0 = 1,$$ for all combinations of input values of $$A_2, A_3, \ldots, A_7.$$ We conclude that output $$Y_1$$ is independent of variables $$A_j$$ for $$j>1.$$ Since $$Y_1$$ is 1 only if $$A_0 = 0$$ and $$A_1 = 1,$$ we know that $$Y_1$$ consists of one maximal subcube only, i.e. $$Y_1 = \overline{A}_0\,A_1.$$ The corresponding circuit is a degenerate two-level circuit consisting of a single 2-input AND gate.
Analogously, output $$Y_2$$ is 1 only if $$A_0 = 0,$$ $$A_1 = 0,$$ and $$A_2 = 1,$$ independent of the remaining inputs. We conclude that $$Y_2$$ consists of a single subcube $$Y_2 = \overline{A}_0\,\overline{A}_1\,A_2.$$ The complete list of output functions is then $\begin{eqnarray*} Y_0 &=& A_0 \\ Y_1 &=& \overline{A}_0\,A_1 \\ Y_2 &=& \overline{A}_0\,\overline{A}_1\,A_2 \\ Y_3 &=& \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,A_3 \\ Y_4 &=& \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,\overline{A}_3\,A_4 \\ Y_5 &=& \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,\overline{A}_3\,\overline{A}_4\,A_5 \\ Y_6 &=& \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,\overline{A}_3\,\overline{A}_4\,\overline{A}_5\,A_6 \\ Y_7 &=& \overline{A}_0\,\overline{A}_1\,\overline{A}_2\,\overline{A}_3\,\overline{A}_4\,\overline{A}_5\,\overline{A}_6\,A_7\,. \\ \end{eqnarray*}$ Each output function consists of a single maximal subcube only. Since the subcube is essential, the minimal cover is unique.

### 5.1.3. Algebra of Minimization¶

The K-map minimization method is an insightful tool for paper-and-pencil design of combinational circuits. However, when designing combinational circuits with more than four inputs, we employ algorithmic methods. In this section, we discuss the algebraic treatment of the two-level minimization method, because the design of a minimization algorithm hinges on a concise problem formulation. We begin by formalizing the notion of covering a Boolean function. Given two Boolean functions $$f$$ and $$g,$$ we say that $$f$$ covers $$g$$ if $$f \ge g,$$ i.e. if $$f$$ is 1 for all input combinations where $$g$$ is 1, and possibly more. If $$f$$ covers $$g$$ and, furthermore, $$g$$ covers $$f,$$ then $$f$$ equals $$g.$$ We may express this insight in terms of Boolean operations as $(f \le g) \cdot (f \ge g)\ =\ (f = g)\,.$ The use of the two equal signs in such logical expressions is often perceived as confusing.
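The identity relating mutual covering and equality is easy to verify by perfect induction over the four value pairs; a quick sketch:

```python
from itertools import product

def mutual_cover(f, g):
    # (f <= g) AND (f >= g): f and g cover each other
    return (f <= g) and (f >= g)

# Perfect induction: mutual covering holds exactly when f equals g
results = [(f, g, mutual_cover(f, g) == (f == g))
           for f, g in product([0, 1], repeat=2)]
print(all(ok for _, _, ok in results))  # True
```

The same exhaustive style of check works for any identity over a fixed number of Boolean variables.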
To obtain an unambiguous Boolean equation, for example for a proof by perfect induction, we can replace equality $$f = g$$ on the rhs with an XNOR operation: $(f \le g) \cdot (f \ge g)\ =\ \overline{f \oplus g}\,.$ Recall that the Boolean magnitude comparison $$f \le g$$ can be interpreted as implication of formal logic, written $$f \Rightarrow g.$$ Analogously, $$f \ge g$$ is the converse implication of formal logic, written as $$f \Leftarrow g.$$

| $$f$$ | $$g$$ | $$\le$$, $$\Rightarrow$$ | $$\ge$$, $$\Leftarrow$$ |
|---|---|---|---|
| 0 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 1 |

Therefore, if $$f$$ covers $$g$$ then $$g$$ implies $$f,$$ and vice versa. Covering and implication are two sides of the same coin. In general, when two Boolean functions $$f$$ and $$g$$ are given as arbitrary Boolean expressions, it is difficult to determine whether $$f$$ covers $$g$$ or, equivalently, whether $$g$$ implies $$f.$$ For example, given $f(x,y,z) = x y + z \,,\qquad g(x,y,z) = x\,y\,\overline{z} + \overline{x}\,z + x\,z\,,$ to determine that $$f \ge g,$$ we might use Boolean algebra, perfect induction, or K-maps to compare the on-sets of $$f$$ and $$g.$$ However, when the expressions of $$f$$ and $$g$$ are products of literals, i.e. conjunctions of complemented or uncomplemented variables, it is easy to determine whether $$f$$ covers $$g.$$ For example, given the products of literals $f(x,y,z) = \overline{x}\,y\,z\,,\qquad g(x,y,z) = \overline{x}\,y\,,$ we find that $$g$$ covers $$f,$$ because $$g$$ is a 1-cube that covers 0-cube $$f.$$ Thus, we have $$g \ge f$$ or $$f \Rightarrow g.$$ In general, if a product of literals $$f$$ implies function $$g,$$ we say that $$f$$ is an implicant of $$g.$$ For example, $$f(x,y,z) = \overline{x}\,y\,z$$ is an implicant of $$g(x,y,z) = \overline{x}\,y + y\,z.$$ The fewer literals an implicant has, the larger it is w.r.t. the $$\ge$$ order.
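For a handful of variables, whether a product of literals implies a function can always be settled by exhaustive evaluation. A sketch checking that $$\overline{x}\,y\,z$$ is an implicant of $$\overline{x}\,y + y\,z$$:

```python
from itertools import product

f = lambda x, y, z: (1 - x) & y & z          # product of literals ~x y z
g = lambda x, y, z: ((1 - x) & y) | (y & z)  # g = ~x y + y z

# f implies g iff f <= g on every input combination
implies = all(f(*v) <= g(*v) for v in product([0, 1], repeat=3))
print(implies)  # True: f is an implicant of g
```

The converse fails, e.g. at $$(x,y,z) = (0,1,0)$$ where $$g = 1$$ but $$f = 0,$$ so $$g$$ is not an implicant of $$f.$$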
Since $$g = \overline{x}\,y + y\,z \ge \overline{x}\,y \ge \overline{x}\,y\,z = f,$$ we find that $$f$$ is an implicant of $$\overline{x}\,y,$$ and $$\overline{x}\,y$$ is an implicant of $$g.$$ Furthermore, we have $$g = \overline{x}\,y + y\,z \ge y\,z \ge \overline{x}\,y\,z = f,$$ so that $$f$$ is also an implicant of $$y\,z,$$ and $$y\,z$$ is an implicant of $$g.$$ We call an implicant a prime implicant if it implies no other implicant. Prime implicants are the largest implicants of a function. For example, $$g = \overline{x}\,y + y\,z$$ has three minterms or 0-cubes, $$x\,y\,z,$$ $$\overline{x}\,y\,\overline{z},$$ and $$\overline{x}\,y\,z,$$ and two 1-cubes, $$\overline{x}\,y$$ and $$y\,z.$$ The 0-cubes are implicants but not prime, because for each 0-cube there is a larger 1-cube, $$x\,y\,z \le y\,z,$$ $$\overline{x}\,y\,\overline{z} \le \overline{x}\,y,$$ and $$\overline{x}\,y\,z \le \overline{x}\,y.$$ Both 1-cubes are prime implicants, because $$g$$ has no larger implicants. Formally, $$p$$ is a prime implicant of $$g,$$ if no other implicant $$f$$ of $$g$$ fulfills $$f \ge p.$$ Indeed, the prime implicants are the maximal subcubes we know from the K-map minimization method already. Prime implicants lead us to a fundamental theorem about Boolean functions due to Quine.

Theorem (Complete SOP Normal Form) Every Boolean function can be represented by the sum of all its prime implicants. The sum of all prime implicants of a function is unique up to their order, and is called complete SOP normal form.

Proof. Let $$f: \mathcal{B}^n \rightarrow \mathcal{B}$$ be a Boolean function and $$p_1, p_2, \ldots, p_k$$ its prime implicants. Assume $$f$$ is given in SOP form $$f = \sum_{i=1}^{m} c_i,$$ where the $$c_i$$ are the conjunctions or products. In an SOP form, every $$c_i$$ is an implicant, because $$c_i \le f.$$ Every implicant $$c_i$$ has at least one prime implicant $$p_j$$ that covers $$c_i,$$ i.e.
$$c_i \le p_j.$$ If $$c_i$$ implies $$p_j,$$ then $$c_i + p_j = p_j.$$ Since all prime implicants cover all smaller implicants, we conclude that any SOP form of $$f$$ is equivalent to $$f = \sum_{j=1}^k p_j.$$ The geometric interpretation of the theorem of the complete SOP is that every Boolean function can be represented by the sum of all of its maximal subcubes. Every subcube of a function is an implicant, and each maximal subcube is a prime implicant. Recall that a maximal subcube may be essential or not. Analogously, we define an essential prime implicant of a Boolean function $$f$$ to be a prime implicant $$p_j$$ such that removing $$p_j$$ from the complete SOP of $$f$$ produces a sum $$\sum_{i\ne j} p_i < f$$ that does not cover $$f.$$ If Boolean function $$f$$ is represented by a sum of a subset of all prime implicants such that removing any of the prime implicants leaves a sum that no longer covers $$f,$$ then the sum of prime implicants is irredundant. Another theorem due to Quine involves monotone functions. We state the theorem without proof.

Theorem (Minimal Complete SOP) The complete SOP of a monotone Boolean function is monotone, irredundant, and unique.

Since the complete SOP of a monotone function is unique, it is equal to the minimal cover. This is a practically relevant insight. If a given function is known to be monotone, we can construct the minimal cover by identifying all prime implicants and forming their disjunction. For example, the majority function $$M(x,y,z)= x y + x z + y z$$ is monotone because all literals appear in uncomplemented form. Each of its three implicants is prime, because we cannot remove one of its two literals without changing the function, as is easily verified. Therefore, the expression for $$M$$ is the unique minimal complete SOP. In general, if a given SOP expression is not monotone or not known to be monotone, it is nontrivial to deduce whether the SOP is a minimal cover.
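The claim about the majority function can be checked by brute force: enumerate all products of literals, keep the implicants, and discard those covered by a larger implicant. A sketch (our own, representing a product as a dict from variable to required value) confirming that the prime implicants of $$M$$ are exactly $$x\,y,$$ $$x\,z,$$ and $$y\,z$$:

```python
from itertools import combinations, product

VARS = "xyz"

def M(v):
    # majority function M(x,y,z) = x y + x z + y z
    return (v["x"] & v["y"]) | (v["x"] & v["z"]) | (v["y"] & v["z"])

def holds(term, v):
    # does assignment v satisfy every literal of the product term?
    return all(v[var] == val for var, val in term.items())

def is_implicant(term):
    # term <= M on every input combination
    return all(M(dict(zip(VARS, bits))) >= int(holds(term, dict(zip(VARS, bits))))
               for bits in product([0, 1], repeat=3))

# all nonempty products of literals over x, y, z
terms = [dict(zip(sub, vals))
         for r in range(1, 4)
         for sub in combinations(VARS, r)
         for vals in product([0, 1], repeat=r)]

implicants = [t for t in terms if is_implicant(t)]

# a prime implicant is not covered by another implicant with fewer literals
prime = [t for t in implicants
         if not any(u != t and all(t.get(k) == v for k, v in u.items())
                    for u in implicants)]

print(sorted(tuple(sorted(t.items())) for t in prime))  # the three 1-cubes
```

The enumeration finds seven implicants in total, four minterms plus the three 1-cubes, and only the 1-cubes survive the primality filter.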
There may be redundant prime implicants none of which is part of the minimal cover, see Example 5.4, or there are redundant prime implicants some of which are part of the minimal cover and the SOP is not unique, see Exercise 5.2, or in the extreme case all prime implicants are redundant and the SOP is not unique, see Example 5.6. We briefly mention that all of the properties and results about SOP forms discussed above have duals. A sum of literals is an implicate of function $$f$$ if it is implied by $$f.$$ An implicate is prime if it is not implied by any other implicate of $$f.$$ Every Boolean function can be represented by the product of prime implicates. For example, POS expression $$f(x,y,z) = (x + \overline{y}) (\overline{x} + z) (\overline{y} + z)$$ consists of three prime implicates. Prime implicate $$\overline{y} + z$$ is redundant, whereas prime implicates $$x + \overline{y}$$ and $$\overline{x} + z$$ are essential, cf. Example 5.8.

### 5.1.4. Algorithmic Minimization¶

In this section, we outline an algorithmic solution to the two-level minimization problem. As a first step, we find all prime implicants or maximal subcubes of a given Boolean function. If the function is known to be monotone, we can stop here, because the theorem of the minimal complete SOP tells us that the disjunction of the prime implicants is the unique minimal cover. Otherwise, we perform a second step to determine those prime implicants that are part of a minimal cover.

#### Identifying All Prime Implicants¶

The most widely known algorithm to identify all prime implicants of a Boolean function is the consensus method. It transforms a given SOP expression of Boolean function $$f$$ into the complete SOP normal form of $$f$$ by applying the dual covering theorem and the consensus theorem repeatedly.
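The covering and consensus steps spelled out next admit a compact prototype. The sketch below is our own illustration, not the text's: a product term is a frozenset of (variable, polarity) literals, and the two steps repeat until neither applies. As a small check, it recovers the prime implicants $$\overline{x}\,y$$ and $$y\,z$$ of the earlier example $$g = \overline{x}\,y + y\,z$$ from its three minterms.

```python
from itertools import combinations

def consensus_method(terms):
    """Compute all prime implicants of an SOP expression.

    A product term is a frozenset of literals; a literal is a pair
    (variable, polarity) with polarity True for the uncomplemented variable.
    Repeats the consensus and covering steps until neither applies.
    """
    terms = set(terms)
    changed = True
    while changed:
        changed = False
        # consensus step: two terms clashing in exactly one variable
        for a, b in list(combinations(terms, 2)):
            clash = [v for (v, p) in a if (v, not p) in b]
            if len(clash) == 1:
                c = frozenset(l for l in a | b if l[0] != clash[0])
                if not any(t <= c for t in terms):  # skip if already covered
                    terms.add(c)
                    changed = True
        # covering step: drop terms covered by a term with fewer literals
        terms = {t for t in terms if not any(u < t for u in terms)}
    return terms

# Demo: the minterms of g = ~x y + y z; the consensus method recovers
# the two prime implicants ~x y and y z.
minterms = [frozenset({("x", False), ("y", True), ("z", False)}),
            frozenset({("x", False), ("y", True), ("z", True)}),
            frozenset({("x", True), ("y", True), ("z", True)})]
primes = consensus_method(minterms)
print(primes == {frozenset({("x", False), ("y", True)}),
                 frozenset({("y", True), ("z", True)})})  # True
```

Since a term already covered by the current set is never added, and covered terms are removed, the term set can only move toward the finite set of prime implicants, so the loop terminates.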
Let $$g(x_0, x_1, \ldots, x_{n-1})$$ be an SOP expression of $$n$$-variable function $$f,$$ such that $$g = \sum_{i=1}^m c_i$$ is the sum of $$m$$ products $$c_i.$$ Then, repeat these two steps until neither applies any longer: 1. [covering] If there exist two products $$c_i$$ and $$c_j$$ in $$g$$ such that $$c_i$$ covers $$c_j,$$ i.e. $$c_i \ge c_j,$$ then remove implicant $$c_j$$ from $$g.$$ 2. [consensus] If there exist two products $$c_i = x_k \cdot c_i|_{x_k}$$ and $$c_j = \overline{x}_k \cdot c_j|_{\overline{x}_k},$$ then add consensus term $$c_i|_{x_k}\cdot c_j|_{\overline{x}_k}$$ to $$g.$$ When the algorithm terminates, $$g$$ is the sum of all prime implicants of $$f.$$ We illustrate the consensus method by means of SOP expression $g(x_0, x_1, x_2, x_3) = x_1\,x_2 + x_2\,x_3 + x_2\,\overline{x}_3 + \overline{x}_0\,\overline{x}_2\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_2\,\overline{x}_3\,.$ In the first iteration, the covering theorem does not apply to any pair of products. However, the consensus theorem applies multiple times. The consensus of $$x_1 x_2$$ and $$\overline{x}_0 \overline{x}_2 \overline{x}_3$$ on $$x_2$$ is $$x_1 \overline{x}_0 \overline{x}_3,$$ the consensus of $$x_2 x_3$$ and $$x_2 \overline{x}_3$$ on $$x_3$$ is $$x_2 x_2 = x_2.$$ We add the consensus terms to $$g$$ and obtain $g_1(x_0, x_1, x_2, x_3) = x_1\,x_2 + x_2\,x_3 + x_2\,\overline{x}_3 + \overline{x}_0\,\overline{x}_2\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_2\,\overline{x}_3 + \overline{x}_0\,x_1\,\overline{x}_3 + x_2\,.$ In the second iteration, the covering theorem applies. 
In particular, $$x_2$$ covers three products, $$x_2 \ge x_1 x_2,$$ $$x_2 \ge x_2 x_3,$$ and $$x_2 \ge x_2 \overline{x}_3.$$ We remove the implicants from $$g_1$$: $g_2'(x_0, x_1, x_2, x_3) = \overline{x}_0\,\overline{x}_2\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_2\,\overline{x}_3 + \overline{x}_0\,x_1\,\overline{x}_3 + x_2\,.$ The consensus theorem applies to several pairs of products of $$g_2'.$$ The consensus of $$\overline{x}_0 \overline{x}_2 \overline{x}_3$$ and $$x_2$$ on $$x_2$$ is $$\overline{x}_0 \overline{x}_3,$$ and the consensus of $$x_0 \overline{x}_1 \overline{x}_2 \overline{x}_3$$ and $$x_2$$ on $$x_2$$ is $$x_0 \overline{x}_1 \overline{x}_3.$$ We add both consensus terms to $$g_2'$$ and obtain $g_2(x_0, x_1, x_2, x_3) = \overline{x}_0\,\overline{x}_2\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_2\,\overline{x}_3 + \overline{x}_0\,x_1\,\overline{x}_3 + x_2 + \overline{x}_0\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_3\,.$ In the third iteration, the covering theorem applies three times, because $$\overline{x}_0 \overline{x}_3 \ge \overline{x}_0 \overline{x}_2 \overline{x}_3,$$ $$\overline{x}_0 \overline{x}_3 \ge \overline{x}_0 x_1 \overline{x}_3,$$ and $$x_0 \overline{x}_1 \overline{x}_3 \ge x_0 \overline{x}_1 \overline{x}_2 \overline{x}_3.$$ We remove the implicants from $$g_2$$: $g_3'(x_0, x_1, x_2, x_3) = x_2 + \overline{x}_0\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_3\,.$ Now, the consensus theorem applies to $$\overline{x}_0 \overline{x}_3$$ and $$x_0 \overline{x}_1 \overline{x}_3$$ on $$x_0.$$ We add consensus term $$\overline{x}_1 \overline{x}_3$$ to $$g_3',$$ and obtain $g_3(x_0, x_1, x_2, x_3) = x_2 + \overline{x}_0\,\overline{x}_3 + x_0\,\overline{x}_1\,\overline{x}_3 + \overline{x}_1\,\overline{x}_3\,.$ In the fourth iteration, the covering theorem applies, because $$\overline{x}_1 \overline{x}_3 \ge x_0 \overline{x}_1 \overline{x}_3.$$ We remove the latter term and obtain $g_4'(x_0, x_1, x_2, x_3) = x_2 
+ \overline{x}_0\,\overline{x}_3 + \overline{x}_1\,\overline{x}_3\,.$ Since the consensus theorem does not apply to any pair of implicants, the consensus method terminates. The complete SOP of function $$f$$ is expression $$g_4',$$ cf. Example 5.7. A decisive advantage of the consensus method is that we do not need to know all minterms of a function to generate all of its prime implicants.

#### Extracting a Minimal Cover¶

Given the complete SOP of Boolean function $$f(x_0, x_1, \ldots, x_{n-1}) = \sum_{i=1}^k p_i$$ with prime implicants $$p_i,$$ we want to choose those prime implicants that form a minimal cover of $$f.$$ We introduce indicator variables $$s_i \in \mathcal{B},$$ where $$1 \le i \le k,$$ such that $$s_i = 1$$ if we select prime implicant $$p_i$$ to be in a minimal cover, and $$s_i = 0$$ otherwise. Then, we form the SOP of the products $$s_i\,p_i$$: $C_S(x_0, x_1, \ldots, x_{n-1}) = \sum_{i=1}^k s_i\,p_i\,.$ If $$s_i = 1$$ for all $$i \in [1, k],$$ then $$C_S$$ is the complete SOP of $$f.$$ On the other hand, if we choose $$s_i = 0,$$ then $$s_i\,p_i = 0$$ by annihilation, which effectively removes prime implicant $$p_i$$ from sum $$C_S.$$ In the following, we interpret $$S = (s_1, s_2, \ldots, s_k)$$ as a Boolean vector $$S \in \mathcal{B}^k.$$ Since $$f$$ covers $$C_S$$ for every $$S,$$ we have $C_S(x_0, x_1, \ldots, x_{n-1}) \le f(x_0, x_1, \ldots, x_{n-1})\,.$ Given that $$C_S \le f,$$ we can characterize those subsets of prime implicants $$S$$ for which $$C_S = f$$ by requiring also that $$C_S \ge f,$$ because $$C_S \le f$$ and $$C_S \ge f$$ is equivalent to $$C_S = f.$$ Thus, a minimal cover is a selection $$S$$ such that $$C_S$$ covers $$f$$ and the cost of the corresponding sum of prime implicants is minimized. The key observation is that we can turn constraint $$C_S \ge f$$ into a system of linear inequalities, which we can solve with existing methods. Recall that a Boolean function $$f(x_0, x_1, \ldots, x_{n-1})$$ is the sum of its minterms.
Let $$J$$ be the set of minterms such that $$f = \sum_{j\in J} j,$$ where $$j \in J$$ is the decimal encoding of the minterm. Then, the restriction of $$f$$ in minterm $$j$$ is $$f|_j = 1$$ for all $$j \in J$$ and $$f|_j = 0$$ for $$j \not\in J.$$ Using Iverson brackets, we denote whether prime implicant $$p_i$$ of $$f$$ covers minterm $$j$$ or not: $\begin{split}[p_i \ge j] = \begin{cases} 1\,, & \text{if}\ p_i \ge j\,, \\ 0\,, & \text{otherwise}\,. \end{cases}\end{split}$ With this notation, we split the Boolean inequality $$C_S \ge f$$ into a system of linear inequalities, one per minterm: $\forall\ j \in J:\quad \sum_{i=1}^k s_i [p_i \ge j] \ge 1\,.$ The inequality for minterm $$j$$ is the restriction of constraint $$C_S \ge f$$ in $$j.$$ We can interpret the disjunctions on the lhs as arithmetic additions, and the system of Boolean inequalities turns into a system of pseudo-Boolean linear inequalities. This twist yields an integer linear program known as the set covering problem: Minimize $$\qquad \mathcal{C}(C_S) = \sum_{i=1}^k s_i (1 + \mathcal{C}(p_i)),$$ subject to $$\quad\ \,\sum_{i=1}^k s_i [p_i \ge j] \ge 1$$ for all minterms $$j$$ of $$f,$$ and $$s_i \in \mathcal{B}.$$ Assuming that cost $$\mathcal{C}(p_i)$$ is the number of literals of prime implicant $$p_i,$$ then $$\mathcal{C}(C_S)$$ is the number of the selected prime implicants plus the total number of literals, which coincides with our cost function. The solution to the set covering problem is a selection $$S$$ of prime implicants whose disjunction is a minimal cover of $$f.$$ The simplest method for solving the set covering problem is a search of the corresponding binary decision tree, which we know from our discussion of the ite-function already.
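The coefficient $$[p_i \ge j]$$ is cheap to compute when a prime implicant is stored as a set of literals: the implicant covers minterm $$j$$ exactly when every literal agrees with the corresponding bit of $$j.$$ A small sketch (our own helper, assuming the leftmost variable is the most significant bit of the minterm index):

```python
def covers(p, j, variables):
    """Iverson bracket [p >= j]: does product p cover minterm j?

    p maps a variable name to its required value (1 = uncomplemented literal);
    variables[0] is the most significant bit of the minterm index j.
    """
    n = len(variables)
    bits = {v: (j >> (n - 1 - i)) & 1 for i, v in enumerate(variables)}
    return all(bits[v] == val for v, val in p.items())

# Example: p1 = ~x ~y covers exactly minterms 0 and 1 of a 3-variable function
p1 = {"x": 0, "y": 0}
print([j for j in range(8) if covers(p1, j, "xyz")])  # [0, 1]
```

Tabulating `covers` for every prime implicant and minterm produces exactly the coefficient matrix of the set covering constraints.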
Since the problem size grows exponentially in the number of prime implicants and minterms, more efficient algorithms have been developed.[3] We illustrate the solution of the set covering problem by minimizing the Boolean function of Example 5.6 algorithmically. The consensus method yields the complete SOP $f(x,y,z) = \overline{x}\,\overline{y} + x\,y + \overline{y}\,z + y\,\overline{z} + \overline{x}\,\overline{z} + x\,z\,.$ Call the prime implicants $$p_1 = \overline{x}\,\overline{y},$$ $$p_2 = x\,y,$$ $$p_3 = \overline{y}\,z,$$ $$p_4 = y\,\overline{z},$$ $$p_5 = \overline{x}\,\overline{z},$$ and $$p_6 = x\,z.$$ We introduce Boolean vector $$S = (s_1, s_2, \ldots, s_6)$$ of indicator variables for each prime implicant, and obtain the extended SOP $C_S = s_1\,\overline{x}\,\overline{y} + s_2\,x\,y + s_3\,\overline{y}\,z + s_4\,y\,\overline{z} + s_5\,\overline{x}\,\overline{z} + s_6\,x\,z\,.$ For each prime implicant, we may use the combining theorem or Shannon’s expansion theorem, to deduce the minterms it covers. 
For example, prime implicant $$p_1 = \overline{x}\,\overline{y} = \overline{x}\,\overline{y}\,\overline{z} + \overline{x}\,\overline{y}\,z = \sum (0, 1).$$ Therefore $$p_1$$ covers minterm 0, $$p_1 \ge 0,$$ and minterm 1, $$p_1 \ge 1.$$ Analogously, we find the minterms covered by the other prime implicants:

| prime implicant | covered minterms |
|---|---|
| $$p_1$$ | 0, 1 |
| $$p_2$$ | 6, 7 |
| $$p_3$$ | 1, 5 |
| $$p_4$$ | 2, 6 |
| $$p_5$$ | 0, 2 |
| $$p_6$$ | 5, 7 |

For each of the covered minterms, we formulate an inequality associated with the restriction of constraint $$C_S \ge f.$$ For instance, since minterm 0 is covered by prime implicants $$p_1$$ and $$p_5,$$ we obtain the restriction $\begin{eqnarray*} C_S|_0 &=& (s_1\,\overline{x}\,\overline{y} + s_2\,x\,y + s_3\,\overline{y}\,z + s_4\,y\,\overline{z} + s_5\,\overline{x}\,\overline{z} + s_6\,x\,z)|_{\overline{x}\,\overline{y}\,\overline{z}} \\ &=& s_1 + s_5 \end{eqnarray*}$ Furthermore, we find that restriction $$f|_0 = f|_{\overline{x}\,\overline{y}\,\overline{z}} = 1,$$ and constraint $$C_S \ge f$$ turns for minterm 0 into inequality $$s_1 + s_5 \ge 1.$$ Analogously, we derive the system of linear inequalities for all minterms $$j \in J = \{ 0, 1, 2, 5, 6, 7\}$$: $\begin{eqnarray*} 0{:}\quad & s_1 + s_5 &\ge &1 \\ 1{:}\quad & s_1 + s_3 &\ge &1 \\ 2{:}\quad & s_4 + s_5 &\ge &1 \\ 5{:}\quad & s_3 + s_6 &\ge &1 \\ 6{:}\quad & s_2 + s_4 &\ge &1 \\ 7{:}\quad & s_2 + s_6 &\ge &1 \end{eqnarray*}$ Call this system of linear inequalities $$\Psi.$$ We construct a binary decision tree to find all selection vectors $$S$$ that solve $$\Psi.$$ To facilitate a concise notation, we introduce the $$\models$$ symbol, pronounced entails. For example, formula $$\Psi|_{s_1 s_5} \models (0, 1, 2)$$ expresses that the restriction of $$\Psi$$ in $$s_1$$ and $$s_5$$ entails inequalities 0, 1, and 2, i.e. the inequalities associated with minterms 0, 1, and 2 are satisfied.
The inequality of minterm 0 yields $$s_1 + s_5 = 2 \ge 1,$$ for minterm 1 inequality $$s_1 + s_3 \ge 1$$ holds if $$s_1 = 1$$ independent of $$s_3,$$ and for minterm 2 inequality $$s_4 + s_5 \ge 1$$ holds if $$s_5 = 1$$ independent of $$s_4.$$ Rather than using arithmetic additions, we may also interpret the $$+$$-sign as Boolean disjunction, and evaluate the inequalities using Boolean operations. Furthermore, formula $$\Psi|_{\overline{s}_1 \overline{s}_5} \not\models (0)$$ expresses that the restriction of $$\Psi$$ in $$\overline{s}_1$$ and $$\overline{s}_5$$ does not entail inequality 0, i.e. the inequality associated with minterm 0 is not satisfied, because $$s_1 + s_5 = 0 + 0 \not\ge 1.$$ Figure 5.5 shows the binary decision tree for $$\Psi.$$ The root of the tree is shown on the left and the leaves on the right. Starting with the expansion around $$s_1$$ at the root, restriction $$\Psi|_{\overline{s}_1}$$ sets $$s_1 = 0,$$ which does not permit any decisions yet. In particular, setting $$s_1 = 0$$ does not prevent any of the inequalities from being satisfied nor does it suffice to satisfy any of the inequalities. In contrast, $$\Psi|_{s_1}$$ entails inequalities 0 and 1 regardless of all other selections. Figure 5.5: Binary decision tree for system $$\Psi$$ of linear inequalities. The binary decision tree has 18 leaves of selections $$S$$ that cover all minterms $$J$$ of $$f.$$ To find a minimal cover, we compute the cost of these selections, and pick the selections of minimum cost.
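Because there are only $$2^6 = 64$$ selection vectors, this small instance can also be solved by exhaustive search instead of a decision tree. The following Python sketch (variable names are illustrative, not from the text) enumerates all selections, keeps those that cover every minterm of $$f$$, which is equivalent to satisfying $$\Psi,$$ and picks the selections that use the fewest prime implicants:

```python
from itertools import product

# Minterms covered by each prime implicant p1..p6, from the cover table above
cover = {1: {0, 1}, 2: {6, 7}, 3: {1, 5}, 4: {2, 6}, 5: {0, 2}, 6: {5, 7}}
minterms = {0, 1, 2, 5, 6, 7}

# Enumerate all 2^6 selection vectors S and keep those that cover all minterms
feasible = []
for s in product((0, 1), repeat=6):
    covered = set().union(*(cover[i + 1] for i in range(6) if s[i]))
    if covered >= minterms:
        feasible.append(s)

# A minimal cover selects the fewest prime implicants
best = min(sum(s) for s in feasible)
minimal = [s for s in feasible if sum(s) == best]
print(minimal)  # [(0, 1, 1, 0, 1, 0), (1, 0, 0, 1, 0, 1)]
```

The brute-force search confirms that exactly two selections of three prime implicants cover all minterms; of course, this approach does not scale beyond small instances.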
There are two selections with minimum cost, framed in Figure 5.5, selection $$S_1 = (0, 1, 1, 0, 1, 0)$$ of prime implicants $$p_2,$$ $$p_3,$$ and $$p_5,$$ and selection $$S_2 = (1, 0, 0, 1, 0, 1)$$ of prime implicants $$p_1,$$ $$p_4,$$ and $$p_6.$$ Since the cost of each of the six prime implicants is $$\mathcal{C}(p_i) = 2,$$ the cost of both covers is $$\mathcal{C}(C_{S_1}) = \mathcal{C}(C_{S_2}) = 3 (1 + 2) = 9.$$ The corresponding minimal covers are $$C_{S_1} = x\,y + \overline{y}\,z + \overline{x}\,\overline{z}$$ and $$C_{S_2} = \overline{x}\,\overline{y} + y\,\overline{z} + x\,z,$$ as we found with the K-map minimization method in Example 5.6 already. ## 5.2. Technology Mapping¶ A particular realization of a digital circuit in hardware depends on the choice of the implementation technology, such as CMOS circuits. Besides custom designed CMOS circuits, industry offers a multitude of circuit technologies for various target applications with different price-performance points, including TTL, PLAs, ASICs, and FPGAs. The design of a combinational circuit from specification to logic minimization is largely technology independent. The step from a paper-and-pencil design to a real circuit includes solving the technology mapping problem: Given a Boolean expression $$f$$ and a set $$C$$ of digital circuit elements, we wish to generate a circuit that implements $$f$$ with the circuit elements of $$C.$$ Boolean expression $$f$$ may be given in any form, including in minimized SOP or POS form. If we wish to implement any Boolean function, then $$C$$ should contain at least a universal set of Boolean operators. ### 5.2.1. Universal Set Transformations¶ A Boolean expression in SOP form, whether as normal form or minimized, reflects the way humans tend to think about logical operations in a natural fashion. Thus, when we design a digital circuit with paper and pencil, we commonly derive a minimized SOP form, which serves as input of the technology mapping problem. 
If we want to implement the corresponding two-level circuit with complemented or uncomplemented inputs as a CMOS circuit with elements $$C =$$ {INV, NAND, NOR}, the technology mapping problem corresponds to a transformation from universal set $$U =$$ {NOT, AND, OR} to universal set $$C =$$ {INV, NAND, NOR}. One possible strategy to solve the technology mapping problem is to first perform the universal set transformation for each operation in $$U,$$ independent of the Boolean expression $$f$$ we wish to implement. Then, we replace the logic gates of the two-level circuit for $$f$$ with the equivalent operators derived by the transformation. For example, we devise the universal set transformation from $$U =$$ {NOT, AND, OR} to CMOS gates $$C =$$ {INV, NAND, NOR} by expressing each operation in $$U$$ with the CMOS gates. Formulated in terms of Boolean algebra, we obtain the universal set transformation with the operations of $$U$$ on the lhs and the CMOS functions on the rhs: $\begin{eqnarray*} \overline{A} &=& \text{INV}(A) \\ A\cdot B &=& \text{INV}(\text{NAND}(A,B)) \\ A + B &=& \text{INV}(\text{NOR}(A,B))\,. \end{eqnarray*}$ Now, consider implementing the 2-input XOR function given by minimal SOP form $Y(A,B) = A\,\overline{B} + \overline{A}\,B\,.$ We apply the universal set transformation graphically to the corresponding two-level circuit, see Figure 5.6. To that end, we replace each AND gate with a NAND gate followed by an inverter, and the OR gate with a NOR gate followed by an inverter. The CMOS inverter implements the NOT operator. The circuit diagram shows the two-level SOP form on the left and the CMOS circuit after technology mapping on the right. Figure 5.6: Two-level circuit for 2-input XOR operation (left), and after universal set transformation (right).
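The universal set transformation can be checked by perfect induction over both input values. A minimal Python sketch, where the gate functions are illustrative models of the circuit elements:

```python
# Gate primitives from the CMOS set C = {INV, NAND, NOR}, modeled on bits 0/1
def INV(a):     return 1 - a
def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)

# Universal set transformation: U = {NOT, AND, OR} expressed with C
for a in (0, 1):
    assert INV(a) == 1 - a                    # NOT(a)  = INV(a)
    for b in (0, 1):
        assert INV(NAND(a, b)) == (a & b)     # a AND b = INV(NAND(a, b))
        assert INV(NOR(a, b))  == (a | b)     # a OR b  = INV(NOR(a, b))
```

Each assertion holds for all input combinations, mirroring the three algebraic identities above.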
For many technology mapping problems, we can obtain a smaller circuit if we do not apply the universal set transformation systematically to each gate of the two-level circuit, but instead transform the original Boolean expression into an equivalent expression using the operators of $$C.$$ We illustrate this alternative strategy using the XOR function as example. The key insight is to notice that we can transform the 2-input XOR function into an expression using NAND gates only: $\begin{eqnarray*} Y(A,B) &= &A\,\overline{B} + \overline{A}\,B & \\ &= &\overline{\overline{A\,\overline{B} + \overline{A}\,B}} & \text{by involution} \\ &= &\overline{\overline{A\,\overline{B}} \cdot \overline{\overline{A}\,B}} & \text{by De Morgan} \\ &= &\text{NAND}(\text{NAND}(A, \overline{B}), \text{NAND}(\overline{A}, B))\quad &\ \end{eqnarray*}$ This algebraic transformation generalizes to arbitrary SOP expressions. NAND transform: Given a Boolean expression in SOP form, the SOP transforms into a NAND-NAND expression of literals by applying the involution theorem and then De Morgan’s theorem. The same algebraic transformations applied to a POS form yield an expression using NOR gates only. Starting with the POS of the 2-input XOR function, we obtain: $\begin{eqnarray*} Y(A,B) &= &(A + B) \cdot (\overline{A} + \overline{B}) & \\ &= &\overline{\overline{(A + B) \cdot (\overline{A} + \overline{B})}} & \text{by involution} \\ &= &\overline{\overline{(A + B)} + \overline{(\overline{A} + \overline{B})}} & \text{by De Morgan} \\ &= &\text{NOR}(\text{NOR}(A, B), \text{NOR}(\overline{A}, \overline{B}))\quad &\ \end{eqnarray*}$ It is easy to verify that this algebraic transformation generalizes to arbitrary POS expressions. NOR transform: Given a Boolean expression in POS form, the POS transforms into a NOR-NOR expression of literals by applying the involution theorem and then De Morgan’s theorem. The NAND and NOR transforms produce two-level circuits that are well suited for CMOS technology.
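Both transforms can be verified by perfect induction over the four input combinations. A Python sketch, with illustrative gate functions modeling the CMOS elements:

```python
def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)

for A in (0, 1):
    for B in (0, 1):
        xor = A ^ B
        # NAND transform of the SOP form A*B' + A'*B
        assert NAND(NAND(A, 1 - B), NAND(1 - A, B)) == xor
        # NOR transform of the POS form (A + B)*(A' + B')
        assert NOR(NOR(A, B), NOR(1 - A, 1 - B)) == xor
```

The assertions confirm that the NAND-NAND and NOR-NOR expressions both compute the 2-input XOR function.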
As Figure 5.7 shows, the resulting CMOS circuits have a 3-stage critical path, including the input inverter, compared to the 5-stage critical path in Figure 5.6 on the right. Figure 5.7: 2-input XOR circuit mapped into universal set {INV, NAND} (left) and {INV, NOR} (right). In case your target technology provides only NAND or only NOR gates, the NAND and NOR transforms are still useful. We just need to represent the input inverters with NAND or NOR gates. The idempotence theorem and its dual suggest the transformations $\begin{eqnarray*} \overline{A} &=& \text{NAND}(A, A) \\ \overline{A} &=& \text{NOR}(A, A)\,. \end{eqnarray*}$ We conclude that we can transform the two-level SOP and POS forms into circuits with universal singleton sets {NAND} and {NOR} of operators. Figure 5.8 shows the 2-input XOR function implemented with NAND gates only and with NOR gates only. Figure 5.8: 2-input XOR circuits mapped into universal set {NAND} (left) and {NOR} (right). The NAND and NOR transforms are useful for the synthesis of two-level logic, i.e. the transformation from universal set $$U =$$ {NOT, AND, OR} to one of the universal sets {INV, NAND, NOR}, {INV, NAND}, {INV, NOR}, {NAND}, or {NOR} for implementation with CMOS gates. On the other hand, the analysis of a CMOS circuit, such as shown in Figure 5.7 or Figure 5.8, is more error-prone for human readers than analyzing a two-level circuit with gates from universal set $$U.$$ We can simplify the analysis of a CMOS circuit by first applying the NAND or NOR transforms in the reverse direction to derive a circuit with operators from universal set $$U$$ before extracting the Boolean expression of the circuit. ### 5.2.2. Bubble Pushing¶ In this section, we discuss a graphical method for the design and technology mapping of CMOS circuits. Recall that the bubble of the inverter symbol can be interpreted as a graphical representation of the logical negation. Without the bubble, the inverter symbol turns into a buffer symbol, i.e.
a logic gate for the identity function. From the functional perspective, it does not matter whether we draw the inverter bubble at the input or the output of the buffer symbol. Both symbols represent an inverter with function $$Y = \overline{A}$$: We take this idea one step further, and treat bubbles as independent negation symbols that can move freely along the wires of a circuit. More interestingly, we define graphical operations on bubbles that enable us to manipulate a combinational circuit. As a first such operation, consider two adjacent bubbles on a wire. Such a circuit represents the series composition of two negations. According to the involution theorem, the composition of two negations is equal to the identity function. Therefore, we can always place two bubbles on a wire or remove two bubbles from a wire without changing the logical function of the circuit. Figure 5.9: The output bubble of the stage-1 inverter and input bubble of the stage-2 inverter are two bubbles in series that cancel out. The circuits with and without bubbles are equivalent, and implement identity function $$Y = A.$$ Removing the two inverter bubbles in Figure 5.9 produces a logically equivalent circuit of two buffers in series. If our goal is circuit simplification, we may remove the two buffers as well, such that $$Y = A.$$ Besides adding or removing pairs of adjacent bubbles, we may push bubbles across the gates of a circuit. The graphical versions of De Morgan’s theorem and its dual $\begin{eqnarray*} \overline{A \cdot B} &=& \overline{A} + \overline{B} \\ \overline{A + B} &=& \overline{A} \cdot \overline{B} \end{eqnarray*}$ enable us to push bubbles from the inputs to the output of a gate or vice versa. De Morgan’s theorem states that a NAND gate is equivalent to an OR gate with complemented inputs. 
From the graphical perspective, see Figure 5.10, we interpret De Morgan’s theorem such that an AND gate with an output bubble is equivalent to an OR gate with a bubble on each input. Since both symbols represent a 2-input NAND gate, we can exchange them freely in a circuit diagram. We may also interpret the transformation from the AND gate to the OR gate representation as pushing the output bubble toward the inputs of the AND gate. The bubble-push operation removes the output bubble, replaces the AND with an OR gate, and places one bubble on each input. The reverse transformation pushes the input bubbles to the output, while exchanging the OR gate with an AND gate. Figure 5.10: De Morgan’s theorem states that an AND with an output bubble is equivalent to an OR with input bubbles. Both symbols represent a NAND gate. The dual of De Morgan’s theorem has an analogous graphical interpretation. A NOR gate, i.e. an OR gate with an output bubble, is equivalent to an AND gate with bubbles on its inputs. Transforming one representation into the other can be viewed as a graphical bubble-push operation. To transform the OR gate representation into an AND gate, we push the output bubble toward the inputs. More precisely, we remove the output bubble, replace the OR gate with an AND gate, and place one bubble on each input. The reverse transformation pushes the input bubbles to the output, while exchanging the AND gate with an OR gate. Figure 5.11: The dual of De Morgan’s theorem states that an OR with an output bubble is equivalent to an AND with input bubbles. Both symbols represent a NOR gate. It is easy to remember both forms of De Morgan’s theorem as a single bubble-push rule: when pushing bubbles from the input to the output or vice versa, replace an AND gate with an OR gate or vice versa. #### Two-Level Circuits¶ Bubble pushing enables us to analyze or solve the technology mapping problem for two-level CMOS circuits. 
We demonstrate the graphical version of the NAND transform without using Boolean algebra. As a concrete example, consider the NAND-NAND circuit of Figure 5.7, shown again in Figure 5.12 on the left with the input inverters drawn as input bubbles of the stage-1 NAND gates. The bubbles obscure the logic function $$Y(A,B)$$ of this circuit in the eyes of most human readers. We use bubble pushing to transform this two-level circuit into a human friendly SOP form. First, we push the bubble at output $$Y$$ across the AND gate to obtain the equivalent circuit in the middle of Figure 5.12. The resulting circuit has pairs of adjacent bubbles on the internal wires. Removing the pairs of bubbles produces the two-level circuit on the right of Figure 5.12. This circuit corresponds directly to the SOP form, which is now trivial to extract from the circuit diagram: $$Y = A\,\overline{B} + \overline{A}\,B.$$ Figure 5.12: Illustration of bubble pushing to transform two-level circuits graphically. We can reverse the order of the bubble push operations, from right to left in Figure 5.12, to solve the technology mapping problem of the SOP form into universal set {INV, NAND}. First, insert pairs of bubbles on the internal wires, then push the input bubbles of the OR gate toward the output. If you do not appreciate the tactical move to first insert bubble pairs on the internal wires, you can also add a pair of bubbles on output $$Y,$$ and push one bubble toward the inputs leaving one bubble behind. Figure 5.13 illustrates this alternative NAND transform by bubble pushing. On the left, we add a pair of bubbles to output $$Y$$ of the circuit in SOP form. Then, we push one of the two bubbles toward the inputs of the OR gate, which yields the circuit in the middle. The second step simply moves the bubbles along the internal wires from the inputs of the stage-2 NAND gate to the outputs of the stage-1 AND gates. 
Figure 5.13: Bubble pushing from output toward the inputs implements the NAND transform. The NOR transform by bubble pushing proceeds analogously, provided we start with a two-level circuit that corresponds to the POS form of a Boolean function. #### CMOS Gates¶ The bubble-push method is not only useful for transforming gate-level circuits, but also for the design of the pull-up and pull-down networks of CMOS gates. Given a gate-level circuit, we can derive its dual by bubble pushing before deriving the pull-up and pull-down networks of transistors. As an example, consider the compound gate derived in Figure 4.14 for Boolean expression $$Y = \overline{(A + BC) \cdot D}.$$ Figure 5.14 reproduces the gate-level representation of the CMOS circuit. Figure 5.14: Compound gate for $$Y = \overline{(A + BC) \cdot D}.$$ Our goal is to derive the pull-up and pull-down networks for the CMOS compound gate graphically. Recall from our discussion of the principle of duality that the pull-up network computes output $$Y$$ and the pull-down network the complement $$\overline{Y}.$$ Furthermore, we plan to derive both networks as dual series-parallel networks. Since the series composition of a switch network represents a conjunction and the parallel composition a disjunction, both gate-level circuits may comprise the corresponding AND and OR gates only. If we control the pull-down network with uncomplemented inputs, then the dual pull-up network must be controlled with the complemented inputs. As a first step of the transformation of the compound gate in Figure 5.14 into a CMOS circuit, we derive the gate-level representation of the pull-up network. Since the pull-up network produces output $$Y$$ in uncomplemented form, we push the output bubble in Figure 5.14 toward the inputs, until all bubbles have propagated to the inputs. Figure 5.15 shows the transformations in three steps. The resulting circuit has the desired form of AND and OR gates with complemented inputs. 
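Perfect induction confirms that the bubble-pushed network of AND and OR gates over complemented inputs computes the same output as the original compound gate. A small Python sketch (function names are illustrative, not from the text):

```python
from itertools import product

def compound(A, B, C, D):
    # Original compound gate: Y = not((A + B*C) * D)
    return 1 - ((A | (B & C)) & D)

def pushed(A, B, C, D):
    # Bubble-pushed form: AND/OR network over complemented inputs
    nA, nB, nC, nD = 1 - A, 1 - B, 1 - C, 1 - D
    return (nA & (nB | nC)) | nD

# Perfect induction over all 16 input combinations
assert all(compound(*v) == pushed(*v) for v in product((0, 1), repeat=4))
```

The equivalence is just De Morgan's theorem applied twice, which is exactly what pushing the output bubble to the inputs performs graphically.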
Figure 5.15: Bubble pushing produces a circuit without bubbles except on the inputs. The pull-down network uses the uncomplemented inputs and computes the complemented output $$\overline{Y}.$$ As shown in Figure 5.16, we simply remove the output bubble from the gate-level circuit in Figure 5.14 to obtain the desired form. Figure 5.16: Graphical design of the pull-up and pull-down networks of the compound gate. The second step of the transformation translates each AND gate into a series composition of transistors, and each OR gate into a parallel composition of transistors. The pull-down network uses the uncomplemented inputs as gate signals for nMOS transistors, and the pull-up network the complemented inputs as gate signals for pMOS transistors. Figure 5.16 shows the resulting CMOS gate, which coincides with the design in Figure 4.14. Note that no Boolean algebra is required for the transformation process. In fact, it is straightforward to generalize this example, and formulate a graphical transformation algorithm from gate-level to transistor schematics. #### Multilevel Circuits¶ Bubble pushing applies to combinational circuits with more than two levels, as the example of the three-level compound gate in Figure 5.14 demonstrates. Combinational circuits with more than two levels of gates are multilevel circuits. In general, multilevel circuits do not have the regular structure of an SOP or POS form. For example, the multilevel circuit in Figure 5.17 has a non-trivial topology due to its reconverging branches, and computes the sum of a full-adder. We demonstrate the bubble-push method to solve the technology mapping problem of this circuit into universal set {INV, NAND, NOR} for a CMOS implementation. Figure 5.17: Multi-level circuit with reconverging branches. Figure 5.18 illustrates the solution of the technology mapping problem by bubble pushing. 
The multilevel circuit offers many opportunities to place two bubbles on a wire and push the bubbles with the goal of turning AND and OR gates into NAND and NOR gates. In Figure 5.18(a) we choose to insert two bubbles at the output of the circuit. Our plan is to push bubbles systematically toward the inputs, rather than the other way around or inside out. Since De Morgan’s theorem turns an AND gate with an output bubble into an OR gate with input bubbles, we push one of two output bubbles across the AND gate, and leave one bubble behind to obtain a NOR gate. Figure 5.18(b) shows the circuit after pushing one bubble from the output of the stage-4 AND gate toward its inputs. As a result, we have a NOR gate with one bubble on the upper and a pair of bubbles on the lower input. We might be tempted to remove the bubble pair. However, we can use the two bubbles to repeat the first bubble-push move and turn the stage-3 AND gate into a NOR gate. Figure 5.18(c) shows the result. We have also shifted the bubble from the upper input of the stage-4 NOR gate to the output of the driving OR gate, forming a NOR gate. Since we cannot push the input bubbles of the stage-3 NOR gate across wire joins, we choose to insert another pair of bubbles on the output of the stage-2 AND gate, and repeat the bubble-push steps as shown in Figure 5.18(d), Figure 5.18(e), and Figure 5.18(f). The resulting circuit requires four inverters and six NOR gates. No NAND gates are required. Figure 5.18: Technology mapping by bubble pushing into {INV, NOR}. We study the problem of pushing bubbles across wire joins as inspiration for an alternative technology mapping. Figure 5.19 shows an inverter driving two loads $$X = \overline{A}$$ and $$Y = \overline{A}.$$ If we wish to push the inverter across the wire join, we remove the inverter on the driver side and insert one inverter on each wire of the other side.
This preserves the logical functions $$X = \overline{A}$$ and $$Y = \overline{A}.$$ In terms of a bubble push operation, we can view the wire join like a gate. Remove the bubble on the input and place one bubble on each output. If we want to push bubbles toward the input, we need one bubble on each output, to remove the output bubbles and insert a bubble on the input. In Figure 5.18(c), only one of two outputs has a bubble, the input bubble of the stage-3 NOR gate. Without a matching bubble on the other output, we cannot push this bubble across the wire join toward the inputs. Figure 5.19: Equivalent inverter circuits at wire join. After inserting the pair of bubbles in Figure 5.18(d), we may push one of the bubbles across the wire join toward the outputs. This bubble-push move is shown in Figure 5.20(e). After removing the bubble pair from the input of the stage-3 NOR gate, and shifting the remaining bubble to the output of the stage-2 AND gate, we obtain a NAND gate in stage 2. We add another pair of bubbles, see Figure 5.20(f), and perform one more bubble-push move to turn all stage-1 and stage-2 gates into NAND gates. The resulting circuit in Figure 5.20(g) requires four inverters, three NAND gates, and three NOR gates. Since the 2-input NAND gate has a smaller logical effort than the 2-input NOR gate, the circuit in Figure 5.20 appears to be a superior alternative to the circuit in Figure 5.18. Figure 5.20: Alternative technology mapping by bubble pushing into {INV, NAND, NOR}. Applying the bubble-push method to multilevel circuits can turn into an intellectual challenge, because the number of possible moves as well as the number of solutions to the technology mapping problem grows quickly. Undoubtedly, however, bubble pushing on a whiteboard or blackboard can be more fun than the equivalent method of rewriting expressions by Boolean algebra. 5.6 Consider the multilevel circuit: Transform the circuit by bubble pushing such that it has 1. 
AND and OR gates, and inverters on inputs only, 2. NAND and NOR gates, and as few inverters as possible. 1. Our strategy is to push the output bubble towards the inputs until no bubbles are left, except on the bottom input, where we expand the bubble into an inverter. 2. We insert bubble pairs at outputs of AND and OR gates, and push excess bubbles towards the inputs. We terminate bubble pushing when all AND and OR gates are transformed into NAND and NOR gates, and inserting additional bubble pairs would only increase the total number of bubbles. After step (a) all gates are NAND or NOR gates already, see (b). The circuit in (c) permits bubble pushing at the stage-1 NAND gate. The NAND gate would turn into a NOR gate, and introduce two bubbles at its inputs. However, this would increase the total bubble count by one. Therefore, we keep the bubble at the output of the NAND gate. The other opportunity for bubble pushing is at the output of the stage-1 NOR gate. There is a bubble on the upper branch of the wire join, but no obvious bubble-push transformation that could utilize this bubble and reduce the total number of bubbles. Therefore, we keep this bubble as well. The circuit in (d) expands the two remaining bubbles into inverters. ### 5.2.3. Multiplexer Logic¶ We can implement any Boolean function with multiplexers, because multiplexers implement the ite-function and the ite-function is universal. In the following, we solve the technology mapping problem to universal singleton set $$C =$$ {ite} by showing how to implement Boolean functions with multiplexers. Recall the 4:1 multiplexer with $$n=2$$ select inputs $$S=S_1\,S_0$$ and $$2^n=4$$ data inputs, $$D_0,$$ $$D_1,$$ $$D_2,$$ and $$D_3.$$ The multiplexer steers data input $$D_k$$ to output $$Y$$ if $$S = k,$$ interpreting $$S$$ as an $$n$$-bit binary number.
We may express output $$Y$$ as a function of inputs $$S$$ and $$D,$$ such that $Y(S, D) = D_0\,\overline{S}_1\,\overline{S}_0 + D_1\,\overline{S}_1\,S_0 + D_2\,S_1\,\overline{S}_0 + D_3\,S_1\,S_0\,.$ This form resembles the Shannon expansion of Boolean function $$f:\mathcal{B}^2 \rightarrow \mathcal{B}$$ in two variables: $f(x_0, x_1) = f_0\,\overline{x}_1\,\overline{x}_0 + f_1\,\overline{x}_1\,x_0 + f_2\,x_1\,\overline{x}_0 + f_3\,x_1\,x_0\,,$ where $$f_0 = f(\overline{x}_1, \overline{x}_0),$$ $$f_1 = f(\overline{x}_1, x_0),$$ $$f_2 = f(x_1, \overline{x}_0),$$ and $$f_3 = f(x_1, x_0)$$ are the function values. We observe that we can implement an $$n$$-variable function with a $$2^n{:}1$$ multiplexer by connecting the input signals of the $$n$$ variables to the $$n$$ select inputs, and by driving data input $$D_k$$ with function value $$f_k \in \{0, 1\}.$$ Figure 5.21 illustrates the multiplexer implementations of a 2-variable NAND and the 3-variable majority function. Each data input of the multiplexer represents one row of the corresponding truth table. This observation confirms that multiplexers are universal circuit elements. Figure 5.21: Multiplexer implementations of a 2-variable NAND (left) and the majority function (right). We can reduce the size of the multiplexer to implement a Boolean function, by using an input signal to drive one or more data inputs rather than a select input. The key insight underlying this optimization is a logical equivalence that we can spot in the truth table of the function. For example, in the truth table of the NAND function in Figure 5.21, observe (1) that $$Y=1$$ in the two top rows where $$A=0,$$ and (2) in the two bottom rows where $$A=1$$ we find $$Y=\overline{B}.$$ This suggests that a 2:1 multiplexer suffices, at the expense of one inverter to complement input $$B,$$ see Figure 5.22 on the left. An analogous optimization applies if we use variable $$B$$ to drive the select input. 
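Both the truth-table-on-data-inputs construction and the reduction to a 2:1 multiplexer can be checked by perfect induction. A Python sketch, where the helper `mux` is an illustrative model of a $$2^n{:}1$$ multiplexer:

```python
def mux(sel, data):
    """2^n:1 multiplexer model: steer data input data[sel] to the output."""
    return data[sel]

for A in (0, 1):
    for B in (0, 1):
        # 2-variable NAND with a 4:1 mux: A, B drive selects S1, S0, and the
        # data inputs hold the truth-table column for rows AB = 00, 01, 10, 11
        assert mux(2 * A + B, (1, 1, 1, 0)) == 1 - (A & B)
        # Reduced form with a 2:1 mux and select A: D0 = 1, D1 = B'
        assert mux(A, (1, 1 - B)) == 1 - (A & B)
```

The first assertion implements the NAND truth table directly on the data inputs; the second exploits that $$Y = 1$$ when $$A = 0$$ and $$Y = \overline{B}$$ when $$A = 1.$$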
According to the truth table, in both rows where $$B=0$$ we have $$Y=1,$$ and if $$B=1$$ then $$Y=\overline{A}.$$ Figure 5.22 shows the corresponding 2:1 multiplexer implementation on the right. Figure 5.22: Optimized 2:1 multiplexer implementations of a 2-variable NAND. In general, we can implement an $$n$$-variable function with a $$2^{n-1}{:}1$$ multiplexer by connecting all but one input signal to the $$n-1$$ select inputs and use the remaining input signal in complemented or uncomplemented form to drive a subset of the data inputs. For example, consider the truth table of the majority function in Figure 5.21. Assume that we connect inputs $$A$$ and $$B$$ to the select inputs of a 4:1 multiplexer. Then, the four data inputs correspond to the four combinations $$AB = 00, 01, 10, 11.$$ Each of these combinations covers two rows in the truth table, with variable $$C$$ changing. We find $$Y=0$$ for $$AB=00,$$ $$Y=C$$ for $$AB=01$$ and $$AB=10,$$ and $$Y=1$$ for $$AB=11.$$ Figure 5.23 shows the 4:1 multiplexer implementation on the left. The other two implementations can be deduced by analogous logical arguments from the truth table. Figure 5.23: Optimized 4:1 multiplexer implementations of the majority function. The optimized multiplexer implementations motivate refinements to our methods for representing Boolean functions. We may view the size reduction as an alternative style of truth table compaction, where output column $$Y$$ may contain a complemented or uncomplemented variable in addition to constants 0 or 1. For example, the multiplexer implementations on the left of Figure 5.22 and Figure 5.23 are represented by the compacted truth tables in Figure 5.24. Figure 5.24: Compacted truth tables for the 2-input NAND (left) and the majority function (right) in Figure 5.21. If we translate the compact truth table representation of a Boolean function into a K-map, then the K-map contains variables in the corresponding cells. 
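A compacted truth table translates directly into the data inputs of the reduced multiplexer. The following Python sketch checks the 4:1 implementation of the majority function with data inputs $$0, C, C, 1$$ (helper names are illustrative):

```python
from itertools import product

def mux(sel, data):
    # 2^n:1 multiplexer model: steer data input data[sel] to the output
    return data[sel]

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

for A, B, C in product((0, 1), repeat=3):
    # Compacted truth table: AB = 00 -> 0, 01 -> C, 10 -> C, 11 -> 1
    assert mux(2 * A + B, (0, C, C, 1)) == majority(A, B, C)
```

Each data input carries the value of the output column for one pair of truth-table rows, with $$C$$ as the cell-entered variable.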
Figure 5.25(a) shows the K-map with cell-entered variables for the majority function. Figure 5.25: K-map with cell-entered variables for the majority function (a), and covering procedure in (b) and (c). A cell marked with a variable represents a product term of the variable and the corresponding combination of coordinate variables. For example, the top-right cell in Figure 5.25(a) represents product term $$A\,\overline{B}\,C$$ and the bottom-left cell product term $$\overline{A}\,B\,C.$$ K-maps can have cells with different variables. A complemented variable can be treated as a new variable by renaming. We can extract a cover, although not necessarily a minimal cover, from a K-map with cell-entered variables. The procedure extracts maximal subcubes for each variable: First, replace all variables with 0’s, and extract the prime implicants. Then, for each variable $$x,$$ replace all 1’s with don’t care X’s, replace $$x$$‘s with 1’s, replace all other variables with 0’s, extract the prime implicants, and for each prime implicant form the conjunction with variable $$x.$$ The Boolean function of the K-map is the sum of the prime implicants. In case of the majority function, the first step is shown in Figure 5.25(b). The resulting prime implicant is $$A\,B.$$ For variable $$C,$$ we extract two prime implicants from the modified K-map in Figure 5.25(c), $$A$$ and $$B,$$ both of which we AND with cell-entered variable $$C.$$ Thus, we obtain $$Y(A,B,C) = A\,B + A\,C + B\,C,$$ which happens to be the minimal cover of the majority function. 5.7 Analyze the multiplexer CMOS circuit: 1. Perform a switch analysis of the CMOS circuit. 2. Derive a gate-level schematic for the CMOS circuit. 3. Express the logic function in form of a Boolean equation with a case distinction. 1. Figure 4.35 contains an interactive switch model of the CMOS circuit. 
You can use it to derive the truth table of the circuit:

| $$S$$ | $$D_1$$ | $$D_0$$ | $$Y$$ |
|-------|---------|---------|-------|
| 0     | 0       | 0       | 1     |
| 0     | 0       | 1       | 0     |
| 0     | 1       | 0       | 1     |
| 0     | 1       | 1       | 0     |
| 1     | 0       | 0       | 1     |
| 1     | 0       | 1       | 1     |
| 1     | 1       | 0       | 0     |
| 1     | 1       | 1       | 0     |

2. We employ a K-map to derive the minimal cover of $$Y(S, D_1, D_0).$$ We find that $$Y = \overline{S}\,\overline{D}_0 + S\,\overline{D}_1.$$ The corresponding two-level circuit is shown above. 3. We can extract a case distinction from the K-map or the gate-level circuit: $\begin{split}Y = \begin{cases} \overline{D}_0\,, & \text{if}\ S = 0\,, \\ \overline{D}_1\,, & \text{if}\ S = 1\,. \end{cases}\end{split}$ This case distinction emphasizes that the CMOS circuit implements a 2:1 multiplexer that outputs the complemented data input, also called inverting 2:1 multiplexer. 5.8 Implement XNOR function $$f_9 = \overline{A \oplus B}$$ and implication $$f_{11} = A \Rightarrow B$$ with multiplexers.

| $$A$$ | $$B$$ | $$f_9$$ | $$f_{11}$$ |
|-------|-------|---------|------------|
| 0     | 0     | 1       | 1          |
| 0     | 1     | 0       | 1          |
| 1     | 0     | 0       | 0          |
| 1     | 1     | 1       | 1          |

1. Use one 4:1 multiplexer.
2. Use one 2:1 multiplexer. Note: data inputs may be complemented.
3. Verify (b) by perfect induction.

1. A 4:1 multiplexer enables us to implement 2-variable functions $$f_9(A,B)$$ and $$f_{11}(A,B)$$ by connecting inputs $$A$$ and $$B$$ to the select inputs and applying the $$2^2 = 4$$ function values as constant inputs to the data inputs, as specified in the truth table. 2. We find the optimized 2:1 multiplexer implementations by inspecting the truth table specifications. First, consider $$f_9.$$ We notice that $$f_9 = B$$ in the bottom two rows where $$A=1,$$ and $$f_9 = \overline{B}$$ in the top two rows where $$A=0.$$ We can cast this observation into a Boolean equation: $\begin{split}Y_9(A,B) = \begin{cases} \overline{B}\,, & \text{if}\ A = 0\,, \\ B\,, & \text{if}\ A = 1\,. \end{cases}\end{split}$ This case distinction matches the 2:1 multiplexer if select input $$S = A,$$ data input $$D_0 = \overline{B},$$ and data input $$D_1 = B.$$ The associated schematic is shown below.
Alternatively, we can drive the select input of the 2:1 multiplexer with input signal $$B.$$ Then, the truth table tells us for $$B=0$$ that $$f_9 = \overline{A}$$ and for $$B=1$$ that $$f_9 = A.$$ We implement this circuit by connecting $$\overline{A}$$ to $$D_0$$ and $$A$$ to $$D_1.$$

Second, we derive a 2:1 multiplexer implementation for $$f_{11}.$$ We observe in the top two rows of the truth table, where $$A=0,$$ that $$f_{11} = 1,$$ and in the bottom two rows for $$A=1$$ that $$f_{11} = B.$$ The associated case distinction is: $\begin{split}Y_{11}(A,B) = \begin{cases} 1\,, & \text{if}\ A = 0\,, \\ B\,, & \text{if}\ A = 1\,. \end{cases}\end{split}$ We implement this expression by connecting $$A$$ to the select input, setting data input $$D_0 = 1$$ by tying it to $$V_{DD},$$ and assigning $$D_1 = B$$ by connecting input $$B$$ to $$D_1.$$ The 2:1 multiplexer circuit is shown below. The alternative 2:1 multiplexer implementation connects $$B$$ to the select input, $$\overline{A}$$ to $$D_0,$$ and ties $$D_1$$ to $$V_{DD}.$$

3. We verify our 2:1 multiplexer designs of (b) using perfect induction. To that end, we build up the truth table for each input combination of the 2:1 multiplexer circuit, and compare the result with the given specification. We begin with XNOR function $$f_9.$$ The resulting truth table is:

| $$A$$ | $$B$$ | $$Y_9$$ | $$f_9$$ |
|---|---|---|---|
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 0 |
| 1 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 |

We derive $$Y_9$$ in each row of the truth table from the circuit schematic in subproblem (b). For $$A=0$$ and $$B=0,$$ the 2:1 mux steers data input $$D_0 = \overline{B} = 1$$ to output $$Y_9,$$ such that $$Y_9 = 1,$$ as shown in the first row of the truth table.
In the second row, $$Y_9 = 0,$$ because $$A=0$$ steers $$D_0 = \overline{B} = 0$$ to output $$Y_9.$$ In the third row, we have $$A=1,$$ and the 2:1 mux steers data input $$D_1 = B = 0$$ to output $$Y_9,$$ so that $$Y_9 = 0,$$ and in the fourth row $$A=1$$ steers $$D_1 = B = 1$$ to the output, so that $$Y_9 = 1.$$ We find that the columns of mux output $$Y_9$$ and specification $$f_9$$ are equal, and conclude that $$Y_9 = f_9.$$ This verifies that our multiplexer circuit implements XNOR function $$f_9.$$

The perfect induction for the implication is analogous. We create the truth table below by arguing about output $$Y_{11}$$ of the 2:1 multiplexer circuit in (b) for each input combination.

| $$A$$ | $$B$$ | $$Y_{11}$$ | $$f_{11}$$ |
|---|---|---|---|
| 0 | 0 | 1 | 1 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 |

When input $$A=0,$$ the 2:1 mux steers data input $$D_0 = 1$$ to output $$Y_{11}.$$ Thus, $$Y_{11} = 1$$ independent of $$B,$$ i.e. in both the first and second row of the truth table. If $$A=1$$ and $$B=0,$$ the 2:1 mux steers data input $$D_1 = B = 0$$ to the output, so that $$Y_{11} = 0$$ in the third row. In the fourth row, the 2:1 mux steers data input $$D_1 = B = 1$$ to the output and $$Y_{11} = 1.$$ We conclude that $$Y_{11} = f_{11},$$ because the corresponding columns are equal.

## 5.3. Multilevel Logic¶

When designing a combinational circuit for a given Boolean function, the minimal two-level form rarely represents the best circuit. Often, circuits with multiple stages exist that minimize the cost function and may even have smaller delays than their equivalent two-level circuits. We call a combinational circuit with more than two levels of logic gates on its critical path a multilevel circuit.
For example, consider the symmetric functions: $\begin{eqnarray*} S_{1,2}(w, x, y, z) &=& \overline{w}\,x\,\overline{y} + w\,\overline{x}\,\overline{y} + \overline{w}\,\overline{x}\,z + w\,\overline{y}\,\overline{z} + \overline{w}\,y\,\overline{z} + \overline{x}\,y\,\overline{z}\,, \\ S_{3,4,5}(v, w, x, y, z) &=& v w x + v w y + v w z + v x y + v x z + v y z + w x y + w x z + w y z + x y z\,. \end{eqnarray*}$ The costs of these minimal two-level forms are $$\mathcal{C}(S_{1,2}) = 24$$ and $$\mathcal{C}(S_{3,4,5}) = 40.$$ The equivalent multilevel circuits shown in Figure 5.26 have significantly smaller costs $$\mathcal{C}(S_{1,2}) = 12$$ and $$\mathcal{C}(S_{3,4,5}) = 18.$$ Figure 5.26: Don Knuth’s [DEK4A] multilevel circuits for symmetric functions $$S_{1,2}(w, x, y, z)$$ (left) and $$S_{3,4,5}(v, w, x, y, z)$$ (right). Finding minimal multilevel circuits like those in Figure 5.26 remains black magic, primarily due to the computational complexity of the minimization problem. In this section, we discuss the design of multilevel combinational circuits. In particular, we consider tree-structured circuits and then algebraically factored circuits. ### 5.3.1. Tree-Structured Circuits¶ A circuit with tree structure has multiple inputs, one output, and no branches. Each gate output of the circuit drives exactly one gate input. Combinational circuits that implement SOP and POS forms of Boolean functions are trees, except for the inputs which may branch to drive multiple inputs of the circuit. The multilevel circuit on the left of Figure 5.26 has tree structure, whereas the circuit on the right does not because it contains branches. In its purest form, a tree-structured combinational circuit is a complete k-ary tree of $$k$$-input gates, with $$n = k^l$$ inputs to the gates at the leaves of the tree, where $$l$$ is the number of levels or height of the tree, and the output of the circuit is the output of the gate at the root of the tree. 
The tree has $$\sum_{i=0}^{l-1} k^i = (k^l-1)/(k-1)$$ gates. For example, the binary AND-tree below with $$k=2$$ inputs per gate and $$l=3$$ levels has $$n = 8$$ inputs and $$7$$ gates. If we have a supply of 2-input AND gates and need an 8-input AND gate, we may view the task at hand as a technology mapping problem. The solution to this problem can be found in our discussion of tree-structured logic gates. The prerequisite for the design of a tree-structured gate is the associativity of the logic function. If we can prove the associativity of a binary operator, we can design a tree-structured gate.

Example 5.12: Tree-Structured Parity Circuit

We wish to implement a circuit for the odd parity function with $$n=8$$ inputs, given an unlimited supply of 2-input XOR gates. The 8-variable parity function $P(x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7) = x_0 \oplus x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \oplus x_6 \oplus x_7$ equals 1 if the number of 1-inputs is odd. Two-level minimization of $$P$$ is ineffective. However, if the binary XOR operation is associative, we can implement a tree-structured circuit. To find out whether binary XOR is associative, we wish to prove that $(x \oplus y) \oplus z\ =\ x \oplus (y \oplus z)\,.$ From our discussion of the theorems of Boolean algebra, we know that we may use perfect induction or Boolean algebra. The proof by perfect induction is straightforward if we recall that $$x \oplus y$$ equals 1 if one of $$x$$ or $$y$$ equals 1 but not both:

| $$x$$ | $$y$$ | $$z$$ | $$x \oplus y$$ | $$(x \oplus y) \oplus z$$ | $$y \oplus z$$ | $$x \oplus (y \oplus z)$$ |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 1 | 1 | 1 |
| 0 | 1 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 | 0 | 0 | 0 |
| 1 | 0 | 0 | 1 | 1 | 0 | 1 |
| 1 | 0 | 1 | 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 1 | 0 | 1 |

Since the truth table columns for $$(x \oplus y) \oplus z$$ and $$x \oplus (y \oplus z)$$ are equal, we conclude that the XOR operation is associative. Our XOR tree has $$n=8$$ inputs.
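Perfect induction lends itself to automation. The Python sketch below is illustrative, not part of the text: it checks the associativity table exhaustively and evaluates a balanced 8-input XOR tree, confirming the count of 7 two-input gates stated for the AND-tree above.

```python
from itertools import product

# Perfect induction: binary XOR is associative.
for x, y, z in product((0, 1), repeat=3):
    assert (x ^ y) ^ z == x ^ (y ^ z)

def xor_tree(bits):
    """Evaluate a balanced XOR tree; return (output, number of 2-input gates)."""
    if len(bits) == 1:
        return bits[0], 0
    mid = len(bits) // 2
    left, gates_l = xor_tree(bits[:mid])
    right, gates_r = xor_tree(bits[mid:])
    return left ^ right, gates_l + gates_r + 1

# The 8-input tree computes odd parity and uses 7 gates.
for bits in product((0, 1), repeat=8):
    out, gates = xor_tree(list(bits))
    assert out == sum(bits) % 2
    assert gates == 7
```

Associativity is what licenses the recursive split in `xor_tree`: any grouping of the inputs yields the same output, so we are free to choose the balanced grouping with the smallest depth.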
Binary XOR gates have arity $$k = 2.$$ Therefore, the number of levels of the tree is $$l = \log_k n = \log_2 8 = 3,$$ and we need $$n-1 = 7$$ XOR gates to implement the tree. Not surprisingly, the 8-input XOR gate has the same tree structure as the 8-input AND gate above with the AND gates replaced by XOR gates.

Tree-structured gates like the XOR gate of Example 5.12 are pleasantly simple, because each tree node uses the same gate and the regular structure of a tree is easy to remember. If the number of inputs $$n$$ is not a power of the arity $$k$$ of the $$k$$-input gates, we sacrifice the complete tree and replace it with a balanced tree. In a complete tree all leaves have the same depth, whereas in a balanced tree the depths of the leaves differ by at most 1. Figure 5.27 shows two balanced trees for XOR gates. The construction of an $$n$$-input tree is easy if you first draw a complete tree with a number of inputs that is the largest power of $$k$$ less than $$n,$$ and then add gates in the next level of the tree until the total number of inputs is $$n.$$ The 9-input XOR tree requires adding one gate in level 3, and for the 11-input XOR tree we need three gates in level 3 of the tree.

Figure 5.27: Balanced tree circuits for 9-input XOR gate (left) and 11-input XOR gate (right).

The design of more complex circuits can benefit from the tree structure, if we view the tree as a substructure of a circuit or, alternatively, superimpose additional connectivity on top of the tree. The multiplexer tree is an example of a multilevel circuit with tree structure. Here, the inputs drive not only the data inputs of the multiplexers at the leaves of the tree, but also the select inputs at the inner nodes of the tree.

### 5.3.2. Algebraically Factored Circuits¶

A factored form of a Boolean function is a Boolean expression that represents a tree-structured circuit, whose internal nodes are AND or OR gates and each leaf is a literal.
Factoring is the process of transforming an SOP form into a factored form. For example, given 5-variable function $$f:\mathcal{B}^5 \rightarrow \mathcal{B}$$ in SOP form $$\qquad f(v, w, x, y, z)\ =\ v x + v y + w x + w y + z\,,$$ an equivalent, factored form of $$f$$ is $$\qquad f(v, w, x, y, z)\ =\ (v + w) (x + y) + z\,.$$ One form of $$f$$ translates into the other by applying the distributivity theorem. The corresponding combinational circuits are shown in Figure 5.28. The factored form is smaller than the SOP form, because it has fewer gates and fewer gate inputs. Figure 5.28: Two-level circuit of SOP form (left) and equivalent multilevel circuit of factored form (right). Factoring is a powerful method to derive multilevel circuits that are smaller than their two-level counterparts. We present one particular style, known as algebraic factoring, with the goal to minimize the cost of the factored form. Although this optimization problem is too complex in general to solve exactly, good heuristics lead to acceptable solutions. Given a Boolean expression $$f$$ in SOP form, consider the algebraic division $$f/d.$$ Boolean expression $$d$$ is an algebraic divisor of dividend $$f$$ if there exists a quotient $$q \ne 0$$ and a remainder $$r$$ such that $f = q \cdot d + r\,,$ and $$q$$ and $$d$$ have no variables in common. The latter constraint guarantees that the Boolean expressions behave like polynomials of real numbers, if we interpret a logical OR as an algebraic addition and a logical AND as an algebraic multiplication. Hence the name algebraic factoring. We say that $$d$$ is a factor of $$f$$ if $$r = 0.$$ Example 5.13: Algebraic Factoring We investigate the factoring of Boolean function $f(v, w, x, y, z)\ =\ v x + v y + w x + w y + z\,,$ given in SOP form. 
With the distributivity theorem in mind, observe that variable $$v$$ is a divisor of $$f,$$ because we can rewrite $$f$$ as $$\qquad f = (x + y) \cdot v + (w x + w y + z)\,.$$ The quotient is $$q = x + y$$ and the remainder is $$r = w x + w y + z.$$ Variable $$w$$ is another divisor of $$f,$$ also with quotient $$q = x + y$$ but with a different remainder $$r = v x + v y + z,$$ because $$\qquad f = (x + y) \cdot w + (v x + v y + z)\,.$$ Analogously, we find that variables $$x$$ and $$y$$ are divisors of $$f,$$ both with quotient $$q = v + w$$ but with different remainders. For divisor $$d=x$$ we find $$\qquad f = (v + w) \cdot x + (v y + w y + z)\,,$$ and for $$d=y$$: $$\qquad f= (v + w) \cdot y + (v x + w x + z)\,.$$ Variable $$z$$ is a divisor with trivial quotient $$q=1$$ and remainder $$r = v x + v y + w x + w y$$: $$\qquad f = 1 \cdot z + (v x + v y + w x + w y)\,.$$ None of the variables is a factor of $$f,$$ because each of the divisions has a nonzero remainder. The sums $$v+w$$ and $$x+y$$ are also divisors of $$f.$$ The quotient associated with divisor $$v+w$$ is $$q=x+y,$$ and the remainder is $$r = z,$$ because $$\qquad f = (v+w) \cdot (x+y) + z\,.$$ The quotient associated with divisor $$x+y$$ is $$q=v+w,$$ because $$\qquad f = (x+y) \cdot (v+w) + z\,.$$ Since the logical AND is commutative, the distinction between quotient and divisor is merely a matter of perspective. As such, the meanings of $$d$$ as divisor and $$q$$ as quotient are interchangeable. Example 5.14: Non-Algebraic Factoring We wish to factor the SOP form of Boolean function $g(x,y,z) = \overline{x} y + x z\,.$ The distributivity theorem does not apply, since the implicants have no literals in common. Therefore, $$g$$ has no obvious factors and no nontrivial divisors. 
However, the consensus theorem enables us to add implicant $$y z$$ by consensus on $$x,$$ such that $g(x,y,z) = \overline{x} y + x z + y z\,.$ This SOP form can be factored into $g(x,y,z) = (x + y) (\overline{x} + z)\,.$ Both sums $$x+y$$ and $$\overline{x} + z$$ are factors of $$g,$$ because the remainder is zero. However, the factors are not algebraic, because variable $$x$$ appears in each of them. We say that the factors are Boolean rather than algebraic, because we need to invoke theorems specific to Boolean algebra to prove that the factored form equals the SOP form. In this example, we need the Boolean complementation theorem, $$x\,\overline{x} = 0$$: $\begin{eqnarray*} g(x,y,z) &= &(x + y) (\overline{x} + z) & \\ &= &x\,\overline{x} + x z + y \overline{x} + y z\qquad & \text{by distributivity} \\ &= &\overline{x} y + x z + y z & \text{by complementation}\,. \end{eqnarray*}$ Note that $$g$$ has yet another factor, $$y + z.$$ This factor is Boolean too, because it shares variable $$y$$ with factor $$x + y$$ and variable $$z$$ with factor $$\overline{x} + z.$$ Using Boolean algebra, we can prove that the factorization into three factors equals the SOP form: $\begin{eqnarray*} g(x,y,z) &= &(x + y) (\overline{x} + z) (y + z) & \\ &= &(\overline{x} y + x z + y z) (y + z) & \text{by complementation} \\ &= &\overline{x} y y + x z y + y z y + \overline{x} y z + x z z + y z z\qquad & \text{by distributivity} \\ &= &\overline{x} y + x z y + y z + \overline{x} y z + x z + y z & \text{by idempotence} \\ &= &\overline{x} y + (x + \overline{x}) y z + x z + y z & \text{by idempotence} \\ &= &\overline{x} y + x z + y z & \text{by dual complementation and idempotence}\,. \end{eqnarray*}$ This proof relies on the idempotence theorem, $$x \cdot x = x,$$ which is specific to Boolean algebra. 
In the algebra of real numbers, $$x \cdot x = x^2,$$ and $$x^2 \ne x$$ for $$x \ne 0,1.$$ Finding divisors of a Boolean expression, like factor $$x + y$$ of $$g = \overline{x} y + x z,$$ can be done in principle by translating the SOP form into the POS form. Algebraic factoring avoids the POS form by restricting the factors of a Boolean expression to algebraic factors, where divisor and quotient have no variables in common. The key advantage of this restriction is that it simplifies the identification of divisors. The problem of factoring multilevel circuits is to find one or more algebraic divisors of SOP form $$f$$ such that the cost of the multilevel circuit is minimized. In the following, we restrict ourselves to divisors that are products of literals. For divisors in product form, the quotient assumes SOP form. Below, we discuss more complex divisors in SOP form, i.e. sums of at least two products, as part of our study of multilevel multioutput functions. To maintain the algebraic nature of the factorization, we restrict the SOP form of $$f$$ to products where no product covers another. Otherwise, we can simplify the SOP form by virtue of the covering theorem. Furthermore, we treat complemented and uncomplemented variables of $$f$$ as distinct literals. We may transform $$f$$ into an SOP form with uncomplemented literals only by replacing the complemented variables with new variable names. Experience shows that a good divisor for the cost reduction of an SOP form $$f$$ is a large product, i.e. a conjunction of many literals. The largest product that divides $$f$$ is a product that ceases to be a nontrivial divisor of $$f$$ if we extend the product by one more literal. 
Here, a nontrivial divisor $$d \ne 1$$ is associated with a nontrivial quotient $$q \ne 1.$$ For example, $$d = x y$$ is the largest product that is a divisor of $$f = v x y + w x y$$ with nontrivial quotient $$q = v + w.$$ Extending product $$x y$$ with literal $$v$$ yields a larger divisor $$d = v x y,$$ but with trivial quotient $$q = 1.$$ The extension with the only remaining variable $$w$$ yields $$q=1$$ too. Variable $$x$$ is also a divisor of $$f$$ with quotient $$v y + w y.$$ However, $$x$$ is not the largest divisor because extending product $$x$$ with literal $$y$$ yields nontrivial divisor $$x y.$$ If $$f$$ is a single product, then $$f$$ is also its own largest divisor but with trivial quotient $$1.$$ For example, $$x y z$$ is the largest divisor of $$g = x y z,$$ with trivial quotient. The unique nontrivial quotient associated with the largest algebraic divisor of an SOP form is called the kernel, and the largest nontrivial divisor itself is the associated cokernel. An SOP form must contain at least two products to have a pair of kernel and cokernel. In the preceding examples, $$g = x y z$$ has no kernel, whereas $$f = v x y + w x y$$ has kernel $$v + w$$ and cokernel $$x y.$$ We can identify all pairs of kernels and cokernels of an SOP form $$f$$ by means of rectangle covering, which resembles the K-map method for two-level minimization. The rectangle covering method is based on the product-literal matrix of a given SOP form. This matrix has one row for each product of $$f$$ and one column for each literal of $$f.$$ A matrix element is 1 if the literal associated with its column appears in the product associated with its row, otherwise it is 0. For example, SOP form $f(w,x,y,z) = w x y + w x z + x y z$ has the product-literal matrix below. For clarity, we omit the 0 elements of the matrix. The product-literal matrix exposes products with common literals. If a column contains 1’s, then the associated literal appears in as many products as there are 1’s in the column.
For example, literal $$w$$ appears in two products and literal $$x$$ in all three. A literal is shared by multiple products if the associated column has two or more 1’s. When multiple literals appear in a product of the SOP form, they constitute a product of literals. In the product-literal matrix, a product of literals appears in the product of the SOP form if the literals have 1’s in the row of the SOP product. Rectangle covering finds the kernels and cokernels of an SOP form as follows: Cover rows and columns of 1’s in the product-literal matrix, not necessarily of contiguous 1’s, that form the largest rectangle with at least two rows. The cokernel is the product of the literals associated with the columns of the rectangle. The kernel is the restriction of the sum of the products of the rows in the literals. The largest rectangles of $$f = w x y + w x z + x y z$$ are shown in the product-literal matrix below. The blue rectangle covers the columns of literals $$w$$ and $$x$$ and the rows of products $$w x y$$ and $$w x z.$$ The extension to columns $$y$$ or $$z$$ would include 0’s into the rectangle. Similarly, including row $$x y z$$ would include a 0 in the column of literal $$w.$$ Since we cannot increase the size of the blue rectangle without covering 0’s, the blue rectangle is a largest rectangle covering literals $$w$$ and $$x.$$ Analogously, the red and green rectangles are largest rectangles, except that they do not cover contiguous rows or columns. From a largest rectangle, we deduce the cokernel as the product of the literals, and derive the kernel by forming the sum of the row products and then the restriction of the sum in the literals.
For example, the blue rectangle identifies cokernel $$w x$$ of $$f.$$ The sum of the row products is $$w x y + w x z$$ and the kernel is the restriction of the sum in the literals of the cokernel $$(w x y + w x z)|_{w, x}\ =\ y + z.$$ If we interpret the cokernel as largest divisor and the kernel as quotient, then we can factor $$f$$ such that $$f = (y + z) w x + x y z.$$ The factorizations of $$f$$ corresponding to the three largest rectangles in the product-literal matrix are:

| cokernel | kernel | factorization |
|---|---|---|
| $$w x$$ | $$y + z$$ | $$(y + z) w x + x y z$$ |
| $$x y$$ | $$w + z$$ | $$(w + z) x y + w x z$$ |
| $$x z$$ | $$w + y$$ | $$(w + y) x z + w x y$$ |

There is one more largest rectangle, but it does not cover as many 1’s as the three largest rectangles above. The rectangle consists of the entire column of literal $$x,$$ which covers all three rows. The rectangle cannot be enlarged because no other column covers all rows. The kernel of cokernel $$x$$ is $$wy + wz + yz.$$ We also discover cokernel $$x$$ with a second round of rectangle covering, starting with any of the three factorizations above. We derive the factorization of a factorization by renaming the kernel. For example, introduce a new name $$v = y + z.$$ Then, substitute $$v$$ for the kernel in the factorization to obtain $$f = v w x + x y z,$$ and treat $$v$$ as a literal.
The resulting product-literal matrix is:

We find only one largest rectangle that covers two rows, which is the rectangle consisting of column $$x.$$ The kernel of cokernel $$x$$ is the restriction of the sum of the row products in the literal of the cokernel $$(v w x + x y z)|_x = v w + y z.$$ Thus, we have found the factorization $$f = (v w + y z) x,$$ or after expanding variable $$v$$: $f(w,x,y,z)\ =\ ((y + z) w + y z) x\,.$ No further rounds of rectangle covering are necessary, because factorization $$(v w + y z) x$$ has reduced $$f$$ to a single product $$u x,$$ after introducing $$u$$ as a new name for kernel $$u = v w + y z.$$ Note that we obtain similar factored forms if we start the second round of rectangle covering with the other two factorizations, $$f = ((w + z) y + w z) x$$ and $$f = ((w + y) z + w y) x.$$ The corresponding multilevel circuits have the same topology, as shown in Figure 5.29 on the right.

Figure 5.29: Two-level circuit of SOP form (left) and equivalent multilevel circuit of factored form (right).

Factoring has reduced the cost from 12 units for the SOP form to 10 units for the factored form. The cost of the factored form can be deduced from the factored expressions by including the new variable names, here $$u$$ and $$v$$: $\begin{eqnarray*} v &=& y + z \\ u &=& v w + y z \\ f &=& u x\,. \end{eqnarray*}$ The cost of $$f$$ is the number of literals plus the cost of the next level: $$\mathcal{C}(f) = 2 + \mathcal{C}(u),$$ where $$\mathcal{C}(u) = 4 + 2 + \mathcal{C}(v),$$ where $$\mathcal{C}(v) = 2,$$ for a total of $$\mathcal{C}(f) = 2 + 6 + 2 = 10.$$ Unlike two-level minimization, there is no guarantee that the multilevel circuit of factored form $$f$$ minimizes the cost $$\mathcal{C}(f).$$ Picking the largest rectangles in each round of rectangle covering is a greedy heuristic that succeeds in many cases but not always.
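Since factoring only rewrites the expression, the factored form must agree with the SOP form on every input. A brute-force Python check over all 16 assignments (an illustrative sketch, not part of the text) confirms the equivalence:

```python
from itertools import product

def f_sop(w, x, y, z):
    """Original SOP form f = w x y + w x z + x y z."""
    return w & x & y | w & x & z | x & y & z

def f_factored(w, x, y, z):
    """Factored form f = ((y + z) w + y z) x found by rectangle covering."""
    return ((y | z) & w | y & z) & x

# Exhaustive equivalence check over B^4.
for w, x, y, z in product((0, 1), repeat=4):
    assert f_sop(w, x, y, z) == f_factored(w, x, y, z)
```

Such an exhaustive check is feasible for small variable counts and is the software analogue of perfect induction.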
Example 5.15: Multilevel Minimization by Factoring

We apply the rectangle covering method to derive multilevel circuits for the symmetric functions $$S_{1,2}(w, x, y, z)$$ and $$S_{3,4,5}(v, w, x, y, z).$$ First, we consider the 4-variable function $S_{1,2}(w, x, y, z) = \overline{w}\,x\,\overline{y} + w\,\overline{x}\,\overline{y} + \overline{w}\,\overline{x}\,z + w\,\overline{y}\,\overline{z} + \overline{w}\,y\,\overline{z} + \overline{x}\,y\,\overline{z}\,,$ given in minimal SOP form. The product-literal matrix exposes two largest $$2\times 2$$ rectangles. The blue rectangle represents cokernel $$w \overline{y}$$ and kernel $$\overline{x} + \overline{z},$$ and the red rectangle represents cokernel $$y \overline{z}$$ and kernel $$\overline{w} + \overline{x}.$$ With new names for the kernels, $$a = \overline{x} + \overline{z}$$ and $$b = \overline{w} + \overline{x},$$ we obtain the factorization $S_{1,2}(a,b,w,x,y,z) = a\,w\,\overline{y} + b\,y\,\overline{z} + \overline{w}\,x\,\overline{y} + \overline{w}\,\overline{x}\,z\,.$ The product-literal matrix of this factorization has two largest $$2\times 1$$ rectangles associated with literals $$\overline{y}$$ and $$\overline{w}.$$ Since the rectangles share a row product, the corresponding factorizations are not independent of each other. Hence, we pick cokernel $$\overline{w}.$$ The corresponding kernel is $$x \overline{y} + \overline{x} z.$$ This pair of cokernel and kernel yields factorization $S_{1,2}(a,b,w,x,y,z) = a\,w\,\overline{y} + b\,y\,\overline{z} + \overline{w} (\,x\,\overline{y} + \overline{x}\,z)\,.$ The cost of the factored form is $$\mathcal{C}(S_{1,2}) = ((3 + \mathcal{C}(a)) + (3 + \mathcal{C}(b)) + 8) + 3,$$ where $$\mathcal{C}(a) = \mathcal{C}(b) = 2,$$ for a total of $$\mathcal{C}(S_{1,2}) = 21.$$ Compared to 24 units of the minimal two-level form, the factored form saves 3 units of cost.
However, the cost is still relatively large compared to Knuth’s multilevel circuit [DEK4A] on the left of Figure 5.26 with a cost of just 12 units.

Second, we derive a factored form for 5-variable function $S_{3,4,5}(v, w, x, y, z) = v w x + v w y + v w z + v x y + v x z + v y z + w x y + w x z + w y z + x y z\,.$ The product-literal matrix of $$S_{3,4,5}$$ (round 1) contains ten largest $$3\times 2$$ rectangles. Only two of the ten rectangles, the blue and the red one, are shown. We pick the blue and red rectangles because they do not share any rows. Furthermore, we pick the green and orange $$2\times 2$$ subrectangles of the largest $$3\times 2$$ rectangles. Together, these four rectangles cover each row of the product-literal matrix exactly once. Therefore, we can factor $$S_{3,4,5}$$ w.r.t. all kernels in a single round. The cokernel of the blue rectangle is $$v w$$ and the kernel $$x + y + z.$$ For the red rectangle cokernel and kernel are $$y z$$ and $$v + w + x,$$ for the green rectangle $$v x$$ and $$y + z,$$ and for the orange rectangle $$w x$$ and $$y + z.$$ We rename the kernels such that $$a = x + y + z,$$ $$b = v + w + x,$$ and $$c = y + z,$$ and obtain the factorization $S_{3,4,5}(a, b, c, v, w, x, y, z) = a v w + b y z + c v x + c w x\,.$ We use this factored form for a second round of rectangle covering. We find cokernel $$c x$$ and kernel $$v + w.$$ The resulting factorization in expanded form is $S_{3,4,5}(v, w, x, y, z) = (x + y + z) v w + (v + w + x) y z + (v + w) (y + z) x \,.$ Notice that kernel $$c = y + z$$ is a summand of kernel $$a$$ and kernel $$d = v + w$$ is a summand of kernel $$b.$$ We discuss the identification of shared kernels as part of our study of multioutput functions below.
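The expanded factorization can again be validated against the two-level form by exhaustive simulation. The Python sketch below (illustrative, not part of the text) also confirms that both forms realize the symmetric function that is 1 when at least three of the five inputs are 1:

```python
from itertools import combinations, product

def s345_sop(v, w, x, y, z):
    """Minimal SOP form: the sum of all ten 3-variable products."""
    return int(any(a & b & c for a, b, c in combinations((v, w, x, y, z), 3)))

def s345_factored(v, w, x, y, z):
    """Expanded factorization (x+y+z)vw + (v+w+x)yz + (v+w)(y+z)x."""
    return int((x | y | z) & v & w | (v | w | x) & y & z
               | (v | w) & (y | z) & x)

# Exhaustive check over B^5, including the symmetry property.
for bits in product((0, 1), repeat=5):
    assert s345_sop(*bits) == s345_factored(*bits)
    assert s345_sop(*bits) == int(sum(bits) >= 3)  # 1 iff at least 3 ones
```

The last assertion makes the symmetry explicit: the output depends only on the number of 1-inputs, not on which inputs carry the 1’s.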
We observe that we can reduce the cost of $$S_{3,4,5}$$ further by sharing kernels $$c$$ and $$d$$ through algebraic rewriting: $\begin{eqnarray*} c &=& y + z \\ a &=& x + c \\ d &=& v + w \\ b &=& d + x \\ S_{3,4,5}(a, b, c, d, v, w, x, y, z) &=& a v w + b y z + d c x\,. \end{eqnarray*}$ Sharing kernels $$c$$ and $$d$$ yields a branching circuit that requires explicit naming of the associated common subexpressions. In contrast, naming common subexpressions is not necessary to cast a tree-structured circuit like $$S_{1,2}$$ into factored form, because each kernel expression occurs only once. The cost of the factored form of $$S_{3,4,5}$$ is $$\mathcal{C}(S_{3,4,5}) = 20.$$ This cost comes pretty close to Knuth’s factored form [DEK4A] on the right of Figure 5.26 with a cost of 18 units, in particular when compared to the cost of 40 units of the two-level form of $$S_{3,4,5}.$$

## 5.4. Multioutput Functions¶

A combinational circuit with $$n$$ inputs and $$m$$ outputs implements a multioutput function $$f:\mathcal{B}^n \rightarrow \mathcal{B}^m$$ if $$m > 1.$$ We may write a function with multiple outputs in vector notation as a row vector: $f(x_0, x_1, \ldots, x_{n-1}) = (f_0(x_0, x_1, \ldots, x_{n-1}),\ f_1(x_0, x_1, \ldots, x_{n-1}), \ldots,\ f_{m-1}(x_0, x_1, \ldots, x_{n-1}))\,.$ For example, the seven-segment decoder implements a multioutput function with $$n=4$$ and $$m=7.$$ Multiple functions present the opportunity of sharing combinational subcircuits if the functions have common subexpressions. As a result, we may save logic gates at the expense of introducing branches that potentially complicate the task of minimizing circuit delay.

### 5.4.1. Two-Level Multioutput Functions¶

Assume our goal is to design a multioutput function where each function shall be represented by a two-level AND-OR circuit. If we derive each of the functions by means of two-level minimization, an obvious opportunity for saving AND gates is to share prime implicants.
As a concrete example, consider multioutput function $$f(x,y,z) = (f_0(x,y,z),\ f_1(x,y,z)),$$ where $\begin{eqnarray*} f_0(x,y,z) &=& \overline{x}\,\overline{y}\,z + \overline{x}\,y\,\overline{z} + \overline{x}\,y\,z + x\,y\,\overline{z}\,, \\ f_1(x,y,z) &=& \overline{x}\,y\,\overline{z} + x\,\overline{y}\,\overline{z} + x\,y\,\overline{z}\,. \end{eqnarray*}$ Figure 5.30 shows the K-map for $$f_0$$ on the left and for $$f_1$$ in the middle. If we minimize both functions independently of each other, we obtain the minimal covers $\begin{eqnarray*} f_0(x,y,z) &=& \overline{x}\,z + y\,\overline{z}\,, \\ f_1(x,y,z) &=& x\,\overline{z} + y\,\overline{z}\,. \end{eqnarray*}$ The total cost of the multioutput circuit is the sum of the costs of each function, $$\mathcal{C}(f_0) + \mathcal{C}(f_1) = 6 + 6 = 12.$$ Observe that we can reduce the cost of the circuit because the functions have prime implicant $$y\,\overline{z}$$ in common.

Figure 5.30: K-map minimization of multioutput circuit with shared prime implicant.

If two functions have a common prime implicant, we can share the prime implicant by connecting the output of the AND gate to the OR gates of both functions. Figure 5.31 contrasts the two-level circuits without and with sharing of prime implicant $$y\,\overline{z}.$$ Sharing saves one AND gate. To account for the savings, we determine the cost of a two-level multioutput function as the total number of AND gate inputs plus the total number of OR gate inputs. Then, the cost of the circuit without sharing is the sum of the costs of each function as before. The cost of the circuit with sharing is $$\mathcal{C}(f_0,f_1) = 6 + 4 = 10,$$ which is two units smaller than without sharing.

Figure 5.31: Multioutput circuit without sharing (left) and with shared prime implicant (right).

The key to minimizing the cost of a two-level multioutput function is to identify shared prime implicants.
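The minimal covers above and the shared product term can be verified mechanically. In the illustrative Python sketch below (not part of the text), `1 - x` stands in for the complement $$\overline{x}$$:

```python
from itertools import product

def f0(x, y, z):
    """Minimal cover of f0: x'z + yz'."""
    return (1 - x) & z | y & (1 - z)

def f1(x, y, z):
    """Minimal cover of f1: xz' + yz'."""
    return x & (1 - z) | y & (1 - z)

for x, y, z in product((0, 1), repeat=3):
    # The covers match the original minterm lists.
    assert f0(x, y, z) == ((1-x) & (1-y) & z | (1-x) & y & (1-z)
                           | (1-x) & y & z | x & y & (1-z))
    assert f1(x, y, z) == ((1-x) & y & (1-z) | x & (1-y) & (1-z)
                           | x & y & (1-z))
    # The common product y z' implies both functions.
    if y & (1 - z):
        assert f0(x, y, z) == 1 and f1(x, y, z) == 1
```

The final assertion captures why the AND gate for $$y\,\overline{z}$$ can drive both OR gates: whenever the shared product is 1, both outputs are 1.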
Observe in Figure 5.30 that the shared, red prime implicant covers those cells that are 1-cells in both K-maps for $$f_0$$ and $$f_1.$$ The 1-cells shared among functions $$f_0$$ and $$f_1$$ are the 1-cells of their conjunction $$f_0\cdot f_1,$$ i.e. the intersection of the sets of 1-cells, as illustrated in Figure 5.30 on the right. In general, a shared prime implicant of two functions $$f_0$$ and $$f_1$$ is a prime implicant of $$f_0\cdot f_1.$$ If the number of functions is larger than two, then sharing may be possible among pairs of functions, triples of functions, etc. For $$m$$ functions, there are $$\binom{m}{k}$$ combinations of pairs for $$k=2,$$ triples for $$k=3,$$ and so on, where $$2 \le k \le m.$$ Figure 5.32 illustrates an example with $$m=3$$ functions $$f_0,$$ $$f_1,$$ and $f_2(x,y,z) = \overline{x}\,y\,\overline{z} + \overline{x}\,y\,z + x\,y\,z\,.$ There are $$\binom{3}{2}=3$$ pairs of functions with shared prime implicants, $$f_0 \cdot f_1,$$ $$f_0 \cdot f_2,$$ and $$f_1 \cdot f_2.$$ Furthermore, we have $$\binom{3}{3}=1$$ choice for a triple of three functions $$f_0 \cdot f_1 \cdot f_2.$$ We find that the three functions have shared prime implicant $$\overline{x}\,y\,\overline{z}$$.

Figure 5.32: K-map minimization of multioutput circuit with shared prime implicants.

We can reduce the cost of function pair $$(f_0, f_1)$$ by sharing prime implicant $$y\,\overline{z},$$ as shown in Figure 5.31. Sharing is also effective between functions $$f_1$$ and $$f_2,$$ although these functions do not share a common prime implicant, but only a subcube of their prime implicants. As shown in Figure 5.32, the shared prime implicant of $$f_1 \cdot f_2$$ is the orange minterm $$\overline{x}\,y\,\overline{z}.$$ Although the shared prime implicant requires a 3-input AND gate, sharing nevertheless reduces the size of the circuit because the 3-input AND gate replaces two 2-input AND gates.
As a result, the cost of the multioutput circuit without sharing on the left in Figure 5.33, $$\mathcal{C}(f_1) + \mathcal{C}(f_2) = 6 + 6 = 12,$$ is by one unit larger than the cost of the circuit with sharing, $$\mathcal{C}(f_1,f_2) = 7 + 4 = 11.$$

Figure 5.33: Multioutput circuit without sharing (left) and with a shared prime implicant that is a subcube of prime implicants of $$f_1$$ and $$f_2$$ (right).

Function pair $$(f_0, f_2)$$ shares prime implicant $$\overline{x}\,y,$$ the green prime implicant in Figure 5.32. Although this prime implicant is essential for $$f_2,$$ it is not for $$f_0.$$ There exists no selection of the four prime implicants of $$f_0$$ and $$f_2$$ or their subcubes, such that sharing $$\overline{x}\,y$$ would reduce the cost of the multioutput circuit below that of the separately minimized circuits, $$\mathcal{C}(f_0) + \mathcal{C}(f_2) = 6 + 6 = 12.$$ Before solving the multioutput minimization problem of multioutput function $$(f_0, f_1, f_2),$$ we first establish five useful facts:

1. Shared prime implicants are subcubes of the prime implicants of the multioutput function. For example, the orange shared prime implicant $$\overline{x}\,y\,\overline{z}$$ is a 0-cube, and is a subcube of two 1-cubes, the red prime implicant $$y\,\overline{z}$$ common to $$f_0$$ and $$f_1$$ and the green prime implicant $$\overline{x}\,y$$ common to $$f_0$$ and $$f_2.$$ The red and the green shared prime implicants are not strictly subcubes but are equal to the prime implicants of the corresponding functions.

2. The shared prime implicants of function $$f_i$$ are the maximal subcubes shared by $$f_i$$ with one or more functions.
For example, function $$f_0$$ has the orange shared prime implicant $$\overline{x}\,y\,\overline{z},$$ because it is a maximal subcube of $$f_0\cdot f_1\cdot f_2.$$ Function $$f_1$$ does not have the green shared prime implicant $$\overline{x}\,y,$$ because it is not a shared prime implicant of $$f_0\cdot f_1,$$ $$f_1\cdot f_2,$$ or $$f_0\cdot f_1\cdot f_2.$$

3. A prime implicant of $$f_i$$ is essential for $$f_i$$ if it is the only prime implicant to cover a 1-cell and no other shared prime implicant of $$f_i$$ covers this 1-cell. For example, the red prime implicant $$y\,\overline{z}$$ of $$f_0$$ is essential for $$f_0,$$ because it is the only one to cover 1-cell $$x\,y\,\overline{z},$$ and the only other shared prime implicant of $$f_0,$$ the orange prime implicant $$\overline{x}\,y\,\overline{z},$$ does not cover 1-cell $$x\,y\,\overline{z}.$$ The red prime implicant is not essential for $$f_1,$$ however. Although it is the only prime implicant to cover 1-cell $$\overline{x}\,y\,\overline{z},$$ this 1-cell is also covered by the orange shared prime implicant of $$f_1.$$ Therefore, we have the choice between the red and orange implicants to cover 1-cell $$\overline{x}\,y\,\overline{z}$$ of $$f_1.$$

4. The minimal cover of function $$f_i$$ includes the essential prime implicants of $$f_i.$$ For example, in Figure 5.32 the essential prime implicants are those that are the only ones covering a 1-cell marked with a bold 1. The blue prime implicants are essential, and must be included in the minimal cover of their functions.

5. The minimal cover of $$f_i$$ may include shared prime implicants of $$f_i$$ to reduce the total cost. For example, the orange shared prime implicant is not essential for either function, but may be used together with the blue prime implicants to form a cover for $$f_1$$ and $$f_2,$$ respectively.

We summarize our insights about prime implicants in the following theorem.
Theorem (Minimal Multioutput SOP Form) The minimal SOP form of multioutput function $$f=(f_0, f_1, \ldots, f_{m-1})$$ consists of an SOP form for each output $$f_i,$$ which is the sum of the essential prime implicants of $$f_i$$ and a subset of the non-essential prime implicants and shared prime implicants of $$f_i$$ such that the total cost across all outputs is minimized.

In practice, given the prime implicants and shared prime implicants of a multioutput function, the question is which of the non-essential and shared prime implicants to include into the minimal cover. We illustrate the selection by means of the K-maps in Figure 5.32. The prime implicants, essential prime implicants marked with a $$*,$$ and the shared prime implicants marked with symbol $$\dagger$$ for each of the three functions are according to Figure 5.32: $\begin{split}f_0:\quad &\overline{x}\,z\,^*,\ &y\,\overline{z}\,^{*\dagger},\ &\overline{x}\,y\,^\dagger,\ &\overline{x}\,y\,\overline{z}\,^\dagger \\ f_1:\quad &x\,\overline{z}\,^*, &y\,\overline{z}\,^\dagger, &\overline{x}\,y\,\overline{z}\,^\dagger &\quad \\ f_2:\quad &y\,z\,^*, &\overline{x}\,y\,^\dagger, &\overline{x}\,y\,\overline{z}\,^\dagger &\quad\end{split}$ The minimal cover of function $$f_0$$ consists of its two essential prime implicants $$f_0 = \overline{x}\,z + y\,\overline{z}.$$ Including non-essential prime implicant $$\overline{x}\,y$$ or shared prime implicant $$\overline{x}\,y\,\overline{z}$$ would increase the cost of $$f_0$$ unnecessarily.
Since function $$f_1$$ shares prime implicant $$y\,\overline{z}$$ with $$f_0,$$ it is also a candidate for the cover of $$f_1.$$ The second possibility for sharing is to include shared prime implicant $$\overline{x}\,y\,\overline{z}.$$ Function $$f_2$$ might share $$\overline{x}\,y\,\overline{z}$$ with $$f_1.$$ We can exclude the third option of sharing prime implicant $$\overline{x}\,y$$ between $$f_2$$ and $$f_0,$$ because including $$\overline{x}\,y$$ in the SOP form of $$f_0$$ increases its cost unnecessarily. This leaves us with two options for the minimal cover. Option 1 shares the red prime implicant $$y\,\overline{z}$$ between $$f_0$$ and $$f_1.$$ Since $$y\,\overline{z}$$ completes the cover of $$f_1,$$ we minimize $$f_2$$ separately by including the green prime implicant $$\overline{x}\,y$$ rather than shared prime implicant $$\overline{x}\,y\,\overline{z}$$ in the SOP form: $\begin{eqnarray*} f_0 &=& \overline{x}\,z + y\,\overline{z} \\ f_1 &=& x\,\overline{z} + y\,\overline{z} \\ f_2 &=& y\,z + \overline{x}\,y\,. \end{eqnarray*}$ Option 2 uses the orange shared prime implicant for functions $$f_1$$ and $$f_2.$$ In this case, we do not include the red prime implicant $$y\,\overline{z}$$ in $$f_1,$$ because $$f_1$$ is already covered with the orange shared prime implicant $$\overline{x}\,y\,\overline{z}$$ and the blue essential prime implicant $$x\,\overline{z}.$$ Thus, we minimize $$f_1$$ and $$f_2$$ separately from $$f_0$$: $\begin{eqnarray*} f_0 &=& \overline{x}\,z + y\,\overline{z} \\ f_1 &=& x\,\overline{z} + \overline{x}\,y\,\overline{z} \\ f_2 &=& y\,z + \overline{x}\,y\,\overline{z}\,. \end{eqnarray*}$ The cost of option 1, $$\mathcal{C}_1(f_0,f_1,f_2) = 10 + 6 = 16,$$ is by one unit smaller than the cost of option 2, $$\mathcal{C}_2(f_0,f_1,f_2) = 11 + 6 = 17.$$ We conclude that option 1 constitutes the minimal cover.

### 5.4.2.
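The cost comparison of the two options can be automated. The following Python sketch (the helper `multioutput_cost` and the literal encoding are our own illustrative constructions, not part of the text) counts AND-gate inputs once per distinct product, which models sharing, plus OR-gate inputs per function:

```python
# Illustrative cost model for two-level multioutput covers: each distinct
# product needs one AND gate (counted once, i.e. shared across functions),
# and each function with two or more products needs an OR gate.
def multioutput_cost(covers):
    """covers: one cover per output; a cover is a list of products,
    each product a frozenset of literals."""
    distinct = set().union(*map(set, covers))
    and_inputs = sum(len(p) for p in distinct if len(p) > 1)
    or_inputs = sum(len(c) for c in covers if len(c) > 1)
    return and_inputs + or_inputs

P = lambda *lits: frozenset(lits)
option1 = [[P("x'", "z"), P("y", "z'")],        # f0 = x'z + yz'
           [P("x", "z'"), P("y", "z'")],        # f1 = xz' + yz'   (yz' shared)
           [P("y", "z"), P("x'", "y")]]         # f2 = yz  + x'y
option2 = [[P("x'", "z"), P("y", "z'")],        # f0 = x'z + yz'
           [P("x", "z'"), P("x'", "y", "z'")],  # f1 = xz' + x'yz'
           [P("y", "z"), P("x'", "y", "z'")]]   # f2 = yz  + x'yz' (x'yz' shared)
print(multioutput_cost(option1), multioutput_cost(option2))  # -> 16 17
```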
Multilevel Multioutput Functions¶ The design of a multioutput function as a multilevel circuit opens up opportunities for sharing larger subcircuits than prime implicants in two-level multioutput functions. In the following, we extend the method of algebraic factoring, and discuss how to discover shared kernels, i.e. SOPs with at least two products, in multioutput functions. As a concrete example, consider multioutput function $$F: \mathcal{B}^4 \rightarrow \mathcal{B}^3,$$ where $$F(w,x,y,z) = (f(w,x,y,z), g(w,x,y,z), h(w,x,y,z))$$ and: $\begin{eqnarray*} f(w, x, y, z) &=& x \overline{z} + w y \overline{z}\,, \\ g(w, x, y, z) &=& x z + w x y + w y z\,, \\ h(w, x, y, z) &=& x + w y + w \overline{z} + y \overline{z}\,. \end{eqnarray*}$ Note that SOP $$x + w y$$ is a factor of $$f,$$ a divisor of $$g$$ with quotient $$z,$$ and a divisor of $$h$$ with trivial quotient $$1.$$ Thus, $$x + w y$$ is a common kernel of $$f,$$ $$g,$$ and $$h.$$ Figure 5.34 contrasts the two-level multioutput circuit without sharing and the multilevel multioutput circuit with the shared kernel. Sharing the kernel reduces the cost $$\mathcal{C}(F) = 28$$ of the two-level implementation to $$\mathcal{C}(F) = 19$$ units. Figure 5.34: Multioutput circuit without sharing (left) and with shared kernel $$x + w y$$ (right). We can solve the problem of identifying shared kernels by means of rectangle covering. This method uses rectangle covering twice, first to identify all kernels of each function as introduced for algebraic factoring and, second, to find the largest shared kernel. More specifically, the steps of the method for shared kernel factorization by means of rectangle covering are: 1. For each function $$f_i$$ of multioutput function $$f = (f_0, \ldots, f_{m-1})$$ given in SOP form, determine all kernels by rectangle covering the product-literal matrix of $$f_i.$$ 2. 
Construct the kernel-product matrix with one row per kernel of all functions of $$f$$ and one column per product term of all kernels of $$f.$$ A matrix element is 1 if the product of the column is part of the kernel SOP in the row, and otherwise 0.

3. The largest rectangle in the kernel-product matrix that covers at least two rows and two columns is a shared kernel. Factor the functions $$f_i$$ using the shared kernel.

We illustrate the method by rediscovering shared kernel $$x + w y$$ of multioutput function $$F$$ systematically. In step 1, we use the product-literal matrix for each of the functions $$f,$$ $$g,$$ and $$h$$ to determine all of their kernels, not just the kernels corresponding to the largest rectangles. We also include the kernel associated with trivial cokernel 1. The covered product-literal matrices are: Function $$f$$ has kernel $$x + w y$$ with cokernel $$\overline{z}.$$ Since $$f = x\,\overline{z} + w\,y\,\overline{z}$$ is divisible by $$\overline{z},$$ the SOP form of $$f$$ itself does not qualify as a kernel. Function $$g$$ has one $$2\times 2$$ rectangle with cokernel $$w y$$ and kernel $$x + z.$$ In addition, $$g$$ has two $$2\times 1$$ kernels, one with cokernel $$x$$ and kernel $$z + w y$$ and another with cokernel $$z$$ and kernel $$x + w y.$$ For trivial cokernel $$1,$$ SOP form $$g = x z + w x y + w y z$$ is a trivial kernel, which we include in our search for a common divisor. Function $$h$$ has three $$2\times 1$$ rectangles, cokernel $$w$$ with kernel $$y + \overline{z},$$ cokernel $$y$$ with kernel $$w + \overline{z},$$ and cokernel $$\overline{z}$$ with kernel $$w + y.$$ SOP form $$h = x + w\,y + w\,\overline{z} + y\,\overline{z}$$ itself is a trivial kernel with trivial cokernel $$1.$$ In step 2 we construct the kernel-product matrix for multioutput function $$F.$$ Each row corresponds to one kernel for each function: For clarity, we have also listed the cokernels associated with the kernels.
Each column corresponds to one product term of the kernels. Note that the product in the name kernel-product matrix refers to the kernel products, whereas the product in the product-literal matrix refers to the SOP products of the given function in SOP form. To find the products for all columns of the matrix, list the product terms of all kernels determined in step 1, and remove all duplicates from the list. For example, kernel $$x + w y$$ is a sum of two products, $$x$$ and $$w y.$$ Each product is represented by one column in the kernel-product matrix. Since SOP $$x + w y$$ is a kernel of $$f$$ and $$g,$$ we include one row for this kernel for each function. In each of these rows we mark the matrix elements in product columns $$x$$ and $$w y$$ with a 1. The 0-elements are omitted for clarity. Duplicate kernel rows are desired in the kernel-product matrix, because they expose the sharing of kernel products across functions. According to step 3, we find a shared kernel by applying rectangle covering to the kernel-product matrix. A good shared kernel corresponds to the largest rectangle covering at least two rows and two columns. The largest rectangle in the kernel-product matrix of multioutput function $$F$$ is the $$3\times 2$$ rectangle shown in the matrix. The corresponding shared kernel $$x + w y$$ is the sum of the column products. The factorizations of $$f,$$ $$g,$$ and $$h$$ with kernel $$x + w y$$ are: $\begin{eqnarray*} f &=& (x + w y)\,\overline{z} \\ g &=& (x + w y)\,z + w x y \\ h &=& (x + w y) + (w + y)\,\overline{z}\,. \end{eqnarray*}$ In function $$h,$$ we have also factored the remainder $$w \overline{z} + y \overline{z}$$ by extracting cokernel $$\overline{z}.$$ The corresponding multilevel multioutput circuit is shown in Figure 5.34 on the right. In general, the largest rectangles of the kernel-product matrix lead to large cost reductions.
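The factored forms can be validated against the original SOP forms by exhaustive simulation. The Python sketch below (illustrative only) checks all 16 input combinations:

```python
# Verify the shared-kernel factorizations of f, g, and h against their
# original SOP forms by brute force over all 16 input combinations.
from itertools import product

def factorizations_agree():
    for w, x, y, z in product([False, True], repeat=4):
        k = x or (w and y)                          # shared kernel x + wy
        f = (x and not z) or (w and y and not z)
        g = (x and z) or (w and x and y) or (w and y and z)
        h = x or (w and y) or (w and not z) or (y and not z)
        if f != (k and not z):                      # f = (x + wy) z'
            return False
        if g != ((k and z) or (w and x and y)):     # g = (x + wy) z + wxy
            return False
        if h != (k or ((w or y) and not z)):        # h = (x + wy) + (w + y) z'
            return False
    return True

print(factorizations_agree())  # -> True
```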
However, as for algebraic factoring of single-output functions, there is no guarantee that the largest rectangles minimize the cost of the resulting multilevel circuit.

Example 5.16: Shared Kernel Factorization of Single-Output Function

Shared kernel factorization is particularly effective in the context of multioutput functions. It can be beneficial for single-output functions like $$S_{3,4,5}$$ of Example 5.15 too. Through two rounds of factorization we obtain the factored form $S_{3,4,5}(v, w, x, y, z) = (x + y + z) v w + (v + w + x) y z + (v + w) (y + z) x$ with four kernels, $$x + y + z,$$ $$v + w + x,$$ $$v + w,$$ and $$y + z.$$ Rather than applying algebraic rewriting to reduce the cost of $$S_{3,4,5}$$ further, we now employ rectangle covering to discover kernels rather than cokernels. To that end, we construct the kernel-product matrix with the four kernels identified in Example 5.15: We find two largest $$2\times 2$$ rectangles, the blue rectangle corresponds to shared kernel $$v + w$$ and the red rectangle to shared kernel $$y + z.$$ We introduce new names $$c = y + z$$ and $$d = v + w,$$ and substitute the shared kernels in $S_{3,4,5}(c, d, v, w, x, y, z) = (x + c) v w + (d + x) y z + d c x\,.$ One additional, cost-neutral step of renaming, such that $$a = x + c$$ and $$b = d + x,$$ yields the same factored form as in Example 5.15: $S_{3,4,5}(a, b, c, d, v, w, x, y, z) = a v w + b y z + d c x\,.$ In general, we have the choice to apply either of the two methods of factorization in any order. For the time being, the decision of which order minimizes the cost of the multilevel circuit remains in the realm of black magic.

### 5.4.3. Tree-structured Multioutput Circuits¶

There exist multioutput functions where sharing of subcircuits appears to be an obvious design choice, yet deriving such circuits is anything but trivial. Nevertheless, the insights to be gained from the study of tree-structured multioutput circuits are particularly enlightening.
As a concrete example, consider function $$f: \mathcal{B}^8 \rightarrow \mathcal{B}^8$$ with eight inputs and eight outputs such that $$f_i$$ is the conjunction of inputs $$x_0$$ up to $$x_i$$: $f_i(x_0, x_1, \ldots, x_7)\ =\ x_0 \cdot x_1 \cdot \cdots \cdot x_i\,.$ Thus, the eight output functions are $\begin{eqnarray*} f_0 &=& x_0 \\ f_1 &=& x_0 \cdot x_1 \\ f_2 &=& x_0 \cdot x_1 \cdot x_2 \\ f_3 &=& x_0 \cdot x_1 \cdot x_2 \cdot x_3 \\ f_4 &=& x_0 \cdot x_1 \cdot x_2 \cdot x_3 \cdot x_4 \\ f_5 &=& x_0 \cdot x_1 \cdot x_2 \cdot x_3 \cdot x_4 \cdot x_5 \\ f_6 &=& x_0 \cdot x_1 \cdot x_2 \cdot x_3 \cdot x_4 \cdot x_5 \cdot x_6 \\ f_7 &=& x_0 \cdot x_1 \cdot x_2 \cdot x_3 \cdot x_4 \cdot x_5 \cdot x_6 \cdot x_7\,. \end{eqnarray*}$ If we wish to implement $$f$$ with as few AND gates as possible, we observe that we can express $$f$$ as a linear recurrence: $f_i = \begin{cases} x_0\,, & \text{if}\ i = 0\,, \\ f_{i-1} \cdot x_i\,, & \text{if}\ 1 \le i < 8\,. \end{cases}$ This recurrence enables us to compute $$f_i$$ once we know $$f_{i-1}$$ by means of a conjunction with $$x_i.$$ The corresponding combinational circuit in Figure 5.35 forms a chain of AND gates.

Figure 5.35: AND chain circuit.

The AND chain maximizes the sharing of subcircuits. Each output requires one 2-input AND gate only, for a total of seven AND gates. If we generalize the number of inputs and outputs to $$n,$$ then the AND chain needs only $$n-1$$ AND gates. However, all AND gates lie on the critical path of the circuit from $$x_0$$ to $$f_{n-1}.$$ Thus, for increasing $$n$$ the circuit delay grows quite rapidly, even with gate sizing. If we wish to minimize the circuit delay, we can use a tree-structured circuit for each output. Figure 5.36 shows a forest of AND trees constructed with 2-input AND gates without any sharing. Output $$f_7$$ requires the tree with a maximum height of $$\lg 8 = 3$$ AND gates.
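The linear recurrence translates directly into code. The following Python sketch (illustrative, with inputs assumed as a list of bits) evaluates the AND chain with one 2-input AND per output $$f_i,$$ $$i \ge 1$$:

```python
# AND chain: f_0 = x_0, f_i = f_{i-1} & x_i, one 2-input AND per output.
def and_chain(x):
    f = []
    acc = x[0]
    for i, xi in enumerate(x):
        if i > 0:
            acc = acc & xi   # one 2-input AND gate for output f_i
        f.append(acc)
    return f

print(and_chain([1, 1, 0, 1, 1, 1, 1, 1]))  # -> [1, 1, 0, 0, 0, 0, 0, 0]
```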
In general, the maximum number of AND gates on the critical path of a circuit with $$n$$ inputs and outputs is $$\lceil \lg n\rceil.$$ For large $$n,$$ the tree structure reduces the maximum path delay significantly compared to the chain circuit. Figure 5.36: Forest of tree-structured AND gates without sharing. The obvious disadvantage of the AND forest compared to the AND chain is the significant increase in the number of AND gates. However, we can reduce the number of gates by sharing common subcircuits among the trees. For example, output $$f_1$$ can be used as input for the other trees $$f_2, f_3, \ldots, f_7,$$ which would save six AND gates. On the other hand, we may want to share the larger subcircuit of output $$f_3$$ as input of the trees for $$f_4$$ through $$f_7.$$ In fact, there is a large number of choices for sharing subcircuits among the trees of a forest. Two topologies of shared forests have gained sufficient popularity among circuit designers that they have been named by their inventors. The Kogge-Stone circuit for $$f$$ is shown in Figure 5.37 and the Ladner-Fischer circuit in Figure 5.38. To denote the intermediate conjunctions in the Kogge-Stone and Ladner-Fischer circuits, we introduce the bracket notation. For an associative, binary operator $$\otimes\,,$$ like the AND operation for instance, we define the bracket for indices $$i$$ and $$j,$$ where $$0 \le i \le j < n$$ on inputs $$x_0, x_1, \ldots, x_{n-1}$$ such that $\begin{split}[i{:}j] = \begin{cases} x_i\,, & \text{if}\ i = j\,, \\ x_i \otimes x_{i+1} \otimes \cdots \otimes x_j\,, & \text{if}\ i < j\,. 
\end{cases}\end{split}$ In case of the AND operation, bracket $$[i{:}j]$$ denotes the conjunction of the inputs $$x_k$$ in index range $$i \le k \le j.$$ Since brackets refer to contiguous index ranges, we can define the composition of two consecutive brackets $$[i{:}k]$$ and $$[k+1{:}j],$$ where $$0 \le i \le k < j < n$$ such that $[i{:}j] = [i{:}k] \otimes [k+1{:}j]\,.$ The composition of two consecutive brackets concatenates the index ranges. For example, assuming the operator is the AND operation, given brackets $$[0{:}1] = x_0 \cdot x_1$$ and $$[2{:}3] = x_2 \cdot x_3,$$ then the composition denotes the conjunction of all inputs in index range 0 through 3, i.e. $$[0{:}1] \cdot [2{:}3] = [0{:}3] = x_0 \cdot x_1 \cdot x_2 \cdot x_3.$$ The circuits in Figure 5.37 and Figure 5.38 compose intermediate results such that the bracket composition applies. Output $$f_i = [0{:}i]$$ in bracket notation.

Figure 5.37: Kogge-Stone circuit due to Peter Kogge and Harold S. Stone.

The bracket notation is convenient for analyzing the Kogge-Stone and Ladner-Fischer circuits. For each output $$f_i,$$ verify that $$f_i = [0{:}i]$$ by means of bracket compositions. The Kogge-Stone circuit combines all pairs of next-neighbor inputs in the first level, all pairs of neighbors with distance two in the second level, all pairs of neighbors with distance four in the third level, and so on for larger $$n.$$ The Ladner-Fischer circuit uses fewer intermediate results and, therefore, requires fewer AND gates than the Kogge-Stone circuit. On the other hand, the Ladner-Fischer circuit has AND gates with a larger fan-out than the Kogge-Stone circuit, which requires careful gate sizing to minimize the propagation delay. So far, we have considered three different circuit topologies for multioutput function $$f,$$ the chain circuit, the forest of trees, and the shared tree circuits of which the Kogge-Stone and Ladner-Fischer circuits are representative examples.
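The bracket notation and its composition rule can be modeled as a fold over a slice of the inputs. In the Python sketch below (the helper name `bracket` is our own), the assertion checks that composing consecutive brackets concatenates their index ranges:

```python
# Bracket [i:j] folds an associative operator over inputs x_i..x_j.
from functools import reduce
from operator import and_

def bracket(x, i, j, op=and_):
    return reduce(op, x[i:j + 1])

x = [1, 1, 1, 0, 1, 1, 1, 1]
# Composition of consecutive brackets: [0:3] = [0:1] op [2:3]
assert bracket(x, 0, 3) == and_(bracket(x, 0, 1), bracket(x, 2, 3))
print(bracket(x, 0, 3))  # -> 0, since x_3 = 0
```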
We compare the quality of these multioutput circuits as a function of the number of inputs and outputs $$n$$ based on their cost, which reflects the number of 2-input gates, and the number of gates on the critical path. The table below lists the asymptotic behavior of cost and critical path as a function of $$n,$$ neglecting constant factors.

| | cost | critical path length |
| --- | --- | --- |
| chain | $$n$$ | $$n$$ |
| forest | $$n^2$$ | $$\lg n$$ |
| shared trees | $$n \lg n$$ | $$\lg n$$ |

We find that the chain circuit minimizes the cost, whereas the forest and shared tree circuits minimize the critical path length. Although the shared trees have a smaller cost than the forest without sharing, the cost of the shared trees is by a factor of $$\lg n$$ larger than the cost of the chain circuit. The comparison above raises the question whether we can design a circuit that combines the best of both worlds, a cost proportional to $$n$$ and a critical path length proportional to $$\lg n.$$ This is indeed the case, and a class of circuits with these properties is known as prefix circuits because they perform prefix computations: A prefix computation is a multioutput function with $$n$$ inputs $$x_0, x_1, \ldots, x_{n-1}$$ and an associative binary operator $$\otimes$$ that produces $$n$$ outputs $$y_0, y_1, \ldots, y_{n-1}$$ such that $\begin{split}y_i = \begin{cases} x_0\,, & \text{if}\ i = 0\,, \\ y_{i-1} \otimes x_i\,, & \text{if}\ 1 \le i < n\,. \end{cases}\end{split}$ The name prefix computation is borrowed from the prefix of a sequence or a string, considering that $$y_i$$ combines the prefix of the input sequence which starts at index $$0$$ and ends at index $$i.$$ Note that prefix computations extend beyond the Boolean domain.
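In software, a prefix computation for an arbitrary associative operator is exactly what Python's `itertools.accumulate` provides; the sketch below (illustrative only) reproduces a prefix conjunction and a prefix sum:

```python
# A prefix computation y_i = y_{i-1} op x_i for an associative operator op.
from itertools import accumulate
from operator import and_, add

x = [1, 1, 1, 1, 0, 1, 1, 1]
print(list(accumulate(x, and_)))           # prefix conjunction
print(list(accumulate(range(1, 9), add)))  # prefix sum of 1..8
```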
For example, if the $$x_i$$ and $$y_i$$ are integer numbers and we use integer addition as associative binary operator, then the prefix computation is a prefix sum: $y_i = \sum_{k=0}^i x_k$ for $$0 \le i < n.$$ Our Boolean multioutput function $$f$$ is a prefix conjunction with $$n=8.$$ The key idea of a prefix circuit is a recursive construction algorithm:

1. Combine even pairs of next neighbors of the inputs.
2. Recurse on the outputs of step 1.
3. Combine odd pairs of the outputs of step 2 and inputs.

We illustrate the recursive construction by means of a prefix sum of $$n=8$$ numbers $$x_i = i+1.$$ The outputs of the prefix sum are the sums of the first positive integers $$y_i = \sum_{k=0}^i (k+1) = \sum_{k=1}^{i+1} k = (i+1) (i+2)/2$$:

| $$i$$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $$x_i$$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| $$y_i$$ | 1 | 3 | 6 | 10 | 15 | 21 | 28 | 36 |

The circuit below illustrates the recursive construction algorithm for the prefix sum with $$n=8.$$ Step 1 performs the pairwise addition of next neighbors. Here, even pairs have even indices in the first element of each pair, that is (0,1), (2,3), etc. The recursive step 2 uses the sums of step 1 as inputs of a prefix sum computation with $$n=4.$$ We draw the recursion as a black box, assuming that the outputs are computed as shown. In step 3, we combine odd pairs of outputs of step 2 with inputs to compute the prefix sum. The total number of additions in steps 1 and 3 is 7 or $$n-1$$ in general. Thus, we can express the number of additions $$A(n)$$ recursively as $$A(n) = A(n/2) + n-1.$$ We halve problem size $$n$$ recursively until $$n = 1,$$ where no additions are required. Solving this recurrence yields an upper bound for the number of additions $$A(n) \le 2 (n-1).$$ This count is proportional to $$n$$ and, thus, asymptotically equal to the minimum number of additions required in an equivalent chain circuit. Nevertheless, the prefix circuit requires up to a constant factor of 2 more additions than a chain circuit.
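The three construction steps can be sketched in Python (our own illustrative implementation, assuming $$n$$ is a power of two), with a counter to check the bound $$A(n) \le 2(n-1)$$:

```python
# Recursive prefix-circuit construction for n a power of two, with an
# operation counter in a one-element list (mutable accumulator).
def prefix(x, op, count=None):
    n = len(x)
    if count is None:
        count = [0]
    if n == 1:
        return list(x)
    # Step 1: combine even pairs of next neighbors.
    pairs = [op(x[2 * i], x[2 * i + 1]) for i in range(n // 2)]
    count[0] += n // 2
    # Step 2: recurse on the pair results.
    sub = prefix(pairs, op, count)
    # Step 3: odd outputs come from the recursion; even outputs y_{2i},
    # i >= 1, combine y_{2i-1} with input x_{2i}.
    y = [x[0]]
    for i in range(1, n):
        if i % 2 == 1:
            y.append(sub[i // 2])
        else:
            y.append(op(sub[i // 2 - 1], x[i]))
            count[0] += 1
    return y

from operator import add
cnt = [0]
print(prefix([1, 2, 3, 4, 5, 6, 7, 8], add, cnt))  # -> [1, 3, 6, 10, 15, 21, 28, 36]
print(cnt[0], "additions; bound 2*(n-1) = 14")     # 11 additions for n = 8
```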
Analogously, we find that the number of adders on the critical path is less than $$2 \lg n,$$ which is up to a constant factor of 2 more than the minimum of the equivalent forest or shared tree circuits. Thus, beyond the factors of 2, the recursive construction algorithm guarantees a cost proportional to $$n$$ and a critical path length proportional to $$\lg n.$$

Figure 5.39: Prefix circuit with associative binary operator $$\otimes$$ for $$n=8.$$

Figure 5.39 shows the topology of a prefix circuit for $$n=8$$ with the expanded recursion. The circuit can be viewed as two back-to-back binary trees, one with the leaves at the inputs and the other with the leaves at the outputs. For a prefix sum replace each $$\otimes$$ operator with an adder, and for a prefix conjunction with an AND gate. The prefix circuit with problem size $$n=8$$ requires 11 operators, which is less than $$2 (n-1),$$ and has a critical path length of 4 operators, which is less than $$2 \lg n.$$ The advantages of the prefix circuit become more pronounced for larger values of $$n.$$

## 5.5. Basic Arithmetic Circuits¶

In this section, we introduce combinational circuits for arithmetic operations that can be found in every digital computer. We discuss the design of an adder and a magnitude comparator for unsigned binary numbers. When we add two decimal numbers by paper and pencil, we tabulate the numbers aligning the least significant digits in the rightmost column. Then we add the digits from right to left, starting with the least significant digits, potentially adding a carry digit to the more significant column on the left. The example on the right shows an addition chart for adding decimal numbers 4528 and 937. First, we add 8 and 7, which is 15. Since number 15 occupies two digits, it generates a carry of value 1.
This is easily seen by expanding 15 into its polynomial representation $$15 = 1 \cdot 10 + 5.$$ For the addition chart to preserve the positional notation of the sum, digit 5 is the least significant sum digit, whereas digit 1 counts tens, and is carried into the next position, where we need to add the carry to the sum of digits 2 and 3. The result is 6, which does not generate a carry into the next position. Equivalently, we may interpret 6 as a two-digit number 06, and carry the leading zero into the next position. Since binary numbers and decimal numbers are both instances of a positional number system, adding two binary numbers by paper and pencil works analogously to decimal numbers. The example on the right shows the addition chart for binary numbers 1011 and 11. In fact, the addition of binary numbers is even simpler than with decimal numbers because there are only four combinations of two bits compared to one hundred for two decimal digits. A carry occurs if the sum is larger than 1, which is the case for the least significant bits in the example on the right. Since $$1 + 1 = 10_2,$$ we add the carry of the least significant position into the next position to obtain $$1 + 1 + 1 = 11_2.$$ As a first step towards an adder circuit consider the addition of two 1-bit binary numbers $$a$$ and $$b.$$ Rather than using an addition chart, we list all four combinations of values for $$a$$ and $$b$$ in a truth table, and derive the carry and sum bits.

| a | b | carry | sum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 0 |

Since $$0 + 0 = 0,$$ the top row contains 0 bits for both carry and sum. If one of $$a$$ or $$b$$ is 1 then $$0 + 1 = 1 + 0 = 1$$ such that the sum bit is 1 and the carry bit is 0. Only if both $$a$$ and $$b$$ are 1 is the sum $$1 + 1 = 2_{10} = 10_2,$$ that is the carry bit is 1 and the sum bit is 0. A half adder is a combinational circuit that implements the addition of two 1-bit numbers.
It has two inputs $$A$$ and $$B$$ and two outputs, one for the carry $$C_{out}$$ and the other for the sum $$S.$$ Thus, the half adder is a multioutput function with the two Boolean functions defined in the truth table above and algebraic expressions: $\begin{eqnarray*} S(A,B) &=& A \oplus B \\ C_{out}(A,B) &=& A \cdot B\,. \end{eqnarray*}$ The half adder has a carry output but no carry input. If we wish to build an adder for two multibit binary numbers, we need to extend the half adder with a carry input. The truth table for all combinations of input bits $$a,$$ $$b,$$ and a carry-in bit is:

| a | b | carry-in | carry-out | sum |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 |
| 0 | 1 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 | 0 |
| 1 | 1 | 0 | 1 | 0 |
| 1 | 1 | 1 | 1 | 1 |

If all three inputs are 0, then the carry-out and the sum are both 0. If one of the three inputs is 1, then the sum is 1 and the carry-out 0. If two of the three inputs are 1, then the carry-out is 1 and the sum is 0. These three cases are covered by the half adder as well. New is the case in the bottom row where all three inputs are 1. In this case, $$1 + 1 + 1 = 3_{10} = 11_2,$$ so that both carry-out and sum assume value 1. The combinational circuit that implements this truth table is called a full adder. With three inputs, $$A,$$ $$B,$$ and carry-in $$C_{in},$$ and two outputs for carry-out $$C_{out}$$ and sum $$S,$$ the full adder is a multioutput function. We know the 3-variable functions defined in the truth table already. The carry-out is the majority function $$M(A,B,C_{in})$$ and the sum is the odd parity function $$P(A,B,C_{in})$$: $\begin{eqnarray*} S(A,B,C_{in}) &= &P(A,B,C_{in}) &= &A \oplus B \oplus C_{in} \\ C_{out}(A,B,C_{in}) &= &M(A,B,C_{in}) &= &A \cdot B + A \cdot C_{in} + B \cdot C_{in}\,. \end{eqnarray*}$ The full adder is the building block for an adder of two $$n$$-bit binary numbers.
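The algebraic expressions translate directly into a Python sketch of the full adder (illustrative only); the assertion confirms that the pair $$(C_{out}, S)$$ is the 2-bit sum of the three input bits:

```python
# Full adder: sum is the odd parity, carry-out the majority of A, B, C_in.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                            # parity (3-input XOR)
    cout = (a & b) | (a & cin) | (b & cin)     # majority
    return cout, s

# Check the full truth table: (cout, s) is the 2-bit sum a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            cout, s = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder truth table verified")
```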
A carry propagate adder (CPA) is a combinational circuit with two $$n$$-bit inputs $$A$$ and $$B$$ and a 1-bit carry input $$C_{in}$$ that computes the $$n$$-bit sum $$S$$ and the carry bit $$C_{out}.$$ Before implementing a CPA, we may wonder whether the $$n+1$$ output bits, $$n$$ sum bits plus the carry-out as msb, suffice to represent all sums of two $$n$$-bit numbers. The answer is affirmative, and important enough to formulate as a lemma:

Lemma (CPA bit width) The sum of two unsigned $$n$$-bit binary numbers plus a carry bit into the least significant position can be represented with $$n+1$$ bits.

Proof. We show that the largest possible sum of a CPA can be represented with $$n+1$$ bits. The range of an unsigned $$n$$-bit binary number is $$[0, 2^n-1],$$ and the largest unsigned $$n$$-bit number is $$2^n-1.$$ The largest possible sum of two $$n$$-bit unsigned numbers and a carry into the lsb is the sum of the two largest $$n$$-bit numbers plus a carry-in of 1: $\begin{eqnarray*} \max (A + B + C_{in}) &=& \max A + \max B + \max C_{in} \\ &=& (2^n - 1) + (2^n - 1) + 1 \\ &=& 2^{n+1} - 1\,, \end{eqnarray*}$ which is the largest unsigned binary number representable with $$n+1$$ bits.

Now that we know that the black-box specification of a CPA fits our needs for unsigned binary numbers, we mimic the paper-and-pencil method to implement a CPA with a chain of full adders. The resulting ripple carry adder (RCA) in Figure 5.40 constitutes the simplest implementation of a CPA. Alternative CPA designs are the carry-lookahead adder and the prefix adder, which are generally faster but more complex logic designs.

Figure 5.40: A ripple carry adder is a chain of full adders.
As a naming convention we use subscript $$i$$ to refer to the full adder in bit position $$i.$$ It has inputs $$A_i,$$ $$B_i,$$ and carry-in $$C_{i-1},$$ and the outputs are sum $$S_i$$ and carry-out $$C_i.$$ Carry-out $$C_i$$ of the full adder in position $$i$$ drives the carry-in of the full adder in position $$i+1.$$ The boundary cases are carry-in $$C_{in} = C_{-1}$$ and carry-out $$C_{out} = C_{n-1}.$$ When adding two unsigned numbers $$A$$ and $$B,$$ we enforce $$C_{in} = 0$$ and apply the input signals $$A_i$$ and $$B_i$$ for $$0 \le i < n.$$ Since the ripple carry adder is a combinational circuit, the propagation delay is determined by the critical path, which spans positions 0 to $$n-1$$ along the carry wires. The carry signal propagates, or ripples, from lsb position 0 through the carry chain to msb position $$n-1.$$ Therefore, to the first order, the propagation delay of the adder is proportional to the number of bits $$n.$$ Figure 5.41 shows on the left a glass-box diagram of the full adder based on a 3-input majority gate for the carry output and a 3-input XOR or parity gate for the sum. The 4-bit RCA on the right replicates this full adder design, and emphasizes the carry chain as the critical path.

Figure 5.41: Full adder circuit with majority and parity (3-input XOR) gates (left) and gate-level 4-bit RCA (right).

The design of an RCA does not end with Figure 5.41. Instead, one may argue that Figure 5.41 is where the fun really starts. In the following, we apply the method of logical effort to explore the design space of RCA circuits. We wish to assess the propagation delay of an $$n$$-bit RCA as a function of $$n$$ by comparing different circuit designs. We assume that the adder encounters relatively small load capacitances at its outputs. Otherwise, we may insert inverter stages to drive larger loads. Our strategy is to design a full adder, and to replicate this full adder $$n$$ times as suggested in Figure 5.41.
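The ripple carry chain can be simulated by folding the full adder equations over the bit positions. The Python sketch below (illustrative, bits stored lsb first) mirrors the carry propagation from position 0 to $$n-1$$:

```python
# Ripple carry adder: chain n full adders, feeding carry C_i into
# position i+1; C_{-1} = C_in and C_out = C_{n-1}.
def ripple_carry_add(A, B, cin=0):
    """A, B: bit lists, lsb first; returns (C_out, sum bits lsb first)."""
    c = cin
    S = []
    for a, b in zip(A, B):
        s = a ^ b ^ c                          # parity: sum bit
        c = (a & b) | (a & c) | (b & c)        # majority: carry ripples on
        S.append(s)
    return c, S

# 4-bit example: 1011 (11) + 0011 (3) = 01110 (14), bits lsb first
cout, S = ripple_carry_add([1, 1, 0, 1], [1, 1, 0, 0])
print(cout, S)  # -> 0 [0, 1, 1, 1]
```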
The primary design goal is to identify a combinational circuit that minimizes the delay of the carry chain. To that end we present three alternative circuit designs. Additional alternatives, for example circuits based on compound gates or majority and parity CMOS gates, are left as exercises. #### RCA with Two-Level Logic¶ As a reference design, we implement the 3-input majority and parity gates of the full adder using two-level logic. More specifically, we choose to implement the full adder with NAND gates only. To that end, we apply the NAND transform to the minimal SOP form of the majority gate: $\begin{eqnarray*} C_{out}(A,B,C_{in}) &=& A B + A C_{in} + B C_{in} \\ &=& \overline{\overline{A B} \cdot \overline{A C_{in}} \cdot \overline{B C_{in}}} \end{eqnarray*}$ and the minimal SOP form of the parity gate: $\begin{eqnarray*} S(A,B,C_{in}) &=& \overline{A}\,\overline{B}\,C_{in} + \overline{A}\,B\,\overline{C}_{in} + A\,\overline{B}\,\overline{C}_{in} + A\,B\,C_{in} \\ &=& \overline{\overline{\overline{A}\,\overline{B}\,C_{in}} \cdot \overline{\overline{A}\,B\,\overline{C}_{in}} \cdot \overline{A\,\overline{B}\,\overline{C}_{in}} \cdot \overline{A\,B\,C_{in}}}\,. \end{eqnarray*}$ Figure 5.42 shows the corresponding two-level circuits. Both circuits are symmetric in the sense that all paths from any input to the output are 2-stage paths of two NAND gates. However, for the sum we need the inputs in both complemented and uncomplemented form. Figure 5.42: NAND gate implementation of the majority (left) and parity gates (right). 
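As a sanity check, the two NAND forms can be evaluated exhaustively against the majority and parity functions. A Python sketch, with helper names of our own choosing:

```python
def nand(*xs):
    """k-input NAND of bits."""
    out = 1
    for x in xs:
        out &= x
    return out ^ 1

def majority_nand(a, b, cin):
    """Two-level NAND form of the majority gate (NAND transform of the SOP)."""
    return nand(nand(a, b), nand(a, cin), nand(b, cin))

def parity_nand(a, b, cin):
    """Two-level NAND form of the 3-input parity gate; note that the inputs
    are needed in both complemented and uncomplemented form."""
    na, nb, nc = a ^ 1, b ^ 1, cin ^ 1
    return nand(nand(na, nb, cin), nand(na, b, nc), nand(a, nb, nc), nand(a, b, cin))

for v in range(8):
    a, b, c = (v >> 2) & 1, (v >> 1) & 1, v & 1
    assert majority_nand(a, b, c) == (a + b + c >= 2)
    assert parity_nand(a, b, c) == (a + b + c) % 2
```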
The logical efforts of the 2-input (Figure 1.48), 3-input, and 4-input NAND gates are $$g_{nand2} = 4/3,$$ $$g_{nand3} = 5/3,$$ and $$g_{nand4} = 6/3.$$ Assuming path electrical effort $$H_C$$ for the carry and $$H_S$$ for the sum, and with branching effort $$B=1,$$ the path efforts $$F_C$$ of the carry circuit and $$F_S$$ of the sum circuit in Figure 5.42 are $\begin{eqnarray*} F_C &= &G_C B_C H_C &= &g_{nand2} g_{nand3} \cdot 1 \cdot H_C &= &\frac{20}{9}\,H_C\,, \\ F_S &= &G_S B_S H_S &= &g_{nand3} g_{nand4} \cdot 1 \cdot H_S &= &\frac{10}{3}\,H_S\,. \end{eqnarray*}$ Since the carry chain is on the critical path of an $$n$$-bit RCA, we inspect the load of the carry output to determine electrical effort $$H_C.$$ According to Figure 5.41, carry output $$C_{out}$$ of position $$i$$ drives one input of the majority gate and one input of the parity gate in position $$i+1.$$ Since the parity gate requires the complemented and uncomplemented input signal, we use a 2-fork to generate the complement. Figure 5.43 shows the gate-level circuit of a majority gate in position $$i$$ driving the inputs of the 2-fork and the majority gate in position $$i+1.$$ Figure 5.43: The capacitive load $$C_L$$ of carry output $$C_{out}$$ in position $$i$$ is the sum of the input capacitances of the 2-fork and the inputs of the majority gate in position $$i+1.$$ Here $$C_{in} = C_{out} = C_i$$ denotes the carry-in signal into position $$i+1,$$ not the input capacitance. To keep the area requirements of the RCA small, we assume that the stage-1 gates of the subcircuits are matched gates of minimum size, i.e. with scale factor $$\gamma = 1.$$ Thus, as annotated in Figure 5.43, the input capacitance of a stage-1 inverter is $$C_{in}(inv) = 3$$ units and of a 2-input NAND gate $$C_{in}(nand2) = 4$$ units. Since each majority gate input drives two NAND gates, the input capacitance of the majority gate is $$C_{in}(M) = 8$$ units.
The load capacitance of the majority gate is $$C_L(M) = 2 \cdot C_{in}(inv) + C_{in}(M) = 14$$ units. Therefore, the electrical effort of the majority gate is $$H_C = C_L(M)/C_{in}(M) = 7/4$$ with path branching effort $$B_C = 2$$ due to the input fork. If we size the stage-2 NAND gate of the majority gate for minimum delay according to the method of logical effort, we obtain a minimum delay for the carry output of $\hat{D}_C = 2 \sqrt{F_C} + (p_{nand2} + p_{nand3}) = 2 \sqrt{\frac{20}{9}\cdot 2\cdot\frac{7}{4}} + (2 + 3) = 10.58$ time units. Thus, an $$n$$-bit RCA with 14 units of load capacitance at its carry output has a delay of $$D_{2\text{-}level} = n\,\hat{D}_C$$ or $D_{2\text{-}level}(n) = 10.58\,n\,.$ Note that the sum outputs are not on the critical path of the RCA in Figure 5.41, with the exception of sum output $$S_{n-1}.$$ However, for large $$n$$ the difference of the delays for $$C_{out}$$ and $$S_{n-1}$$ can be considered negligible. #### RCA with Multilevel Logic¶ As an alternative to the full adder implementation with two-level logic, we now implement the full adder of the RCA with multilevel circuits. We seek inspiration from Boolean algebra, and observe that the inclusive and exclusive OR operations are related through these identities $\begin{eqnarray*} x + y &=& x \oplus y + x\,y\,, \\ x \oplus y &=& (x + y) \cdot \overline{x\,y}\,, \end{eqnarray*}$ that are easily proven by perfect induction. We use the second identity to prove that we can assemble a full adder from two half adders plus an OR gate as shown in Figure 5.44 below. 
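Both identities are indeed easily proven by perfect induction, which a short Python sketch can replay over the four input pairs:

```python
# Perfect induction over all four bit pairs verifies both identities
# relating inclusive OR and exclusive OR (a sketch, not a formal proof):
for x in (0, 1):
    for y in (0, 1):
        assert (x | y) == (x ^ y) | (x & y)          # x + y = (x XOR y) + x y
        assert (x ^ y) == (x | y) & ((x & y) ^ 1)    # x XOR y = (x + y) AND NOT(x y)
```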
Call the outputs of the stage-1 half adder $$C_1$$ and $$S_1.$$ Then, the outputs of the full adder are $\begin{eqnarray*} S &= &S_1 \oplus C_{in} & \\ &= &A \oplus B \oplus C_{in}\,, & \\ C_{out} &= &C_1 + S_1 \cdot C_{in} & \\ &= &A\,B + (A\oplus B) \cdot C_{in} & \\ &= &A\,B + ((A + B) \cdot \overline{A\,B}) \cdot C_{in}\qquad & \text{by}\ 2^{nd}\ \text{identity} \\ &= &A\,B + (A + B) \cdot C_{in} & \text{by absorption} \\ &= &A\,B + A\,C_{in} + B\,C_{in}\,, & \end{eqnarray*}$ which proves that the circuit implements a full adder. Figure 5.44: Full adder assembled with two half adders and an OR gate. Note that the NAND transform applies to $$C_{out},$$ if we restrict the transform to those two levels of the 3-level circuit with SOP form, ignoring the XOR gate in HA1. The NAND transform replaces the OR and AND gates in Figure 5.44 with NAND gates. The XOR gates remain unaffected. If we replicate this multilevel design to form an $$n$$-bit RCA, we find that the critical path consists of one pair of half adder and OR gate per position. Figure 5.45 shows the adder design with the HA1 half adders drawn near the inputs to emphasize the carry chain. Since the carry path through a half adder consists of an AND gate only, this design should offer competitive performance to our reference design with two-level logic. Figure 5.45: Multilevel 4-bit RCA implemented with half adders and OR gates, cf. Figure 5.41. Using the method of logical effort, we can quickly assess the delay by considering the gate-level circuit of the critical carry chain. In Figure 5.46, we have applied the NAND transform, so that in position $$i$$ of an $$n$$-bit RCA the carry signal passes through two stages of 2-input NAND gates. The XOR gate for the sum computation branches off the carry path. For small electrical efforts, the evaluation in Figure 4.28 shows that our fastest option is an XOR circuit with a 2-fork to generate the complemented and uncomplemented inputs for the CMOS XOR gate. 
Therefore, the carry output drives the two stage-1 inverters of the 2-fork, one in each leg, and the NAND gate of the stage-1 half adder. Figure 5.46: The capacitive load $$C_L$$ of carry output $$C_{out}$$ in position $$i$$ is the half adder of position $$i+1.$$ Assuming that the stage-1 gates of the half adders are matched and have minimum size with $$\gamma = 1,$$ each input of the NAND gate has an input capacitance of $$C_{in}(nand2) = 4$$ units and each stage-1 inverter of the 2-fork driving an XOR gate has $$C_{in}(inv) = 3$$ units of capacitance. Therefore, the branching effort of the carry path is $B_C = \frac{C_{in}(nand2) + 2\,C_{in}(inv)}{C_{in}(nand2)} = \frac{5}{2}\,.$ The electrical effort of the carry path is $$H_C = 1,$$ because both the input capacitance and the load capacitance of the carry chain equal the input capacitance of one half adder. The logical effort of the carry path through two stages of NAND gates is $$G_C = g_{nand2}^2 = (4/3)^2$$. Therefore, the carry path effort is $$F_C = G_C B_C H_C = 40/9,$$ and the minimum path delay is $\hat{D}_C = 2 \sqrt{F_C} + 2 p_{nand2} = 2 \sqrt{\frac{40}{9}} + 4 = 8.22$ time units. Although position 0 in Figure 5.45 shows an additional half adder on the critical carry path, for large $$n$$ we may approximate the delay of an $$n$$-bit RCA with multilevel circuits reasonably well as $$n$$ times $$\hat{D}_C,$$ such that $D_{multilevel}(n) = 8.22\,n\,.$ We find that the multilevel RCA design is roughly 22% faster than the two-level design with its delay of $$D_{2\text{-}level}(n) = 10.58\,n$$. #### RCA with Carry-Propagation Gate¶ If we wish to speed up the carry chain beyond the two-level and multilevel circuits, our best bet is to consider designing a CMOS circuit for fast carry propagation. Such a circuit requires abstract thinking to arrive at a Boolean expression for the carry-out signal of the full adder.
Specifically, we introduce two intermediate signals from the truth table of the full adder that enable us to express $$C_{out}$$ such that we obtain a faster circuit. To that end, consider the truth table of the full adder below, reorganized such that the upper four rows are associated with a carry-in of 0, and the lower four rows with a carry-in of 1. Because the majority and parity functions are symmetric, the carry-out and sum columns remain unchanged.

| $$C_{in}$$ | $$A$$ | $$B$$ | $$C_{out}$$ | $$S$$ | $$G$$ | $$K$$ |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 | 0 | 1 | 0 |
| 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| 1 | 0 | 1 | 1 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 | 1 | 1 | 0 |

We make the following abstract observations about the carry-out signal:

1. The carry-out equals 0, independent of the carry-in, if and only if both inputs $$A$$ and $$B$$ are 0. We say that input combination $$A=B=0$$ kills the carry-in and outputs a carry-out of 0.
2. The carry-out equals 1, independent of the carry-in, if and only if both inputs $$A$$ and $$B$$ are 1. We say that input combination $$A=B=1$$ generates a carry-out of 1 independent of the carry-in.
3. If exactly one of inputs $$A$$ or $$B$$ equals 1, then the carry-out equals the carry-in. We say that the full adder propagates the carry input to the carry output.

The first two observations motivate the definition of two Boolean functions for the generate signal, defined as $G = A \cdot B\,,$ and for the kill signal $K = \overline{A} \cdot \overline{B}\,.$ The truth table of the full adder above contains separate columns for the generate and kill functions. We can use these functions to express the carry-out and its complement as follows: $\begin{eqnarray*} C_{out} &=& G + \overline{K}\,C_{in}\,, \\ \overline{C}_{out} &=& K + \overline{G}\,\overline{C}_{in}\,. \end{eqnarray*}$ The carry-out equals 1 if we generate the carry or if we do not kill a carry-in of value 1. The complemented form corresponds to the inverting carry-propagation gate shown in Figure 5.47.
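A quick exhaustive check confirms that $$C_{out} = G + \overline{K}\,C_{in}$$ and its complemented form agree with the majority function. A Python sketch (the function name is ours):

```python
def carry_out_gk(a, b, cin):
    """Carry-out via generate/kill signals: C_out = G + ~K & C_in."""
    G = a & b                  # generate: both inputs 1
    K = (a ^ 1) & (b ^ 1)      # kill: both inputs 0
    return G | ((K ^ 1) & cin)

for v in range(8):
    a, b, c = (v >> 2) & 1, (v >> 1) & 1, v & 1
    majority = (a & b) | (a & c) | (b & c)
    assert carry_out_gk(a, b, c) == majority
    # complemented form: ~C_out = K + ~G & ~C_in
    G, K = a & b, (a ^ 1) & (b ^ 1)
    assert carry_out_gk(a, b, c) ^ 1 == K | ((G ^ 1) & (c ^ 1))
```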
The pull-up and pull-down networks of this CMOS gate are not duals of each other. Furthermore, the carry-propagation gate is asymmetric, with different logical efforts per input, $$g_{cp}(G) = 5/3,$$ $$g_{cp}(\overline{K}) = 4/3,$$ and $$g_{cp}(C_{in}) = 2.$$ Although the parasitic delay $$p_{cp} = 3$$ is relatively large for a CMOS gate, the key for a fast RCA is to have just a single carry-propagation gate per bit position. Figure 5.47: Inverting carry-propagation gate based on generate and kill signals. The transistor scale factors belong to the matched gate. If we substitute the inverting carry-propagation gate for the majority gate in the full adder, we complement the carry output. We indicate the change in the full-adder symbol by placing a bubble on the carry output. Rather than using an inverter to compute the uncomplemented carry-out signal, we construct an $$n$$-bit RCA using the inverting ripple carry chain shown in Figure 5.48. Figure 5.48: A ripple carry adder with an inverting carry chain. This design exploits the fact that 3-input majority and parity functions are self-dual, i.e. the Boolean identities $\begin{eqnarray*} P(A,B,C_{in}) &=& \overline{P(\overline{A}, \overline{B}, \overline{C}_{in})} \\ M(A,B,C_{in}) &=& \overline{M(\overline{A}, \overline{B}, \overline{C}_{in})} \end{eqnarray*}$ hold. Therefore, rather than adding inverters to the critical carry chain, we can add inverters in every other position of the RCA to the noncritical inputs and sum outputs. Figure 5.49 shows a 4-bit RCA with the inverting carry chain. The inverting carry propagation gates ($$CP$$) and the kill and generate ($$\overline{K} G$$) logic are shown as black boxes. Figure 5.49: 4-bit RCA with fast carry propagation. We approximate the delay of an $$n$$-bit RCA as $$n$$ times the delay of the carry propagation gate. We assume that all carry propagation gates are minimum sized, matched gates. 
Then, the relevant logical effort for the carry chain is $$g_{cp}(C_{in}) = 2$$ of the carry input. The load capacitance of the gate is the input capacitance of the carry-input of the carry propagation gate in the next position, $$C_{in}(cp(C_{in})) = 6$$ units, plus the input capacitance of the 2-fork for the XOR gate, $$2 C_{in}(inv) = 6$$ units, for minimum sized stage-1 inverters in each leg. Therefore, the electrical effort of the carry propagation gate is $$h_{cp} = C_L/C_{in}(cp(C_{in})) = 12/6 = 2.$$ The delay of the gate is hence $d_{cp} = g_{cp} h_{cp} + p_{cp} = 2 \cdot 2 + 3 = 7$ time units. For large $$n,$$ the delay of an RCA with carry propagation gates amounts to approximately $D_{cp}(n) = n\,d_{cp} = 7\,n\,,$ which is roughly 15% faster than the multilevel design and more than 30% faster than the two-level design. Another delay reduction is possible by reducing the off-path capacitance for the sum outputs. Instead of driving the 2-fork of the XOR gate directly with the carry output, we may insert a minimum sized inverter before the 2-fork, and obtain a gate delay of $$d_{cp} = 6$$ time units. ### 5.5.2. Magnitude Comparator¶ A fundamental operation on numbers is their comparison. We have discussed circuits for the special case of equality comparison already. Here, we consider the more general case of comparing two $$n$$-bit unsigned binary numbers $$A$$ and $$B.$$ The four magnitude comparisons are $$A < B,$$ $$A \le B,$$ $$A > B,$$ $$A \ge B.$$ We can design circuits for each of these operations or combinations of them. If we have a binary adder, we can use the equivalence $$A < B \Leftrightarrow A - B < 0$$ to compute the less-than comparison of $$A$$ and $$B$$ by adding the 2’s complement of $$B$$ to $$A$$ and inspecting the sign bit, the msb, of the sum. If the sign bit is 1, then the difference is negative, i.e.
$$A < B.$$ Otherwise, if the sign bit is 0, we conclude that $$A \ge B.$$ We may view two unsigned numbers as bitstrings, and perform the comparison based on their lexicographical order. Given bitstrings $$A = A_{n-1} A_{n-2} \ldots A_0$$ and $$B = B_{n-1} B_{n-2} \ldots B_0$$ both of length $$n,$$ then $$A$$ is lexicographically less than $$B$$ if there exists an integer $$k,$$ $$0 \le k < n,$$ such that $\begin{split}A_i = B_i\quad\text{for}\ \ n > i > k,\ \ \text{and}\quad A_k < B_k\,.\end{split}$ That is, if the prefix of $$A$$ equals the prefix of $$B$$ up to and excluding index $$k,$$ and in position $$k$$ we find $$A_k < B_k,$$ then $$A < B.$$ The lexicographical greater-than order is defined analogously. For example, $\begin{split}A = 1101\ <\ 1110 = B\,,\end{split}$ because both numbers have prefix $$11$$ in bit positions 3 and 2, and for $$k=1$$ we have $$A_1 = 0$$ is less than $$B_1 = 1.$$ Here is an example for the extreme case where $$k = n-1,$$ i.e. the numbers have no common prefix: $\begin{split}A = 1101\ >\ 0111 = B\,.\end{split}$ In the other extreme case the common prefix covers the numbers entirely, and the numbers are equal, for instance: $A = 1101\ =\ 1101 = B\,.$ The lexicographical order suggests combinational circuits for magnitude comparisons, with a chain structure similar to the ripple-carry adder. In the following, we discuss three different comparator designs at the logic level. As for the RCA designs, alternative circuits can be compared in terms of delay by means of the method of logical effort. #### Downward Comparator Chain¶ The downward comparator chain implements the lexicographical comparison discussed above, starting at the msb down to the lsb. We choose to implement a less-than-or-equal comparator for two $$n$$-bit unsigned binary numbers $$A$$ and $$B$$: $\begin{split}Y = \begin{cases} 1\,, & \text{if}\ A \le B\,, \\ 0\,, & \text{otherwise}\,. 
\end{cases}\end{split}$ A greater-than-or-equal comparator results in similar logic, whereas a less-than or a greater-than comparator can be built with less logic. We consider bit position $$i,$$ with the goal to design a comparator circuit for bit position $$i$$ as a function of inputs $$A_i$$ and $$B_i,$$ and the comparison results of the next more significant position $$i+1.$$ Our comparator generates two outputs. Output $$E_{out} = E_i$$ shall be 1 if the prefix of $$A$$ equals the prefix of $$B$$ up to and including position $$i.$$ Output $$L_{out} = L_i$$ shall be 1 if the prefix of $$A$$ is less than the prefix of $$B$$ up to and including position $$i.$$ The comparator receives $$E_{in} = E_{i+1}$$ and $$L_{in} = L_{i+1}$$ as inputs from the comparator in position $$i+1.$$ Next, we derive a compact truth table for the comparator.

| $$E_{in}$$ | $$L_{in}$$ | $$A_i$$ | $$B_i$$ | $$E_{out}$$ | $$L_{out}$$ |
|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 1 | 0 |
| 1 | 0 | 0 | 1 | 0 | 1 |
| 1 | 0 | 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 | 1 | 0 |
| 0 | 0 | $$\ast$$ | $$\ast$$ | 0 | 0 |
| 0 | 1 | $$\ast$$ | $$\ast$$ | 0 | 1 |

The four top rows of the truth table cover the case where the prefix of $$A$$ equals the prefix of $$B$$ up to and excluding position $$i.$$ Then, the outputs depend on the values of $$A_i$$ and $$B_i.$$ If $$A_i = B_i,$$ the common prefix extends into position $$i,$$ and we output $$E_{out} = 1$$ and $$L_{out} = 0.$$ If $$A_i = 0$$ and $$B_i = 1,$$ we have $$A_i < B_i,$$ and output $$E_{out} = 0$$ and $$L_{out} = 1.$$ Otherwise, if $$A_i = 1$$ and $$B_i = 0,$$ we have $$A_i > B_i,$$ resulting in $$A > B.$$ Therefore, we output $$E_{out} = 0$$ and $$L_{out} = 0.$$ Row five of the truth table covers the case where the decision that $$A > B$$ is made in one of the more significant bit positions.
When the comparator in position $$i$$ receives inputs $$E_{in} = L_{in} = 0,$$ then $$A > B$$ independent of the values of $$A_i$$ and $$B_i.$$ Therefore, we output $$E_{out} = L_{out} = 0.$$ The bottom row of the truth table covers the case where the decision that $$A < B$$ is made in one of the more significant bit positions. In this case, input $$E_{in} = 0$$ and $$L_{in} = 1.$$ Since $$A < B$$ independent of the values of $$A_i$$ and $$B_i,$$ we output $$E_{out} = 0$$ and $$L_{out} = 1.$$ The compact truth table does not include the case where $$E_{in} = L_{in} = 1,$$ because the equality and less-than orderings are exclusive, and cannot occur simultaneously. The corresponding K-maps below enable us to derive Boolean expressions for the outputs $$E_{out}$$ and $$L_{out}$$: $\begin{eqnarray*} E_{out} &=& E_{in}\,\overline{A}_i\,\overline{B}_i + E_{in}\,A_i\,B_i\ =\ E_{in} \cdot (\overline{A_i \oplus B_i})\,, \\ L_{out} &=& L_{in} + E_{in}\,\overline{A}_i\,B_i\,. \end{eqnarray*}$ Given the comparator module for bit position $$i,$$ we assemble an $$n$$-bit comparator by composing a chain of these modules. Figure 5.50 shows a 4-bit comparator as an example. We drive a logical 1 into input $$E_n$$ and logical 0 into $$L_n$$ assuming that an imaginary prefix of leading zeros is equal for both numbers. Output $$Y$$ of the comparator shall be 1 if numbers $$A$$ and $$B$$ are less than or equal. Consequently, we include an OR gate with inputs $$E_0$$ and $$L_0$$ to compute $$Y.$$ Figure 5.50: 4-bit comparator with downward chain. The delay of an $$n$$-bit comparator with a downward chain is proportional to $$n.$$ The proportionality constant depends on the specific circuit design for $$E_{out}$$ and $$L_{out}.$$ Figure 5.51 shows a logic design of the comparator module emphasizing the logic on the critical chain path. Figure 5.51: Logic design of comparator module for downward chain. 
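The module equations and the chain composition can be validated in software against ordinary integer comparison. A Python sketch of the downward chain (function names are ours):

```python
def comparator_module(E_in, L_in, A_i, B_i):
    """One downward-chain module: E_out = E_in AND XNOR(A_i, B_i),
    L_out = L_in OR (E_in AND NOT A_i AND B_i)."""
    E_out = E_in & ((A_i ^ B_i) ^ 1)
    L_out = L_in | (E_in & (A_i ^ 1) & B_i)
    return E_out, L_out

def less_equal(A, B, n=4):
    """n-bit A <= B: chain modules from msb down to lsb with E_n = 1, L_n = 0,
    then OR the final outputs E_0 and L_0."""
    E, L = 1, 0
    for i in reversed(range(n)):
        E, L = comparator_module(E, L, (A >> i) & 1, (B >> i) & 1)
    return E | L

# Exhaustive check against Python's integer comparison:
assert all(less_equal(a, b) == (1 if a <= b else 0)
           for a in range(16) for b in range(16))
```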
This downward chain has one AND gate on the $$E$$-path and one compound gate on the $$L$$-path. Since $$E_{out}$$ drives both gates of the next comparator module, signal $$E_{out}$$ has a larger load than $$L_{out}.$$ These circuit features deserve our attention when minimizing the propagation delay. #### Upward Comparator Chain¶ In search of a faster comparator circuit, we may consider reversing the direction of the chain such that the signals propagate upwards from lsb to msb, as in a ripple-carry chain. This change requires a redesign of the comparator logic. We briefly show that this redesign is well worth the effort for the less-than-or-equal comparison. As before, we consider bit position $$i,$$ now assuming that the chain propagates information from less significant bit position $$i-1$$ to position $$i.$$ Thus, we compare the suffix of number $$A$$ with the suffix of number $$B$$ at position $$i$$ using the information about the suffix from position 0 up to and including position $$i-1.$$ We introduce a new signal $$LE_i,$$ and let $$LE_i$$ be 1 if the suffix of $$A$$ up to and including position $$i$$ is less than or equal to the corresponding suffix of $$B.$$ Then, the lexicographical order enables us to argue that $$LE_i = 1$$ if $\begin{split}A_i < B_i\quad\text{or if}\quad A_i = B_i\ \ \text{and}\ \ LE_{i-1} = 1\,.\end{split}$ This argument implies that we need just one wire to carry the information of $$LE_{i-1}$$ from the comparator in position $$i-1$$ to the comparator in position $$i.$$ Furthermore, we can formalize the argument directly into a Boolean expression: $LE_i = \overline{A}_i\,B_i + (\overline{A_i \oplus B_i}) \cdot LE_{i-1}\,.$ Figure 5.52 shows a 4-bit comparator using a single wire between the comparators to propagate $$LE_i$$ from position $$i$$ to position $$i+1.$$ In an $$n$$-bit comparator, output $$Y = LE_{n-1}$$ and input $$LE_{-1} = 1.$$ Figure 5.52: 4-bit comparator with upward chain.
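The upward chain reduces to a one-line recurrence per bit position, which the following Python sketch (function name ours) checks exhaustively for 4-bit operands:

```python
def less_equal_upward(A, B, n=4):
    """Upward-chain A <= B: LE_i = ~A_i B_i + XNOR(A_i, B_i) * LE_{i-1},
    with boundary condition LE_{-1} = 1."""
    LE = 1
    for i in range(n):                         # lsb to msb
        a, b = (A >> i) & 1, (B >> i) & 1
        LE = ((a ^ 1) & b) | (((a ^ b) ^ 1) & LE)
    return LE                                  # Y = LE_{n-1}

# Exhaustive check against Python's integer comparison:
assert all(less_equal_upward(a, b) == (1 if a <= b else 0)
           for a in range(16) for b in range(16))
```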
One advantage of the comparator with an upward chain compared to the downward chain is that it saves wires. Whether the upward chain is faster than the downward chain is a matter of concrete circuit design, and is left as an exercise. A common trick for speeding up arithmetic circuits is the grouping of bits with the goal to reduce the length of a chain at the expense of more complex group logic. In case of a comparator, we may group pairs of consecutive bits, essentially interpreting the numbers as base-4 or radix-4 numbers. Higher radix groups are possible too. Bit triples, for example, may be viewed as octal radix-8 digits and bit quadruples as hexadecimal radix-16 digits. We illustrate the idea by means of a radix-4 design of the less-than-or-equal comparator with a downward chain. Recall our design of the 4-bit comparator in Figure 5.50, which we now view as a radix-2 comparator chain. We design the radix-2 comparator for bit position $$i,$$ and replicate the radix-2 comparator four times. In a radix-4 design, we consider pairs of bits and design a comparator for 2-bit digits. We win if we can design a circuit for the radix-4 comparator such that its chain delay is less than two times the chain delay of the original radix-2 comparator, because the radix-4 chain requires only half as many comparators as the radix-2 chain. Figure 5.53 shows the block diagram of the radix-4 comparator for two 4-bit numbers. Figure 5.53: 4-bit radix-4 comparator with downward chain.
The logic of the radix-4 comparator requires expressing $$E_{out}$$ and $$L_{out}$$ as functions of $$A_{2i+1},$$ $$A_{2i},$$ $$B_{2i+1},$$ $$B_{2i},$$ $$E_{in},$$ and $$L_{in}$$: $\begin{eqnarray*} E_{out} &=& E_{in} \cdot (\overline{A_{2i} \oplus B_{2i}}) \cdot (\overline{A_{2i+1} \oplus B_{2i+1}}) \\ L_{out} &=& L_{in} + E_{in} \bigl(\overline{A}_{2i+1}\,B_{2i+1} + (\overline{A_{2i+1} \oplus B_{2i+1}})\,\overline{A}_{2i}\,B_{2i}\bigr) \end{eqnarray*}$ The corresponding circuit diagram in Figure 5.54 arranges the gates such that the critical chain path has the same logic as for the radix-2 design in Figure 5.51. We conclude that the chain delay of our radix-4 comparator module equals the chain delay of the radix-2 comparator. Therefore, for large $$n,$$ we expect the $$n$$-bit radix-4 comparator to be approximately twice as fast as the radix-2 comparator. Figure 5.54: Logic design of radix-4 comparator module for downward chain. The $$n$$-bit comparator also permits designs with a radix larger than 4. Every doubling of the group size halves the number of modules on the chain, and thus roughly halves the chain delay. When the number of modules becomes so small that the delay of the comparator logic in the most significant comparator module cannot be considered negligible w.r.t. the chain delay any longer, the radix trick has reached its point of diminishing returns. Thus, for any given number of bits $$n,$$ there exists a particular radix or group size that minimizes the comparator delay. The method of logical effort enables us to determine the best group size swiftly. 5.9 Use binary paper-and-pencil arithmetic to compute 1. $$37_{10} - 44_{10}$$ with 8-bit binary numbers, 2. $$11_{10} \cdot 13_{10}$$ with 4-bit binary operands. 1.
We know from basic algebra that subtraction $$x - y$$ is equal to addition $$x + (-y)$$ of negative $$y.$$ Therefore, our plan is to use 2’s complement format to represent $$x$$ and $$y$$ as signed binary numbers, negate $$y,$$ and use paper-and-pencil addition. Using signed binary numbers, we need sufficiently many bits so as to include the sign bit. In this exercise, we are given the number of bits as 8. We begin by converting $$37_{10}$$ and $$44_{10}$$ from decimal to unsigned binary: $\begin{eqnarray*} 37_{10} &=& 1 \cdot 32 + 0 \cdot 16 + 0 \cdot 8 + 1 \cdot 4 + 0 \cdot 2 + 1 \cdot 1 \\ &=& 100101_2 \\ 44_{10} &=& 1 \cdot 32 + 0 \cdot 16 + 1 \cdot 8 + 1 \cdot 4 + 0 \cdot 2 + 0 \cdot 1 \\ &=& 101100_2 \end{eqnarray*}$ Note that both unsigned binary numbers occupy 6 bits. Before negating $$44_{10}$$ by forming the 2’s complement, we zero extend the binary number to 8 bits. Thus, we include one more bit than needed for a sign bit. $\begin{eqnarray*} && 0010\,1100 \\ \text{1's complement:} && 1101\,0011 \\ \text{add 1:} && \phantom{0000\,000}1 \\ \text{2's complement:} && 1101\,0100 \end{eqnarray*}$ We find that $$-44_{10} = 1101\,0100_2$$ in 8-bit 2’s complement format. 
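The negation step can be double-checked mechanically: in Python, taking the bitwise complement, adding 1, and masking to 8 bits reproduces the paper-and-pencil result (a sketch, relying on Python's arbitrary-precision integers):

```python
# 2's-complement negation of 44 in 8 bits: 1's complement, add 1, keep 8 bits.
x = 44
neg = (~x + 1) & 0xFF
assert neg == 0b1101_0100          # matches the paper-and-pencil result 1101 0100
assert neg - 256 == -44            # interpreting the msb as sign bit: value is -44
```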
Next we perform the addition of the 8-bit signed binary numbers: $\begin{eqnarray*} 37_{10} + (-44_{10}) &=& \phantom{+}\ 0010\,0101 \\ && +\ 1101\,0100 \\ &=& \phantom{+}\ 1111\,1001_2 \end{eqnarray*}$ We verify the difference by converting the signed binary number into decimal, and check whether it equals the expected difference $$37_{10} - 44_{10} = -7_{10}.$$ Since the sign bit (msb) of our binary sum is 1, the number is negative, and we form the 2’s complement to determine its magnitude: $\begin{eqnarray*} && 1111\,1001 \\ \text{1's complement:} && 0000\,0110 \\ \text{add 1:} && \phantom{0000\,000}1 \\ \text{2's complement:} && 0000\,0111\ =\ 7_{10} \end{eqnarray*}$ We conclude that the result of our paper-and-pencil subtraction is $$-7_{10},$$ which equals the expected result by decimal subtraction. 2. We multiply two binary numbers with paper-and-pencil just as we multiply two decimal numbers: form the partial products and add. Recall the paper-and-pencil multiplication with decimal arithmetic: Multiply multiplicand $$11$$ with each digit of multiplier $$13,$$ and right align the partial products with the multiplier digits. Then, add the partial products. For paper-and-pencil binary multiplication, we convert the decimal numbers into unsigned binary format: $\begin{eqnarray*} 11_{10} &=& 1011_2 \\ 13_{10} &=& 1101_2\,. \end{eqnarray*}$ Both numbers are 4-bit numbers, as the problem requests. The binary multiplication chart is: To check the result we convert the unsigned binary product into decimal format: $\begin{eqnarray*} 1000\,1111_2 &=& 2^7 + 2^3 + 2^2 + 2^1 + 2^0 \\ &=& 143_{10}\,, \end{eqnarray*}$ which matches the expected product of the decimal multiplication. 5.10 The paper-and-pencil method for subtraction with borrowing is admired in the US for its simplicity, where it is known as Austrian subtraction: Subtraction $$294_{10} - 154_{10}$$ does not require any borrowing, whereas $$8205_{10} - 4696_{10}$$ does.
If the difference in a digit position is negative, such as $$5 - 6$$ in the least significant digit, we borrow $$10$$ from the left, and remember the fact by writing borrow digit $$1$$ between the digits. Now, we add the borrowed 10 to the minuend digit and subtract again, here $$(10 + 5) - 6 = 9.$$ Then, in the subtraction of the next position to the left, we make up for the borrowed $$10$$ by adding borrow digit $$1$$ to the subtrahend, here forming subtraction $$0 - (9 + 1).$$ Since the difference is negative, we borrow another $$10$$ from the left, and so on. Apply Austrian subtraction to unsigned binary numbers: 1. $$101_2 - 100_2,$$ 2. $$10010_2 - 01011_2.$$ 1. We apply the Austrian subtraction to unsigned binary numbers $$101_2 - 100_2.$$ The paper-and-pencil chart is shown on the right. We begin with the least significant bits: $$1 - 0 = 1$$ yields a positive 1. The next position $$0 - 0 = 0$$ yields a 0, and the most significant position $$1 - 1 = 0$$ yields a 0 as well. Since none of the differences is negative, no borrowing is required. We check the result by converting the operands to decimal, and compare the decimal difference with the binary result. We find the operands $$101_2 = 5_{10}$$ and $$100_2 = 4_{10}.$$ The difference computed in decimal is $$5_{10} - 4_{10} = 1_{10}.$$ This matches our binary result $$001_2 = 1_{10},$$ indeed. 2. We apply the Austrian subtraction to unsigned binary numbers $$10010_2 - 01011_2.$$ This example requires borrowing. The difference of the least significant bits $$0 - 1 = -1$$ is negative. Hence, we borrow a $$2$$ from the next position, write borrow $$1$$ between the bits, and redo the subtraction adding the borrowed $$2$$ to the minuend: $$(2 + 0) - 1 = 1.$$ Note that we do the arithmetic in decimal for convenience, although the result is a binary digit.
When designing a digital subtractor circuit, we would implement the subtraction in binary format using two bits, and place the borrow bit in the second (msb) position of the minuend: $$(10_2 + 0) - 1 = 1.$$ In the second position, we make up for the borrowed $$2$$ by adding borrow bit $$1$$ to the subtrahend. The resulting subtraction becomes: $$1 - (1+1) = -1.$$ Since the result is negative, we borrow another $$2$$ from the third position, and redo the subtraction: $$(2+1) - (1+1) = 1$$ yields bit $$1$$ in the second position. In the third position, we include the borrowed $$2$$ in the subtrahend: $$0 - (0 + 1) = -1.$$ This negative result requires borrowing yet another $$2$$ from the fourth position. We redo the subtraction adding the borrowed $$2$$ to the minuend: $$(2 + 0) - (0 + 1) = 1$$ yields bit $$1$$ in the third position. We need another borrow bit for the fourth position, whereas the fifth position does not. We check the result using decimal arithmetic. The operands are $$10010_2 = 18_{10}$$ and $$01011_2 = 11_{10}.$$ The difference is $$18_{10}-11_{10}=7_{10},$$ which is equal to the binary difference $$111_2.$$ 5.11 An $$n$$-bit comparator computes $$A < B$$ of two $$n$$-bit numbers $$A$$ and $$B$$ by subtracting $$A-B$$ and outputting the sign bit. Implement these magnitude comparisons using the $$n$$-bit comparator circuit: 1. $$A > B,$$ 2. $$A \ge B,$$ 3. $$A \le B.$$ 1. Since $$A > B = B < A,$$ we implement the greater-than comparison with a less-than comparator by exchanging the inputs. 2. Note that $$A \ge B = \overline{A < B}.$$ Therefore, we implement the greater-than-or-equal comparison with a less-than comparator and an inverter. 3. Since $$A \le B = \overline{A > B},$$ we implement the less-than-or-equal comparison using the less-than comparator with swapped inputs and an inverter to complement the output. 5.12 We study the connection between bit counting and addition. 1. 
We wish to count the number of 1-bits in a 2-bit binary number $$A = a_1 a_0,$$ and output the number of 1-bits as an unsigned binary number. Design a 1-bit counter using half adders as building blocks. 2. We wish to count the number of 1-bits in a 3-bit binary number $$A = a_2 a_1 a_0,$$ and output the number of 1-bits as an unsigned binary number. Design a 1-bit counter using full adders as building blocks.

1. We begin by formalizing the functionality of the 1-bit counter using a truth table. We are given 2-bit input $$A = a_1 a_0,$$ and wish to count the number of bits with value 1. Our truth table lists the four input combinations, and for each input combination we count the number of 1-bits. We find that the number of 1-bits is in range $$[0,2].$$

| $$a_1$$ | $$a_0$$ | $$\text{# 1-bits}$$ | $$y_1$$ | $$y_0$$ |
|---------|---------|---------------------|---------|---------|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 | 1 |
| 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 2 | 1 | 0 |

We wish to output the number of 1-bits as an unsigned binary number. Since the range of the number of 1-bits is $$[0,2],$$ we need two bits to encode the output. Denote output $$Y = y_1 y_0,$$ then our 1-bit counter has input $$A$$ and output $$Y$$ as shown in the black box diagram on the right. We include columns for the binary encoding of the number of 1-bits, $$y_1$$ and $$y_0,$$ in the truth table. Now, notice that our truth table specifies a half adder if we interpret $$y_1$$ as carry and $$y_0$$ as sum. Therefore, we can implement the 1-bit counter simply with a half adder. Even more insightful is the observation that a 1-bit counter for 2-bit number $$A = a_1 a_0$$ is equivalent to an adder of two 1-bit numbers, $$a_0 + a_1.$$

2. We extend our insight that a 1-bit counter for 2-bit numbers is an adder for two 1-bit numbers to 3-bit numbers. We hypothesize that a 1-bit counter for 3-bit numbers is an adder of three 1-bit numbers. Since three bits can have a minimum of zero 1-bits and a maximum of three 1-bits, the number of 1-bits must be in range $$[0,3],$$ which we can encode with two bits.
Thus, our 1-bit counter module must have the black box specification shown on the right. To specify the 1-bit counter function, we derive a truth table with three input bits $$A = a_2 a_1 a_0,$$ the number of 1-bits in decimal, and its binary representation $$Y = y_1 y_0.$$

| $$a_2$$ | $$a_1$$ | $$a_0$$ | $$\text{# 1-bits}$$ | $$y_1$$ | $$y_0$$ |
|---------|---------|---------|---------------------|---------|---------|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 | 0 | 1 |
| 0 | 1 | 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 2 | 1 | 0 |
| 1 | 0 | 0 | 1 | 0 | 1 |
| 1 | 0 | 1 | 2 | 1 | 0 |
| 1 | 1 | 0 | 2 | 1 | 0 |
| 1 | 1 | 1 | 3 | 1 | 1 |

Compare this truth table with that of the full adder. If we interpret $$y_1$$ as carry-out and $$y_0$$ as sum bit, our 1-bit counter for 3-bit input $$A$$ and the full adder truth tables specify the same function. Therefore, we can implement the 1-bit counter for three inputs using nothing but a full adder. Furthermore, we conclude that our hypothesis is true, i.e. a 1-bit counter for 3-bit number $$A = a_2 a_1 a_0$$ is equivalent to an adder for three 1-bit numbers, $$a_0 + a_1 + a_2.$$ The corresponding paper-and-pencil addition is illustrated on the right.

3. We design a full adder from half adders. From the perspective of bit counting, a full adder counts the number of 1-bits in three inputs, whereas a half adder counts the number of 1-bits in two inputs. Thus, we should be able to use two half adders to count the 1-bits in two of the three inputs, and combine the result with the 1-bit of the third input. From the perspective of adding three 1-bit numbers, we may view this composition as a parenthesization: $$a_0 + a_1 + a_2 = (a_0 + (a_1 + a_2)).$$ Circuit (a) below uses one half adder to count the 1-bits in 2-bit number $$a_2 a_1,$$ and the second half adder to count the 1-bits in 2-bit number $$s_0 a_0.$$ This circuit is incomplete, because it does not incorporate the carry outputs of the half adders. Nevertheless, it already counts correctly if the number of 1-bits is in range $$[0,1],$$ which we can represent in a single bit $$y_0.$$ If all bits are 0, i.e.
$$a_0 = a_1 = a_2 = 0,$$ then $$s_0 = 0$$ and $$s_1 = y_0 = 0.$$ If $$a_0 = 1$$ and $$a_1 = a_2 = 0,$$ then $$s_0 = 0$$ and $$s_1 = y_0 = 1$$ as expected. Circuit (a) handles two more cases: if $$a_0 = 0$$ and one of $$a_1$$ or $$a_2$$ equals 1. Then, $$s_0 = 1$$ and $$s_1 = y_0 = 1.$$ Circuit (a) does not count two or three 1-bits. For example, if $$a_0 = 0$$ and $$a_1 = a_2 = 1,$$ then $$s_0 = s_1 = y_0 = 0.$$ We need to incorporate output $$c_0,$$ because it carries the information that both $$a_1$$ and $$a_2$$ are 1. Since value 2 is the weight of the second bit position in a binary number, carry output $$c_0$$ contributes to bit $$y_1$$ of binary count $$Y = y_1 y_0.$$ Also, if $$s_0 = 1$$ because one of $$a_1$$ or $$a_2$$ is 1, and $$a_0 = 1,$$ then the 1-bit count is two and $$c_1 = 1.$$ Hence, carry output $$c_1$$ should contribute to $$y_1$$ as well. Circuit (b) shows a solution that uses an OR gate to combine carry outputs $$c_0$$ and $$c_1$$ into output $$y_1.$$ This circuit works, because $$c_0 = 1$$ if both $$a_1$$ and $$a_2$$ are 1, i.e. the 1-bit count is 2, then $$y_1 = 1$$ independent of $$a_0.$$ Furthermore, if one of $$a_1$$ or $$a_2$$ is 1, i.e. their 1-bit count is 1, and $$a_0 = 1,$$ then $$y_1 = 1.$$ For all other input combinations $$y_1 = 0$$ because their 1-bit count is less than two. Circuit (c) replaces the OR gate of circuit (b) with a third half adder to produce the requested implementation of a full adder based on half adders. Carry output $$c_2$$ remains unused. The sum of the half adder is the XOR rather than the OR of $$c_0$$ and $$c_1,$$ however. Using an XOR instead of the OR gate in circuit (b) does not affect the 1-bit count, if we notice that input combination $$c_0 = c_1 = 1$$ cannot occur, because it would require four 1-bits in three inputs. Circuits (b) and (c) are both implementations of the full adder. The abstract perspective of bit counting serves as an aid for the design of these circuits. 
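As a sketch, the half-adder composition can be checked exhaustively in a few lines of Python. The function names are ours; circuit (b)'s OR of the carries is used, with a comment noting why the XOR of circuit (c) works as well.

```python
# Perfect induction: two half adders plus an OR gate (circuit (b))
# realize a full adder, i.e. a 1-bit counter for three inputs.
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

def full_adder_from_halves(a0, a1, a2):
    s0, c0 = half_adder(a1, a2)   # count the 1-bits in a2 a1
    y0, c1 = half_adder(s0, a0)   # combine with a0
    y1 = c0 | c1                  # XOR works too: c0 = c1 = 1 cannot occur
    return y1, y0

for a0 in (0, 1):
    for a1 in (0, 1):
        for a2 in (0, 1):
            y1, y0 = full_adder_from_halves(a0, a1, a2)
            assert 2 * y1 + y0 == a0 + a1 + a2
print("full adder verified by perfect induction")
```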
However, for a rigorous proof of equivalence, we resort to a perfect induction or Boolean algebra. 5.13 We investigate a modification of the ripple carry adder in Figure 5.41 with an inverting carry chain: 1. Design a CMOS inverting majority gate with inputs $$A,$$ $$B,$$ $$C,$$ and output $$Y$$: $Y = \overline{A B + A C + B C}\,.$ 2. Estimate the delay of an $$n$$-bit RCA with inverting carry chain. The circuit fragment below shows two stages of the inverting carry chain. For comparison, the RCA delay of the non-inverting carry chain with a NAND-NAND mapping of the majority gate is $$D_{2\text{-}level}(n) \approx 10.58 n$$ and for the fast carry chain with carry-propagation gates $$D_{cp}(n) \approx 7 n.$$ 1. The inverting majority gate is discussed in Section Majority Gate. In the symmetric majority gate all inputs have logical effort $$g_{Msym} = 4$$ and the parasitic delay is $$p_{Msym} = 6$$ capacitive units. The asymmetric majority gate has one fast input with logical effort $$g_{Masym} = 2$$ and the parasitic delay is $$p_{Masym} = 4.$$ 2. In an $$n$$-bit RCA each bit position contributes a single stage, one inverting majority gate, to the carry chain.
To determine the delay of one stage in the chain, we consider bit position $$i.$$ The load of carry output $$C_{out}$$ in bit position $$i$$ is the input capacitance of bit position $$i+1.$$ Assuming that all gates at the input of a stage are minimum sized matched gates, the load capacitance is the sum of the input capacitances of the inverters in the 2-fork and of the majority gate: $C_L = 2 C_{inv} + C_{in}(M)\,.$ If we use a symmetric majority gate, we have input capacitance $$C_{in}(M) = C_{in}(M_{sym}) = 12,$$ and if we use the fast input of the asymmetric majority gate for the carry input, then the input capacitance is $$C_{in}(M_{asym}) = 6.$$ Thus, the load capacitance using the symmetric majority gate is $$C_{Lsym} = 2 \cdot 3 + 12 = 18$$ and with the asymmetric majority gate $$C_{Lasym} = 2 \cdot 3 + 6 = 12$$ capacitive units. The delay of the symmetric inverting majority gate in the carry chain is $d_{Msym} = g_{Msym} \frac{C_{Lsym}}{C_{in}(M_{sym})} + p_{Msym} = 4 \frac{18}{12} + 6 = 12$ time units, and the delay of the asymmetric inverting majority gate only $d_{Masym} = g_{Masym} \frac{C_{Lasym}}{C_{in}(M_{asym})} + p_{Masym} = 2 \frac{12}{6} + 4 = 8$ time units. For large $$n,$$ the propagation delay of the RCA is approximately the delay of the carry chain, here $D_{Msym}(n) \approx 12\,n\,,\qquad D_{Masym}(n) \approx 8\,n$ for our two choices of symmetric and asymmetric inverting majority gates. We conclude that the asymmetric inverting majority gate provides a competitive alternative for an RCA design that is almost as fast as the RCA with carry-propagation gates and a delay of $$D_{cp}(n) \approx 7 n.$$ ## 5.6. Timing Analysis¶ A typical task for combinational circuit designers is to derive a circuit with minimum delay for a given functional specification. 
We accomplish this goal by means of the method of logical effort, because it enables us to analyze the delay of a circuit, and to understand and fix potential shortcomings within one or more design iterations. However, the timing behavior of a combinational circuit is usually more complex than the single delay number produced by the method of logical effort suggests. In the following, we characterize the basic effects that cause combinational circuits to exhibit a rather complex timing behavior. ### 5.6.1. Timing Diagrams¶ Consider the buffer circuit in Figure 5.55. Assume that both inverters have the same size, and the input capacitances are equal to the load capacitance. Since the electrical efforts of the inverters are equal, they have equal delays. If you build such a circuit, and measure the voltages of signal $$A,$$ $$B,$$ and $$Y$$ with an oscilloscope, you may obtain an analog timing diagram, as shown on the left in Figure 5.55. The voltages transition between 0 and $$V_{DD}$$ within a finite amount of time. If we could apply a step function as input signal $$A,$$ the transitions would have the exponential response of Figure 1.38. However, the step function is merely a convenient idealization. In reality, signals cannot transition in zero time. Therefore, analog signals resemble the shape of an exponential step response only. Figure 5.55: Buffer with analog timing diagram of signal voltages (left) and digital timing diagram (right). The finite slope of the analog voltage transitions complicates a measurement of the buffer delay. It forces us to pick a particular point in time for a transition. 
A convenient point in time is where the voltage equals 50% of the maximum voltage, here $$V_{DD}/2.$$ We have marked these transition times in Figure 5.55 as $$t_0, t_1, \ldots, t_5.$$ In our ideal model for the exponential step response of Figure 1.38, the 50% crossing occurs where $$e^{-t/RC} = 1/2,$$ or $$t = RC \ln 2 = 0.69\,RC.$$ The propagation delay of the buffer is the time difference between the 50% crossing of the input transition and the corresponding 50% crossing of the output transition. In Figure 5.55, we find propagation delay $$t_{pd}(buf) = t_2 - t_0$$ for the buffer. If the rising and falling transitions are symmetric, then the propagation delay of the buffer is also equal to $$t_{pd}(buf) = t_5 - t_3.$$ We can measure these delays, and determine the technology specific time constant $$\tau$$ of the model of logical effort experimentally. An immediate consequence of characterizing all transitions by their 50% crossing points is that the corresponding gate delays add up to path delays. For example, buffer delay $$t_{pd}(buf) = t_2 - t_0$$ in Figure 5.55 is the propagation delay of the stage-1 inverter $$t_{pd}(inv_1) = t_1 - t_0$$ plus the propagation delay of the stage-2 inverter $$t_{pd}(inv_2) = t_2 - t_1.$$ The gate delays of a path form a telescoping sum. The digital abstraction ignores the details of the analog voltages, in particular the actual voltage value of $$V_{DD}$$ and the finite slope of transitions. Instead, we approximate the analog signals with digital step transitions between Boolean values 0 and 1. As shown in Figure 5.55 on the right, we assume that the ideal transitions occur at the 50% crossings of the actual transitions. Then, all gate and path delays are reflected properly in the digital timing diagram. Figure 5.56: Value oblivious digital timing diagram. 
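The $$0.69\,RC$$ rule of thumb for the 50% crossing follows directly from the exponential step response; a one-line numerical check (the normalization $$RC = 1$$ is our arbitrary choice):

```python
import math

# 50% crossing of the RC step response V(t) = VDD * (1 - exp(-t/RC)):
# solve 1 - exp(-t/RC) = 1/2, i.e. exp(-t/RC) = 1/2, giving t = RC ln 2.
RC = 1.0
t50 = RC * math.log(2)
print(round(t50, 3))  # → 0.693
```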
If we are interested primarily in the signal delays of a digital circuit, but not in their concrete Boolean values, then we draw the digital timing diagram as shown in Figure 5.56. This variant of a timing diagram displays both complemented and uncomplemented signal values. Transitions occur at their crossing points. ### 5.6.2. Path Delays¶ An inverter with a given size and load has a particular propagation delay from input to output. Likewise, symmetric CMOS gates with multiple inputs have equal propagation delays from each input to the output. In contrast, asymmetric gates do not have equal propagation delays from each input to the output, and neither do most circuits with multiple inputs and multiple stages. As a concrete example, consider 4-variable function $$Y = \overline{A}\,C + \overline{B}\,C + \overline{D},$$ that we can implement as a 3-stage NAND chain, shown in Figure 5.57. Figure 5.57: 3-stage circuit for timing analysis. Using the method of logical effort, we can minimize the delay of the critical path from inputs $$A$$ or $$B$$ to output $$Y.$$ Assuming that the stage-1 NAND gate has input capacitance $$C_{in} = 4,$$ the path electrical effort is $$H = C_L/C_{in} = 64.$$ With path logical effort $$G = (4/3)^3$$ and branching effort $$B=1,$$ the path effort is $$F = (16/3)^3.$$ We obtain a minimum path delay of $$\hat{D} = 3 F^{1/3} + 3 p_{nand2} = 22$$ time units if each stage bears effort $$\hat{f} = F^{1/3} = 16/3.$$ Then, each NAND gate incurs a delay of $$\hat{d} = \hat{f} + p_{nand2} = 22/3$$ time units. If we account for the technology specific time constant $$\tau,$$ we obtain propagation delay $$t_{pd}(nand2) = \tau \hat{d}.$$ Minimum path delay $$\hat{D}$$ of the circuit is a worst-case delay that minimizes the delay of the longest, the critical path of the circuit. 
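The path-effort numbers above can be reproduced in a few lines. This is a sketch of the arithmetic only; the parasitic delay $$p_{nand2} = 2$$ is implied by $$\hat{D} = 3 F^{1/3} + 3 p_{nand2} = 22.$$

```python
# Logical-effort delay of the 3-stage NAND chain of Figure 5.57.
G = (4 / 3) ** 3        # path logical effort of three 2-input NAND gates
B = 1.0                 # branching effort
H = 64.0                # path electrical effort C_L / C_in
F = G * B * H           # path effort, (16/3)^3

f_hat = F ** (1 / 3)    # effort borne by each stage, 16/3
p_nand2 = 2.0           # parasitic delay per NAND gate (implied by D = 22)
D_hat = 3 * f_hat + 3 * p_nand2

print(round(f_hat, 4), round(D_hat, 1))  # → 5.3333 22.0
```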
Actually, the circuit in Figure 5.57 has two critical paths with delay $$t_{A\rightarrow Y} = t_{B\rightarrow Y} = \hat{D} \tau = 3 \tau \hat{d}.$$ Other paths of the circuit are shorter, and have smaller delays than $$\hat{D}.$$ The path from input $$C$$ to $$Y$$ traverses two NAND gates, and has a delay of $$t_{C\rightarrow Y} = 2 \tau \hat{d}.$$ The path from input $$D$$ to $$Y$$ is even shorter. It traverses only one NAND gate with a delay of $$t_{D\rightarrow Y} = \tau \hat{d}.$$ In general, a circuit has one or more shortest paths with the smallest delay, one or more longest paths with the largest delay, and all other paths have delays between the smallest and largest delays. We characterize the timing behavior of a combinational circuit with the smallest and largest delays. The contamination delay $$t_{cd}$$ is the minimum delay from any input to an output of a circuit. The propagation delay $$t_{pd}$$ is the maximum delay from any input to an output of a circuit. The propagation delay is the worst-case delay that we minimize by means of the method of logical effort. Together the contamination and propagation delays bound the delay from any input to an output of a circuit. For example, the circuit in Figure 5.57 has $$t_{cd} = \tau \hat{d}$$ and $$t_{pd} = 3 \tau \hat{d}.$$ The delay of path $$C \rightarrow Y$$ lies within this range, $$t_{cd} < 2 \tau \hat{d} < t_{pd}.$$ To observe a path delay in a circuit, we stimulate the path by means of an input transition at time $$t_0$$ and measure time $$t_1$$ when the output transition occurs. Then, the path delay is the difference $$t_1 - t_0.$$ Figure 5.58 illustrates the path delays in the timing diagram for the critical path and the shortest path of the 3-stage NAND chain. We define an initial state $$Y(A,B,C,D) = Y(0,1,1,1) = 1,$$ apply the inputs and assume that the circuit has enough time, i.e. more than its propagation delay, to stabilize output $$Y = 1,$$ see Figure 5.58(a). 
Figure 5.58: Timing analysis of transitions in 3-stage NAND circuit: (a) initial state, (b) transition along critical path, (c) transition along shortest path. Stimulating the critical path of the NAND chain requires a transition on one of the inputs $$A$$ or $$B.$$ If we change input $$B$$ from 1 to 0, then output $$W$$ remains unchanged. This transition does not trigger a change at output $$Y.$$ However, changing $$A$$ from 0 to 1 causes $$W$$ to transition from 1 to 0, which causes $$X$$ to transition from 0 to 1, which in turn causes output $$Y$$ to transition from 1 to 0. The sequence of transitions is shown in Figure 5.58(b) and the timing diagram. We stimulate the transition of $$A$$ at time $$t_0.$$ After a delay of $$\tau \hat{d},$$ i.e. at time $$t_1 = t_0 + \tau \hat{d},$$ node $$W$$ transitions from 1 to 0. Output $$Y$$ transitions after the propagation delay of the circuit at time $$t_3 = t_0 + t_{pd} = t_0 + 3 \tau \hat{d}.$$ Figure 5.58(c) illustrates the transitions that enable us to observe the contamination delay of the 3-stage NAND chain. We stimulate the shortest path by changing input $$D$$ from 1 to 0 at time $$t_4 > t_3.$$ This causes output $$Y$$ to transition from 0 to 1 after the delay of the stage-3 NAND gate, $$\tau \hat{d}.$$ Therefore $$t_5 - t_4 = \tau \hat{d},$$ which equals contamination delay $$t_{cd}$$ of the circuit. Figure 5.59: Timing diagram with unknown state between $$t_{cd}$$ and $$t_{pd}.$$ When we design larger circuits, we modularize smaller circuits. We draw the module as a black box and provide the functional and timing specifications. For example, to modularize the 3-stage NAND circuit, we may define a 4-bit input bus $$A,$$ as shown in Figure 5.59 on the left, and specify its function $$Y = \overline{A}_0 A_2 + \overline{A}_1 A_2 + \overline{A}_3.$$ Furthermore, we summarize the timing behavior by specifying the contamination delay and propagation delay of the circuit.
The graphical version of the timing specification is shown in form of a timing diagram in Figure 5.59 on the right. A transition of input bus $$A$$ causes output $$Y$$ to transition after a delay in range $$[t_{cd}, t_{pd}].$$ We mark the signal of $$Y$$ in this time range with a zig-zag pattern, indicating that the actual value is unknown or unstable during this period of time. The essential information provided by this diagram is (1) the output does not change until delay $$t_{cd}$$ after an input transition, and (2) the output is stable beyond delay $$t_{pd}$$ after an input transition. ### 5.6.3. Algorithmic Timing Analysis¶ Assume we have designed a combinational circuit and know the delay of each gate and the contamination and propagation delays of all subcircuits. We wish to determine the timing specification of the combinational circuit, i.e. its contamination delay and its propagation delay. The following algorithm determines the propagation delay of an acyclic combinational circuit. Algorithm (Propagation Delay) 1. Initialize the arrival times of all terminal inputs with 0. 2. For each circuit element, if the arrival times $$t_a(A_i)$$ of all inputs $$A_i$$ are determined, set the output times of all outputs $$Y_j$$ to $\max_i(t_a(A_i)) + t_{pd}(Y_j)\,.$ 3. The maximum output time of all terminal outputs is the propagation delay of the circuit. We illustrate the algorithm by means of the 3-stage NAND circuit in Figure 5.57. We assume that each NAND gate has a delay of $$\tau \hat{d} = 50\,\mathit{ps}.$$ Since the NAND gate is symmetric, all paths from each input to the output have equal delay. Therefore, the contamination delay of the NAND gate equals its propagation delay, such that $$t_{cd}(nand2) = t_{pd}(nand2) = 50\,\mathit{ps}.$$ We initialize the arrival time of terminal inputs $$t_a(A) = t_a(B) = t_a(C) = t_a(D) = 0\,\mathit{ps}.$$ Figure 5.60(a) shows the annotated arrival times. 
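The algorithm is a single pass over the gates in topological order. The following Python sketch applies it to the 3-stage NAND chain of Figure 5.57; the dictionary layout and function name are our own, and replacing `max` with `min` (and $$t_{pd}$$ with $$t_{cd}$$ per gate) yields the contamination-delay variant.

```python
# Arrival-time computation for an acyclic combinational circuit.
def propagation_delay(gates, inputs):
    # gates: output node -> (gate delay, list of input nodes), topological order
    t = {node: 0 for node in inputs}         # step 1: terminal inputs arrive at 0
    for out, (delay, ins) in gates.items():  # step 2: max arrival + gate delay
        t[out] = max(t[i] for i in ins) + delay
    return t                                 # step 3: read off terminal outputs

# 3-stage NAND chain of Figure 5.57, 50 ps per NAND gate.
nand_chain = {
    "W": (50, ["A", "B"]),
    "X": (50, ["C", "W"]),
    "Y": (50, ["D", "X"]),
}
t = propagation_delay(nand_chain, ["A", "B", "C", "D"])
print(t["W"], t["X"], t["Y"])  # → 50 100 150
```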
Figure 5.60: Algorithmic deduction of propagation delay: (a) initialization, (b) update output time of stage-1 NAND gate, (c) update output time of stage-2 NAND gate, (d) update output time of output $$Y.$$ Next, according to step 2 of the algorithm, we update the output times of those NAND gates for which the arrival times of all inputs are known. This is the case for the stage-1 NAND gate only. Its input arrival times are $$t_a(A) = t_a(B) = 0\,\mathit{ps}.$$ Therefore, the output time is $t_a(W) = \max(t_a(A), t_a(B)) + t_{pd}(nand2) = (0 + 50) \mathit{ps} = 50\,\mathit{ps}\,,$ as shown in Figure 5.60(b). Since output $$W$$ of the stage-1 NAND gate is the input of the stage-2 NAND gate, we know the arrival times of all inputs of the stage-2 NAND gate, $$t_a(W) = 50\,\mathit{ps}$$ and $$t_a(C) = 0\,\mathit{ps}.$$ Therefore, we can update its output time to $t_a(X) = \max(t_a(C), t_a(W)) + t_{pd}(nand2) = (50 + 50) \mathit{ps} = 100\,\mathit{ps}\,,$ as shown in Figure 5.60(c). Now, we know the arrival time of nodes $$X$$ and $$D,$$ so that we can update the output time of the stage-3 NAND gate to $t_a(Y) = \max(t_a(D), t_a(X)) + t_{pd}(nand2) = (100 + 50) \mathit{ps} = 150\,\mathit{ps}\,.$ Output $$Y$$ of the stage-3 NAND gate is the only terminal output of the circuit. Therefore, step 3 of the algorithm is trivial. The propagation delay of the circuit is the arrival time at output $$Y,$$ that is $$t_{pd}(Y) = 150\,\mathit{ps}.$$ The algorithm can be adapted in a straightforward manner to determine the contamination delay of an acyclic combinational circuit. In steps 2 and 3 substitute contamination delay for propagation delay and minimum for maximum: Algorithm (Contamination Delay) 1. Initialize the arrival times of all terminal inputs with 0. 2. For each circuit element, if the arrival times $$t_a(A_i)$$ of all inputs $$A_i$$ are determined, set the output times of all outputs $$Y_j$$ to $\min_i(t_a(A_i)) + t_{cd}(Y_j)\,.$ 3. 
The minimum output time of all terminal outputs is the contamination delay of the circuit. Figure 5.61 illustrates the algorithm for the 3-stage NAND chain. We assume a contamination delay for each NAND gate of $$t_{cd}(nand2) = 50\,\mathit{ps}.$$ The output time of the stage-1 NAND gate is $$\min(0,0)+t_{cd}(nand2) = 50\,\mathit{ps}.$$ Analogously, the output time of the stage-2 NAND gate is $$\min(50,0)+t_{cd}(nand2) = 50\,\mathit{ps}$$ and of the stage-3 NAND gate $$\min(50,0)+t_{cd}(nand2) = 50\,\mathit{ps}.$$ The contamination delay of the circuit is the arrival time at output $$Y,$$ or $$t_{cd}(Y) = 50\,\mathit{ps}.$$ Figure 5.61: Algorithmic deduction of contamination delay: (a) initialization, (b) update output time of stage-1 NAND gate, (c) update output time of stage-2 NAND gate, (d) update output time of output $$Y.$$ We conclude that the 3-stage NAND chain has a contamination delay of $$50\,\mathit{ps}$$ and a propagation delay of $$150\,\mathit{ps}.$$ These delays define the lower and upper delay bounds of the timing specification shown in Figure 5.59. Example 5.17: Timing Analysis of 4-bit Ripple-Carry Adder We determine the contamination and propagation delay of the 4-bit RCA with majority and parity gates of Figure 5.41, redrawn below. The critical path of the RCA is the carry chain. We assume that the majority and parity gates have equal contamination and propagation delays. The normalized delays are given as $$t_{cd}(M) = t_{pd}(M) = 9$$ time units for the majority gates and $$t_{cd}(P) = t_{pd}(P) = 29$$ time units for all parity or XOR gates. We determine the contamination and propagation delays algorithmically. The annotated circuit diagram below shows pairs of arrival times for the contamination delay as first component and propagation delay as second component at each node. Terminal inputs $$A_i,$$ $$B_i,$$ $$C_{in}$$ are initialized with arrival time 0. 
Since the RCA is a multioutput circuit, step 3 requires finding the minimum and maximum arrival times of the terminal outputs $$S_i$$ and $$C_{out}.$$ The contamination delay is the minimum of the first components, $$t_{cd}(rca) = 9$$ time units, and the propagation delay is the maximum of the second components, $$t_{pd}(rca) = 56$$ time units. Thus, the shortest paths of the RCA stretch from inputs $$A_3$$ or $$B_3$$ to output $$C_{out}.$$ The longest, critical paths start at $$A_0,$$ $$B_0,$$ or $$C_{in},$$ and end at output $$S_3.$$ The timing diagram below shows the delays of all nodes of the RCA. The arrows record which gate input transition causes the output transition. If a circuit node has an arrival time for the contamination delay that is smaller than the arrival time for the propagation delay, the signal begins switching at the former and stabilizes at the latter delay. The signal is unstable in between. For example, for node $$C_2,$$ our delay algorithms yield arrival time 9 for the contamination delay and 27 for the propagation delay. Correspondingly, in the timing diagram below, signal $$C_2$$ becomes unstable at time 9 and becomes stable at time 27 again. If we wish to use the RCA as a subcircuit, we summarize this complex timing behavior with the simpler delay bounds $$t_{cd} = 9$$ and $$t_{pd} = 56$$ time units.

5.14 Perform a timing analysis of the multilevel circuit with these gate delays:

| gate | delay |
|------|-------|
| nand | $$20\,ps$$ |
| nor  | $$25\,ps$$ |
| and  | $$30\,ps$$ |
| or   | $$35\,ps$$ |

1. Determine the propagation delay of the circuit. 2. Determine the contamination delay of the circuit. 1. We apply the algorithm for propagation delay to the circuit. We begin by assigning arrival time 0 to each of the inputs, as shown in step (a) below. In subsequent steps (b)-(e), we update the output times of those gates for which the arrival times of all inputs are known. The output time is the maximum of all input arrival times plus the gate delay.
We find that the propagation delay of the circuit is $$t_{pd} = 110\,ps,$$ i.e. the output stabilizes at most $$110\,ps$$ after one of its inputs changes. 2. We apply the algorithm for contamination delay to the circuit. In step (a) we assign arrival time 0 to each of the inputs. In steps (b)-(e), we update the output times of those gates for which the arrival times of all inputs are known. The output time is the minimum of all input arrival times plus the gate delay. We find that the circuit has a contamination delay of $$t_{cd} = 75\,ps,$$ i.e. the output begins to change at the earliest $$75\,ps$$ after one of its inputs changes. ## 5.7. Hazards¶ The timing behavior of combinational circuits can be surprising at times. For example, let’s assume we have designed a 2:1 multiplexer and want to verify its timing behavior with an oscilloscope. Figure 5.62 shows the multiplexer circuit. All gates shall be symmetric, so that their contamination and propagation delays are equal. The inverter has a delay of $$t(inv) = 2$$ time units, the AND gates have a delay of $$t(and) = 5$$ time units, and the OR gate has a delay of $$t(or) = 5$$ time units as well. Recall the functional behavior of a multiplexer: output $$Y = D_0$$ if $$S = 0,$$ and $$Y = D_1$$ if $$S = 1.$$ The timing diagram in Figure 5.62 shows initially $$D_0 = 1,$$ $$D_1 = 0,$$ $$S = 1,$$ and $$Y = D_1 = 0$$ as expected. At time $$t_0,$$ we toggle $$D_1$$ from 0 to 1, expecting output $$Y$$ to follow. We observe that $$Y$$ transitions to 1 at time $$t_1.$$ When we switch $$S$$ from 1 to 0 at time $$t_2,$$ we expect no change in output $$Y,$$ because both data inputs are 1. However, we observe that output $$Y$$ transitions to 0 at time $$t_3,$$ and at time $$t_4$$ back to 1. This so-called glitch in signal $$Y$$ cannot be explained by the functional behavior of the multiplexer.
Figure 5.62: The timing behavior of the multiplexer exhibits a glitch at time $$t_3.$$ Understanding the cause of the glitch requires a timing analysis of the multiplexer circuit. There are four paths from the inputs to output $$Y.$$ Three of the paths have a delay of $$t(and) + t(or) = 10$$ time units: (1) path $$D_0 \rightarrow Y$$ via node $$V,$$ (2) path $$D_1 \rightarrow Y$$ via node $$W,$$ and (3) path $$S \rightarrow W \rightarrow Y$$ via node $$W.$$ The fourth path $$S \rightarrow \overline{S} \rightarrow V \rightarrow Y$$ traverses the inverter, and has a delay of 12 time units. We conclude that the multiplexer has a contamination delay of $$t_{cd} = 10$$ time units and a propagation delay of $$t_{pd} = 12$$ time units. Figure 5.63: Timing analysis of multiplexer. Next, we derive the detailed timing diagram in Figure 5.63 for transition $$S: 1 \rightarrow 0$$ at time $$t_2$$ and constant data inputs $$D_0 = D_1 = 1.$$ Our analysis covers all inner nodes of the multiplexer. We reset time such that the stimulating transition of $$S$$ occurs at time $$t_2 = 0,$$ and begin the analysis with output $$\overline{S}$$ of the inverter. Signal $$\overline{S}$$ transitions from 0 to 1 after the inverter delay of $$t(inv) = 2$$ time units, i.e. at time $$t = 2.$$ Output $$V$$ of the upper AND gate transitions $$t(and)=5$$ time units later from 0 to 1, i.e. at time $$t = 7.$$ Since $$V=1$$ forces the output of the OR gate to 1, independent of input $$W$$ of the OR gate, the OR gate assumes value 1 after a delay of $$t(or) = 5$$ time units, which is at time $$t = 12 = t_4.$$ In the meantime, the transition of input $$S$$ at time $$t_2$$ forces output $$W$$ of the lower AND gate to 0. This occurs after a delay of $$t(and) = 5$$ time units at time $$t = 5.$$ After this transition, both $$W$$ and $$V$$ are 0, forcing output $$Y$$ of the OR gate to 0. 
Since the delay of the OR gate is $$t(or) = 5$$ time units, the output transition to 0 occurs at time $$t = 10 = t_3.$$ Note that $$t_3$$ marks the contamination delay of the multiplexer, $$t_{cd} = t_3 - t_2,$$ and $$t_4$$ marks the propagation delay $$t_{pd} = t_4 - t_2.$$ The behavior of the output signal between $$t_{cd}$$ and $$t_{pd}$$ is consistent with our earlier discussion of the timing behavior of combinational circuits. The output signal becomes “unstable” after the contamination delay and is stable beyond the propagation delay again. In the case of the multiplexer, this instability exhibits itself in the form of a glitch. A glitch is a particular kind of hazard, where one input transition causes two output transitions. The following four types of hazards are common in combinational circuits. The glitch in Figure 5.63 is a static-1 hazard, because the output should stay static at value 1, but glitches temporarily to 0. Analogously, a static-0 hazard occurs if the output should stay static at value 0, but glitches temporarily to value 1 instead. Whereas static-1 hazards can occur in AND-OR circuits, static-0 hazards can occur in OR-AND circuits, for example. Multilevel circuits may have dynamic hazards, which incur more than two output transitions in response to one input transition. A dynamic-1 hazard occurs if one input transition should cause an output transition from 0 to 1, but instead causes the output to produce three transitions, $$0 \rightarrow 1 \rightarrow 0 \rightarrow 1$$. Analogously, a dynamic-0 hazard has three transitions, $$1 \rightarrow 0 \rightarrow 1 \rightarrow 0,$$ although a single output transition from 1 to 0 would be the expected response to a single input transition. You shouldn’t be surprised to encounter glitches when analyzing a circuit with an oscilloscope. Furthermore, there is no reason to panic in the face of a glitch.
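The timing analysis of the multiplexer can be replayed with a small discrete-time sketch, assuming ideal step transitions: each gate output at time $$t$$ is computed from its input values one gate delay earlier. The node names mirror the figure; the sampling window is our own choice.

```python
# Discrete-time replay of the mux of Figure 5.63. S falls at t = 0;
# both data inputs are constant 1, so the output should stay at 1.
S  = lambda t: 1 if t < 0 else 0
Sn = lambda t: 1 - S(t - 2)         # inverter, delay 2
V  = lambda t: 1 & Sn(t - 5)        # upper AND gate (D0 = 1), delay 5
W  = lambda t: 1 & S(t - 5)         # lower AND gate (D1 = 1), delay 5
Y  = lambda t: V(t - 5) | W(t - 5)  # OR gate, delay 5

trace = [Y(t) for t in range(16)]
print(trace)  # → [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1]
```

The static-1 hazard is visible in the trace: $$Y$$ drops to 0 at $$t = 10 = t_3$$ and recovers at $$t = 12 = t_4,$$ matching the contamination and propagation delays derived in the text.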
Combinational circuits may exhibit multiple transitions in the unstable time period between the contamination and propagation delays. As long as we give the circuit enough time to stabilize, a glitch is a harmless temporary behavior, despite the precarious aftertaste of the term hazard. Nevertheless, occasionally, glitch-free circuits are important, for example in edge-triggered sequential circuits where the clock signal must not exhibit any hazards for the circuit to function correctly. We can detect and avoid static hazards in two-level circuits with the aid of K-maps. Figure 5.64 shows on the left the minimal cover that corresponds to the AND-OR multiplexer circuit in Figure 5.62. The glitch-causing input transition is marked with the red arrow. In the K-map, the glitch corresponds to the transition across the boundaries of two prime implicants. Not every transition across prime implicant boundaries causes a glitch; the reverse transition $$S: 0 \rightarrow 1,$$ for example, does not. The hazardous transition $$S: 1 \rightarrow 0$$ switches one AND gate to 0 before switching the other AND gate to 1, cf. output $$W$$ at time $$t=5$$ and output $$V$$ at time $$t=7$$ in Figure 5.63. Figure 5.64: Minimal cover of multiplexer (left), and cover with redundant consensus term (right). We may avoid the glitch by covering the glitching transition with the consensus on $$S.$$ Figure 5.64 shows the redundant prime implicant of consensus term $$D_0 D_1.$$ The extended SOP form of the multiplexer is $$Y = \overline{S}\,D_0 + S\,D_1 + D_0\,D_1.$$ Correspondingly, we extend the multiplexer circuit with an AND gate for the consensus term. The output of this AND gate will remain 1 during the transition of $$S,$$ because $$D_0$$ and $$D_1$$ maintain value 1. This 1-input suffices to pull output $$Y$$ of the extended 3-input OR gate to 1, while outputs $$V$$ and $$W$$ of the other two AND gates change their values.
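That the consensus term is functionally redundant can be confirmed by perfect induction; a short sketch, with function names of our own choosing:

```python
# Perfect induction: adding consensus term D0*D1 leaves the mux function unchanged.
def mux(s, d0, d1):
    return ((1 - s) & d0) | (s & d1)

def mux_with_consensus(s, d0, d1):
    return ((1 - s) & d0) | (s & d1) | (d0 & d1)

assert all(mux(s, d0, d1) == mux_with_consensus(s, d0, d1)
           for s in (0, 1) for d0 in (0, 1) for d1 in (0, 1))
print("consensus term is logically redundant")
```

The redundancy matters only for timing, not for logic: the extra AND gate holds $$Y$$ at 1 while $$S$$ transitions.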
This delightful insight enables us to avoid glitches in two-level circuits at the expense of functionally redundant hardware. We may add consensus terms to a minimal SOP form to avoid static-1 hazards, and use the dual consensus theorem to avoid static-0 hazards in minimal POS forms. Footnotes [1] Combinational circuits are sometimes confused with combinatorial circuits. Combinatorics is a branch of discrete mathematics, whereas combinational circuits combine their inputs to compute the output. We can design combinational circuits for combinatorial problems as in Example 5.1. [2] In graph theory, a Hamiltonian path visits each vertex of a graph exactly once. A Hamiltonian cycle is a cyclic path that visits each vertex of a graph exactly once, except for the start and end vertex, which is visited twice. [3] The set covering problem has been studied extensively. To point to just a few solution methods, you can solve the binary decision problem with a search algorithm. If you are interested in the problem formulation as an integer linear program, study the simplex algorithm or interior point methods. The greedy approximation algorithm for the set covering problem is one of the classical approximation algorithms; it does not necessarily produce a minimal cover, but the cost of the resulting cover is at most a logarithmic factor larger than the cost of the minimal cover.
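The greedy approximation mentioned in footnote [3] fits in a few lines of Python. This is a generic illustration, not code from the book: at each step it picks the subset that covers the most still-uncovered elements.

```python
def greedy_set_cover(universe, subsets):
    """Greedy heuristic: cover cost is within a ln(n) factor of the optimum."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Index of the subset covering the most uncovered elements.
        i = max(range(len(subsets)), key=lambda j: len(subsets[j] & uncovered))
        if not subsets[i] & uncovered:
            raise ValueError("the subsets do not cover the universe")
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen

cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
print(cover)  # [0, 3]
```

On this example the greedy choice happens to be optimal; in general it can overshoot the minimal cover by the logarithmic factor mentioned above.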
# exponential distribution in r rate Now let $$r = -\ln(a)$$. Gaussian (or normal) distribution and its extensions: Base R provides the d, p, q, r functions for this distribution (see above).actuar provides the moment generating function and moments. In the context of the Poisson process, this has to be the case, since the memoryless property, which led to the exponential distribution in the first place, clearly does not depend on the time units. Density, distribution function, quantile function and random generation for the exponential distribution with rate rate (i.e., mean 1/rate). Then $$\mu = \E(Y)$$ and $$\P(Y \lt \infty) = 1$$ if and only if $$\mu \lt \infty$$. Recall that in general, $$\{V \le t\} = \{X_1 \le t, X_2 \le t, \ldots, X_n \le t\}$$ and therefore by independence, $$F(t) = F_1(t) F_2(t) \cdots F_n(t)$$ for $$t \ge 0$$, where $$F$$ is the distribution function of $$V$$ and $$F_i$$ is the distribution function of $$X_i$$ for each $$i$$. The exponential-logarithmic distribution has applications in reliability theory in the context of devices or organisms that improve with age, due to hardening or immunity. Thus we have $\P(X_1 \lt X_2 \lt \cdots \lt X_n) = \frac{r_1}{\sum_{i=1}^n r_i} \P(X_2 \lt X_3 \lt \cdots \lt X_n)$ so the result follows by induction. The memoryless and constant failure rate properties are the most famous characterizations of the exponential distribution, but are by no means the only ones. Specifically, if $$F^c = 1 - F$$ denotes the reliability function, then $$(F^c)^\prime = -f$$, so $$-h = (F^c)^\prime / F^c$$. This follows since $$f = F^\prime$$. where λ is the failure rate. For $$i \in \N_+$$, $\P\left(X_i \lt X_j \text{ for all } j \in I - \{i\}\right) = \frac{r_i}{\sum_{j \in I} r_j}$. The confusion starts when you see the term “decay parameter”, or even worse, the term “decay rate”, which is frequently used in exponential distribution. = operating time, life, or age, in hours, cycles, miles, actuations, etc. 
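The four R functions dexp, pexp, qexp, and rexp described above all have simple closed forms. As a language-neutral sketch, here are stdlib-Python equivalents (the function names deliberately mirror R's; this is an illustration, not the R implementation):

```python
import math
import random

def dexp(x, rate=1.0):
    """Density f(x) = rate * exp(-rate * x) for x >= 0."""
    return rate * math.exp(-rate * x) if x >= 0 else 0.0

def pexp(q, rate=1.0):
    """Distribution function F(q) = 1 - exp(-rate * q)."""
    return 1.0 - math.exp(-rate * q) if q >= 0 else 0.0

def qexp(p, rate=1.0):
    """Quantile function F^{-1}(p) = -log(1 - p) / rate."""
    return -math.log(1.0 - p) / rate

def rexp(n, rate=1.0):
    """Random generation by the inverse transform method."""
    return [qexp(random.random(), rate) for _ in range(n)]

print(qexp(0.5, rate=1.0))                   # median = log(2) ≈ 0.6931
print(pexp(qexp(0.75, rate=2.0), rate=2.0))  # quantile/CDF round trip, ≈ 0.75
```

Note that the quantile function feeds the sampler: applying $$F^{-1}$$ to a uniform random number yields an exponential variate, which is exactly what rexp does here.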
Vary $$n$$ with the scroll bar, set $$k = n$$ each time (this gives the maximum $$V$$), and note the shape of the probability density function. If we generate a random vector from the exponential distribution: exp.seq = rexp(1000, rate=0.10) # mean = 10 Now we want to use the previously generated vector exp.seq to re-estimate lambda So we Find each of the following: Let $$X$$ denote the position of the first defect. = constant rate, in failures per unit of measurement, (e.g., failures per hour, per cycle, etc.) I want to store these numbers in a vector. This is known as the memoryless property and can be stated in terms of a general random variable as follows: Suppose that $$X$$ takes values in $$[0, \infty)$$. Then $\P(X \in A, Y - X \ge t \mid X \lt Y) = \frac{\P(X \in A, Y - X \ge t)}{\P(X \lt Y)}$ But conditioning on $$X$$ we can write the numerator as $\P(X \in A, Y - X \gt t) = \E\left[\P(X \in A, Y - X \gt t \mid X)\right] = \E\left[\P(Y \gt X + t \mid X), X \in A\right] = \E\left[e^{-r(t + X)}, X \in A\right] = e^{-rt} \E\left(e^{-r\,X}, X \in A\right)$ Similarly, conditioning on $$X$$ gives $$\P(X \lt Y) = \E\left(e^{-r\,X}\right)$$. The exponential distribution with rate λ has density . logical; if TRUE, probabilities p are given by user as log(p). Recall that $$U$$ and $$V$$ are the first and last order statistics, respectively. If $$n \in \N_+$$ then $F^c(n) = F^c\left(\sum_{i=1}^n 1\right) = \prod_{i=1}^n F^c(1) = \left[F^c(1)\right]^n = a^n$ Next, if $$n \in \N_+$$ then $a = F^c(1) = F^c\left(\frac{n}{n}\right) = F^c\left(\sum_{i=1}^n \frac{1}{n}\right) = \prod_{i=1}^n F^c\left(\frac{1}{n}\right) = \left[F^c\left(\frac{1}{n}\right)\right]^n$ so $$F^c\left(\frac{1}{n}\right) = a^{1/n}$$. Suppose that $$X, \, Y, \, Z$$ are independent, exponentially distributed random variables with respective parameters $$a, \, b, \, c \in (0, \infty)$$. f(x) = λ {e}^{- λ x} for x ≥ 0.. Value. 
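The re-estimation step quoted above (generate with rexp(1000, rate=0.10), then recover lambda) amounts to taking the reciprocal of the sample mean, which is the maximum-likelihood estimator of the rate. A Python sketch of the same experiment, using the standard library's exponential sampler:

```python
import random

random.seed(42)
rate = 0.10
sample = [random.expovariate(rate) for _ in range(10_000)]

# MLE of the rate parameter: n / sum(x_i), i.e. the reciprocal sample mean.
rate_hat = len(sample) / sum(sample)
print(rate_hat)  # close to the true rate 0.10
```

With 10,000 draws the estimate typically lands within about one percent of the true rate; the standard error of the estimator shrinks like $$r/\sqrt{n}$$.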
After some algebra, \begin{align*} g_n * f_{n+1}(t) & = r (n + 1) e^{-r (n + 1)t} \int_1^{e^{rt}} n (u - 1)^{n-1} du \\ & = r(n + 1) e^{-r(n + 1) t}(e^{rt} - 1)^n = r(n + 1)e^{-rt}(1 - e^{-rt})^n = g_{n+1}(t) \end{align*}. To link R 0 to the exponential growth rate λ = − (σ + γ) + (σ − γ) 2 + 4 σ β 2, express β in terms of λ and substitute it into R 0, then R 0 = (λ + σ) (λ + γ) σ γ. Gelman, A., Carlin, J.B., Stern, H.S., and Rubin, D.B. Suppose the mean checkout time of a supermarket cashier is three minutes. Suppose that $$X$$ takes values in $$[0, \infty)$$ and satisfies the memoryless property. The exponential distribution describes the arrival time of a randomly recurring independent event sequence. The median of $$X$$ is $$\frac{1}{r} \ln(2) \approx 0.6931 \frac{1}{r}$$, The first quartile of $$X$$ is $$\frac{1}{r}[\ln(4) - \ln(3)] \approx 0.2877 \frac{1}{r}$$, The third quartile $$X$$ is $$\frac{1}{r} \ln(4) \approx 1.3863 \frac{1}{r}$$, The interquartile range is $$\frac{1}{r} \ln(3) \approx 1.0986 \frac{1}{r}$$. In the context of random processes, if we have $$n$$ independent Poisson process, then the new process obtained by combining the random points in time is also Poisson, and the rate of the new process is the sum of the rates of the individual processes (we will return to this point latter). For our next discussion, suppose that $$\bs{X} = (X_1, X_2, \ldots, X_n)$$ is a sequence of independent random variables, and that $$X_i$$ has the exponential distribution with rate parameter $$r_i \gt 0$$ for each $$i \in \{1, 2, \ldots, n\}$$. We will return to this point in subsequent sections. Suppose now that $$X$$ has a continuous distribution on $$[0, \infty)$$ and is interpreted as the lifetime of a device. 
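The quartile formulas above follow directly from inverting the CDF: the $$p$$-quantile of the exponential distribution is $$-\ln(1-p)/r$$. A quick Python check of the stated constants for $$r = 1$$:

```python
import math

def qexp(p, rate=1.0):
    """Quantile function of the exponential distribution."""
    return -math.log(1.0 - p) / rate

q1, q2, q3 = (qexp(p) for p in (0.25, 0.50, 0.75))
print(round(q1, 4))       # 0.2877 = ln(4) - ln(3)
print(round(q2, 4))       # 0.6931 = ln(2)
print(round(q3, 4))       # 1.3863 = ln(4)
print(round(q3 - q1, 4))  # 1.0986 = ln(3), the interquartile range
```

For general $$r$$, every one of these quantities simply scales by $$1/r$$, consistent with $$1/r$$ being the scale parameter.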
The sum of n mutually independent exponential random variables $$X_1, X_2, \ldots, X_n$$, each with rate parameter λ, has a gamma distribution with shape parameter n and rate parameter λ. $$\lceil X \rceil$$ has the geometric distribution on $$\N_+$$ with success parameter $$1 - e^{-r}$$. logical; if TRUE, probability density is returned on the log scale. The moment generating function of $$X$$ is $M(s) = \E\left(e^{s X}\right) = \frac{r}{r - s}, \quad s \in (-\infty, r)$. Let $$F^c = 1 - F$$ denote the right-tail distribution function of $$X$$ (also known as the reliability function), so that $$F^c(t) = \P(X \gt t)$$ for $$t \ge 0$$. In terms of the rate parameter $$r$$ and the distribution function $$F$$, point mass at 0 corresponds to $$r = \infty$$ so that $$F(t) = 1$$ for $$0 \lt t \lt \infty$$. The R function that generates exponential variates directly is rexp(n, rate = 1) where, for example, the parameter called rate might correspond to the arrival rate of requests going into your test rig or system under test (SUT). Suppose again that $$X$$ has the exponential distribution with rate parameter $$r \gt 0$$. The second part of the assumption implies that if the first arrival has not occurred by time $$s$$, then the time remaining until the arrival occurs must have the same distribution as the first arrival time itself. Suppose that $$X$$ has the exponential distribution with rate parameter $$r \gt 0$$ and that $$c \gt 0$$. The mean and standard deviation of the time between requests. The Exponential Distribution. For various values of $$r$$, run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation, respectively.
But $$F^c$$ is continuous from the right, so taking limits gives $$a^t = F^c(t)$$. Then $$U$$ has the exponential distribution with parameter $$\sum_{i=1}^n r_i$$. Thus $\P(X \in A, Y - X \gt t \mid X \lt Y) = e^{-r\,t} \frac{\E\left(e^{-r\,X}, X \in A\right)}{\E\left(e^{-rX}\right)}$ Letting $$A = [0, \infty)$$ we have $$\P(Y \gt t) = e^{-r\,t}$$ so given $$X \lt Y$$, the variable $$Y - X$$ has the exponential distribution with parameter $$r$$. $$q_1 = 0.1438$$, $$q_2 = 0.3466$$, $$q_3 = 0.6931$$, $$q_3 - q_1 = 0.5493$$, $$q_1 = 12.8922$$, $$q_2 = 31.0628$$, $$q_3 = 62.1257$$, $$q_3 - q_1 = 49.2334$$. Using the exponential distribution in R, the probability that a call lasts less than 3 minutes, when the mean call time is 5 minutes, is 45.11%. This is to say that there is a fairly good chance for the call to end before it hits the 3 minute mark. Suppose that X has the exponential distribution with rate parameter r > 0 and that c > 0. $$\lfloor X \rfloor$$ has the geometric distribution on $$\N$$ with success parameter $$1 - e^{-r}$$. The properties in parts (a)–(c) are simple. If $$s_i \lt \infty$$, then $$X_i$$ and $$U_i$$ have proper exponential distributions, and so the result now follows from order probability for two variables above. The mean is the reciprocal (1/λ) of the rate (λ) in Poisson. nls is the standard R base function to fit non-linear equations. The probability that the component lasts at least 2000 hours. Thus, the exponential distribution is preserved under such changes of units. But by definition, $$\lfloor n x \rfloor \le n x \lt \lfloor n x \rfloor + 1$$ or equivalently, $$n x - 1 \lt \lfloor n x \rfloor \le n x$$ so it follows that $$\left(1 - p_n \right)^{\lfloor n x \rfloor} \to e^{- r x}$$ as $$n \to \infty$$.
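The claim that $$U = \min\{X_1, \ldots, X_n\}$$ is exponential with rate $$\sum_{i=1}^n r_i$$ is easy to probe by simulation. A Monte Carlo sketch in Python (approximate by nature, so only the sample mean is checked):

```python
import random

random.seed(7)
rates = [1.0, 2.0, 3.0]   # rate parameters of independent exponentials
n = 200_000

mins = [min(random.expovariate(r) for r in rates) for _ in range(n)]
mean_min = sum(mins) / n

# The minimum should be exponential with rate 1 + 2 + 3 = 6, hence mean 1/6.
print(mean_min)  # close to 1/6 ≈ 0.1667
```

This is the reliability interpretation quoted above: a series system of independent exponential components fails at the summed rate.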
The median, the first and third quartiles, and the interquartile range of the time between requests. Hence $$F_n(x) \to 1 - e^{-r x}$$ as $$n \to \infty$$, which is the CDF of the exponential distribution. The Great Place to Work® Institute (GPTW) is an international certification organization that audits and certifies great workplaces. (6), the failure rate function h(t; λ) = λ, which is constant over time.The exponential model is thus uniquely identified as the constant failure rate model. Let’s create such a vector of quantiles in RStudio: x_dexp <- seq (0, 1, by = 0.02) # Specify x-values for exp function. To understand this result more clearly, suppose that we have a sequence of Bernoulli trials processes. Let $$F_n$$ denote the CDF of $$U_n / n$$. f(x) = lambda e^(- lambda x) for x >= 0.. Value. Then $$Y = \sum_{i=1}^n X_i$$ has distribution function $$F$$ given by $F(t) = (1 - e^{-r t})^n, \quad t \in [0, \infty)$, By assumption, $$X_k$$ has PDF $$f_k$$ given by $$f_k(t) = k r e^{-k r t}$$ for $$t \in [0, \infty)$$. First note that since the variables have continuous distributions and $$I$$ is countable, $\P\left(X_i \lt X_j \text{ for all } j \in I - \{i\} \right) = \P\left(X_i \le X_j \text{ for all } j \in I - \{i\}\right)$ Next note that $$X_i \le X_j$$ for all $$j \in I - \{i\}$$ if and only if $$X_i \le U_i$$ where $$U_i = \inf\left\{X_j: j \in I - \{i\}\right\}$$. This page summarizes common parametric distributions in R, based on the R functions shown in the table below. But then $\frac{1/(r_i + 1)}{1/r_i} = \frac{r_i}{r_i + 1} \to 1 \text{ as } i \to \infty$ By the comparison test for infinite series, it follows that $\mu = \sum_{i=1}^\infty \frac{1}{r_i} \lt \infty$. Details. If rate is not specified, it assumes the default value of 1.. 
If $$f$$ denotes the probability density function of $$X$$ then the failure rate function $$h$$ is given by $h(t) = \frac{f(t)}{F^c(t)}, \quad t \in [0, \infty)$ If $$X$$ has the exponential distribution with rate $$r \gt 0$$, then from the results above, the reliability function is $$F^c(t) = e^{-r t}$$ and the probability density function is $$f(t) = r e^{-r t}$$, so trivially $$X$$ has constant rate $$r$$. Then $$X$$ and $$Y - X$$ are conditionally independent given $$X \lt Y$$, and the conditional distribution of $$Y - X$$ is also exponential with parameter $$r$$. Note. such that mean is equal to 1/ λ, and variance is equal to 1/ λ 2.. The Poisson process is completely determined by the sequence of inter-arrival times, and hence is completely determined by the rate $$r$$. Thus, $(P \circ M)(s) = \frac{p r \big/ (r - s)}{1 - (1 - p) r \big/ (r - s)} = \frac{pr}{pr - s}, \quad s \lt pr$ It follows that $$Y$$ has the exponential distribution with parameter $$p r$$. Distributions for other standard distributions. It is a particular case of the gamma distribution. $$f$$ is decreasing on $$[0, \infty)$$. For $$n \in \N_+$$, suppose that $$U_n$$ has the geometric distribution on $$\N_+$$ with success parameter $$p_n$$, where $$n p_n \to r \gt 0$$ as $$n \to \infty$$. 1.1. Suppose that $$A \subseteq [0, \infty)$$ (measurable of course) and $$t \ge 0$$. allowing non-zero location, mu, Integrating and then taking exponentials gives $F^c(t) = \exp\left(-\int_0^t h(s) \, ds\right), \quad t \in [0, \infty)$ In particular, if $$h(t) = r$$ for $$t \in [0, \infty)$$, then $$F^c(t) = e^{-r t}$$ for $$t \in [0, \infty)$$. Then $$V$$ has distribution function $$F$$ given by $F(t) = \prod_{i=1}^n \left(1 - e^{-r_i t}\right), \quad t \in [0, \infty)$. Conversely, suppose that $$\P(Y \lt \infty) = 1$$. Working with the Exponential Power Distribution Using gnorm Maryclare Griffin 2018-01-29. 
Suppose that $$\bs{X} = (X_1, X_2, \ldots)$$ is a sequence of independent variables, each with the exponential distribution with rate $$r$$. Recall that the moment generating function of $$Y$$ is $$P \circ M$$ where $$M$$ is the common moment generating function of the terms in the sum, and $$P$$ is the probability generating function of the number of terms $$U$$. If $$Z_i$$ is the $$i$$th inter-arrival time for the standard Poisson process for $$i \in \N_+$$, then letting $$X_i = \frac{1}{r} Z_i$$ for $$i \in \N_+$$ gives the inter-arrival times for the Poisson process with rate $$r$$. In the gamma experiment, set $$n = 1$$ so that the simulated random variable has an exponential distribution. The median, the first and third quartiles, and the interquartile range of the position. Using independence and the moment generating function above, $\E(e^{-Y}) = \E\left(\prod_{i=1}^\infty e^{-X_i}\right) = \prod_{i=1}^\infty \E(e^{-X_i}) = \prod_{i=1}^\infty \frac{r_i}{r_i + 1} \gt 0$ Next recall that if $$p_i \in (0, 1)$$ for $$i \in \N_+$$ then $\prod_{i=1}^\infty p_i \gt 0 \text{ if and only if } \sum_{i=1}^\infty (1 - p_i) \lt \infty$ Hence it follows that $\sum_{i=1}^\infty \left(1 - \frac{r_i}{r_i + 1}\right) = \sum_{i=1}^\infty \frac{1}{r_i + 1} \lt \infty$ In particular, this means that $$1/(r_i + 1) \to 0$$ as $$i \to \infty$$ and hence $$r_i \to \infty$$ as $$i \to \infty$$. In process $$n$$, we run the trials at a rate of $$n$$ per unit time, with probability of success $$p_n$$. 
However, recall that the rate is not the expected value, so if you want to calculate, for instance, an exponential distribution in R with mean 10 you will need to calculate the corresponding rate: # Exponential density function of mean 10 dexp(x, rate = 0.1) # E(X) = 1/lambda = 1/0.1 = 10 Returning to the Poisson model, we have our first formal definition: A process of random points in time is a Poisson process with rate $$r \in (0, \infty)$$ if and only if the inter-arrival times are independent, and each has the exponential distribution with rate $$r$$. Problem. Then $$X$$ has a one parameter general exponential distribution, with natural parameter $$-r$$ and natural statistic $$X$$. Point mass at $$\infty$$ corresponds to $$r = 0$$ so that $$F(t) = 0$$ for $$0 \lt t \lt \infty$$. Then $$X$$ has the memoryless property if the conditional distribution of $$X - s$$ given $$X \gt s$$ is the same as the distribution of $$X$$ for every $$s \in [0, \infty)$$. For selected values of the parameter, compute a few values of the distribution function and the quantile function. Let $$V = \max\{X_1, X_2, \ldots, X_n\}$$. Now suppose that $$m \in \N$$ and $$n \in \N_+$$. ddexp gives the density, pdexp gives the distribution function. Indeed, entire books have been written on characterizations of this distribution. Recall also that skewness and kurtosis are standardized measures, and so do not depend on the parameter $$r$$ (which is the reciprocal of the scale parameter). dexp gives the density, pexp gives the distribution function, qexp gives the quantile function, and rexp generates random deviates.
The next result explores the connection between the Bernoulli trials process and the Poisson process that was begun in the Introduction. For selected values of $$r$$, run the experiment 1000 times and compare the empirical density function to the probability density function. Trivially $$f_1 = g_1$$, so suppose the result holds for a given $$n \in \N_+$$. Naturaly, we want to know the the mean, variance, and various other moments of $$X$$. Density, distribution function, quantile function and random generation for the double exponential distribution, allowing non-zero location, mu, and non-unit scale, sigma, or non-unit rate, tau Usage ddexp(x, location = 0, scale = 1, rate = 1/scale, log = FALSE) f(t) = .5e−.5t, t ≥ 0, = 0, otherwise. The truncnorm package provides d, p, q, r functions for the truncated gaussian distribution as well as functions for the first two moments. Then $$\P(e^{-Y} \gt 0) = 1$$ and hence $$\E(e^{-Y}) \gt 0$$. The proof is almost the same as the one above for a finite collection. If $$F$$ denotes the distribution function of $$X$$, then $$F^c = 1 - F$$ is the reliability function of $$X$$. 17 Applications of the Exponential Distribution Failure Rate and Reliability Example 1 The length of life in years, T, of a heavily used terminal in a student computer laboratory is exponentially distributed with λ = .5 years, i.e. Calculation of the Exponential Distribution (Step by Step) Step 1: Firstly, try to figure out whether the event under consideration is continuous and independent in nature and occurs at a roughly constant rate. Recall that multiplying a random variable by a positive constant frequently corresponds to a change of units (minutes into hours for a lifetime variable, for example). Any practical event will ensure that the variable is greater than or equal to zero. The memoryless property determines the distribution of $$X$$ up to a positive parameter, as we will see now. 
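The memoryless property can be confirmed directly from the reliability function $$F^c(t) = e^{-rt}$$: the conditional survival probability $$\P(X \gt s + t \mid X \gt s)$$ equals the unconditional $$\P(X \gt t)$$. A one-screen Python check with arbitrary sample values for $$r$$, $$s$$, and $$t$$:

```python
import math

def surv(t, rate):
    """Reliability function P(X > t) = exp(-rate * t)."""
    return math.exp(-rate * t)

rate, s, t = 0.5, 3.0, 2.0
conditional = surv(s + t, rate) / surv(s, rate)   # P(X > s + t | X > s)
print(math.isclose(conditional, surv(t, rate)))   # True: memoryless
```

Algebraically this is just the law of exponents, $$e^{-r(s+t)} / e^{-rs} = e^{-rt}$$, which is the same identity the uniqueness proof above runs in reverse.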
Then $$Y = \sum_{i=1}^U X_i$$ has the exponential distribution with rate $$r p$$. Find each of the following: Suppose that the time between requests to a web server (in seconds) is exponentially distributed with rate parameter $$r = 2$$. Recall that $$\E(X_i) = 1 / r_i$$ and hence $$\mu = \E(Y)$$. log.p = FALSE), qdexp(p, location = 0, scale = 1, rate = 1/scale, lower.tail = TRUE, = mean time between failures, or to failure 1.2. An R tutorial on the exponential distribution. On average, there are $$1 / r$$ time units between arrivals, so the arrivals come at an average rate of $$r$$ per unit time. Active 3 years, 10 months ago. When $$X_i$$ has the exponential distribution with rate $$r_i$$ for each $$i$$, we have $$F^c(t) = \exp\left[-\left(\sum_{i=1}^n r_i\right) t\right]$$ for $$t \ge 0$$. and that these times are independent and exponentially distributed. The probability of a total ordering is $\P(X_1 \lt X_2 \lt \cdots \lt X_n) = \prod_{i=1}^n \frac{r_i}{\sum_{j=i}^n r_j}$. In the context of reliability, if a series system has independent components, each with an exponentially distributed lifetime, then the lifetime of the system is also exponentially distributed, and the failure rate of the system is the sum of the component failure rates. How to generate random numbers from exponential distribution in R. Using R, I want to generate 100 random numbers from an exponential distribution with a mean of 50. Trivially if $$\mu \lt \infty$$ then $$\P(Y \lt \infty) = 1$$. The strong renewal assumption states that at each arrival time and at each fixed time, the process must probabilistically restart, independent of the past. Clearly $$f(t) = r e^{-r t} \gt 0$$ for $$t \in [0, \infty)$$. Suppose that $$r_i = i r$$ for each $$i \in \{1, 2, \ldots, n\}$$ where $$r \in (0, \infty)$$. 
Conversely if $$X_i$$ is the $$i$$th inter-arrival time of the Poisson process with rate $$r \gt 0$$ for $$i \in \N_+$$, then $$Z_i = r X_i$$ for $$i \in \N_+$$ gives the inter-arrival times for the standard Poisson process. A more elegant proof uses conditioning and the moment generating function above: $\P(Y \gt X) = \E\left[\P(Y \gt X \mid X)\right] = \E\left(e^{-b X}\right) = \frac{a}{a + b}$. The result on minimums and the order probability result above are very important in the theory of continuous-time Markov chains. (2004) Bayesian Data Analysis, 2nd ed. A random variable with the distribution function above or equivalently the probability density function in the last theorem is said to have the exponential distribution with rate parameter $$r$$. More generally, $$\E\left(X^a\right) = \Gamma(a + 1) \big/ r^a$$ for every $$a \in [0, \infty)$$, where $$\Gamma$$ is the gamma function. The exponential-logarithmic distribution arises when the rate parameter of the exponential distribution is randomized by the logarithmic distribution. From the definition of conditional probability, the memoryless property is equivalent to the law of exponents: $F^c(t + s) = F^c(s) F^c(t), \quad s, \; t \in [0, \infty)$ Let $$a = F^c(1)$$. In words, a random, geometrically distributed sum of independent, identically distributed exponential variables is itself exponential. Suppose that $$X$$ has the exponential distribution with rate parameter $$r \in (0, \infty)$$. Let $$U = \min\{X_1, X_2, \ldots, X_n\}$$. $$q_1 = 1.4384$$, $$q_2 = 3.4657$$, $$q_3 = 6.9315$$, $$q_3 - q_1 = 5.4931$$. In the gamma experiment, set $$n = 1$$ so that the simulated random variable has an exponential distribution. The exponential distribution with rate λ has density . 
$$\P(X \lt 200 \mid X \gt 150) = 0.3935$$, $$q_1 = 28.7682$$, $$q_2 = 69.3147$$, $$q_3 = 138.6294$$, $$q_3 - q_1 = 109.6812$$, $$\P(X \lt Y \lt Z) = \frac{a}{a + b + c} \frac{b}{b + c}$$, $$\P(X \lt Z \lt Y) = \frac{a}{a + b + c} \frac{c}{b + c}$$, $$\P(Y \lt X \lt Z) = \frac{b}{a + b + c} \frac{a}{a + c}$$, $$\P(Y \lt Z \lt X) = \frac{b}{a + b + c} \frac{c}{a + c}$$, $$\P(Z \lt X \lt Y) = \frac{c}{a + b + c} \frac{a}{a + b}$$, $$\P(Z \lt Y \lt X) = \frac{c}{a + b + c} \frac{b}{a + b}$$. Suppose that the lifetime $$X$$ of a fuse (in 100 hour units) is exponentially distributed with $$\P(X \gt 10) = 0.8$$. The exponential distribution with rate λ has density . is the cumulative distribution function of the standard normal distribution. Of course $$\E\left(X^0\right) = 1$$ so the result now follows by induction. Set $$k = 1$$ (this gives the minimum $$U$$). If rate is not specified, it assumes the default value of 1.. In R statistical software, you can generate n random number from exponential distribution with the function rexp(n, rate), where rate is the reciprocal of the mean of the generated numbers. In particular, recall that the geometric distribution on $$\N_+$$ is the only distribution on $$\N_+$$ with the memoryless and constant rate properties. In many respects, the geometric distribution is a discrete version of the exponential distribution. The decay parameter is expressed in terms of time (e.g., every 10 mins, every 7 years, etc. For $$t \ge 0$$, $$\P(c\,X \gt t) = \P(X \gt t / c) = e^{-r (t / c)} = e^{-(r / c) t}$$. Chapman and Hall/CRC. Substituting into the distribution function and simplifying gives $$\P(\lceil X \rceil = n) = (e^{-r})^{n - 1} (1 - e^{-r})$$. 
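The six ordering probabilities listed above can be sanity-checked by simulation. For example, with $$a = 1$$, $$b = 2$$, $$c = 3$$ the formula gives $$\P(X \lt Y \lt Z) = \frac{1}{6} \cdot \frac{2}{5} = \frac{1}{15}$$. A Monte Carlo sketch in Python:

```python
import random

random.seed(3)
a, b, c = 1.0, 2.0, 3.0
n = 200_000

hits = sum(
    random.expovariate(a) < random.expovariate(b) < random.expovariate(c)
    for _ in range(n)
)
estimate = hits / n
exact = a / (a + b + c) * b / (b + c)   # 1/15 ≈ 0.0667
print(estimate, exact)
```

With 200,000 trials the empirical frequency should agree with the exact value to two or three decimal places; the other five orderings can be checked the same way by permuting the comparison.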
Recall that in general, $$\{U \gt t\} = \{X_1 \gt t, X_2 \gt t, \ldots, X_n \gt t\}$$ and therefore by independence, $$F^c(t) = F^c_1(t) F^c_2(t) \cdots F^c_n(t)$$ for $$t \ge 0$$, where $$F^c$$ is the reliability function of $$U$$ and $$F^c_i$$ is the reliability function of $$X_i$$ for each $$i$$. The time elapsed from the moment one person got in line to the next person has an exponential distribution with the rate $\theta$. Details. For $$n \in \N_+$$ note that $$\P(\lceil X \rceil = n) = \P(n - 1 \lt X \le n) = F(n) - F(n - 1)$$. Missed the LibreFest? In the order statistic experiment, select the exponential distribution. Recall that in general, the distribution of a lifetime variable $$X$$ is determined by the failure rate function $$h$$. Thus, the actual time of the first success in process $$n$$ is $$U_n / n$$. $$X$$ has a continuous distribution and there exists $$r \in (0, \infty)$$ such that the distribution function $$F$$ of $$X$$ is $F(t) = 1 - e^{-r\,t}, \quad t \in [0, \infty)$. You can't predict when exactly the next person will get in line, but you can expect him to show up in about $3$ minutes ($\frac 1 {20}$ hours). But $$M(s) = r \big/ (r - s)$$ for $$s \lt r$$ and $$P(s) = p s \big/ \left[1 - (1 - p)s\right]$$ for $$s \lt 1 \big/ (1 - p)$$. This follows directly from the form of the PDF, $$f(x) = r e^{-r x}$$ for $$x \in [0, \infty)$$, and the definition of the general exponential family. Details. We need one last result in this setting: a condition that ensures that the sum of an infinite collection of exponential variables is finite with probability one. Recall that in the basic model of the Poisson process, we have points that occur randomly in time. In the context of the Poisson process, the parameter $$r$$ is known as the rate of the process. The probability that the call lasts between 2 and 7 minutes. 
Letting $$t = 0$$, we see that given $$X \lt Y$$, variable $$X$$ has the distribution $A \mapsto \frac{\E\left(e^{-r\,X}, X \in A\right)}{\E\left(e^{-r\,X}\right)}$ Finally, because of the factoring, $$X$$ and $$Y - X$$ are conditionally independent given $$X \lt Y$$. Similarly, the Poisson process with rate parameter 1 is referred to as the standard Poisson process. This distrib… The reciprocal $$\frac{1}{r}$$ is known as the scale parameter (as will be justified below). We want to show that $$Y_n = \sum_{i=1}^n X_i$$ has PDF $$g_n$$ given by $g_n(t) = n r e^{-r t} (1 - e^{-r t})^{n-1}, \quad t \in [0, \infty)$ The PDF of a sum of independent variables is the convolution of the individual PDFs, so we want to show that $f_1 * f_2 * \cdots * f_n = g_n, \quad n \in \N_+$ The proof is by induction on $$n$$. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. First, note that $$X_i \lt X_j$$ for all $$i \ne j$$ if and only if $$X_i \lt \min\{X_j: j \ne i\}$$. Suppose that $$X$$ and $$Y$$ are independent variables taking values in $$[0, \infty)$$ and that $$Y$$ has the exponential distribution with rate parameter $$r \gt 0$$. In fact, the exponential distribution with rate parameter 1 is referred to as the standard exponential distribution. Ask Question Asked 4 years ago. Here is my code: vector <- rexp(100,50) The formula for $$F^{-1}$$ follows easily from solving $$p = F^{-1}(t)$$ for $$t$$ in terms of $$p$$. The memoryless property, as expressed in terms of the reliability function $$F^c$$, still holds for these degenerate cases on $$(0, \infty)$$: $F^c(s) F^c(t) = F^c(s + t), \quad s, \, t \in (0, \infty)$ We also need to extend some of results above for a finite number of variables to a countably infinite number of variables. dexp gives the density, pexp gives the distribution function, qexp gives the quantile function, and rexp generates random deviates.. 
By the change of variables theorem $M(s) = \int_0^\infty e^{s t} r e^{-r t} \, dt = \int_0^\infty r e^{(s - r)t} \, dt$ The integral evaluates to $$\frac{r}{r - s}$$ if $$s \lt r$$ and to $$\infty$$ if $$s \ge r$$. $$f$$ is concave upward on $$[0, \infty)$$. I think I did it correctly, but I cannot find anything on the internet to verify my code. qdexp gives the quantile function, and rdexp generates random deviates. If rate is not specified, it assumes the default value of 1. But for that application and others, it's convenient to extend the exponential distribution to two degenerate cases: point mass at 0 and point mass at $$\infty$$ (so the first is the distribution of a random variable that takes the value 0 with probability 1, and the second the distribution of a random variable that takes the value $$\infty$$ with probability 1). If $$s_i = \infty$$, then $$U_i$$ is 0 with probability 1, and so $$P(X_i \le U_i) = 0 = r_i / s_i$$. The converse is also true. Vary the scale parameter (which is $$1/r$$) and note the shape of the distribution/quantile function. If $$n \in \N$$ then $$\E\left(X^n\right) = n! \big/ r^n$$. Consider the special case where $$r_i = r \in (0, \infty)$$ for each $$i \in \N_+$$. Suppose that the length of a telephone call (in minutes) is exponentially distributed with rate parameter $$r = 0.2$$. The first part of that assumption implies that $$\bs{X}$$ is a sequence of independent, identically distributed variables. The result now follows from order probability for two events above. The result is trivial if $$I$$ is finite, so assume that $$I = \N_+$$. Vary $$r$$ with the scroll bar and watch how the mean$$\pm$$standard deviation bar changes. Then cX has the exponential distribution with rate parameter r / c. Proof.
We can now generalize the order probability above: For $$i \in \{1, 2, \ldots, n\}$$, $\P\left(X_i \lt X_j \text{ for all } j \ne i\right) = \frac{r_i}{\sum_{j=1}^n r_j}$.
The distribution function, qexp gives the distribution of \ ( r = -\ln ( a ) (. Moments of \ ( I \ ) assume that \ [ \int_0^\infty r e^ { t. Last order statistics, respectively r functions shown in the gamma experiment, set \ ( n = 1\ so! The Poisson exponential distribution in r rate, the first and third quartiles, and the Poisson process, is studied the. Less that 0.5 seconds exp function that audits and certifies Great workplaces at... Is studied in the formula on the r functions shown in the gamma distribution ) denote time... ) denote the position occurs has an exponential distribution is preserved under changes... Is three minutes function and the order probability result above are very important in the context the! Position of the Poisson process ( X_1, X_2, \ldots, X_n\ } ). Ensure that the length of a randomly recurring independent event sequence trivially \ ( U\ ) has the exponential.... { - λ x } for x ≥ 0.. value ≥ 0...! A few values of the general exponential exponential distribution in r rate is given by user as log ( p ) ( ). Is referred to as the standard Poisson process, we have points that occur randomly in time \P Y... To verify my code ( 2004 ) Bayesian Data Analysis, 2nd ed r_i\! < - seq ( 0, = 0.. value of each of the following let....5 is exponential distribution in r rate the failure rate of the probability that \ [ \int_0^\infty r e^ { -r t } )! A discrete version of the 6 orderings of the call length next explores! Will return to this point in subsequent sections licensed by CC BY-NC-SA 3.0 in many respects, the Poisson,! True, probability density function that occur randomly in time software Most general purpose statistical programs... This gives the quantile function, qexp gives the density, pdexp gives the function! Of … Missed the LibreFest … Missed the LibreFest, t ≥ 0,.! A discrete version of the probability that the call lasts between 2 and 7 minutes { X_i I! By setting, and Rubin, D.B 0 and that c > 0 these times independent... 
Statistical software programs support at least some of the distribution/quantile function = 0, \infty ) n., a random, geometrically distributed sum of independent, identically distributed exponential variables is exponential! ( m \in \N\ ) then \ ( m \in \N\ ) then \ r..., the exponential distribution is a discrete version of the exponential distribution has a number of and! Or age, in hours, cycles, miles, actuations, etc. ) – ( c ). That the exponential distribution in r, based on the right ( GPTW ) is as. ( q_n ) = λ { e } ^ { - λ }! Have \ ( x \gt 150\ ) watch how the mean\ ( \pm \.... Power distribution Using gnorm Maryclare Griffin 2018-01-29 written on characterizations of this distribution = mean between. 0 replies
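For readers working outside R, the same four operations can be sketched with SciPy (an illustration, not part of the original page; note that `scipy.stats.expon` is parameterized by `scale = 1/rate`, and the rate 0.2 below is the telephone-call example's value):

```python
from scipy.stats import expon

r = 0.2                          # rate from the telephone-call example
X = expon(scale=1 / r)           # scipy uses scale = 1/rate

pdf_at_0 = X.pdf(0.0)            # like dexp(0, rate = 0.2), equals r
p_2_to_7 = X.cdf(7) - X.cdf(2)   # P(2 <= call length <= 7 minutes)
median = X.ppf(0.5)              # like qexp(0.5, rate = 0.2), equals ln(2)/r
mean, var = X.mean(), X.var()    # 1/r = 5 and 1/r^2 = 25

print(pdf_at_0, p_2_to_7, median, mean, var)
```

The counterpart of rexp is `X.rvs(size)` for random deviates.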
https://socratic.org/questions/how-do-you-write-a-polynomial-in-standard-form-given-zeros-1-multiplicity-2-2-i-
# How do you write a polynomial in standard form given zeros -1 (multiplicity 2), -2 - i (multiplicity 1)? Sep 29, 2016 $f \left(x\right) = {x}^{4} + 6 {x}^{3} + 14 {x}^{2} + 14 x + 5$ #### Explanation: Zeros: $- 1$ multiplicity 2 $\left(- 2 - i\right)$ multiplicity 1 If the zero is $- 1$, the factor is $\left(x - \left(- 1\right)\right) = \left(x + 1\right)$. A multiplicity of $\textcolor{red}{2}$ implies there are $\textcolor{red}{2}$ factors $\left(x + 1\right)$ or $\left(x + 1\right) \left(x + 1\right) = {\left(x + 1\right)}^{\textcolor{red}{2}} = \left({x}^{2} + 2 x + 1\right)$ For the complex zero $\left(- 2 - i\right)$, there will also be a zero at the complex conjugate$\left(- 2 + i\right)$ The factors are $\left(x - \left(- 2 - i\right)\right)$ and $\left(x - \left(- 2 + i\right)\right)$ or $\left(x + 2 + i\right) \left(x + 2 - i\right)$ $\left({x}^{2} + 2 x - i x + 2 x + 4 - 2 i + i x + 2 i - {i}^{2}\right)$ $\left({x}^{2} + 4 x + 4 - \left(- 1\right)\right)$ $\left({x}^{2} + 4 x + 5\right)$ Multiply all factors to find the polynomial in standard form $f \left(x\right) = \left({x}^{2} + 2 x + 1\right) \left({x}^{2} + 4 x + 5\right) =$ ${x}^{4} + 4 {x}^{3} + 5 {x}^{2}$ $\textcolor{w h i t e}{a a a a} 2 {x}^{3} + 8 {x}^{2} + 10 x$ $\textcolor{w h i t e}{a a a a a a a a a a} {x}^{2} + 4 x + 5 =$ $f \left(x\right) = {x}^{4} + 6 {x}^{3} + 14 {x}^{2} + 14 x + 5$
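The expansion above can be double-checked numerically; a small sketch with NumPy's `poly`, which builds the coefficients of a monic polynomial from a list of its roots:

```python
import numpy as np

zeros = [-1, -1, -2 - 1j, -2 + 1j]   # -1 twice, plus the conjugate pair

# imaginary parts cancel for a conjugate pair, leaving real coefficients
coeffs = np.real_if_close(np.poly(zeros))
print(coeffs)  # coefficients [1, 6, 14, 14, 5], i.e. x^4 + 6x^3 + 14x^2 + 14x + 5
```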
https://www.physicsforums.com/threads/double-integrals-volume-vs-area.125347/
# Double Integrals - Volume vs. Area 1. Jul 5, 2006 ### Gramma2005 I am confused about when a double integral will give you an area, and when it will give you a volume. Since we are integrating with respect to two variables, wouldn't that always give us an area? Don't we need a third variable in order to find the volume? Thanks for the help. 2. Jul 5, 2006 ### nazzard Hello Gramma2005, it depends on the function you are integrating. Let's take a look at the function $$f(r)=4\pi r^2$$. Yet the following integral, (only integrating with respect to one variable!) can be interpreted as the function for the volume of a sphere depending on the radius r. $$F(r)=\int_{0}^{r} f(r') dr'$$ Regards, nazzard Last edited: Jul 5, 2006 3. Jul 6, 2006 ### HallsofIvy Staff Emeritus A double integral will give you an area when you are using it to do that! A double integral is simply a calculation- you can apply calculations to many different things. I think that you are thinking of the specific cases 1) Where you are given the equations of the curves bounding a region and integrate simply dA over that region. That gives the area of the region. 2) Where you are also given some height z= f(x,y) of a surface above a region and integrate f(x,y)dA over that region. That gives the volume between the xy-plane and the surface f(x,y). It should be easy to determine whether you are integrating dA or f(x,y)dA! But that is only if f(x,y) really is a height. My point is that f(x,y) is simply a way of calculating things and what "things" you are calculating depends on the application. Sometimes a double integral gives pressure, sometimes mass, etc., depending on what the application is.
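HallsofIvy's two cases can be made concrete with a quick numerical sketch (the rectangular region and the surface z = x + y here are made-up examples, not from the thread):

```python
from scipy.integrate import dblquad

# Region: the rectangle 0 <= x <= 1, 0 <= y <= 2 (a hypothetical region)
# Case 1: integrate dA over the region -> its area
area, _ = dblquad(lambda y, x: 1.0, 0, 1, lambda x: 0, lambda x: 2)

# Case 2: integrate f(x, y) dA with f(x, y) = x + y -> the volume
# between the xy-plane and the surface z = f(x, y)
volume, _ = dblquad(lambda y, x: x + y, 0, 1, lambda x: 0, lambda x: 2)

print(area, volume)  # area = 2.0, volume = 3.0
```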
https://wtskills.com/square-of-number/
# Square of number

In this chapter we will learn the concept of square numbers with properties and examples.

## What is a square number?

When we multiply a number by itself we get the square of that number. Let "a" be the number. Multiplying the number by itself, we get; \mathtt{\Longrightarrow \ a\ \times \ a\ }\\\ \\ \mathtt{\Longrightarrow \ ( a)^{2}} Hence, \mathtt{( a)^{2}} is the square of the given number.

### Examples of squares of numbers

Given below are squares of randomly selected numbers.

(i) Square of number 3 To get the square of number 3, multiply the number by itself. \mathtt{\Longrightarrow \ 3\ \times \ 3\ }\\\ \\ \mathtt{\Longrightarrow \ 9} Hence, 9 is the square of number 3.

(ii) Square of number 7 To get the square, multiply number 7 by itself. \mathtt{\Longrightarrow \ 7\ \times \ 7\ }\\\ \\ \mathtt{\Longrightarrow \ 49} Hence, 49 is the square of number 7.

(iii) Square of number 12 To get the square of 12, multiply the number by itself. \mathtt{\Longrightarrow \ 12\ \times \ 12}\\\ \\ \mathtt{\Longrightarrow \ 144} Hence, 144 is the square of number 12.

### How to represent the square of a number

The square of a number is represented by showing exponent 2 on the given number. For example; Square of 6 ⟹ \mathtt{6^{2}} Square of 13 ⟹ \mathtt{13^{2}}

## Representing the square of a number graphically

When we multiply the same number by itself, we are basically forming the shape of a square with equal sides. For example;

(i) Square of 5 To get the square, multiply the number by itself. \mathtt{\Longrightarrow \ 5\ \times \ 5}\\\ \\ \mathtt{\Longrightarrow \ 25} Graphically, we are forming a square with side equal to 5 units. Inside the square, we can plot 25 squares of 1 unit each.

(ii) Square of 2 To get the square, multiply the number by itself. \mathtt{\Longrightarrow \ 2\ \times \ 2}\\\ \\ \mathtt{\Longrightarrow \ 4} Here we get a square of side 2 units. Inside the square, we can plot 4 squares of 1 unit each.
## List of squares of numbers from 1 to 100

Given below are the squares of numbers from 1 to 100 along with the calculation. In order to score well in math exams, you need to remember the squares of numbers up to 50.
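Such a table of squares can be generated in one line; a quick sketch:

```python
# Build the table of squares by multiplying each number by itself.
squares = {n: n * n for n in range(1, 101)}

print(squares[3], squares[7], squares[12])  # 9 49 144, as in the examples above
```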
https://stats.stackexchange.com/questions/425728/distribution-of-the-sample-variance-s2-from-a-normal-population
# Distribution of the sample variance $S^2$ from a normal population [closed]

Let $$X_1, X_2, X_3, \ldots, X_n$$ be $$N(\mu, \sigma^2)$$ distributed. Then what is the distribution of $$S^2$$?

I have already proven that if $$X_i$$ are $$N(\mu, \sigma^2)$$, then $$\frac{(n-1)S^2}{\sigma^2}$$ is $$\chi^2(n-1)$$. I also know that if it is $$\chi^2(n-1)$$ it is in particular a Gamma$$(\frac{(n-1)}{2}, 2)$$. How should I approach the problem next? I want the more formal distribution, rather than just stating that $$S^2$$ is $$\frac{\sigma^2\chi^2(n-1)}{n-1}$$.

• Simply transform $(n-1)S^2/\sigma^2\to S^2$ (i.e. change variables). – StubbornAtom Sep 10 '19 at 20:04
• How should I do that? I do not understand how. – Pablo Sep 10 '19 at 20:05
• You have already answered your question. To put your statement only slightly differently, $S^2$ is distributed as $\sigma^2/(n-1)$ times a $\chi^2(n-1)$ variate. That's perfectly clear and formal. What would you be looking for as an answer, then? – whuber Sep 10 '19 at 20:07
• @Pablo en.wikipedia.org/wiki/…. – StubbornAtom Sep 10 '19 at 20:08
• When the observations are independent identically distributed with an unknown variance you have that $(n-1)S^2/\sigma^2$ is a pivotal quantity allowing you to generate confidence intervals or test an hypothesis about the variance. $S^2$ by itself is not pivotal and its distribution depends on the value of the unknown variance. So there is nothing more you can say other than it being proportional to a chi-square distribution. – Michael R. Chernick Sep 10 '19 at 20:29

Maybe this is a useful clue. Let $$n = 5; \sigma=12.$$ Then $$S^2 \sim \mathsf{Gamma}(\text{shape}= \alpha = 2,\, \text{rate} = \lambda = 2/144),$$ which gives $$E(S^2) = \alpha/\lambda = 144 = \sigma^2.$$
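The clue can be cross-checked. Since $(n-1)S^2/\sigma^2 \sim \chi^2(n-1)$, and a $\chi^2(k)$ variate is Gamma with shape $k/2$ and scale 2, $S^2$ is Gamma with shape $(n-1)/2$ and rate $(n-1)/(2\sigma^2)$. A sketch for $n = 5$, $\sigma = 12$ (the simulation size and seed below are arbitrary choices):

```python
import numpy as np

n, sigma = 5, 12.0
shape = (n - 1) / 2               # alpha = 2
rate = (n - 1) / (2 * sigma**2)   # lambda = 2/144
print(shape, rate, shape / rate)  # mean alpha/lambda = 144 = sigma^2

# Monte Carlo check on sample variances of normal samples of size n
rng = np.random.default_rng(1)
x = rng.normal(0.0, sigma, size=(200_000, n))
s2 = x.var(axis=1, ddof=1)
print(round(s2.mean(), 1))        # close to 144
```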
https://aptitude.gateoverflow.in/6189/cat-2019-set-2-question-63
272 views A large store has only three departments, Clothing, Produce, and Electronics. The following figure shows the percentages of revenue and cost from the three departments for the years $2016, 2017$ and $2018$. The dotted lines depict percentage levels. So for example, in $2016$, $50\%$ of store's revenue came from its Electronics department while $40\%$ of its costs were incurred in the Produce department. In this setup, Profit is computed as (Revenue – Cost) and Percentage Profit as Profit/Cost $\times100\%$. It is known that 1. The percentage profit for the store in $2016$ was $100\%$. 2. The store’s revenue doubled from $2016$ to $2017$, and its cost doubled from $2016$ to $2018$. 3. There was no profit from the Electronics department in $2017$. 4. In $2018$, the revenue from the Clothing department was the same as the cost incurred in the Produce department. What was the percentage profit of the store in $2018$ _________ 1
http://mathhelpforum.com/geometry/19412-volume.html
# Math Help - Volume

1. ## Volume

Cameron has decided to dig out a swimming pool in his backyard with the dimensions outlined below. Unfortunately, he had only one rectangular barrow (dimensions 60 cm x 50 cm x 30 cm) to move the dirt out to the front yard where a local garden supplies company will pick it up. Diagram not drawn to scale!

a) If the barrow is filled level with its top, what volume of soil can it carry? Well I did on my calculator 60 by 50 by 30 and I got 90,000 is that possibly right?

b) What volume, in cm^3, of soil has to be removed from the pool? Would it be 1.2 by 7.5 by 14.5 by 2.3 by 5 m?

c) How many trips are required with the barrow?

2. Find the enclosed area of the following shapes. Use the value of Pi on your calculator. Round your final answers to two decimal places.

3. Originally Posted by Sazza
Cameron has decided to dig out a swimming pool in his backyard with the dimensions outlined below. Unfortunately, he had only one rectangular barrow (dimensions 60 cm x 50 cm x 30 cm) to move the dirt out to the front yard where a local garden supplies company will pick it up. Diagram not drawn to scale!
a) If the barrow is filled level with its top, what volume of soil can it carry? Well I did on my calculator 60 by 50 by 30 and I got 90,000 is that possibly right?
correct. and of course, your units are cm^3 here
b) What volume, in cm^3, of soil has to be removed from the pool? Would it be 1.2 by 7.5 by 14.5 by 2.3 by 5 m?
no. see the diagram below. look where i drew the red line. the volume of the pool is the volume of the box plus the volume of the triangular prism at the bottom. now what do you think the volume is?
c) How many trips are required with the barrow?
the answer to this question depends on (b), when you find your answer for that, we'll continue

4. Originally Posted by Sazza
Find the enclosed area of the following shapes. Use the value of Pi on your calculator. Round your final answers to two decimal places.
for maths 2.
obviously the total area is the area of the triangle plus the area of (what seems to be) the semi-circle. the area of a triangle is $\frac 12 \mbox {base} \times \mbox {height}$ and the area of the semi-circle is $\frac 12 \pi r^2$, where $r$ is the radius

5. The shape below is a trapezium. The rule for finding the area of a trapezium is A = h/2 (a+b) where a and b are the parallel sides and h is the height. The lines PQ and SR are parallel and are 6 cm apart. T is the midpoint of QR. Find the area of the shaded region PSRT in square centimetres.

6. uhh for Question b) would it be 33.5?

7. Originally Posted by Sazza
uhh for Question b) would it be 33.5?
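Part (a) and the trip count in (c) can be sketched as follows. The pool dimensions below are placeholders, since the real ones come from the missing diagram; the box-plus-triangular-prism split is the decomposition suggested in the thread:

```python
import math

barrow_cm3 = 60 * 50 * 30            # part (a): 90,000 cm^3 per barrow load

# Part (b) with made-up dimensions (metres) for the two pieces:
length, width, shallow = 7.5, 5.0, 1.2    # hypothetical box section
extra_depth, slope_run = 1.1, 7.0         # hypothetical prism cross-section
pool_m3 = length * width * shallow + 0.5 * extra_depth * slope_run * width
pool_cm3 = pool_m3 * 100**3               # 1 m = 100 cm, so 1 m^3 = 10^6 cm^3

trips = math.ceil(pool_cm3 / barrow_cm3)  # part (c): round up to whole trips
print(barrow_cm3, pool_m3, trips)
```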
https://math.stackexchange.com/questions/3144884/find-the-rotation-from-two-sets-of-3-vectors
Find the rotation from two sets of 3 vectors?

I have three linearly independent vectors ($$\vec{a}$$, $$\vec{b}$$, and $$\vec{c}$$) which have been rotated to three other linearly independent vectors $$\vec{a}'$$, $$\vec{b}'$$, and $$\vec{c}'$$. I would like to find this rotation. I've looked at quaternions but each pair of vectors $$\vec{a}$$ and $$\vec{a}'$$ yields different quaternions, none of which are correct.

For example, let $$\vec{a} = (1, 1, 4)$$, $$\vec{b} = (4, 1, -1)$$, $$\vec{c} = (-5, 17, -3)$$, $$\vec{a}' = (2.760834, 1, 3.062319)$$, $$\vec{b}' = (3.062319, 1, -2.760834)$$, and $$\vec{c}' = (-5.823153, -17, -0.301485)$$. Using this method, I get $$q_{a} = 2.91634 + -0.160763i + 1.36833j + -0.301891k$$, $$q_{b} = 2.91634 + -0.301891i + 1.36833j + 0.160763k$$, and $$q_{c} = 12.6495 + 1.8133i + 0.630935j + 0.553128k$$. The real rotation is 28 degrees about the y axis.

My question is: is there a way of finding the angle of rotation and the axis of rotation, preferably in the form of quaternions (but I don't mind), from these three vectors?

$$R\cdot(\vec{a},\vec{b},\vec{c})=(\vec{a}',\vec{b}',\vec{c}')$$ $$R=(\vec{a}',\vec{b}',\vec{c}')(\vec{a},\vec{b},\vec{c})^{-1}$$ Here $$R$$ is the rotation matrix, and $$(\vec{a},\vec{b},\vec{c})$$ is the matrix with the three vectors as columns.
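The accepted formula is easy to verify numerically. In the sketch below the primed vectors are regenerated from a known 28-degree rotation about the y axis (rather than copied from the question), so the recovered R can be compared against a ground truth:

```python
import numpy as np

theta = np.deg2rad(28.0)
R_true = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(theta), 0.0, np.cos(theta)]])

M = np.array([[1.0,  4.0, -5.0],    # columns are a, b, c from the question
              [1.0,  1.0, 17.0],
              [4.0, -1.0, -3.0]])
M_prime = R_true @ M                # columns a', b', c'

R = M_prime @ np.linalg.inv(M)      # R = (a', b', c')(a, b, c)^{-1}
print(np.round(R - R_true, 9))      # differences are ~0: the rotation is recovered
```

An axis-angle or quaternion representation can then be extracted from R if needed.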
https://kerodon.net/tag/013E
# Kerodon $\Newextarrow{\xRightarrow}{5,5}{0x21D2}$ Corollary 3.5.2.2. Let $X$ and $Y$ be simplicial sets. Then the canonical map $\theta _{X,Y}: |X \times Y| \rightarrow |X| \times |Y|$ is a bijection. If either $X$ or $Y$ is finite, then $\theta$ is a homeomorphism. Proof. The first assertion follows immediately from Theorem 3.5.2.1. If $X$ and $Y$ are both finite, then the product $X \times Y$ is also finite (Remark 3.5.1.6), so that the geometric realizations $|X|$, $|Y|$, and $|X \times Y|$ are compact Hausdorff spaces (Corollary 3.5.1.10). In this case, $\theta _{X,Y}$ is a continuous bijection between compact Hausdorff spaces, and therefore a homeomorphism. Now suppose that $X$ is finite and $Y$ is arbitrary. Let $M = \operatorname{Hom}_{\operatorname{Top}}( |X|, |X \times Y| )$ denote the set of all continuous functions from $|X|$ to $|X \times Y|$, endowed with the compact-open topology. For every finite simplicial subset $Y' \subseteq Y$, the composite map $|X| \times |Y'| \xrightarrow { \theta _{X,Y'}^{-1} } |X \times Y'| \hookrightarrow |X \times Y|,$ determines a continuous function $\rho _{Y'}: |Y'| \rightarrow M$. Writing the geometric realization $|Y|$ as a colimit $\varinjlim _{Y' \subseteq Y} |Y'|$ (see Remark 3.5.1.8), we can amalgamate the functions $\rho _{Y'}$ to a single continuous function $\rho : |Y| \rightarrow M$. Our assumption that $X$ is finite guarantees that the topological space $|X|$ is compact and Hausdorff, so the evaluation map $\operatorname{ev}: |X| \times M \rightarrow |X \times Y| \quad \quad (x,f) \mapsto f(x)$ is continuous (see Theorem ). We complete the proof by observing that the bijection $\theta _{X,Y}^{-1}$ is a composition of continuous functions $|X| \times |Y| \xrightarrow { \operatorname{id}\times \rho } |X| \times M \xrightarrow {\operatorname{ev}} | X \times Y |,$ and is therefore continuous. $\square$
http://www.chegg.com/homework-help/questions-and-answers/given-the-following-function-and-its-domain-of-definition-identify-its-local-extreme-value-q3395616
## local extreme values

Given the following function and its domain of definition: identify its local extreme values in the given domain and say where they occur. Which of the extreme values, if any, are absolute?

f(x) = x^3 - 3x^2, defined over the interval -∞ < x ≤ 3
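A short sketch of the interior analysis for this function (the endpoint of the given interval would still need to be checked when deciding which extremes are absolute):

```python
def f(x):
    return x**3 - 3 * x**2

# f'(x) = 3x^2 - 6x = 3x(x - 2), so the interior critical points are 0 and 2.
critical = [0.0, 2.0]
extrema = {c: f(c) for c in critical}

# f''(x) = 6x - 6: f''(0) = -6 < 0 (local max), f''(2) = 6 > 0 (local min)
print(extrema)  # {0.0: 0.0, 2.0: -4.0}
```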
https://math.stackexchange.com/questions/2673375/calculate-distribution-of-mean-and-variance-given-gaussian-data-points
# Calculate distribution of mean and variance given Gaussian data points

I was reading some basic texts on machine learning where you build a Gaussian model of a generative process from a vector of available data points. To give the context and the notation, assume $x_1, x_2, \ldots, x_n\in\mathbb{R}$ are independent available data points from a Gaussian distribution. You have to estimate $\mu$ (the mean) and $\sigma>0$ (the standard deviation) from these known data points. Using some maximum likelihood estimator, we can say the problem is basically $$\max_{\mu, \sigma}\prod_{i=1}^nf_G(x_i)$$ where $f_G(x_i)$ is the Gaussian PDF with the mean and SD. The solution is easy, just the mean and SD of the data points give the optimum. But I am interested in a more general question where I calculate the joint probability density of $\mu$ and $\sigma$ given the data points. Is there any way to calculate $$f(\mu, \sigma \mid x_1, x_2, \cdots, x_n)=\frac{F(\mu, \sigma,x_1, x_2, \cdots, x_n)}{f(x_1, x_2, \cdots, x_n)}$$ Of course, we throughout assume that the underlying generative process is Gaussian, but I am stuck with the PDFs. Do I need any additional assumption to answer this question?

• The parameters $\mu$ and $\sigma$ are unknown constants, so (absent a Bayesian context) I'm not sure how to interpret your last displayed equation. I think you meant to ask for PDFs of estimators, not parameters. I tried to give some relevant distributional information in my Answer to get you on the right track.
– BruceET Mar 3 '18 at 1:38

The standard distribution theory for this model with $X_1, X_2, \dots, X_n$ a random sample from $\mathsf{Norm}(\mu, \sigma)$ is as follows: $$\bar X \sim \mathsf{Norm}(\mu, \sigma/\sqrt{n}),$$ $$\frac{\sum_{i=1}^n(X_i - \mu)^2}{\sigma^2} \sim \mathsf{Chisq}(n),$$ $$\frac{(n-1)S^2}{\sigma^2} \sim \mathsf{Chisq}(n-1),$$ $$T = \frac{\bar X - \mu}{S/\sqrt{n}} \sim \mathsf{T}(n-1),$$ where $\bar X = \frac 1 n \sum_{i=1}^n X_i,\,$ $E(\bar X) = \mu;\,$ $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2,\,$ $E(S^2) = \sigma^2.$ And finally, for normal data (only) $\bar X$ and $S^2$ are stochastically independent random variables--even though not functionally independent. $\mathsf{Chisq}$ denotes a chi-squared distribution with the designated degrees of freedom, and $\mathsf{T}$ denotes Student's t distribution with the designated degrees of freedom. You can find formal distributions and density functions of these distributions on the relevant Wikipedia pages. The first displayed relationship is most often used when $\sigma$ is known and $\mu$ is to be estimated by $\bar X.$ The second relationship is most often used when $\mu$ is known and $\sigma^2$ is to be estimated by $\frac 1 n \sum_{i=1}^n(X_i - \mu)^2.$ These relationships are easily shown using standard probability formulas, moment generating functions, and the definition of the chi-squared distribution. The last two displayed relationships and the independence of $\bar X$ and $S^2$ are often used when both $\mu$ and $\sigma$ are unknown. Then ordinarily, $\mu$ is estimated by $\bar X,\,$ $\sigma^2$ by $S^2,\,$ and $\sigma$ by $S$ (even though $E(S) < \sigma).$ Proofs are more advanced and are discussed in mathematical statistics texts.
For the special case $n = 5,\, \mu = 100,\, \sigma=10$ a simulation in R statistical software of 100,000 samples suggests (but of course does not prove) that $\bar X \sim \mathsf{Norm}(\mu, \frac{\sigma}{\sqrt{n}}),\,$ $Q = \frac{(n-1)S^2}{\sigma^2} \sim \mathsf{Chisq}(4)$ and that $\bar X$ and $S$ are independent. The code below the figure also illustrates $E(\bar X) = 100,\,$ $E(S) < 10,\,$ $E(S^2) = 100,$ and $r = 0,$ within the margin of simulation error (accuracy to two, maybe three significant digits).

set.seed(3218) # retain for exactly same simulation; delete for fresh run
m = 10^5; n = 5; mu = 100; sg = 10
MAT = matrix(rnorm(m*n, mu, sg), nrow=m) # m x n matrix: 10^5 samples of size 5
a = rowMeans(MAT) # m sample means (averages)
s = apply(MAT, 1, sd); q = (n-1)*s^2/sg^2 # m sample SD's and values of Q
mean(a)
## 100.0139 # aprx E(x-bar) = 100
mean(s); mean(s^2)
## 9.412638 # aprx E(S) < 10
## 100.3715 # aprx E(S^2) = 100
cor(a, s)
## -0.00194571 # approx r = 0
par(mfrow=c(1,3)) # enable 3 panels per plot
hist(a, prob=T, col="skyblue2", xlab="Sample Mean", main="Normal Dist'n of Sample Mean")
curve(dnorm(x, mu, sg/sqrt(n)), add=T, lwd=2, col="red")
hist(q, prob=T, col="skyblue2", ylim=c(0,.18), xlab="Q", main="CHISQ(4)")
https://homework.cpm.org/category/CC/textbook/ccg/chapter/7/lesson/7.1.4/problem/7-44
### Home > CCG > Chapter 7 > Lesson 7.1.4 > Problem7-44 7-44. In problem 7‑41 you learned that the diagonals of a rhombus are perpendicular bisectors. If $ABCD$ is a rhombus with side length $15$ mm and if $BD=24$ mm, then find the length of the other diagonal, $\overline{AC}$. Draw a diagram and show all work. Homework Help ✎ Make an accurate and well-labeled diagram of the situation. Use the right triangle $ΔBCE$ and the Pythagorean Theorem to solve for $x$. $x=18$ mm
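The Pythagorean Theorem step suggested in the hints can be sketched directly:

```python
import math

side, bd = 15.0, 24.0          # rhombus side and known diagonal BD, in mm

half_bd = bd / 2               # the diagonals bisect each other perpendicularly
half_ac = math.sqrt(side**2 - half_bd**2)  # right triangle: 15^2 = 12^2 + half_ac^2
ac = 2 * half_ac
print(ac)  # 18.0, matching x = 18 mm
```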
https://stacks.math.columbia.edu/tag/0H2C
Lemma 105.11.5. Let $f : \mathcal{X} \to \mathcal{Y}$ be a morphism of algebraic stacks. Assume that $\mathcal{Y}$ is locally Noetherian and that $f$ is of finite type. If given any $2$-commutative diagram $\xymatrix{ \mathop{\mathrm{Spec}}(K) \ar[r]_-x \ar[d]_ j & \mathcal{X} \ar[d]^ f \\ \mathop{\mathrm{Spec}}(A) \ar[r]^-y & \mathcal{Y} }$ where $A$ is a discrete valuation ring with field of fractions $K$ and $\gamma : y \circ j \to f \circ x$ there exist an extension $K'/K$ of fields, a valuation ring $A' \subset K'$ dominating $A$ such that the category of dotted arrows for the induced diagram $\xymatrix{ \mathop{\mathrm{Spec}}(K') \ar[r]_-{x'} \ar[d]_{j'} & \mathcal{X} \ar[d]^ f \\ \mathop{\mathrm{Spec}}(A') \ar[r]^-{y'} \ar@{..>}[ru] & \mathcal{Y} }$ with induced $2$-arrow $\gamma ' : y' \circ j' \to f \circ x'$ is nonempty (Morphisms of Stacks, Definition 100.39.1), then $f$ is universally closed. Proof. Let $V \to \mathcal{Y}$ be a smooth morphism where $V$ is an affine scheme. The category of dotted arrows behaves well with respect to base change (Morphisms of Stacks, Lemma 100.39.4). Hence the assumption on existence of dotted arrows (after extension) is inherited by the morphism $\mathcal{X} \times _\mathcal {Y} V \to V$. Therefore the assumptions of the lemma are satisfied for the morphism $\mathcal{X} \times _\mathcal {Y} V \to V$. Hence we may assume $\mathcal{Y}$ is an affine scheme. Assume $\mathcal{Y} = Y$ is a Noetherian affine scheme. (From now on we no longer have to worry about the $2$-arrows $\gamma$ and $\gamma '$, see Morphisms of Stacks, Lemma 100.39.3.) To prove that $f$ is universally closed it suffices to show that $|\mathcal{X} \times \mathbf{A}^ n| \to |Y \times \mathbf{A}^ n|$ is closed for all $n$ by Limits of Stacks, Lemma 101.7.2. 
Since the assumption in the lemma is inherited by the product morphism $\mathcal{X} \times \mathbf{A}^ n \to Y \times \mathbf{A}^ n$ (details omitted) we reduce to proving that $|\mathcal{X}| \to |Y|$ is closed. Assume $Y$ is a Noetherian affine scheme. Let $T \subset |\mathcal{X}|$ be a closed subset. We have to show that the image of $T$ in $|Y|$ is closed. We may replace $\mathcal{X}$ by the reduced induced closed subspace structure on $T$; we omit the verification that property on the existence of dotted arrows is preserved by this replacement. Thus we reduce to proving that the image of $|\mathcal{X}| \to |Y|$ is closed. Let $y \in |Y|$ be a point in the closure of the image of $|\mathcal{X}| \to |Y|$. By Lemma 105.11.1 we may choose a commutative diagram $\xymatrix{ \mathop{\mathrm{Spec}}(K) \ar[r] \ar[d] & \mathcal{X} \ar[d]^ f \\ \mathop{\mathrm{Spec}}(A) \ar[r] & Y }$ where $A$ is a discrete valuation ring and $K$ is its field of fractions mapping the closed point of $\mathop{\mathrm{Spec}}(A)$ to $y$. It follows immediately from the assumption in the lemma that $y$ is in the image of $|\mathcal{X}| \to |Y|$ and the proof is complete. $\square$
https://people.sissa.it/~aboiardi/project/newton_chaos_fractals/
# Newton solver, chaos and fractals

Newton fractal for the polynomial x^2*(x^3-1)

Studying stability properties of the classical Newton solver during the course of Numerical Mathematics, I took the opportunity to explore the mesmerizing structure of its stable sets, and the chaotic dynamics that lead to the complex roots of polynomials. The study has been conducted in MATLAB with some artistic freedom in the choice of colors and shades. The above image represents the stable sets of the Newton solver looking for the complex roots of the polynomial $$x^2(x^3-1).$$ If we move one of the coincident roots just a bit we get

The so-called relaxed Newton method instead only converges (fast) to the double root, and avoids the others

The code is of course available at the link under the title, but might need some fixing: the colors in my code are selected from a hard-coded list for better artistic results, but if the roots are too many it may run out of colors. Feel free to contribute if you want. The code is also not really documented, it was just to play around.
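The dynamics described above are easy to reproduce outside MATLAB. Below is a minimal sketch in Python (the original code is MATLAB; the function name, iteration counts, and tolerances here are my own) of the plain and relaxed Newton iterations for $p(x) = x^2(x^3-1)$. The relaxed variant scales the Newton step by the multiplicity 2 of the double root at 0, which restores fast convergence there.

```python
def newton(z, relax=1.0, iters=200):
    """Newton iteration for p(x) = x^2 (x^3 - 1) = x^5 - x^2.

    relax=1.0 is the classical method; relax=2.0 is the 'relaxed'
    method tuned to the double root at 0 (step scaled by its multiplicity).
    """
    for _ in range(iters):
        p = z**5 - z**2
        dp = 5*z**4 - 2*z
        if abs(dp) < 1e-30:       # avoid dividing by a vanishing derivative
            break
        z = z - relax * p / dp
    return z

print(newton(1.5))                 # converges to the simple root 1
print(newton(0.5, relax=2.0))      # converges quickly to the double root 0
```

Seeding `newton` on a grid of complex starting points and coloring each point by the root it reaches is all that is needed to draw the basins shown in the images.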
https://learn.careers360.com/ncert/question-determine-if-the-following-are-in-proportion-15-45-40-120/
# Q. 1.     Determine if the following are in proportion.

(a)  $15,45,40,120$
(b)  $33,121,9,96$
(c)  $24,28,36,48$
(d)  $32,48,70,210$
(e)  $4,6,8,12$
(f)  $33,44,75,100$

Pankaj Sanodiya

(a)  $15,45,40,120$

$\frac{15}{45}=\frac{1}{3}........(1)$ $\frac{40}{120}=\frac{1}{3}........(2)$

Since (1) and (2) are equal, yes, they are in proportion.

(b)  $33,121,9,96$

$\frac{33}{121}=\frac{3}{11}........(1)$ $\frac{9}{96}=\frac{3}{32}........(2)$

Since (1) and (2) are not equal, no, they are not in proportion.

(c)  $24,28,36,48$

$\frac{24}{28}=\frac{6}{7}........(1)$ $\frac{36}{48}=\frac{3}{4}........(2)$

Since (1) and (2) are not equal, no, they are not in proportion.

(d)  $32,48,70,210$

$\frac{32}{48}=\frac{2}{3}........(1)$ $\frac{70}{210}=\frac{1}{3}........(2)$

Since (1) and (2) are not equal, no, they are not in proportion.

(e)  $4,6,8,12$

$\frac{4}{6}=\frac{2}{3}........(1)$ $\frac{8}{12}=\frac{2}{3}........(2)$

Since (1) and (2) are equal, yes, they are in proportion.

(f)  $33,44,75,100$

$\frac{33}{44}=\frac{3}{4}........(1)$ $\frac{75}{100}=\frac{3}{4}........(2)$

Since (1) and (2) are equal, yes, they are in proportion.
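The check in every part is the same: $a, b, c, d$ are in proportion exactly when $a : b = c : d$, i.e. when the cross products agree, $a \times d = b \times c$, which avoids reducing fractions. A quick sketch in Python (the helper name is mine):

```python
def in_proportion(a, b, c, d):
    # a : b :: c : d  iff  a*d == b*c (cross multiplication, no fractions needed)
    return a * d == b * c

parts = {
    "(a)": (15, 45, 40, 120),
    "(b)": (33, 121, 9, 96),
    "(c)": (24, 28, 36, 48),
    "(d)": (32, 48, 70, 210),
    "(e)": (4, 6, 8, 12),
    "(f)": (33, 44, 75, 100),
}
for label, nums in parts.items():
    print(label, "Yes" if in_proportion(*nums) else "No")
```

This reproduces the answers above: (a), (e), and (f) are in proportion, the rest are not.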
https://iq.opengenus.org/disarium-number/
# What is a Disarium Number?

A Disarium number is a number in which the sum of the digits, each raised to the power of its position, is equal to the number itself (positions are counted from left to right starting from 1).

An example of a Disarium number is 175, since 1¹ + 7² + 5³ = 1 + 49 + 125 = 175. Hence, 175 is a Disarium number.

Our approach will be straightforward. We will break the number into digits, raise each digit to the power of its position, and add the results to check whether the obtained sum equals the given number.

### Pseudocode for the Algorithm

Here 'num' is the given number to check for a Disarium number.

```
START
  DEFINE num = 135
  SET sum = 0, rem = 0
  len = calcLength(num)
  SET n = num
  while (n > 0)
    rem = n % 10
    sum = sum + rem^len
    n = n / 10
    len--
  end while
  if (sum == num) then PRINT "Yes"
  else PRINT "No"
END
```

To obtain the length of the number:

```
calcLength(num)
START
  SET length = 0
  while (num > 0)
    length = length + 1
    num = num / 10
  RETURN length
END
```

### Implementation in C++

Following is our C++ implementation of checking for a Disarium number:

```cpp
// A C++ program to check for Disarium Number
#include <iostream>
#include <cmath>
using namespace std;

// calcLength() counts the digits present in a number
int calcLength(int n)
{
    int length = 0;
    while (n > 0) {
        length++;
        n = n / 10;
    }
    return length;
}

int main()
{
    int num = 135, sum = 0, rem = 0;
    int len = calcLength(num);

    // Make a copy of the original number num
    int n = num;

    // Calculate the sum of the digits raised to their respective positions
    while (n > 0) {
        rem = n % 10;
        sum = sum + pow(rem, len);
        n = n / 10;
        len--;
    }

    // Check whether the sum is equal to the given number
    if (sum == num)
        cout << num << " is a Disarium Number";
    else
        cout << num << " is not a Disarium Number";
    return 0;
}
```

### Workflow of solution

1. calcLength() is used to obtain the length of the number.
It divides the number by 10 (integer division, not decimal) until it becomes 0, incrementing the length variable each time. Example: to check the length of 352, let length = 0.

- 352 > 0, so length = 1 and 352/10 = 35
- 35 > 0, so length = 2 and 35/10 = 3
- 3 > 0, so length = 3 and 3/10 = 0
- 0 > 0 is false, so the loop terminates with length = 3.

2. In main(), we make a copy (call it n) of the original number (num), since we will use the copy to calculate the sum. We raise each digit to the power of its position by taking the unit digit from the right and raising it to the current length of the number.

a. We break the number down until the copy (n) becomes 0, finding the unit digit as the remainder (rem) when dividing by 10.

b. We raise the remainder to the power of its position; the position is the current length of the number.

c. We then divide n by 10 to expose the next unit digit and decrement the length, which represents the updated digit position.

## Similar approach but using Recursion

Now that you have an idea of how a Disarium number is checked, let's see another approach using recursion. In the solution above, we used a loop to strip the unit digit and decrement the length. Now that work will be done by the recursion.

### Working of Recursion approach

The approach is very similar to the solution above. First we obtain the length of the number; then we strip the unit digit, raise it to the power of its position, and pass the remaining number to the next recursion step. For example, let the number be 135 with length 3. We strip the 5 and raise it to the power of 3, then pass number = 13 and length = 2 to the next recursion step, whose result is added to the result of 5³ to calculate the sum. The recursion continues until the number becomes zero.
It acts as a base case to return 0 when the number becomes 0, and the returned value is the sum of the digits raised to their respective positions. The returned value is compared with the original number to check for a Disarium number. The recursion process can be seen below.

### Implementation in C++

Following is our C++ implementation of checking for a Disarium number using recursion:

```cpp
// A C++ program to check for Disarium Number using recursion
#include <iostream>
#include <cmath>
using namespace std;

// Recursively calculate the sum of the digits raised to their respective positions
int sumOfDigits(int num, int p)
{
    if (num == 0)
        return 0;
    return pow(num % 10, p) + sumOfDigits(num / 10, p - 1);
}

// calcLength() counts the digits present in a number
int calcLength(int n)
{
    int length = 0;
    while (n > 0) {
        length++;
        n = n / 10;
    }
    return length;
}

int main()
{
    int num = 175;
    int len = calcLength(num);

    // Check whether the sum is equal to the given number
    if (sumOfDigits(num, len) == num)
        cout << num << " is a Disarium Number";
    else
        cout << num << " is not a Disarium Number";
    return 0;
}
```

With this, you should have a clear idea of Disarium numbers. Enjoy.

#### Shubham Sood

A Computer Science Student with language knowledge of C++, Python | Intern at OpenGenus | Student at Manav Rachna University
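The same check is compact enough to cross-verify in Python (this sketch is mine, not part of the article): iterating over the decimal string gives each digit together with its 1-based position directly.

```python
def is_disarium(num):
    # Sum each digit raised to its 1-based position (left to right)
    return num == sum(int(d) ** (i + 1) for i, d in enumerate(str(num)))

print([n for n in range(1, 200) if is_disarium(n)])
```

Running it confirms the worked examples: 135 and 175 are Disarium numbers (and so is 89, since 8¹ + 9² = 89).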
http://www.datajourneyman.com/2015/05/25/anova.html
## ANOVA

25 May 2015

### Analysis of Variance

So far, we’ve used t-tests to compare the mean of a sample with the mean of a population. Then we saw the more practical application of using t-tests to compare two sample populations. The sample parameters we compared were continuous values. Also, we’ve seen how to use a chi-squared test to compare the proportions of multiple groups. These groups’ data was structured in a contingency table, which is a simple way to express outcomes that fall into discrete categories. In the contingency table example, we looked at “sick” vs “not sick” categories. Now we’ll move on to Analysis of Variance, which is like a t-test in that the samples will have continuous outcomes, but like a chi-squared test in that we’re comparing multiple groups simultaneously. The following images, taken from a Stanford lecture’s slide deck, show the relation of t-tests, chi-squared tests, and ANOVA to each other and many other statistical methods that we may get to later. ANOVA is most easily described by way of example, so we will use the one from Khan Academy’s Inferential Stats video on ANOVA.

### ANOVA Example

The data we will use is an arbitrary 3x3 grid of outcomes. The data set is small, so in practice one of the other methods from the above table should be used for small sample sizes, but we’ll keep it simple here and still use ANOVA so it’s easier to understand.

| Group 1 | Group 2 | Group 3 |
| --- | --- | --- |
| 3 | 5 | 5 |
| 2 | 3 | 6 |
| 1 | 4 | 7 |

ANOVA is calculated by – surprise – analyzing the variance of the data. This analysis is done within each group, and then between each of the groups, and these two variances are compared to judge if the outcomes of any group differ statistically. To calculate the variances, we’ll need the grand mean ($$\bar{\bar x}$$) and each group’s mean ($$\bar x_i$$).
$\bar{\bar x} = 4$ $\bar x_1 = 2$ $\bar x_2 = 4$ $\bar x_3 = 6$

The first variance we will calculate is the variance within all the groups. The idea is to add up all the variances within each group into a single value. The final value will be the Sum of Squares Within (SSW).

$\begin{eqnarray} SSW &=& \sum_i \sum_j (x_{ij} - \bar x_i)^2 \cr &=& (3-2)^2 + (2-2)^2 + (1-2)^2 + (5-4)^2 + (3-4)^2 + (4-4)^2 + (5-6)^2 + (6-6)^2 + (7-6)^2 \cr &=& 1 + 0 + 1 + 1 + 1 + 0 + 1 + 0 + 1 \cr &=& 6 \end{eqnarray}$

The second variance to calculate is the variance between the groups: basically, how much each group’s mean differs from the grand mean. This value is called the Sum of Squares Between (SSB).

$\begin{eqnarray} SSB &=& \sum_i n_i (\bar x_i - \bar{\bar x})^2, \text{where } n_i \text{ is the size of group i} \cr &=& 3 \times (2-4)^2 + 3 \times (4-4)^2 + 3 \times (6-4)^2 \cr &=& 12 + 0 + 12 \cr &=& 24 \end{eqnarray}$

We also need the degrees of freedom (d.f.) for each of these variances. The d.f. for SSW is m * (n - 1), where m is the number of groups and n is the size of each group. The reason is that for a group to have a given mean, the last value can be derived, and you can do this for each of the m groups. So for us, the d.f. of SSW is 6. The d.f. for SSB is m - 1. The reason for this is that if the grand mean is fixed, then only m - 1 of the group means can vary. So in this case, the d.f. of SSB is 2. Now, we need to introduce a new distribution to test these statistics we’ve calculated, the F-distribution. We don’t need to know too much about the ins and outs of the F-distribution, but much like we used z-tables and t-tables for hypothesis testing, we can use an F-table when we’re dealing with ANOVA. We will use an F-test to compare with the F-table. The F-test allows us to examine what is known as the “omnibus” null hypothesis, which just states that all the true means of our groups are equal.
$\begin{eqnarray} &H_0&: \mu_1 = \mu_2 = \mu_3 \cr &H_1&: \text{At least one mean differs from the others} \end{eqnarray}$

Let’s use $$\alpha = 0.10$$ for this example. The definition of the F-test statistic is the following, where $$df_1$$ is the d.f. of SSB and $$df_2$$ is the d.f. of SSW.

$F = \frac{\frac{SSB}{df_1}}{\frac{SSW}{df_2}} = \frac{\frac{24}{2}}{\frac{6}{6}} = 12$

Next, we can find the critical F-statistic for our $$\alpha$$ using the F-table. Each table is for a different $$\alpha$$ value. The first one is for $$\alpha = 0.10$$, so that’s the one we’ll use. So our critical F-statistic is 3.46330, which is much less than our calculated F value of 12, so we can safely reject the null hypothesis. Note that a different statistical method is required to determine the specifics about which groups deviate from another group. For example, you could perform a t-test between groups 1 and 2 to see if their means differ.

### Moving Forward

Now that we have some of the most common inferential statistical methods in our toolbelt, we can look a lot more critically at scientific studies. Next week we will revisit the study talked about in my post on the illusion of causality to see how well their data actually supports the paper’s claims.
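The whole worked example above can be reproduced in a few lines. This sketch in Python (variable names are mine) computes SSW, SSB, the degrees of freedom, and the F statistic for the three groups of the 3x3 table:

```python
groups = [[3, 2, 1], [5, 3, 4], [5, 6, 7]]    # the three columns of the table

grand_mean = sum(x for g in groups for x in g) / 9
means = [sum(g) / len(g) for g in groups]      # group means: 2, 4, 6

# Sum of Squares Within: deviations of each value from its own group's mean
ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)

# Sum of Squares Between: group means vs the grand mean, weighted by group size
ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))

df_between = len(groups) - 1                   # m - 1 = 2
df_within = sum(len(g) - 1 for g in groups)    # m * (n - 1) = 6

f_stat = (ssb / df_between) / (ssw / df_within)
print(ssw, ssb, f_stat)   # 6.0 24.0 12.0
```

The output matches the hand calculation: SSW = 6, SSB = 24, and F = 12, well above the critical value 3.46330 at $$\alpha = 0.10$$.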
https://math.stackexchange.com/questions/2747384/irreducibility-for-cubics-and-existence-of-inflection-points
Irreducibility for cubics and existence of inflection points Consider a nonzero homogeneous degree three polynomial $P\in k[X,Y,Z]$ in $3$ variables with coefficients in a field $k$ of characteristic $\neq 2, 3$. 1. How can one check whether $P$ is irreducible? Is there an analogue for cubics of the discriminant of a quadratic form that decides this question for conics? 2. In the first paragraph here it is claimed that an irreducible cubic either has an inflection point or a singular point, and that this follows from Bezout's theorem. Could someone provide details? I'm starting to learn about elliptic curves: these are some elementary questions I couldn't find answers to online. Regarding the first point, all I have found is this fact which implies that the equation of an elliptic curve in Weierstrass form is irreducible as a polynomial $\in k[X,Y]$. I'm not quite sure whether this fact answers my first question. Certainly irreducibility of the Weierstrass normal form should imply irreducibility of the homogeneous cubic $P$. Reducing the equation of an elliptic curve to Weierstrass form requires, as far as I understand, sending the flex point and its tangent line to the line at infinity, thus the existence of flex points is important. • For 2. see this question and this pdf on Jan Stevens' homepage – Jan-Magnus Økland Apr 21 '18 at 15:53 • Bezout's theorem also implies that a reducible cubic defines a curve with a singular point. This is because the points of intersection of the curves determined by the factors are singular (their coordinates may be in an extension field of $k$ though). – Jyrki Lahtonen Apr 24 '18 at 2:24 • Olivier, I don't think so. Consider the (homogenization of) Folium of Descartes: $X^3+Y^3=3XYZ$. That cubic has a singularity at the origin, but it is irreducible. For if that cubic were a product of a linear and a quadratic the curve would be a union of a line and a conic which it manifestly isn't.
– Jyrki Lahtonen Apr 24 '18 at 8:01 • Yes, that's equivalent to smoothness. In higher dimensions (with several constraint equations) you need the Jacobian to have full rank. IIRC smoothness is often defined a bit differently, but it comes to that rank condition for the Jacobian. In general you get one row of the Jacobian for each generator of the ideal of polynomials vanishing on the variety. – Jyrki Lahtonen Apr 24 '18 at 8:06 • Sorry about not making this clear. The Folium of Descartes has a singularity at the origin of the affine chart $Z=1,x=X/Z,y=Y/Z$. In other words, at the point $[X:Y:Z]=[0:0:1]$ = the origin of that affine chart. The singularity is obvious in the plot of that real plane curve. Smoothness at a point of a projective curve can always be tested in an affine chart containing it. – Jyrki Lahtonen Apr 24 '18 at 8:18
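The singularity criterion discussed in the comments (all partial derivatives vanish at a point of the curve) is easy to check by hand for the Folium cubic $F = X^3 + Y^3 - 3XYZ$. A small illustrative sketch in Python (the helper names are mine): it verifies that the gradient vanishes at $[0:0:1]$, a singular point, but not at the curve point $[3:3:2]$, a smooth point.

```python
def folium(x, y, z):
    # Homogenization of the Folium of Descartes: X^3 + Y^3 = 3XYZ
    return x**3 + y**3 - 3*x*y*z

def gradient(x, y, z):
    # Partial derivatives (F_X, F_Y, F_Z), computed by hand
    return (3*x**2 - 3*y*z, 3*y**2 - 3*x*z, -3*x*y)

# [0:0:1] lies on the curve and all partials vanish there: singular point
print(folium(0, 0, 1), gradient(0, 0, 1))    # 0 (0, 0, 0)

# [3:3:2] lies on the curve but the gradient is nonzero: smooth point
print(folium(3, 3, 2), gradient(3, 3, 2))    # 0 (9, 9, -27)
```

(In characteristic 0 the cubic and its gradient vanishing simultaneously is exactly the failure of the Jacobian rank condition mentioned above.)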
https://testbook.com/question-answer/a-refrigerating-machine-working-on-a-reversed-carn--5f2fc3eedd89060d0bf2b8cc
# A refrigerating machine working on a reversed Carnot cycle takes 2 kW of heat from the system while working between temperatures of 300 K and 200 K. COP and power consumed by the cycle will be

This question was previously asked in TNTRB 2017 ME Official Question Paper

1. 1, 1 kW
2. 1, 2 kW
3. 2, 1 kW
4. 2, 2 kW

Option 3 : 2, 1 kW

## Detailed Solution

Concept:

The coefficient of performance of a refrigerator is given by:

$$COP = \frac{\text{Refrigeration Effect}}{\text{Work Input}} = \frac{Q_2}{W}$$

For a reversible refrigerating machine:

$$COP = \frac{T_2}{T_1 - T_2}$$

where Q1 = heat rejected to the surroundings, Q2 = heat absorbed from the storage space, W = work input = Q1 – Q2

Calculation:

Given: T1 = 300 K, T2 = 200 K, Q2 = 2 kW

The coefficient of performance of a reversible refrigerator is:

$$COP = \frac{200}{300 - 200} = 2$$

Now, from the definition of COP:

$$2 = \frac{2}{W}$$

W = 1 kW
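The arithmetic can be double-checked in a couple of lines; a quick sketch in Python (variable names are mine):

```python
T1, T2 = 300.0, 200.0        # temperature limits, K
Q2 = 2.0                     # heat taken from the system, kW

cop = T2 / (T1 - T2)         # reversed Carnot COP = T2 / (T1 - T2)
W = Q2 / cop                 # since COP = Q2 / W

print(cop, W)   # 2.0 1.0
```

This confirms option 3: COP = 2 and power consumed W = 1 kW.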
https://mathoverflow.net/questions/186683/coloring-the-edges-of-a-torus-graph/190682
# Coloring the edges of a torus graph Question: Consider the $2k \times 2k$ grid graph on a torus. Is it true that for every $2$-coloring of the edges, there is an antipodal pair of vertices connected by a path that changes colors at most $k-1$ times? Edit: $k>1$ as pointed out in the comments. More formally: The $2k \times 2k$ grid graph on a torus is defined as follows. $$V(G):=\{(i,j)|i,j\in [1,2k]\cap\mathbb{Z} \}$$ Two vertices are connected if and only if on one of the coordinates they coincide, and on the other one, their values differ by exactly one, modulo $2k$. We say that a pair of vertices is antipodal if their distance is maximal. Equivalently if both their coordinates differ by $k$, modulo $2k$. The path between them which we require to change colors only $k-1$ times is not required to be $2k$ long. Additional information: The interesting part of the question is that the pairs are $2k$ away, but we only have $k-1$ color changes. If true, sometimes we need $k-1$ color changes, as the following coloring shows: • Color the edges $((i,j),(i+1,j))$ red if $i$ is even, and blue otherwise. • Color the edges $((i,j),(i,j+1))$ red if $j$ is even, and blue otherwise. There is a similar open problem in the hypercube. There, it is conjectured that for every coloring there is an antipodal pair of vertices, connected by a path that changes color at most once: http://theory.stanford.edu/~tomas/antipod.pdf • It is not true for $k=1.$ – Aaron Meyerowitz Nov 10 '14 at 7:05 • Thank you, edited it to be $k>1$. When $k=1$ the torus structure does not help yet. – Daniel Soltész Nov 10 '14 at 14:21 • Then I'd lean toward it being true. Consider for each point the closest thing which can not be reached in $k$ color changes. Maybe that would be effective. – Aaron Meyerowitz Nov 10 '14 at 14:50 • Could you add a link to the hypercube problem? – domotorp Nov 10 '14 at 20:41 • @domotorp Link added.
I can also talk about the connections between the result of Feder and Subi, and this question. There is also another paper of Leader and Long about the hypercube version. – Daniel Soltész Nov 10 '14 at 21:38 Definition: Consider the cycles of length $4$ in the torus. Let us call a subgraph of the torus a right-up diagonal if it consists of $2k$ such $4$-cycles, and the $i$-th cycle's up right corner is the $i+1$-st cycle's left-down corner. Every edge of the torus can be naturally double covered with right-up cycles. Let us call a path tight if it is contained in a right-up diagonal. We will prove slightly more. Theorem: In every $2$-coloring of the torus graph, there is a tight path that changes colors at most $k-1$ times. Proof: Definition: We define the right-up diagonal of length $l$ similarly to the right-up diagonal ($1 \leq l \leq 2k$), but there are only $l$ connected 4-cycles. The original right-up diagonal (of length $2k$) is similar to a circle, but all the other shorter ones are similar to paths (or diagonal arcs as we are on a torus). Thus the diagonals of length $l$ have a natural beginning and endpoint, which coincide if the diagonal is of length $2k$. Lemma: Consider all the right-up diagonals of length $l$ and all the left-up diagonals of length $l$. The average number of minimal color changes needed to get from the beginning to the endpoint of a diagonal, such that we can not leave the diagonal, is at most $l$. Proof: Consider a $4$-cycle. There is a right-up and a left-up diagonal containing this cycle. If (WLOG) at the right-up diagonal we have to change colors twice in this cycle, it must have happened this way: We arrived at the bottom left vertex of the cycle with our path having (WLOG) the color blue. Both edges adjacent to this vertex and belonging to this cycle were red, and the other two edges blue.
Thus the left-up diagonal containing this cycle does not have to change colors at this cycle, as there is a red and a blue monochromatic path between the bottom right and the top left vertices. Thus on average a single $4$-cycle can not cause more than $1$ color change. $\square$ Corollary: Since in a diagonal of length $k$ the beginning and endpoints of the diagonal are antipodal, we have that the average number of necessary color changes from a point to its antipodal pair using only tight paths is at most $k$. (Actually when we averaged, we took into account all the four possible diagonals connecting these points, and the best tight path in every such diagonal.) So we will have such a path with only $k-1$ color changes unless the average is exactly $k$ and every such path with minimal color changes has to change colors exactly $k$ times. Consider now diagonals of length $2k$. By the lemma we have that the average of the minimal number of color changes needed to get from the beginning to the endpoint (which is the same point) is at most $2k$ (not counting that maybe we arrive with a different color when we close the cycle). But every diagonal of length $2k$ is the union of two diagonals of length $k$ where we have that we need $k$ color changes. Thus if the average number of color changes is $2k$ but we always need at least $2k$ changes, we conclude that in every right-up and every left-up diagonal, there is a $2k$ cycle that changes colors exactly $2k$ times. (Now we can count all the color changes, as after an even number of color changes we always arrive with the starting color.) Now we will work with these cycles. Definition: Let us call a vertex in a right-up diagonal subgraph central if every cycle of length $2k$ in this subgraph passes through this vertex. There are $2k$ central and $4k$ non-central vertices in every right-up diagonal of length $2k$. Every vertex is central in precisely one right-up diagonal of length $2k$.
Consider a right-up diagonal of length $2k$ and the associated cycle with $2k$ color changes. If such a cycle has a color change at a central vertex, we have a path along this cycle to the antipodal pair of this vertex with at most $k-1$ changes. Thus we can assume that every such cycle has its color changes at non-central points. But then the cycles have to change color at every non-central point, as there are $2k$ color changes. Thus if there are antipodal non-central points on this cycle, we have exactly $k-1$ color changes between them. The only case left to examine is when we have such a cycle, and every non-central point on the cycle is such that its antipodal pair is not on the cycle. We could change the cycle along a $4$-cycle of the right-up diagonal. (As every $4$-cycle has two common edges with our $2k$ cycle and two other edges.) So if we change the cycle this way and the number of color changes is still $2k$, we are done. If for every $4$-cycle we can not change our cycle without increasing the number of color changes, we will conclude that every $4$-cycle is colored properly (adjacent edges have different colors): Let $C_4$ be a $4$-cycle on a right-up diagonal. The associated cycle with $2k$ color changes shares exactly two edges with this cycle. At the associated cycle, there are no color changes at the central vertices, and there is always a color change at non-central vertices. Thus, since changing the cycle along this $4$-cycle increases the number of color changes, the $4$-cycle had to be colored properly. (Draw a nice picture.)
2020-05-31T23:52:33