Q | A | meta
---|---|---
Solve the inequality $|x^2-2|<1$ I'm trying to solve inequalities that include absolute values. The way we solved these questions in class is different than what I am used to. The teacher would take the positive and negative possibilities of the inequality and treat them separately. I am trying to solve this as a union of intervals.
$$|x^2-2|<1$$
Normally I would just write this as:
$$-\sqrt{3} < x < -1 \text{ and } 1 < x < \sqrt{3}$$
However, I don't believe this is correct.
| $$|x^2-2|<1\iff-1< x^2-2 <1$$
$$-1< x^2-2 <1\iff1<x^2<3$$
$$1<x^2<3\iff x^2-1>0\space \&\space x^2-3<0$$
$$x^2-1>0\space \&\space x^2-3<0 \iff x\in((-\infty,-1)\cup(1,\infty))\cap(-\sqrt 3,\sqrt 3)$$
$$ x\in((-\infty,-1)\cup(1,\infty))\cap(-\sqrt 3,\sqrt 3)\iff x\in(-\sqrt3,-1)\cup(1,\sqrt3)$$
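As a quick numeric sanity check (my own addition, not part of the answer), the following Python sketch samples points and compares membership in $(-\sqrt3,-1)\cup(1,\sqrt3)$ with the original inequality:

```python
from math import sqrt

for k in range(-2000, 2001):
    x = k / 1000.0
    in_union = (-sqrt(3) < x < -1) or (1 < x < sqrt(3))
    assert in_union == (abs(x**2 - 2) < 1)
print("the interval description matches the inequality on all sampled points")
```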
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Suppose the gcd (a,b) = 1 and c divides a + b. Prove that gcd (a,c) = 1 = gcd (b,c) I am lost.
So far...
If $\gcd (a,b) = 1$, by Bezout's Formula
$ax + by = 1$
If $c|(a+b)$, then
$cf = a+b$
Then,
$a (x-y) + cfy = 1$
$b(y-x) + cfx = 1$
Am I on the right track? Any suggestions?
| Suppose, for contradiction, that $m=\gcd(a,c)>1$ (the argument for $\gcd(b,c)$ is identical). Then $m\mid a$ and $m\mid c$; since $c\mid (a+b)$, also $m\mid (a+b)$, and hence $m\mid b$. So there exists an $m>1$ that simultaneously divides both $a$ and $b$, which implies $\gcd(a,b)\geq m>1$ and contradicts our assumption.
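As an illustration only (my own addition), a small Python sketch that tests the claim on random triples; the ranges and helper names are my own choices:

```python
from math import gcd
from random import randint

for _ in range(10_000):
    a, b = randint(1, 10**6), randint(1, 10**6)
    if gcd(a, b) != 1:
        continue
    s = a + b
    # try several divisors c of a + b (including a + b itself)
    for c in [s] + [d for d in range(2, 200) if s % d == 0]:
        assert gcd(a, c) == 1 and gcd(b, c) == 1
print("no counterexample found")
```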
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 5
} |
For which $a$ does the system of linear equations have a root Choose a possible $a$ such that the linear equations have a root
$$\begin{matrix} x+2y+3z=a \\
4x+5y+6z=a^2 \\
7x+8y+9z=a^3 \end{matrix}$$
Do I begin by finding the possible values of $a$ such that the system is consistent?
| The first equation plus the third equation minus twice the second yields $a^3-2a^2+a=a(a-1)^2=0$.
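To complement the hint, here is a short Python/SymPy sketch (my own addition, assuming SymPy is available) that checks for which of a few values of $a$ the system actually has a solution:

```python
from sympy import symbols, Eq, solve

x, y, z = symbols('x y z')
for a in (0, 1, 2, 3):
    eqs = [Eq(x + 2*y + 3*z, a),
           Eq(4*x + 5*y + 6*z, a**2),
           Eq(7*x + 8*y + 9*z, a**3)]
    # solve returns an empty list exactly when the system is inconsistent
    print(a, "consistent" if solve(eqs, (x, y, z)) else "inconsistent")
```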
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Geometric Brownian motion problem Here's the question:
Let $S(t)$, $t \geq 0$ be a Geometric Brownian motion process with drift parameter $\mu = 0.1$ and volatility parameter $\sigma = 0.2$. Find $P(S(3) < S(1) > S(0)).$
Is there something wrong with the following reasoning:
$P(S(3) < S(1) > S(0))=P(S(1)>S(3) \geq S(0)) + P(S(1) > S(0) \geq S(3))$, where
$P(S(1)>S(3) \geq S(0))=P(S(1) > S(3))-P(S(3) \leq S(0)$ and
$P(S(1) > S(0) \geq S(3))=P(S(1) > S(0)) - P(S(0) \leq S(3))$.
| Tiny point: You are missing a "$)$".
Major point: $P(S(1)\gt S(3) \geq S(0))=P(S(1) \gt S(3))-P(S(3) \leq S(0))$ is wrong and similarly with the following equality. You should have something like $P(S(1)\gt S(3) \geq S(0)) =P(S(1) \gt S(3))-P(S(3) \leq S(0)) + P(S(0) \gt S(1) \geq S(3))$
Critical point: $S(0)$, $S(1)$ and $S(3)$ are not independent, so this is the wrong approach. But $S(1)/S(0)$ and $S(3)/S(1)$ are independent, so the corresponding events' probabilities multiply, and you will probably find it easier to deal with the rather more obvious and more easily handled $$P(S(3) \lt S(1) \gt S(0)) = P(S(3) \lt S(1)) \times P(S(1) \gt S(0)).$$
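A numeric sketch of the final factorization (my own addition, assuming SciPy and the convention that $\log S(t)-\log S(0)$ is normal with mean $\mu t$ and variance $\sigma^2 t$, as in Ross; the intended convention in the course may differ):

```python
from math import sqrt
from scipy.stats import norm

mu, sigma = 0.1, 0.2
# P(S(1) > S(0)) = P(N(mu*1, sigma^2*1) > 0)
p1 = 1 - norm.cdf(0, loc=mu * 1, scale=sigma * sqrt(1))
# P(S(3) < S(1)) = P(N(mu*2, sigma^2*2) < 0), using the log-increment over [1, 3]
p2 = norm.cdf(0, loc=mu * 2, scale=sigma * sqrt(2))
print(p1 * p2)   # roughly 0.166 under this convention
```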
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does adding big O notations works can someone please explain how adding big O works.
i.e. $O(n^3)+O(n) = O(n^3)$
why does the answer turn out this way? is it because $O(n^3)$ dominates the whole expression thus the answer is still $ O(n^3)$
| The formal definition for equalities which contain $\;O(\cdot)\;$ and related notations, is that these notations are sets, and that such an equality holds if it holds for every function in each left hand side set, and for some function in each right hand side set.
(Anyone: Feel free to insert a reference; I don't have one at hand.)
Therefore $$
O(n^3)+O(n) = O(n^3)
$$ is formally an abbreviation for $$
\langle \forall f,g : f \in O(n^3) \land g \in O(n) : \langle \exists h : h \in O(n^3) : \langle \forall n :: f(n) + g(n) = h(n) \rangle \rangle \rangle
$$ (I won't go into the proof of this statement here; the first answer should help you there.)
Note how this general definition applies to the special case of $$
f(n) = O(g(n))
$$ where it gives us $$
\langle \exists h : h \in O(g(n)) : \langle \forall n :: f(n) = h(n) \rangle \rangle
$$ which can quickly be simplified to the usual $$
f \in O(g(n))
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Cardinality of Sets Proof I have a question as such:
Let $|A|$ denote the number of elements in A. For finite sets $A,B$,
prove that $|A\backslash B| = |A| - |A\cap B|$, where $\backslash$
stands for set difference.
Could someone show me how to prove it, please? I'm having difficulty working out a proof that feels rigorous enough - it seems too obvious!
| Notice that $A\setminus B=A\cap B^c$ and we have $A=A\cap(B\cup B^c)=(A\cap B)\cup(A\cap B^c)$ and the sets in the last union are disjoint so
$$|A|=|A\cap B|+|A\cap B^c|$$
and we can conclude.
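A tiny finite example in Python (my own addition) illustrating the identity $|A\setminus B| = |A| - |A\cap B|$:

```python
A = {1, 2, 3, 4, 5}
B = {4, 5, 6}
print(len(A - B), len(A) - len(A & B))   # both are 3
```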
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Question about compactly supported distribuions Let u be a distribution with compact support and let f be a Schwartz function: Is it true that the convolution of f with u is a Schwartz function?
| Yes. A distribution with compact support has finite order: it's just a finite collection of compactly supported Radon measures that get integrated against various derivatives of test functions. So, if you fix a Schwartz function $f$ and feed the translate $\tau_x f$ into the distribution, the value you get decays faster than any power of $|x|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solutions of triangles $AA_1$, $BB_1$, $CC_1$ are the medians of triangle $ABC$ whose centroid is $G$. If points $A, C_1, G, B_1$ are concylic then prove that $2a^2= b^2 + c^2$.
Thanks
My try:-
$ar(GBC)=1/3ar(ABC)$
$\frac{1}{2}(GB.GC.\sin(\pi-A))=\frac{1}{3}(\frac{1}{2}bc\sin A)$
$GB.GC=\frac{1}{3}bc$
Now I can't think any further. Here is my diagram:
| It is really very simple. Using the power of the point $B$ with respect to the circle through $A$, $C_1$, $G$, $B_1$:
$$BC_{1}.BA=BG.BB_{1}$$
$$\frac{c}{2}.c=\frac{2}{3}m_{b}.m_{b}$$
$$3c^2=4m^2_{b}$$
$$3c^2=2a^2+2c^2-b^2$$
$$2a^2=b^2+c^2$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Circular orientation of $n$ identical red balls and $n + 1$ identical black balls I encountered a question as follows:
In how many ways may $n$ identical red balls and $n + 1$
identical black balls be arranged in a circle (This number is called a Catalan number)?
While trying to analyze it
$1^{st}$ I considered a linear arrangement of $\underline {n\ \textit{identical red balls}}$ and $\underline {n\ \textit{identical black balls}}$. It's the same as writing $n$ 'R's & $n$ 'B's in a row. So from those $2n$ positions, the task is to choose $n$ positions to put the 'R's (or the 'B's) in. This can be done in $2n\choose n$ ways.
E.g : for $n = 2$ the arrangements are $(RRBB), (BBRR), (RBRB), (BRBR), (RBBR)\ \&\ (BRRB)$
Here the Catalan number [which would be $\frac{1}{n+1}\times{2n\choose n}$] comes into the picture if there $\underline {doesn't\ exist}$ any sequence where the number of 'R's (or 'B's) $\underline {in\ any\ prefix\ of\ the\ sequence}$ is greater than that of 'B's (or 'R's).
Please correct me if I'm wrong.
Now for the original problem we are given a $\underline{circular\ arrangement}$ & $\underline{1\ extra\ black\ ball}$.
So taking that extra ball as a fixed reference in the circular orientation, we again have the same question of finding $n$ places out of $2n$ places to put the red (or black) balls in. As far as I can tell, this again results in $2n \choose n$, with the only difference being the equivalent clockwise & counter-clockwise orientations. So I've come up with $\frac{1}{2}\times {2n\choose n}$.
I've not been able to understand why in the question they've mentioned the answer to be the Catalan Number i.e $\frac{1}{n+1}\times{2n\choose n}$.
Can anyone help me figure out what I'm missing ?
| The simple explanation: In a linear arrangement of $n$ red and $n+1$ black balls, there are ${2n+1 \choose n} = \frac{(2n+1)!}{n!(n+1)!}$ possibilities. Now consider putting this line into a circle: $2n+1$ line patterns will produce the same circle since the line can start at any of the $2n+1$ points of the circle, so there are $\frac{1}{2n+1}{2n+1 \choose n} = \frac{(2n)!}{n!(n+1)!} = \frac{1}{n+1}{2n \choose n}$ possible circles.
In fact the difficult part is showing that at least (and so exactly) $2n+1$ different lines can be produced from any circle: the equivalent statement for $2n$ lines would not always be true of circles with $n$ red and $n$ black balls, for example when they are perfectly interleaved.
You also need to take care to maintain direction around the circle and along the line: this is the difference between a necklace and a bracelet.
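For small $n$ the count can be verified by brute force. The following Python sketch (my own addition, not part of the answer) enumerates all colourings, identifies them up to rotation only (keeping the direction fixed, as the answer requires), and compares with $\frac{1}{n+1}\binom{2n}{n}$:

```python
from itertools import combinations
from math import comb

def circles(n):
    """Distinct circular arrangements (rotation only) of n red and n+1 black balls."""
    length = 2 * n + 1
    seen = set()
    for red in combinations(range(length), n):
        red = set(red)
        word = ''.join('R' if i in red else 'B' for i in range(length))
        canonical = min(word[i:] + word[:i] for i in range(length))
        seen.add(canonical)
    return len(seen)

for n in range(1, 7):
    print(n, circles(n), comb(2 * n, n) // (n + 1))   # the two counts agree
```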
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Graph Entropy - What is it? I am having trouble getting some intuition as to what graph entropy measures. The definition that I have is that given a graph $G$, $H(G) = \min_{X,Y}I(X ;Y)$, where $X$ is a uniformly random vertex in $G$ and $Y$ is an independent set containing $X$. Also, $I(X; Y)$ is the mutual information between $X$ and $Y$ defined by: $I(X; Y) = H(X) - H(X|Y)$, where $H$ is the regular entropy function.
First, could anyone provide some intuition as to what this is actually measuring? Also, if we are taking the minimum of both $X$ and $Y$, why does $X$ have to be a uniformly random vertex? If we are minimizing $I$, then there is some fixed vertex $X$ and independent set $Y$ such that $I$ is minimized. So why is there a notion of uniformly random vertex?
Thanks!
EDIT: I am using these notes as a reference for a reading group: http://homes.cs.washington.edu/~anuprao/pubs/CSE533Autumn2010/lecture4.pdf
| $X$ is the source with maximal entropy (uniform distribution), and $Y$ is the set of distinguishable symbols (distinguishability is given by the edges). Graph entropy is trying to quantify the encoding capacity of such a system for an arbitrary $Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Definition of the outer measure Let $X$ be a set. By definition, for every sequence of sets (disjoint or not), an outer measure $\theta:\mathcal{P}X\rightarrow [0,+\infty]$ is a monotic, countably subadditive (hence subadditive) function which vanishes at $0$.
We then have four possibilities:
* $A\cap B=\emptyset$ implies $\theta(A\cup B)=\theta A+\theta B$ (additivity)
* $A\cap B=\emptyset$ implies $\theta(A\cup B)<\theta A+\theta B$ (e.g. Bernstein set)
* $A\cap B\not=\emptyset$ implies $\theta(A\cup B)<\theta A+\theta B$ (quite intuitive)
My question is how to interpret the last possibility:
* $A\cap B\not=\emptyset$ implies $\theta(A\cup B)=\theta A+\theta B$
Is it as well a case of nonmeasurability?
If not, could anybody provide me with a simple example?
| It is not a case of non-measurability: take the Lebesgue outer measure and take $A = [0,1]$ and $B = [1,2]$.
$A \cup B = [0,2]$ and $A \cap B = \{1\} \neq \emptyset$, but clearly the outer measure of the union is the sum of the outer measures, and both $A$ and $B$ are measurable in the sense of Caratheodory (or in any sense, they are intervals! :D )
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cantor’s diagonal proof revisited In his diagonal argument (although I believe he originally presented another proof to the same end) Cantor allows himself to manipulate the number he is checking for (as opposed to checking for a fixed number such as $\pi$), and I wonder if that involves some meta-mathematical issues.
Let me similarly check whether a number I define is among the natural numbers. The number is $n+1$ and it is clear that $1$, $2$, $3,\ldots,\ n$ are not among these numbers. This “proves” that $n+1$ is not a natural number.
I have here, just like Cantor, a formula for a number, rather than a given number. What is the difference between our proofs?
It seems to me that the answer is that Cantor’s number (as opposed to mine) is being successively better bounded; the process of moving forward in his enumeration describes a converging series, such as we use to define the real numbers, using Cauchy limits. You don’t hear this added comment in the proof. Don’t you think it belongs to the proof?
| In the diagonal argument, a function $f$ from the set of sequences of real numbers to $\mathbb{R}$ is defined. We start from any sequence $S$ of real numbers. Then it is shown that $f(S)$ is not an element in $S$. The formula is not "changing during the process"; the number we are searching for, $f(S)$, is well-defined if $S$ is given. Now we have shown that for any sequence $S$ of real numbers, there is a real number which is not an element of $S$.
The $n+1$ in your proof is not a definition of a number you are searching.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/514991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Foundations of Forcing I am currently studying Forcing methods in order to understand some independence results and model's constructions.
Now I am interested in formalizing the main notions around forcing, such as consistency, completeness, transitive models, well-founded relations, absoluteness, the reflection principle, etc., from the logical point of view.
I have heard that Shoenfield has done good work on that; I think I want something related to his approach, or something improved (if it uses classical logic).
People said to me that Shoenfield's book Mathematical Logic is a good reference for what I am looking for.
Given that I have no training in basic logic (but I intend to start it next semester), can someone help me with a study guide with a few reference texts? Which topics should I see in order to have a solid understanding of Forcing in the logical approach?
Thank you
| Try A beginner's guide to forcing by Timothy Chow.
http://arxiv.org/abs/0712.1320
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Working out the difference in earnings I'm mathematically impaired/ignorant and trying to figure out the difference in earnings between my partner and I to work out a fair split of the bills.
So; I earn £2060 per month and partner earns £1650. As a percentage, how much more than her do I earn?
Therefore: if we had a mortgage payment of £850, by what percentage should we split the figure so that it's proportionate to the difference in our earnings?
Any help sincerely appreciated,
M.
| You earn more than her: (2060-1650)/1650 * 100% ≈ 24.85%.
You should pay for the mortgage: 850*2060/(2060+1650) = 850*2060/3710 ≈ 471.97.
Your gf should pay the rest
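As a quick check of the arithmetic (my own addition; the answer's 471.96 is the same figure, truncated rather than rounded), a short Python sketch with the figures from the question:

```python
you, partner, mortgage = 2060, 1650, 850

extra = (you - partner) / partner * 100          # how much more you earn, in percent
your_share = mortgage * you / (you + partner)    # proportional split of the mortgage
print(round(extra, 2), round(your_share, 2), round(mortgage - your_share, 2))
# -> 24.85  471.97  378.03
```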
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is a coset? Honestly. I just don't understand the word and keep running into highly technical explanations (wikipedia, I'm looking at you!). If somebody would be so awesome as to explain the concept assuming basic knowledge of group theory and high school algebra I would be delighted.
| If $H$ is a subgroup of $G$, then you can define a relation on $G$ by setting
$$
a\sim_H b\qquad\text{if and only if}\qquad ab^{-1}\in H
$$
It's a useful exercise in applying the group laws to prove that $\sim_H$ is an equivalence relation and that the equivalence class of $1\in G$ is
$$
[1]_{\sim_H}=H
$$
This equivalence relation has very pleasant properties; for instance, if $a\in G$, then there is a bijection $\varphi_a\colon H\to [a]_{\sim_H}$ given by
$$
\varphi_a(x)=xa
$$
(prove it). In particular, all equivalence classes are equipotent so, in the finite case, they all have the same number of elements. This bijection shows also that
$$
[a]_{\sim_H}=\{\,ha:h\in H\,\}=Ha
$$
the right coset (somebody calls this a left coset, check with your textbook) determined by $H$ and $a$.
This readily proves Lagrange's theorem: if we denote by $[G:H]$ the number of equivalence classes and $G$ is finite, we have
$$
|G|=|H|\,[G:H]
$$
because we can just count the number of elements in one equivalence class ($H$ or any other) and multiply by the number of classes, since they all have the same number of elements.
There is of course no preference for the right side; one can define
$$
a\mathrel{_H{\sim}} b\qquad\text{if and only if}\qquad a^{-1}b\in H
$$
and prove the same results as before, with the only difference that the map will be
$$
\psi_a\colon H\to[a]_{_H{\sim}},\quad \psi_a(x)=ax
$$
and the equality
$$
[a]_{_H{\sim}}=aH.
$$
In general the two equivalence relations are distinct. They are the same precisely when the subgroup $H$ is normal or, equivalently, one of them is a congruence, that is, for $\sim_H$, from $a\sim_H b$ and $c\sim_H d$ we can deduce $ac\sim_H bd$.
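To make this concrete, here is a small Python sketch (my own illustration, not part of the answer) that lists the right cosets $Ha$ of a two-element subgroup $H$ of the symmetric group $S_3$, with permutations written as tuples:

```python
from itertools import permutations

G = list(permutations(range(3)))        # the six elements of S_3

def compose(p, q):
    """(p*q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

H = [(0, 1, 2), (1, 0, 2)]              # subgroup generated by the swap of 0 and 1

cosets = {frozenset(compose(h, a) for h in H) for a in G}
for coset in cosets:
    print(sorted(coset))
print("[G:H] =", len(cosets), ", |G| =", len(G), ", |H| =", len(H))
```

Each coset has $|H|=2$ elements and there are $[G:H]=3$ of them, matching $|G|=|H|\,[G:H]=6$.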
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 7,
"answer_id": 5
} |
Find all primes of the form $n^n + 1$ less than $10^{19}$ Find all primes of the form $n^n + 1$ less than $10^{19}$
The first two primes are obvious: $n = 1, 2$ yields the primes $2, 5$. After that, it is clear that $n$ has to be even to yield an odd number.
So, $n = 2k \implies p = (2k)^{2k} + 1 \implies p-1 = (2k)^{k^2} = 2^{k^2}k^{k^2}$. All of these transformations don't seem to help. Is there any theorem I can use? Or is there something I'm missing?
| In $n^n+1$, the number is algebraically composite if $n$ is not a power of $2$. So you're left with deciding which powers of $2$ work.
In the case where $n$ is not $1$ or of the form $2^{2^m}$, one sees the power is a composite over two or more primes, and the number is thus algebraically composite.
Thus, you just have to consider $1$ and those that lead to Fermat numbers, i.e. $1, 2, 4$. All others are composite.
The algebraic magic is that there is a divisor of $a^b-1$ for each $m\mid b$. Since $(a^b-1)(a^b+1) = a^{2b}-1$, one is then searching for any $2n$ where exactly one number divides it, but not $n$. One can write $8$, etc., as $2^3$, apply the same rule, and replace $a$ by $2$ and $b$ by $3b$. This is enough to show that the only examples that work are when $n$ is of the form $2^{2^m}$, e.g. $m=0$ gives $n=2$, $m=1$ gives $n=4$, $m=2$ gives $n=16$, $m=3$ gives $n=256$, and $m=4$ gives $n=65536$.
Now, it is known that $16^{16}+1$ has a factor, but it took many years to find it: you use the algorithm $a=4$, $a \leftarrow a^2-2$, for a certain number of terms, until you pass $n$ in $2^{2^n}+1$, by which time $a$ ought to come to '$2$'. If this does not happen, it's composite. In fact, there are no known Fermat primes greater than $65537$, but this is not of the form $n^n+1$.
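Since the bound $10^{19}$ is small enough for a direct search, here is a Python sketch (my own addition, assuming SymPy's `isprime` is available) confirming that the only primes of the form $n^n+1$ below $10^{19}$ are $2$, $5$ and $257$, i.e. $n=1,2,4$:

```python
from sympy import isprime

bound = 10**19
n = 1
while n**n + 1 < bound:      # n^n + 1 exceeds 10^19 once n = 16
    if isprime(n**n + 1):
        print(n, n**n + 1)   # prints 1 2, 2 5, 4 257
    n += 1
```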
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
What's the probability of drawing every card at least once in 82 draws, with replacement? What is the probability that if I draw 82 cards at random with replacement from a standard deck, every card is drawn at least once?
I've been banging my head against a wall for hours now, any help please.
I tried a smaller scale problem, so if I draw 52 cards, then it would be 1/52*1/52*1/52....52 times, so (1/52)^52. I think...
| I found the answer: you use Stirling numbers of the second kind. The number of favourable outcomes is $52!\cdot S(82,52)$, so the probability ends up being $52!\cdot S(82,52)/52^{82}$. Refer to this page: http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind
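For a numeric value, the same count can be obtained by inclusion-exclusion, which avoids computing the Stirling number explicitly. A Python sketch (my own addition; exact fractions are just one way to avoid round-off):

```python
from fractions import Fraction
from math import comb

cards, draws = 52, 82
# P(all cards seen) = sum_j (-1)^j C(52, j) ((52 - j)/52)^draws, by inclusion-exclusion;
# the numerator of this sum equals 52! * S(82, 52).
p = sum((-1)**j * comb(cards, j) * Fraction(cards - j, cards)**draws
        for j in range(cards + 1))
print(float(p))   # a very small number: 82 draws are far too few to see all 52 cards
```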
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Sum of closed and compact set in a TVS I am trying to prove: $A$ compact, $B$ closed $\Rightarrow A+B = \{a+b | a\in A, b\in B\}$ closed (exercise in Rudin's Functional Analysis), where $A$ and $B$ are subsets of a topological vector space $X$. In case $X=\mathbb{R}$ this is easy, using sequences. However, since I was told that using sequences in topology is "dangerous" (don't know why though), I am trying to prove this without using sequences (or nets, which I am not familiar with). Is this possible?
My attempt was to show that if $x\notin A+B$, then $x \notin \overline{A+B}$. In some way, assuming $x\in\overline{A+B}$ should then contradict $A$ being compact. I'm not sure how to fill in the details here though. Any suggestions on this, or am I thinking in the wrong direction here?
| If $x\notin (A+B)$, then $A\cap(x-B)=\varnothing$. Since $(x-B)$ is closed, it follows from Theorem 1.10 in Rudin's book that there exists a neighborhood $V$ of $0$ such that $(A+V)\cap(x-B+V)=\varnothing$. Therefore $(A+B+V)\cap(x+V)=\varnothing$ and, in particular, $(A+B)\cap(x+V)=\varnothing.$ As $(x+V)$ is a neighborhood of $x$, this shows that $x\notin \overline{(A+B)}$. (Proof taken from Berge's book.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 2
} |
Problem in Algebra and Geometric sequence I need help on this one question which is in Algebra and on Geometric progression.
The question is as follows:
If $a, b, c, d$ are consecutive terms of a geometric sequence, prove that:
$(b-c)^2 + (c-a)^2 + (d-b)^2 = (d-a)^2$.
Thanks,
Sudeep
| Let $$\frac dc=\frac cb=\frac ba=k\implies b=ak,c=bk=ak^2,d=ck=ak^3$$
$$(b-c)^2+(c-a)^2+(d-b)^2=a^2\{(k-k^2)^2+(k^2-1)^2+(k^3-k)^2\}=a^2(k^6-2k^3+1)=\{a(1-k^3)\}^2=(a-d)^2$$
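A quick symbolic check of the identity (my own addition, assuming SymPy):

```python
from sympy import symbols, simplify

a, k = symbols('a k')
b, c, d = a*k, a*k**2, a*k**3
print(simplify((b - c)**2 + (c - a)**2 + (d - b)**2 - (d - a)**2))   # 0
```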
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find an angle of an isosceles triangle $\triangle ABC$ is an isosceles triangle such that $AB=AC$ and $\angle BAC=20^\circ$. A point $D$ is on $\overline{AC}$ so that $AD=BC$. How do I find $\angle{DBC}$?
I could not see how to use the condition $AD=BC$. How do I use it to find $\angle{DBC}$?
EDIT 1: With MvG's observation, we can prove the following fact.
If we take a point $O$ in $\triangle{ABC}$ such that $\triangle{OBC}$ is an equilateral triangle, then $O$ is the circumcenter of $\triangle{BCD}$.
First, we will show that if we take the point $E$ on the segment $AC$ such that $OE=OB=OC=BC$, then $D=E$.
Because $\triangle{ABC}$ is an isosceles triangle, the point $O$ lies on the bisector of $\angle{BAC}$, so $\angle{OAE}=20^\circ/2=10^\circ$.
And because $OE=OC$, $\angle{OCE}=\angle{OEC}=20^\circ$, $\angle{EOA}=20^\circ-10^\circ=10^\circ=\angle{EAO}$.
Therefore $\triangle{AOE}$ is an isosceles triangle with $EA=EO$, so $AE=EO=BC=AD$, and hence $D=E$.
Now we can see the point $O$ is a circumcenter of the $\triangle{DBC}$ because $OB=OC=OD.$
By using this fact, we can find $\angle{DBC}=70^\circ$,
| I saw the following solution many years ago:
On side $AD$ construct in exterior equilateral triangle $ADE$. Connect $BE$.
Then $AB=AC, AE=BC, \angle BAE=\angle ABC$ gives $\Delta BAE =\Delta ABC$ and hence $AB=BE$.
But then
$$AB=BE, BD=BD, DA=DE \Rightarrow ADB =EDB$$
Hence $\angle ADB=\angle EDB$. Since the two angles add to $300^\circ$, they are each $150^\circ$. Then $\angle ABD + \angle ADB+ \angle BAD=180^\circ$ gives $\angle ABD=10^\circ$, and therefore $\angle DBC=\angle ABC-\angle ABD=80^\circ-10^\circ=70^\circ$.
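As an independent numeric check (my own addition, not part of either proof), placing the triangle in coordinates and measuring the angle confirms the value:

```python
from math import radians, degrees, sin, cos, acos, hypot

A, C = (0.0, 0.0), (1.0, 0.0)                        # take AC = 1 along the x-axis
B = (cos(radians(20)), sin(radians(20)))             # AB = 1, angle BAC = 20 degrees
BC = hypot(B[0] - C[0], B[1] - C[1])
D = (BC, 0.0)                                        # D on AC with AD = BC

def angle_at(Q, P, R):
    """Angle PQR in degrees."""
    v = (P[0] - Q[0], P[1] - Q[1])
    w = (R[0] - Q[0], R[1] - Q[1])
    return degrees(acos((v[0]*w[0] + v[1]*w[1]) / (hypot(*v) * hypot(*w))))

print(angle_at(B, D, C))   # 70.000...
```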
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
} |
For the Compactness Theorem for Propositional Logic, show that the extension is not unique.
During the proof of the compactness theorem, from an arbitrary finitely satisfiable set $\Sigma$ of WFFs, we construct a finitely satisfiable set $\Delta\supseteq \Sigma$ such that for every WFF $\alpha$, either $\alpha\in\Delta$ or $\lnot\alpha \in\Delta$. Show that $\Delta$ need not be unique by describing an infinite, finitely satisfiable set $\Sigma$ of WFFs such that there is more than one possible extension $\Delta$.
Could someone please give me some guidance in answering this question? Much appreciated. Thanks.
| Hint. Suppose the language contains some sentence letter (propositional variable) that is not mentioned in $\Sigma$ at all ...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Cohomology of finite groups with finite coefficients I'm wondering if the group cohomology of a finite group $G$ can be made nontrivial with a nice choice of a finite $G$-module M. In other words, given a finite group $G$ and a number $n$, does there exist a finite $G$-module $M$ such that $H^n(G,M)$ is non-zero?
I would also be interested in the special case that $G$ is a finite $p$-group and n = 2. Can I always get $H^2(G,M) \ne 0$ for some finite $M$?
Thanks for your help.
| Yes, for each $n\ge 0$ there is a $G$-module $M$ (depending on $n$) such that $H^n(G,M)\neq 0$ (provided $G\neq 1$ is finite).
Such an $M$ can be constructed by induction:
1. First note that $H^i(G,F)=0$ for each free $\mathbb{Z}G$-module $F$ and all $i>0$ by Shapiro's lemma and Brown, VIII, 5.2.
2. Next, show $H^1(G,I_G)=\mathbb{Z}/|G|$ where $I_G \trianglelefteq \mathbb{Z}G$ is the augmentation ideal (and the $G$-action is given by multiplication in $\mathbb{Z}G$).
3. Let $n\ge 2$ and suppose $N$ is a $G$-module such that $H^{n-1}(G,N) \neq 0$. Choose a short exact sequence $0 \to M \to F \to N\to 0$ of $G$-modules with $F$ free. Then, by the long exact sequence in cohomology and 1., we obtain the exact sequence
$$0=H^{n-1}(G,F) \to H^{n-1}(G,N) \to H^n(G,M) \to H^n(G,F)=0,$$
i.e. $H^n(G,M)\cong H^{n-1}(G,N)\neq 0$.
Note: By starting with $I_G$ you can even arrange $H^n(G,M)=\mathbb{Z}/|G|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Hard Integral $\frac{1}{(1+x^2+y^2+z^2)^2}$ Prove that $\displaystyle\int_{-\infty}^{\infty}\displaystyle\int_{-\infty}^{\infty}\displaystyle\int_{-\infty}^{\infty} \frac{1}{(1+x^2+y^2+z^2)^2}\, dx \, dy \, dz = \pi^2$
I tried substitution, trigonometric substitution, and partial fraction decomposition, but I can't solve this problem, I only know that
$\frac{1}{(1+x^2+y^2+z^2)^2}$ is an even function :( then
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{1}{(1+x^2+y^2+z^2)^2}\, dx \, dy \, dz = $$
$$ 2 \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{0}^{\infty} \frac{1}{(1+x^2+y^2+z^2)^2}\, dx \, dy \, dz $$
| An alternative is to overkill it with some measure theory. Unfortunately I don't know the names of the theorems and objects used (not in my language and not in english). If someone does, please edit my answer as you see fit.
Firstly note that $$\displaystyle\int \limits_{-\infty}^{\infty}\displaystyle\int \limits_{-\infty}^{\infty}\displaystyle\int \limits_{-\infty}^{\infty} \dfrac{1}{(1+x^2+y^2+z^2)^2}\mathrm dx \,\mathrm dy \,\mathrm dz=\iiint \limits_{\Bbb R^3\setminus\{0_{\Bbb R^3}\}}\dfrac{1}{(1+x^2+y^2+z^2)^2}\mathrm dx \,\mathrm dy \,\mathrm dz,$$
then use change of variables and something which translates to generalized polar coordinates to get
$$\iiint \limits_{\Bbb R^3\setminus\{0_{\Bbb R^3}\}}\dfrac{1}{(1+x^2+y^2+z^2)^2}\mathrm dx \,\mathrm dy \,\mathrm dz=\iint \limits_{]0,+\infty[\times S_2} \dfrac{t^2}{\left(1+(tx)^2+(ty)^2+(tz)^2\right)^2}\mathrm dt \,\mathrm d\mu_{S_2}(x,y,z).$$
Now $$\displaystyle \begin{align} \iint \limits_{]0,+\infty[\times S_2} \dfrac{t^2\mathrm dt \,\mathrm d\mu_{S_2}(x,y,z)}{\left(1+(tx)^2+(ty)^2+(tz)^2\right)^2}&=\iint \limits_{]0,+\infty[\times S_2} \dfrac{t^2\mathrm dt \,\mathrm d\mu_{S_2}(x,y,z)}{\left(1+t^2(x^2+y^2+z^2)\right)^2}\\
&=\iint \limits_{]0,+\infty[\times S_2} \dfrac{t^2\mathrm dt \,\mathrm d\mu_{S_2}(x,y,z)}{\left(1+t^2\right)^2}\\
&=\underbrace{\mu _{S_2}(S_2)}_{\large 4\pi}\int \limits_{]0,+\infty[} \dfrac{t^2}{\left(1+t^2\right)^2}\mathrm dt\\
&=4\pi\cdot \dfrac \pi 4=\pi ^2\end{align}$$
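A quick numerical confirmation of the same reduction (my own addition, assuming SciPy is available): the radial integral times the surface area of the unit sphere should give $\pi^2$.

```python
from math import pi
from scipy.integrate import quad

radial, _ = quad(lambda t: t**2 / (1 + t**2)**2, 0, float('inf'))
print(4 * pi * radial, pi**2)   # both approximately 9.8696
```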
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/515948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
What is the most surprising result that you have personally discovered? This question is inspired by
my answer to this one:
Surprising identities / equations
In that question, people were asked about
the most surprising result
that they knew.
Almost all of them quoted someone
else's result.
I was one of the only ones
to reply about a result of mine
that greatly surprised me.
So, I have decided to make that
a question on its own:
What is your own mathematical result
that surprised you the most?
Here is mine.
Consider the diophantine equation
$$x(x+1)...(x+n-1) -y^n = k$$
where $x, y, n,$ and $k$ are integers,
$x \ge 1$,
$y \ge 1$,
and
$n \ge 3$.
I was led to consider this
by trying to generalize the
Erdos-Selfridge result
that the product of consecutive integers
could never be a power.
I phrased this as
"How close and how often
can the product of
$n$ consecutive integers
be to an $n$-th power?"
Looking at this equation,
it seemed reasonable to think that,
for fixed $k$ and $n$,
there were only a finite number of
$x$ and $y$ that satisfied it.
This was not too hard to prove.
What greatly surprised me was that
I was able to prove that
for any fixed $k$,
there were only a finite number of
$n$, $x$, and $y$ that satisfied it.
The proof went like this:
I first showed that
any solution must have
$y \le |k|$.
This was moderately straightforward,
and involved considering the three cases
$y < x$, $x \le y \le x+n-1$,
and $y \ge x+n$.
Note: The proof that
$y \le |k|$
has been added at the end.
The next step really surprised me.
I showed that
$n < e|k|$,
where $e$ is the good old base of natural logarithms.
The proof was amazingly (to me) simple.
Since $y \le |k|$
and
$2(n/e)^n < n!$,
$\begin{align}
2(n/e)^n
&< n!\\
&\le x(x+1)...(x+n-1)\\
&= y^n+k\\
&\le |k|^n+|k|\\
&\le |k|^n+|k|^n\\
&= 2|k|^n\\
\end{align}
$
so
$n < e |k|$.
I still remember staring at this
in disbelief,
over forty years later.
I was asked to show my proof
that $y \le |k|$.
For brevity,
I will write
$x(x+1)...(x+n-1)$
as $x!n$,
because this is a generalization of
factorial.
The basic inequality is
$$(x^2+(n-1)x)^{n/2} \le x!n \le (x+(n-1)/2)^n$$
I also use two lemmas:
(L1) If $0 < a < b$ and $n > 1$
then
$n(b-a)a^{n-1} < b^n-a^n < n(b-a)b^{n-1}$.
(L2) If $a^m \leq b^m+c$ where
$a \geq 0$,
$b >0$, $c \geq 0$,
and $m \geq 1$,
then
$a \leq b + c/(m\,b^{m-1})$.
The basic idea is simple:
either $x < y < x+n-1$ or $y$ is outside this range.
If $y$ is inside the range,
then $y$ divides both $x!n$ and $y^n$,
so $y$ divides their difference,
which is $k$.
If $y$ is outside the range,
then we can use
the basic inequality
and the lemmas
to derive very strong inequalities
on $x$ and $y$.
Here are all the cases.
If $k=0$, so
$x!n = y^n$,
then $x < y < x+n-1$, or
$x+1 < y+1 \leq x+n-1$,
so that $y+1 | x!n$ or
$y+1 | y^n$, which is impossible.
If $k > 0$,
$x!n > y^n$, so that,
$y < x+(n-1)/2$.
If $ y > x$, then,
as stated above, $y | k$.
If $y \leq x$, then
$(x^2 + (n-1)x)^{n/2} \le x!n
= y^n + k
\le x^n + k
$
or, by L2,
$x^2 + (n-1)x \le
x^2 + 2k/\left(n\,x^{n-2}\right)
$
so that
$ x^{n-1} \leq 2k/n(n-1). $
Therefore
$y \le x \le \left(\frac{2k}{n(n-1)}\right)^{1/(n-1)}$.
If $k < 0$,
$x!n < y^n$, so that,
$y^2 > x^2+(n-1)x$,
which implies that $y > x$.
If $ y < x+n-1$, then,
as stated above, $y | |k|$.
If $y \geq x+n-1$, then
$(x+n-1)^n \leq y^n
= x!n - k
= x!n + |k|
\leq (x+(n-1)/2)^n + |k|
$
or, by L2,
$x+n-1 \leq
x+(n-1)/2 +
\frac{|k|}{
n(x+(n-1)/2)^{n-1}
}$
or
$(n-1)/2
\leq \frac{|k|}
{ n(x+(n-1)/2)^{n-1}} $
so that
$\left(x+(n-1)/2\right)^{n-1} \leq \frac{2|k|}{n(n-1)}.
$
Since
$y^n \leq (x + (n-1)/2))^n + |k|
\leq \left(\frac{2|k|}{n(n-1)}\right)^{n/(n-1)} + |k|
\leq |k|^{n/(n-1)} + |k|,
$
$ y \leq |k|^{1/(n-1)} + 1/n.$
In all the cases,
$y \le |k|$.
When $y < x$ or $y \ge x+n-1$,
$y$ is significantly smaller.
| Well, "a long time ago" (1970s) it was not so clear that integrating restrictions of Eisenstein series on big groups against cuspforms on smaller reductive groups, or oppositely, etc., would do anything interesting... much less produce $L$-functions. The Rankin-Selberg example from 1939 was not necessarily clearly advocating thinking in such terms, and Langlands' 1967/76 observations that constant terms of Eisenstein series involved $L$-functions was also easy-enough to rationalize as a thing-in-itself. So when I noticed (out of idle curiosity, being aware of somewhat-vaguer results of H. Klingen from 1962 and a general pattern of qualitative results of G. Shimura in the early 1970s) that various Euler products arose in this way (e.g., by $Sp(m)\times Sp(n)\to Sp(m+n)$), and triple-product $L$-functions (from the related $SL_2\times SL_2\times SL_2\to Sp_3$), it was quite a surprise to me. There were no similar results at the time, so there was nothing to compare to.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 21,
"answer_id": 10
} |
Why are they called orbits? When we study actions in group theory, we consider sets of the form
$$\text{Orb}_G(x)=\{gx\mid g\in G\} $$
that are called orbits. Although, the only reason I find convincing for that name is that in some sense the action of group over a set can be viewed as a dynamical system and thus the name orbit has the usual physical "interpretation" and justification. Is this explanation correct or only a funny coincidence? In the second case, which is the origin of the term?
| You can think of the group action as allowing you to move from one point to another. In a "physical" sense, we are looking at where we can go in the set, i.e. at which elements we pass through along the way.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Hundreds digit of $7^{999999}$
What is the value of the hundreds digit of the number $7^{999999}$?
Equivalent to finding the value of $a$ for the congruence $$7^{999999}\equiv a\pmod{1000}$$
| Use Euler's theorem: $7^{\phi (1000)} ≡ 1 \mod 1000 $.
By Euler's product formula: $\phi(1000) = 1000\cdot(1-\frac{1}{2})\cdot(1-\frac{1}{5})=400$
So $7^{400}≡1 \mod 1000 $.
$999999=400\cdot 2500-1$. So it suffices to find $7^{399}\mod 1000$.
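To finish the computation (my own addition), Python's built-in modular exponentiation gives the residue directly and is consistent with the reduction above:

```python
print(pow(7, 999999, 1000))                 # 143
print(pow(7, 399, 1000))                    # also 143, as the reduction predicts
print((pow(7, 999999, 1000) // 100) % 10)   # hundreds digit: 1
```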
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Basic examples of monoids? What are some (simple/elementary) examples of noncommutative monoids with no additional structure? I'm having a hard time thinking of examples of "pure" monoids that aren't monoids simply because they are groups...
I've read this and this and some of this, but would like more examples that presuppose little to no algebra.
| A generic answer is the monoid of all functions from a set $E$ into itself under the composition of functions. This example is generic since every monoid is isomorphic to a submonoid of such a monoid. In particular, take any set of functions from $E$ to $E$ and close under composition: you will get a monoid. See this link. This type of example occurs frequently in automata theory.
You could also consider partial functions or relations on $E$, still under composition. The monoid of $n \times n$ matrices over a ring under the usual multiplication of matrices is also a quite natural example. If you have a monoid $M$, the set $\mathcal{P}(M)$ of all subsets of $M$ is also a monoid under multiplication defined by
$XY = \{ xy \mid x \in X, y \in Y \}$ (where $X, Y \in \mathcal{P}(M)$).
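A tiny concrete instance of the first, generic example (my own illustration): the four functions from a two-element set to itself, under composition, form a noncommutative monoid that is not a group.

```python
from itertools import product

E = (0, 1)
M = list(product(E, repeat=2))      # each function f is stored as the tuple (f(0), f(1))

def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return tuple(f[g[x]] for x in E)

identity = (0, 1)
f, g = (0, 0), (1, 0)               # the constant-0 map and the swap
print(compose(f, g), compose(g, f)) # (0, 0) vs (1, 1): composition is not commutative
# (0, 0) has no inverse, so this monoid is not a group
```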
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
How many n squares can fit into a square of side N Suppose we have n small squares of equal size, each with area w.
Suppose we have a fixed square S of area A, with w < A.
If square S's area A, length, and width are given,
and if the small squares' length, width and area w are given,
and if the n small squares have to be placed in S so that they are spaced equally,
then I want to know how many of the n small squares can be placed into square S.
A simple formula.
This type of problem came from an engineering question.
Spaced equally means:
Suppose I have small square A, B, C, D.
If I place A and B inside square S, the distance between them is d.
If I place C inside square S, I want the distance between B and C to be e such that e=d.
If I place D inside square S and it is underneath A, I want the distance between A and D to be f, such that e=d=f.
I will repeat the pattern until square S is filled.
Furthermore, what if I replaced the n small squares with n small circles or rectangles?
But for now let's just focus on n small squares.
Is there an area in mathematics that explores this problem?
| I'm going to assume that all the squares are aligned (that is, each side of each square is vertical or horizontal), that the small squares are not to overlap (except possibly at their boundaries), and that the centers of the small squares are meant to form a square lattice (this takes care of the "equal spacing" requirement).
The small squares have area $w$, so side length $\sqrt w$. The big square has area $A$, so side length $\sqrt A$. So the number of small squares we can put in a row is $[\sqrt A/\sqrt w]$, where $[x]$ means the greatest integer not exceeding $x$ (e.g., $[\pi]=[3.14159\dots]=3$). The number of rows we can get is also $[\sqrt A/\sqrt w]$, so the number of small squares is $$[\sqrt A/\sqrt w]^2$$
For example: if $A=10$ and $w=2$, then $\sqrt{10}/\sqrt2$ is between 2 and 3, so $[\sqrt{10}/\sqrt2]=2$, and we can fit in $2^2=4$ squares.
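The rule is easy to put into code; a short Python version (my own addition):

```python
from math import floor, sqrt

def count_squares(A, w):
    return floor(sqrt(A) / sqrt(w)) ** 2

print(count_squares(10, 2))   # 4, as in the example above
```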
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the Dedekind completion of ${}^{\ast}\Bbb R$ an Archimedean field? Here's Theorem 1.2 on page 6 of Martin Andreas Väth's Nonstandard Analysis (see here on Google Books).
The Dedekind completion $\overline{X}$ of a totally ordered field $X$
is a complete Archimedean field with $\Bbb{Q}_{\overline{X}}$ as the canonical copy of $\Bbb{Q}_{X}$.
$X$ has the Archimedean property. For each $x \in X$ there is some $n \in \Bbb{N}_{X}$
such that $n > x$.
Each totally ordered field X contains a “canonical copy” of the set $\Bbb{N}_{X}$, namely $\{1_X, 1_X +1_X, 1_X +1_X +1_X, \ldots\}$.
${}^{\ast}\Bbb R$ is a totally ordered field without Archimedean property. Isn't it the case that its Dedekind completion doesn't have Archimedean property?
| The Dedekind-completion is an order completion, and if the field is non-Arcihmedean then its Dedekind-completion is not a field at all.
To see this simply note that in the completion, there is a point $t$ which is the realization of the cut $R=\{x\mid\exists n\in\Bbb N. x<n\}$. And $t-1$ cannot exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proving a subset is closed
Let $X$ be a metric space and let $A \subset X$ be an arbitrary subset. We define
$$A_r=\{x \in X : B_r(x) \subseteq A\}.$$ Prove that $A_r$ is closed for every radius $r$.
Maybe this is easy but I am totally stuck. First I've tried to prove it directly, i.e., take a convergent sequence in $A_r$ and prove that its limit is in $A_r$. Well, I didn't get anywhere. Then I've tried to prove that $A_r$'s complement is open. $A_r$'s complement is the set
I add the proof:
Proving that $A_r$ is closed is equivalent to proving that its complement is open. So let $X \setminus A_r=\{y\in X\mid \exists u\notin A,\ y\in B_r(u)\}$. Take $\epsilon=r-d(y,u)$ and consider $B(y,\epsilon)$. Let $z \in B(y,\epsilon)$; then $d(z,u)\le d(z,y)+d(y,u)<r-d(y,u)+d(y,u)=r$. It follows that $z \in B_r(u)$, so $B_r(u)$ is open. Since $X \setminus A_r=\bigcup_{u\in X\setminus A}B_r(u)$ and a union of open sets is open, $X \setminus A_r$ is open, which implies $A_r$ is closed.
| The set $A_r$ is the complement of the set $\{y\in X\mid \exists u\notin A, y\in B_r(u) \}$. Can you show that this set is open?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The transform of a Hermitian matrix by a unitary matrix is Hermitian In the following document, p91 (or p4 of the PDF) , section 6.03, it is stated that "The transform of a hermitian matrix by a unitary matrix is hermitian." Apparently the proof is obvious, but not to me... could someone elaborate?
| A matrix is Hermitian if $ A^\ast = A$, where $A^\ast$ is the conjugate transpose of $A$.
Unitary matrices have the property that $U\cdot U^\ast= Id$, where $Id$ is the identity. So in particular we have $U^\ast =U^{-1}$.
Now we look at the transformed Hermitian matrix:
$$ U^{-1} A U=U^\ast A U$$
if conjugate and transpose this we have
$$ (U^\ast A U)^\ast = U^\ast A^\ast (U^\ast)^\ast= U^\ast A^\ast U=U^\ast A U$$
which says that the transformed matrix is still Hermitian.
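A quick numerical illustration (my own addition, assuming NumPy): build a random Hermitian matrix and a random unitary matrix, and check that the transform is still Hermitian.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = X + X.conj().T                                   # a Hermitian matrix
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
B = Q.conj().T @ A @ Q                               # the transform U* A U with U = Q unitary
print(np.allclose(B, B.conj().T))                    # True
```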
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What square does not contain the middle? Consider the square $S = [-1,1]\times[-1,1]$. Suppose we put a smaller square inside it, which is rotated with an angle $\alpha$ relative to the large square. What is the largest such square that does not contain the origin in its interior?
When $\alpha=0$, the answer is obvious: the maximal side-length is $1$. Every square with side-length larger than 1 must contain the origin in its interior. But when $\alpha>0$, it seems that the maximal side-length is smaller, and I don't know how to calculate it.
| OK, this is my current attempt at an answer:
Consider a square that is pushed towards the top-right corner of $S$. The coordinates of this square are (where $c:=\cos(\alpha)$ and $s:=\sin(\alpha)$):
* T (top): $(1-dc,1)$
* R (right): $(1,1-ds)$
* B (bottom): $(1-ds,1-dc-ds)$
* L (left): $(1-dc-ds,1-dc)$
Due to symmetry, it is sufficient to study the range: $0 < \alpha < \frac{\pi}{4} $ , where $0<s<c<1$.
Now, a method for deciding whether a point is in a polygon can be used to decide if $(0,0)$ is in the given square, as a function of $d$.
Using the ray-crossing method, we have to consider a ray from the origin, and check how many times this ray crosses the sides of the above square. If the number is odd, then the origin is inside the square.
Consider, as a ray, the negative y axis.
Assume that $d<1$. In this case, T R and L are all above the origin, therefore the ray cannot cross the lines LT and TR. Additionally, T R and B are all to the right of the origin, therefore the ray also cannot cross the lines TR and RB.
It remains to check whether the negative y axis crosses the side LB, i.e., the y coordinate of the origin is above the line LB. The equation of the side LB is:
$$ cy(x) = (c+s-d-dsc)-sx $$
$$ where: x \in [1-ds-dc,1-ds] $$
If we substitute $x=0$, we get:
$$ cy(0) = c+s-d-dsc $$
If this number is negative, then the origin is above the side LB, and the origin is inside the square. The condition for this is:
$$ c+s-dc^2-dsc-ds^2 < 0 $$
$$ d > \frac{c+s}{1+sc} $$
(An alternative way to reach the same solution is described in Robert Israel's answer.)
You can plot that function here, using this line:
a0=2&a1=(cos(x)+sin(x))/(1+sin(x)*cos(x))&a2=1/(sin(x)+cos(x))&a3=(cos(x)-sin(x))/(cos(x)^2)&a4=1&a5=4&a6=8&a7=1&a8=1&a9=1&b0=500&b1=500&b2=0&b3=0.79&b4=0&b5=2&b6=10&b7=10&b8=5&b9=5&c0=3&c1=0&c2=1&c3=1&c4=1&c5=1&c6=1&c7=0&c8=0&c9=0&d0=1&d1=20&d2=20&d3=0&d4=&d5=&d6=&d7=&d8=&d9=&e0=&e1=&e2=&e3=&e4=14&e5=14&e6=13&e7=12&e8=0&e9=0&f0=0&f1=1&f2=1&f3=0&f4=0&f5=&f6=&f7=&f8=&f9=&g0=&g1=1&g2=1&g3=0&g4=0&g5=0&g6=Y&g7=ffffff&g8=a0b0c0&g9=6080a0&h0=1&z
The minimum is at: $\alpha=\pi/4$, where the lower bound is approximately: $d>0.943$.
To conclude: for $\alpha>0$, a square with $d\geq 1$ always contains the origin.
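A small numeric check of the bound (my own addition, not part of the derivation above): for a given $\alpha$, bisect on $d$ for the corner-pushed square described above and compare the threshold with $(c+s)/(1+sc)$.

```python
from math import sin, cos, pi

def contains_origin(d, alpha):
    c, s = cos(alpha), sin(alpha)
    # the square of side d pushed into the top-right corner, vertices in clockwise order
    verts = [(1 - d*c, 1), (1, 1 - d*s), (1 - d*s, 1 - d*c - d*s), (1 - d*c - d*s, 1 - d*c)]
    for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]):
        # for a clockwise convex polygon, interior points give a negative cross product
        if (x2 - x1) * (0 - y1) - (y2 - y1) * (0 - x1) >= 0:
            return False
    return True

alpha = pi / 4
lo, hi = 0.0, 2.0                       # bisect for the smallest d whose square contains the origin
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if contains_origin(mid, alpha) else (mid, hi)
c, s = cos(alpha), sin(alpha)
print(hi, (c + s) / (1 + s * c))        # both approximately 0.9428
```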
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
An arithmetic sequence of numbers without certain prime factors I just attended a lecture at my school about prime numbers and the idea of being coprime and what kind of applications that has, and this question popped up in my head and I thought about it for a while and asked a few friends and wasn't quite sure what to think. One of my friends told me about this website so I thought I would give it a shot. So here's the question:
What is the largest arithmetic sequence (most amount of terms) such that none of its terms are divisible by the first $n$ prime numbers, and which all of its terms are less than $n\#$ (where $\#$ denotes the primorial function)?
For example, for $n = 2$ the largest such sequence would be $1, 5$ and for $n = 3$ the largest such sequence would be $1, 7, 13, 19$ or $11, 17, 23, 29$.
| In addition to what Patrick said, the only available terms are $1$, primes greater than $p_n$, and products of two primes $p, q > p_n$. So for $4\#$ you would have $1, 11, 11 \times 11, 11 \times 13, 11 \times 17, 11 \times 19, 13, 13 \times 13, 17, 19, 23, 29$ (along with the rest of the primes less than $210$).
In its current state, it looks like these arithmetic progressions are going to be awfully low in number, and depend on arithmetic progressions found in prime numbers, which is pretty low as well. More about primes in arithmetic progression: http://en.wikipedia.org/wiki/Primes_in_arithmetic_progression http://mathworld.wolfram.com/PrimeArithmeticProgression.html
Not a full answer by any means, but some guiding tips.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to solve these series? Can anyone help me understand how to solve these two series? More than the solution I'm interested in understanding which process I should follow.
* Series 1:
$$
\sum_{i = 3}^{\infty} i * a^{i-1}, 0 < a < 1.
$$
* Series 2:
$$
\sum_{i = 3}^{\infty} i\sum_{k = 2}^{i-1} a^{i-k} * b^{k-2} , 0 < a < 1, 0 < b < 1.
$$
These two series come as part of a long mathematical proof which I omitted for brevity, if you think it is relevant I will post it.
| Hint: For the first one if $\sum_{i = 3}^{\infty} x^{i}=f(x)$ then $\sum_{i = 3}^{\infty} i \times x^{i-1}=f'(x)$.
For the second one consider that:
$$
\sum_{k = 2}^{i-1} a^{i-k} b^{k-2}=a^{i-2}\frac{1-\frac{b^{i-2}}{a^{i-2}}}{1-\frac{b}{a}}= \frac{a^{i-2}-{b^{i-2}}}{1-\frac{b}{a}}
$$
Then you can decompose the series into two and use the previous step.
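For instance, following the first hint with $f(x)=\sum_{i\ge 3}x^i = x^3/(1-x)$, the first series equals $\frac{d}{da}\big(a^3/(1-a)\big)$. A quick numeric check in Python (my own addition):

```python
a = 0.3
partial = sum(i * a**(i - 1) for i in range(3, 500))
closed = (3 * a**2 * (1 - a) + a**3) / (1 - a)**2    # d/da [a^3 / (1 - a)]
print(partial, closed)                               # both approximately 0.44082
```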
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/516934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Vector equation for the tangent line of the intersection of $x^2 + y^2 = 25$ and $y^2 + z^2 = 20$ What is the vector equation for the tangent line of the intersection of $x^2 + y^2 = 25$ and $y^2 + z^2 = 20$ at the point $(3,4,2)$?
I think I should find a vector
$$
\gamma(t) = (x(t),y(t),z(t))
$$
that represents the intersection. Because then, I can find easily the tangent line.
But I do not know a general technique to find such a $\gamma$.
Any suggestion please?
| Hint: Maybe use $y=t,\ x=(25-t^2)^{1/2},\ z=(20-t^2)^{1/2}$ and restrict $t$ to make the radicals defined. I think the tangent line would then be possible except at the endpoints, for specific values of $t$ chosen.
ADDED: Actually $x$ and/or $z$ could be chosen as the negatives of the above radicals, and that would also be part of the intersection. So one has to choose specific signs for these at each given point. (This means for a fixed $t_0$ to choose either positive or negative radicals for the $x,z$ coordinates to get both the point on the intersection and the value of the derivative vector at that point.)
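Following this hint, the tangent direction can also be computed symbolically; a short SymPy sketch (my own addition, assuming SymPy) for the branch through $(3,4,2)$:

```python
from sympy import symbols, sqrt, diff

t = symbols('t')
x, y, z = sqrt(25 - t**2), t, sqrt(20 - t**2)

point = (x.subs(t, 4), y.subs(t, 4), z.subs(t, 4))                      # (3, 4, 2)
direction = (diff(x, t).subs(t, 4), diff(y, t), diff(z, t).subs(t, 4))  # (-4/3, 1, -2)
print(point, direction)
# tangent line: (3, 4, 2) + s*(-4/3, 1, -2), i.e. direction proportional to (-4, 3, -6)
```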
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Predicates and Quantifiers? suppose that the domain of variable x is the set of people, and f(x) = "x is friendly" , t(x)= "x is tall" and a(x) = "x is angry". Write the statement using these predicates and any needed quantifiers.
1) some people are not angry
2) all tall people are friendly
3) No friendly people are angry
My solutions:
1) $∃x\sim A(x)$
2) $∀xF(x)$
3) $\sim ∀x A(x)$
I'd like to know if my answers are right or wrong.
| As others have said, you second and third answers are wrong -- but more worryingly, they are quite fundamentally wrong, not mere slips. So this suggests that you ought to be looking at some good text book that tells you about translation into predicate calculus notation. Lots of intro logic books do this (P-t-r Sm-th's Introduction to Formal Logic is ok, I'm told!). For something freely available online which is very lucid, try Paul Teller's Modern Formal Logic Primer. Look at the first four (short!) chapters of Vol II.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
If a sequence of summable sequences converges to a sequence, then that sequence is summable. Let $(a_i)^n$ be a sequence of complex sequences each of which are summable (they converge). Then if they have a limit, the limit sequence $(b_i)$ is also summable. All under the sup norm for sequences.
Let $(a_i)^n$ sum to $c_n$. I.e. for all $\epsilon \gt 0$, there's $N$ such that $m \gt N \implies |\sum_{i=1}^m a_i^n - c_n| \lt \epsilon$. I want to show that there's $b$ such that $|\sum_{i=1}^m b_i - b| \lt \epsilon$ similarly. Let $b = \lim c_n$. Where to?
| (edited) This doesn't work, the example is pointwise convergent: Consider the alternating sequence $x=(-1,1,-1,1,\ldots)$ and let $x_n$ take the first $n$ terms of $x$ and be zero afterwards. $x_n$ converges pointwise to $x$, and is summable, but the (pointwise) limit $x$ is not summable.
Here is a uniform convergence counterexample: Take the sequence $x(i) = 1/i$, which is not summable. Now take $x_n$ to match $x$ for the first $n$ terms and be zero after. $x_n$ converges to $x$ uniformly, but the limit is not summable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
In graph theory, what is the difference between a "trail" and a "path"? I'm reading Combinatorics and Graph Theory, 2nd Ed., and am beginning to think the terms used in the book might be outdated. Check out the following passage:
If the vertices in a walk are distinct, then the walk is called a path. If the edges in a walk are distinct, then the walk is called a trail. In this way, every path is a trail, but not every trail is a path. Got it?
On the other hand, Wikipedia's glossary of graph theory terms defines trails and paths in the following manner:
A trail is a walk in which all the edges are distinct. A closed trail has been called a tour or circuit, but these are not universal, and the latter is often reserved for a regular subgraph of degree two.
Traditionally, a path referred to what is now usually known as an open walk. Nowadays, when stated without any qualification, a path is usually understood to be simple, meaning that no vertices (and thus no edges) are repeated.
Am I to understand that Combinatorics and Graph Theory, 2nd Ed. is using a now outdated definition of path, referring to what is now referred to as an open walk? What are the canonical definitions for the terms "walk", "path", and "trail"?
| You seem to have misunderstood something, probably the definitions in the book: they’re actually the same as the definitions that Wikipedia describes as the current ones.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 0
} |
Big O estimate of simple while loop Give a big-O estimate for the number of operations, where an operation is an addition or a multiplication, used in this segment of an algorithm (ignoring comparisons used to test the conditions in the while loop).
i := 1
t := 0
while i ≤ n
t := t + i
i := 2i
My attempt:
n = 1 i=2
n = 2 i=4
n = 3 i=8
n = 4 i=16
relationship of i to iteration is i = 2^n
How many iterations(n’) until 2^(n’) > n (basically solving for n')
n’ > log2(n) thus the big O estimate is:
O(log_2(n)) (read as log base 2 of n)
However, the book says it's O(log(n)) - why isn't it base 2?
| $O(\log_2(n))$ and $O(\ln{n})$ are the same thing, since $\log_2$ and $\ln$ are related by the formula
$$\log_2{n} = \frac{\ln{n}}{\ln{2}} \approx 1.44 \ln{n}$$
The multiplicative constant is irrelevant for the Big O notation.
More precisely, we have the relations
$$1.44 \ln{n} \le \log_2{n} \le 1.45 \ln{n}$$
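To see the count concretely, here is a small Python sketch (my own addition) that runs the loop, counts one addition and one multiplication per pass, and compares the total with $2(\lfloor\log_2 n\rfloor+1)$:

```python
from math import log2

def operations(n):
    i, t, ops = 1, 0, 0
    while i <= n:
        t = t + i      # one addition
        i = 2 * i      # one multiplication
        ops += 2
    return ops

for n in (10, 100, 10**6):
    print(n, operations(n), 2 * (int(log2(n)) + 1))   # the two columns agree
```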
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
approximating a maximum function by a differentiable function Is it possible to approximate the $max\{x,y\}$ by a differentiable function?
$f(x,y)=max \{x,y\} ;\ x,y>0$
| Yes it is. One possibility is the following: Note that $\def\abs#1{\left|#1\right|}$
$$ \max\{x,y\} = \frac 12 \bigl( x+ y + \abs{x-y}\bigr), $$
take a differentiable approximation of $\abs\cdot$, for example $\def\abe{\mathop{\rm abs}\nolimits_\epsilon}$$\abe \colon \mathbb R \to \mathbb R$ for $\epsilon > 0$ given by
$$ \abe(x) := \sqrt{x^2 + \epsilon}, \quad x \in \mathbb R $$
and define $\max_\epsilon \colon \mathbb R^2 \to \mathbb R$ by
$$ \max\nolimits_\epsilon(x,y) := \frac 12 \bigl( x+y+\abe(x-y)\bigr). $$
Another possibility is to take a smooth mollifier $\phi_\epsilon$ and let $\max'_\epsilon :=\mathord\max * \phi_\epsilon$.
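A quick numerical look at the first construction (my own addition): as $\epsilon \to 0$, $\max_\epsilon$ approaches $\max$.

```python
from math import sqrt

def smooth_max(x, y, eps):
    return 0.5 * (x + y + sqrt((x - y)**2 + eps))

x, y = 1.3, 2.0
for eps in (1e-1, 1e-2, 1e-4):
    print(eps, smooth_max(x, y, eps), max(x, y))   # converges to 2.0 as eps shrinks
```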
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
} |
Find an efficient algorithm to calculate $\sin(x) $ Suggest an efficient algorithm to determine the value of the
function $ \sin(x) $ for $ x \in [-4\pi, 4\pi] $.
You can use only Taylor series and $ +, -, *, /$.
I know, that $$\sin(x)=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n + 1)!}$$
but I can't find an efficient algorithm.
Thank for your help.
| Hint: The coefficients of the series are related to one another by the simple relation:
$$
t_{n+1}=\frac{x^{2(n+1)+1}}{(2(n+1)+1)!}=\frac{x^2}{(2n+2)(2n+3)}\frac{x^{2n+1}}{(2n+1)!}=
\frac{x^2}{(2n+2)(2n+3)}t_n
$$
(Here $t_n=\frac{x^{2n+1}}{(2n+1)!}$ is the unsigned term of the series, so $\sin x=\sum_{n=0}^\infty (-1)^n t_n$.)
This means that you don't need to work out each coefficient separately: once you've worked out $t_n$, you've done most of the work you need to do to work out $t_{n+1}$.
The other thing you'll need to do is to work out how many terms of the series you'll need to get a good enough approximation on $[-4\pi,4\pi]$.
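A Python sketch of the resulting algorithm (the stopping tolerance and the names are my own choices):

    import math

    def sin_taylor(x, tol=1e-12):
        # accumulate the series: start with the signed term t = x and update it
        # by the factor -x^2 / ((2n+2)(2n+3)); stop when the terms are tiny
        term, total, n = x, 0.0, 0
        while abs(term) > tol:
            total += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
            n += 1
        return total

    print(sin_taylor(2.0), math.sin(2.0))   # the two values agree to high precision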
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are complete intersection prime ideals of regular rings regular ideals? Let $(R, \mathfrak{m})$ be a regular local ring and let $\mathfrak{p}$ be a prime ideal of $R$ which is a complete intersection, i.e. the minimal number of generators of $\mathfrak{p}$ equals its height $h$. Then by Macaulay's theorem there is a system of parameters (or equivalently, a regular sequence) $\{a_{1},\dots, a_{h}\}$ which generates $\mathfrak{p}$.
Is it then also true that $\mathfrak{p}$ can be generated by elements $\{b_{1}, \dots, b_{h}\}$ which can be extended to a regular system of parameters for $R$?
Phrased differently, I am asking whether every complete intersection prime ideal in $R$ is regular (in the sense that $R/ \mathfrak{p}$ is regular).
I am asking this question being interested in the situation where $R = \mathbb{C}\{x_{1},\dots, x_{n}\}$ is the ring of convergent power series.
| Take $R=\mathbb{C}[x,y]_{(x,y)}$, and take $\mathfrak{p}=(x^2-y^3)$. $R/\mathfrak{p}$ is not a regular local ring, since it isn't integrally closed in its field of fractions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Congruences doubt!
What is the remainder when $2^{100}$ is divided by $11$?
$$2^5=32\equiv10\equiv-1\pmod{11}\\(2^5)^{20}=2^{100}\equiv-1^{20}\;\text{or}\; (-1)^{20}$$??
| We have
\begin{align*}
2^{10} &= 2^5 \cdot 2^5 &\equiv (-1) \cdot (-1) &= (-1)^2 \pmod{11}\\
&\vdots\\
2^{100} &=2^5 \cdots 2^5 &\equiv (-1) \cdots (-1) &= (-1)^{20} \pmod{11}
\end{align*}
so $2^{100}\equiv(-1)^{20}=1\pmod{11}$, and the remainder is $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
$\lim\inf a_n = 0$ as $n$ goes to infinity Let $(a_n)$ be a sequence of positive numbers such that for every $m$ in the natural numbers there is $n$ in the natural numbers such that $a_n = \frac{1}{m}$.
Prove $\lim\inf a_n = 0$ as $n$ goes to infinity.
I would like some suggestions on how to approach this problem, because as it stands all I am seeing is that there is in fact no explicit $n$ given for the expression $a_n$; I double-checked with other peers and there is no miscopy, which leaves me kind of distraught.
I need to understand how to approach this kind of problem in general; any advice would be greatly appreciated!
| Hint: Prove that there is a subsequence converging to $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/517860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the remainder when $1^6+2^6+\dots+100^6$ is divided by $7$?
What is the remainder when $1^6+2^6+\dots+100^6$ is divided by $7$?
$1^6\equiv1\pmod7\\2^6\equiv64\equiv1\pmod7\\3^6\equiv729\equiv1\pmod7$
Apparently all the remainders are $1$. I thought of using Fermat's Little Theorem; however, $(7,7k) = 7$, so it cannot be applied to the multiples of $7$, I think. Help, please.
| The number of multiples of $7$ from $1$ to $100$ is $\left\lfloor\frac{100}{7}\right\rfloor = 14$ so...
By Fermat's Little Theorem :
$7\mid 1^6+2^6+…+100^6 - 86 \implies 1^6+2^6+…+100^6 \equiv 86 \equiv 2 \pmod7 $
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Method of characteristics. Small question about initial conditions. Okay, so we're given a PDE
$$x \frac {\partial u} {\partial x} + (x+y) \frac{\partial u} {\partial y} = 1$$
with initial condition: $u(x=1,y)=y$
So $a=x, b=x+y, c=1$
$\Rightarrow$ characteristic equations: $$\frac{dx}{dt}=x, \frac{dy}{dt}=x+y, \frac{du}{dt}=1$$
This next part is my trouble:
Initial Conditions:
$$x_0(0,s)=1,$$ $$y_0(0,s)=s,$$ $$u_0(0,s)=y=s.$$
So I can see that the $u(0,s)=s$ is coming from the original IC, but where are $x_0,$ and $y_0$ coming from? Many thanks in advance!
| $$
{{\rm d}y \over {\rm d}x} = 1 + {y \over x}
\tag{1}
$$
With the scaling $\tilde{x} = \mu x$ and $\tilde{y} = \nu\, y$, Eq. $\left(1\right)$
does not change its form whenever $\mu = \nu$ which is equivalent to
$\tilde{y}/\tilde{x} = y/x$. It means Eq. $\left(1\right)$ is simplified with the choice $y/x \equiv \phi\left(x\right)$:
$$
x\phi'\left(x\right) = 1
\quad\Longrightarrow\quad
\phi\left(x\right) = \ln\left(x\right) + \overbrace{\alpha}^{\mbox{constant}}
\quad\Longrightarrow\quad
y = x\ln\left(x\right) + \alpha\, x
\tag{2}
$$
In addition, ${\rm d}{\rm u}\left(x,y\left(x\right)\right)/{\rm d}x = 1/x$ leads to
${\rm u}\left(x,y\left(x\right)\right)
=
\ln\left(x\right)\ +\ \overbrace{\beta}^{\mbox{constant}}$. It is reduced, with Eq. $\left(2\right)$, to
$$
{\rm u}\left(x,y\right) = \left({y \over x} - \alpha\right) + \beta
$$
$$
{\rm u}\left(1, y\right) = y\,, \quad\Longrightarrow\quad -\alpha + \beta = 0.
\quad\Longrightarrow\quad
\color{#ff0000}{\large{\rm u}\left(x, y\right) \color{#000000}{\ =\ }{y \over x}}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
For $f:\mathbb R^{<0}\to\mathbb R$, $f(x)=2x^2-3$, find the values of a for which $f(a)=f^{-1}(a)$ Okay, i've got the answer for this with some luck I guess, however i'm still left wondering specifically what this part of the question means:
"find the values of a for which $f(a)=f^{-1}(a)$"
My understanding of this is, that the question is asking me to find a value of a where the output and input given by the function $f$ are equal?
Could someone do a better job of explaining this to me please, thank you.
| The values for which $f(x)=f^{-1}(x)$ must lie on the line $y=x$, since by the sheer definition of inverse function this is an axis of symmetry. This means that if a function and its inverse intersect, the points of intersection must lie on that line. Hence, you have to solve for which values of $x$ we have $f(x)=x$. So $2x^2-3=x$ and this leads to $x=-1$ or $x=1\frac{1}{2}$. Since your domain is the negative reals, you are left with $x=-1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Locus of a point where two normals meet? Another exam question,
"Find the locus of a the point such that two of the normals drawn through it to the parabola $y^2=4ax$ are perpendicular to each other."
Does the locus mean the point of intersection of the two normals? I attempted to try to this by using the implicit derivative of the parabola and the locus as (x1,y1). Since its given as they meet but I can't get points of intersection.
Can someone help me out please?
| Find the locus of the point of intersection of two normals to a parabola which are at right angles to one another.
Solution:
The equation of the normal to the parabola $y^2 = 4ax$ is
$$y = -tx + 2at + at^3 \qquad (t \text{ is the parameter}).$$
It passes through the point $(h, k)$ if
$$k = -th + 2at + at^3 \iff at^3 + t(2a - h) - k = 0. \tag{1}$$
Let the roots of this cubic be $m_1, m_2, m_3$, and let the perpendicular normals correspond to the values $m_1$ and $m_2$, so that $m_1 m_2 = -1$.
From equation (1), $m_1 m_2 m_3 = k/a$. Since $m_1 m_2 = -1$, we get $m_3 = -k/a$.
Since $m_3$ is a root of (1), we have $a\left(-\frac{k}{a}\right)^3 - \frac{k}{a}\,(2a - h) - k = 0$,
$$\Rightarrow \quad k^2 = a(h - 3a).$$
Hence the locus of $(h, k)$ is $y^2 = a(x - 3a)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Quick method to find $|H \cap \overline{B} \cap \overline{K}|$ starting from $|H \cap B \cap K|$? Suppose sets $S, H, K, B$. Let
$$\begin{align}
&|S| = 100 \\
&|H| = 57 \\
&|K| = 77 \\
&|B| = 66 \\
&|H\cap B| = 30 \\
&|H \cap K| = 40 \\
&|B \cap K| = 50
\end{align}$$.
The question asks to find
*
*$|H \cap \overline{B} \cap \overline{K}|$
*$|K \cap \overline{B} \cap \overline{H}|$
*$|B \cap \overline{H} \cap \overline{K}|$
By inclusion-exclusion,
$$\begin{align}
&|H \cup B \cup K| = |H| + |B| + |K| - (|H \cap B| + |H \cap K| + |B \cap K|) + |H \cap B \cap K| \\
&|H \cap B \cap K| =|H \cup B \cup K| - (|H| + |B| + |K|) + (|H \cap B| + |H \cap K| + |B \cap K|) \\
\end{align}$$
Is there a short way to find, for example, $|H \cap \overline{B} \cap \overline{K}|$, starting from $|H \cap B \cap K|$?
I did the following work to find $|H \cap \overline{B} \cap \overline{K}|$, but
I am looking for a quicker way to do this.
$$\begin{align}
&H = H \cap ((\overline{B} \cap \overline{K}) \cup (B \cup K)) \\
&H = (H \cap (\overline{B} \cap \overline{K})) \cup (H \cap (B \cup K)) \\
& (H \cap (\overline{B} \cap \overline{K})) = H - (H \cap (B \cup K)) \\
& H \cap \overline{B} \cap \overline{K} = H - \bigl((H \cap B) \cup (H \cap K)\bigr) \\
& |H \cap \overline{B} \cap \overline{K}| = |H| - (|(H \cap B)| + |(H \cap K)|) + |H \cap K \cap B| \\
& = 57 - (30 + 40) + 20 = 7 \\
\end{align}$$
| I think that the quickest way is to draw a Venn diagram, calculate $|H\cap K\cap B|=20$, and fill in the cardinalities of the eight regions into which the diagram divides $S$. I very quickly get this:
I can now read off the answers; for instance, $|H\cap\overline{B}\cap\overline{K}|=7$. (Likewise $|K\cap\overline{B}\cap\overline{H}|=7$, while $|B\cap\overline{H}\cap\overline{K}|=6$.)
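The same bookkeeping can be done mechanically; a small Python sketch (assuming, as the computation above does, that $H\cup K\cup B$ fills out $S$):

    S, H, K, B = 100, 57, 77, 66
    HB, HK, BK = 30, 40, 50

    # |H ∩ K ∩ B| from inclusion-exclusion, assuming |H ∪ K ∪ B| = |S|
    HKB = S - (H + K + B) + (HB + HK + BK)

    only_H = H - HB - HK + HKB   # |H ∩ ~B ∩ ~K|
    only_K = K - HK - BK + HKB   # |K ∩ ~B ∩ ~H|
    only_B = B - HB - BK + HKB   # |B ∩ ~H ∩ ~K|
    print(HKB, only_H, only_K, only_B)   # 20 7 7 6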
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the modulus of a complex number $a^2+b^2$? Why is the modulus not $\sqrt{a^2-b^2}$? Carrying out standard multiplication this would be the result-why is this not the case? I know viewing the complex plane you can easily define the sum as being the distance to the points, but what meaning does $\sqrt{a^2-b^2}$ have?
| I think you are not dealing with $i$ correctly in your multiplication. Note that $$(a + bi)(a-bi) = a^2 - (bi)^2 = a^2 - b^2i^2 = a^2 - b^2(-1) = a^2 + b^2.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Probability of 4 or fewer errors in 100,000 messages The probability of an error occurring in a message is 10^-5. The probability is independent for different messages. There are 100,000 messages sent. What is the probability that 4 or fewer errors occur?
| In principle, the number $X$ of errors in $100000$ messages has binomial distribution. But in this kind of situation (probability $p$ of an "error" small, number $n$ of trials large, $np$ of moderate size) it is standard to approximate the distribution of $X$ by using the Poisson distribution with parameter $\lambda=np$.
In our case we have $\lambda=np=(10^{-5})(100000)=1$. The probability of $4$ or fewer errors is approximately
$$\sum_{k=0}^4 e^{-1} \frac{1^k}{k!}.$$
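Numerically, the approximation is very close to the exact binomial value; a quick Python sketch (Python 3.8+ for `math.comb`):

    import math

    lam = 1.0
    poisson = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(5))

    n, p = 100_000, 1e-5
    binom = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5))

    print(poisson, binom)   # both are approximately 0.996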
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Geometric meaning of line equation in homogeneous coordinate In Euclidean space, a line's equation is $$ax + by + c = 0.$$ While in homogeneous coordinates,it can be represented with $$\begin{pmatrix}x &y &1\end{pmatrix}\begin{pmatrix}a\\ b\\ c\end{pmatrix} = 0.$$ I think the meaning of the homogeneous representation is that if a point is on the line, then the inner product of two vectors goes to $0$, $$X = \begin{pmatrix}x_1\\ y_1\\ 1\end{pmatrix}, V = \begin{pmatrix}a\\ b\\ c\end{pmatrix},$$ then the meaning of line equation is that $X$'s projection onto $V$ is $0$, which means $X\perp V$. Is that right?
But my intuitive understanding is that, if a point $X$ is on a line $V$, then the projection of $X$ onto $V$ should not be $0$.
| *
*No. There is no notion of $\perp$ between a line and a point.
*No. Here the inner product is not a projection, but a measure ($\propto$) of the minimum distance between the objects. When coincident the minimum distance is zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that the sequence $(a_n)$ defined by $a_0 = 1$, $a_{n+1} = 1 + \frac 1{a_n}$ is convergent in $\mathbb{R}$ I will post the exercise below:
Prove that the sequence $(a_n)$ defined by $a_0 = 1$, $a_{n+1} = 1 + \frac 1{a_n}$ for $n \in \mathbb N$ is convergent in $\mathbb R$ with the Euclidean metric, and determine afterwards is limit. Can you intepret the limit geometrically (hint: Golden ratio)?
So I need to prove that the sequence is convergent in $\mathbb{R}$ with the Euclidean metric, and how do I prove that? The limit must be $1$, but how to interpret it geometrically?
| Hint: If the limit $L$ exists, it must satisfy $L = 1 + \frac{1}{L}$, and so it cannot be 1. The solutions are the roots of the equation $L^2 - L - 1 = 0$, and so $L \in \{\frac{1+\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2} \}$. That's where the golden ratio comes into play. Note also that the limit cannot be $\frac{1-\sqrt{5}}{2}$ since $a_n > 0 $ for all $n$.
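For intuition, a few iterations already make the limit visible; a quick numerical sketch:

    a = 1.0
    for _ in range(20):
        a = 1 + 1 / a
    print(a)                    # approximately 1.6180339887
    print((1 + 5 ** 0.5) / 2)   # the golden ratio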
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Closed form for $\int \frac{1}{x^7 -1} dx$? I want to calculate:
$$\int \frac{1}{x^7 -1} dx$$
Since $\displaystyle \frac{1}{x^7 -1} = - \sum_{i=0}^\infty x^{7i} $, we have $\displaystyle(-x)\sum_{i=0}^\infty \frac{x^{7i}}{7i +1} $.
Is there another solution? That is, can this integral be written in terms of elementary functions?
| Let $u=x^2$ and solve using $u$-substitution.
$$\Rightarrow \frac16 \ln\!\left(\frac{x^6-1}{x^6}\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Integral points on a circle Given radius $r$ which is an integer and center $(0,0)$, find the number of integral points on the circumference of the circle.
| You are looking for solutions to $m^2 + n^2 = r^2$ for a given $r$. Clearly $(\pm r, 0), (0, \pm r)$ are four solutions. For others, this is equivalent to finding Pythagorean triples with the same hypotenuse. You should be able to find a lot of references on this online.
In fact you can derive that, if the prime factorisation of $r = 2^a \prod p_i^{b_i} \prod q_j^{c_j}$ where $p_i \equiv 1\pmod 4$ and $q_i \equiv 3 \pmod 4$, then $f(r) =\dfrac{1}{2}\left(\prod (2b_i + 1) - 1 \right)$ is the number of such triplets.
Each such triple gives two ordered points $(m,n)$ and $(n,m)$ in the first quadrant, each with corresponding solutions in the other three quadrants, so in total we have $8f(r)+4$ integer points on the circle.
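A brute-force count agrees with this; a Python sketch for a few small radii:

    def lattice_points(r):
        # count integer solutions of m^2 + n^2 = r^2 by brute force
        return sum(1 for m in range(-r, r + 1) for n in range(-r, r + 1)
                   if m * m + n * n == r * r)

    for r in [1, 2, 5, 25, 65]:
        print(r, lattice_points(r))   # 4, 4, 12, 20, 36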
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Homework: Problem concerning first fundamental form Here's a strange problem in our differential geometry textbook.
At a point on surface $\mathbf{r}=\mathbf{r}(u,v)$, the equation $Pdudu+2Qdudv+Rdvdv=0$ determines two tangential directions. Prove that these two tangential directions are normal iff $$ER-2FQ+GP=0$$
where $E=\langle\mathbf{r}_u,\mathbf{r}_u\rangle,F=\langle\mathbf{r}_u,\mathbf{r}_v\rangle,G=\langle\mathbf{r}_v,\mathbf{r}_v\rangle$
I don't understand what this problem wants me to do. Since the statement :"the equation $Pdudu+2Qdudv+Rdvdv=0$ determines two tangential directions" is a bit ambiguous. Hope to find some good understanding of this problem, no need for solutions, thanks!
| Consider $(1,a),\ (1,b)$ vectors on $uv$-plane. And ${\bf x}$ is a parametrication.
$$ d{\bf x}\ (1,a) \perp d{\bf x}\ (1,b) \Leftrightarrow ({\bf x}_u+a{\bf x}_v )\cdot ({\bf x}_u+b{\bf x}_v)=0\Leftrightarrow E+(a+b)F+abG =0 $$
And if $R=1,\ Q=-\frac{a+b}{2},\ P=ab$, then note that $$ ( P\,du\,du +2 Q\,du\,dv+ R\,dv\,dv )((1,a),(1,a)) = P+2aQ+a^2R=0$$ and $$ ( P\,du\,du +2 Q\,du\,dv+ R\,dv\,dv )((1,b),(1,b)) = P+2bQ+b^2R=0$$
So with these observations, we have the desired result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/518965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Show that convolution of two measurable functions is well-defined Question:
Recall the definition of the convolution of $f$ and $g$ given by
$$(f*g)(x)=\int_{\mathbb{R}^d}f(x-y)g(y)dy.$$ If we only know that
$f$ and $g$ are measurable, can we show that $f*g$ is well defined for
a.e. $x$, that is, $f(x-y)g(y)$ is integrable?
(Exercise 2.5.21(c) in 'Real Analysis', by Stein and Shakarchi)
Actually, the book writes like this:
Suppose that $f$ and $g$ are measurable functions on $\mathbb{R}^d$.
(a)Prove that $f(x-y)g(y)$ is measurable on $\mathbb{R}^{2d}$.
(b)Show that if $f$ and $g$ are integrable on $\mathbb{R}^d$, then
$f(x-y)g(y)$ is integrable on $\mathbb{R}^{2d}$.
(c)Recall the definition of the convolution of $f$ and $g$ given by
$$(f*g)(x)=\int_{\mathbb{R}^d}f(x-y)g(y)dy$$ Show that $f*g$ is well
defined for a.e. $x$, that is, $f(x-y)g(y)$ is integrable.
Can we use the assumption that $f,g$ are integrable in (c)?
| Hint: do that first when $f, g \ge 0$. Recall that if the integrand function has a sign, you can safely change the order of integration in a double integral. This is sometimes known as Tonelli's theorem.
Tonelli's theorem is easier than the closely related Fubini's theorem, which regards integrand functions which possibly change sign. In the latter, you need to check that the integrand function is absolutely integrable with respect to both variables before you can do anything.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
algebra, equivalence relation regarding associates If f(x) ~ g(x) if and only if f and g are associates,
prove this is an equivalence relation
have tried to prove this both ways, struggling
| Well you need to show 3 things :
*
*Reflexivity : Take $u=1$
*Symmetry : If $u$ works in one direction, then $u^{-1}$ works in the other.
*Transitivity : If $f(x) = ug(x)$ and $g(x) = vh(x)$, then $f(x) = (uv)h(x)$, and $uv$ is a unit if $u$ and $v$ are.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the general solution of the given second order differential equation. Find the general solution of the given second order differential equation. $$4y''+y'=0$$
This was my procedure to solving this problem:
$\chi(r)=4r^2+r=0$
$r(4r+1)=0$
$r=0, -\frac14$
$y_1=e^{0x}, y_2=e^{-\frac14x}$
And this led to get the answer,
$y=C_1+C_2e^{-\frac14x}$
I don't really have a question unless I solved this problem incorrectly. If someone could kindly check over my work to see if I did it right, that would be great!
| hint you can reduce the order by putting
$$y'=w$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$E[X^4]$ for binomial random variable For a binomial random variable $X$ with parameters $n,p$, the expectations $E[X]$ and $E[X^2]$ are given be $np$ and $n(n-1)p^2+np$, respectively.
What about $E[X^4]$? Is there a table where I can look it up? Calculating it using the definition of expectation looks like a lot of work. Or is there a good way to calculate it?
| Well, you can create a table if you know the moment generating function of $X$ i.e. $$M_X(t)=E[e^{tX}]$$
because $\frac{d^n}{dt^n}M_X(t)|_{t=0}=E[X^n].$
Hint: Show that $M_X(t)=(e^tp+(1-p))^n$ for binomial $X$ with parameters $n,p.$
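If you want the resulting expression for $E[X^4]$ without doing the differentiation by hand, here is a sketch with sympy (the symbol names are mine):

    import sympy as sp

    t, p, n = sp.symbols('t p n', positive=True)
    M = (p * sp.exp(t) + 1 - p) ** n                    # MGF of Binomial(n, p)
    EX4 = sp.expand(sp.simplify(sp.diff(M, t, 4).subs(t, 0)))
    print(EX4)                                          # E[X^4] as a polynomial in n and p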
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find all couple $(x,y)$ for satisfy $\frac{x+iy}{x-iy}=(x-iy)$ I have a problem to solve this exercise, I hope someone help me.
Find all couple $(x,y)$ for satisfy $\frac{x+iy}{x-iy}=(x-iy)$
| $$\frac{z}{\bar{z}} = \bar{z}$$
Take a look at $|\cdot|$ of the two sides and you get $|z| = 1 (=x^2+y^2)$ for free.
Now expand with $z$ (since $z \neq 0$):
$$z^2 = \bar{z}$$
Now consider $\Re z^2 = x^2 - y^2$ and $\Re \bar z = x$ to get
$$x^2 - x =y^2$$
so
$$2x^2 - x = 1$$
which has solutions $x = 1$ and $x = -\frac12$. For $x=1$ this gives $y=0$, and for $x=-\frac12$ it gives $y=\pm\frac{\sqrt3}{2}$ (these are precisely the cube roots of unity, as the equation amounts to $z^3=1$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
How to solve the equation $x^2=a\bmod p^2$ What is the standard approach to solve $x^2=a\bmod p^2$ or more general $x^n = a\bmod p^n$ ?
| The usual method for solving polynomial equations modulo $p^n$ is to solve it mod $p$, then use some method to extend a solution from mod $p$ to mod $p^2$, then to mod $p^3$, and so forth.
This can be done easily in an ad-hoc fashion: if you know that $f(a) = 0 \bmod p$, then you can make a new equation $f(a+px) = 0 \bmod p^2$ and solve it for $x$. If $f$ is a polynomial, we usually have
$$ f(a+px) = f(a) + px f'(a) \pmod{p^2}$$
so, as you can see, it's just solving a linear equation in this typical case. But you don't have to memorize differential approximation: just plug $a+px$ into $f$ and simplify it. This will result in something correct even when the above formula isn't true.
Sometimes, you have to solve an equation modulo $p^2$ (or worse) before you start getting unique extensions, and there can be other subtleties. But these problems manifest themselves clearly when you try to use the ad-hoc method. (e.g. $f'(a)$ will be zero modulo $p^2$)
A more systematic way to carry out this method is to use Hensel's lemma. This is essentially equivalent to using Newton's method for finding the roots of an equation, and is closely related to the $p$-adic numbers.
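A sketch of one such lifting step for $x^2\equiv a\pmod{p^2}$ (Python 3.8+ for the modular inverse via `pow`; the function name and the brute-force root search mod $p$ are my own choices):

    def sqrt_mod_p2(a, p):
        # find a root of x^2 = a (mod p) by brute force (fine for small p),
        # then lift it to a root mod p^2 with a single Hensel/Newton step
        x0 = next((x for x in range(p) if (x * x - a) % p == 0), None)
        if x0 is None:
            return None            # a is not a quadratic residue mod p
        fprime = (2 * x0) % p
        if fprime == 0:
            return None            # singular case (e.g. p = 2); the simple step does not apply
        t = (-(x0 * x0 - a) // p) * pow(fprime, -1, p) % p
        return (x0 + p * t) % (p * p)

    r = sqrt_mod_p2(2, 7)
    print(r, (r * r) % 49)         # r^2 is congruent to 2 mod 49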
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Explain why perpendicular lines have negative reciprocal slopes I am not sure how to explain this. I just know they have negative reciprocals because one one line will have a positive slope while the other negative.
| Assuming experience with algebra without calculus background. So, I would suggest keeping to the idea that slope, $m$, is equal to "rise over run." Given a line with slope, lets say $\frac{a}{b}$, that means it rises $a$ in the $y$ direction for every $b$ it goes in the positive $x$ direction in the plane. (I would also encourage to always keep $b>0$ so we always go in the positive $x$ direction.) The way to make the slope the "most opposite" is to flip it and make it negative. Now, that is nowhere close to a proof, but I found one that only uses the fact that the Pythagorean theorem is true when you have a right angle and high school algebra.
Claim: If two lines in the plane, $f(x)=mx+b$ and $g(x)=nx+c$, are perpendicular, then $n=\frac{-1}{m}$.
Two lines $f(x)=mx+b$ and $g(x)=nx+c$ are not parallel, so they intersect. Assume that they do not intersect on the $y$-axis, i.e. $c \neq b$. Then the triangle formed by the graphs of these two lines and the $y$-axis is a right triangle if the Pythagorean theorem holds. WLOG, assume that $c>b$. The lengths of the side on the $y$-axis will be $c-b$. To find the other two sides, we do a little algebra. The intersection of these lines is $mx+b=nx+c$ and solving for $x$, we find that $x=\frac{c-b}{m-n}$. Thus, the point of intersection is
$$\left(\frac{c-b}{m-n},\frac{m(c-b)}{m-n}+b\right)=\left(\frac{c-b}{m-n},\frac{n(c-b)}{m-n}+c\right)$$
which we get by plugging in our $x$ for $f$ and $g$. To find the distance of each of the two legs of our triangle, we just use the distance formula, and find the the distance from
$$\left(\frac{c-b}{m-n},\frac{m(c-b)}{m-n}+b\right) \text{ to } (0,b) $$
is $\frac{(c-b)\sqrt{1+m^2}}{m-n}$. For the other side, we use the distance from
$$\left(\frac{c-b}{m-n},\frac{n(c-b)}{m-n}+c\right) \text{ to } (0,c) $$
which is $\frac{(c-b)\sqrt{1+n^2}}{m-n}$. Now, we set up the Pythagorean theorem, which we can use since the angle between the lines is right, and see that
$$ (c-b)^2 = \left[\frac{(c-b)\sqrt{1+m^2}}{m-n}\right]^2 + \left[\frac{(c-b)\sqrt{1+n^2}}{m-n}\right]^2 $$
Canceling the $(c-b)^2$ and multiplying by $(m-n)^2$ to both sides, we get
$$(m-n)^2 = 1+m^2 + 1+n^2$$
$$m^2 -2mn + n^2 = 2+m^2 +n^2 $$
Canceling and simplifying, we find that $n=\frac{-1}{m}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 9,
"answer_id": 2
} |
the solution of Fredholm´s integral equation Be $\lambda \in \mathbb{R}$ such that $\left | \lambda \right |> \left \| \kappa \right \|_{\infty }(b-a)$.
Prove that the solution $f^*$ of the integral equation of Fredholm $$\lambda f -\int_{a}^{b}\kappa (x,y)f(y)dy=g(x)$$ for all $x\in [a,b]$ satisfies $$\left \| f^*-\sum_{m=1}^{k}\frac{1}{\lambda ^m}\Im ^{m-1}g \right \|_{\infty }\leq \frac{\alpha ^{k}}{(1-\alpha) \left | \lambda \right |} \left \| g \right \|_{\infty }$$ for all $k\in \mathbb{N}$ where $$\alpha :=\frac{\left \| \kappa \right \|(b-a)}{\left |\lambda \right | }$$
i´m really stuck in this problem, I know that I can use that $$\Im :C_{\infty }^{0}[a,b]\rightarrow C_{\infty }^{0}[a,b]$$ is Lipschitz continuous and that linear Fredholm´s operator is linear, can anybody just give a hint please?
thanks!
| Since $\lambda \ne 0$, we may write
\begin{align*}
f(x) - \frac{1}{\lambda}\int_{a}^{b} \kappa(x,y) f(y)\, \mathrm{d}y = \frac{g(x)}{\lambda}
\end{align*}
Let $\Im\colon C^{\infty}_{0} \to C^{\infty}_{0}$ be defined by
\begin{align*}
\Im[f](x):=\int_{a}^{b} \kappa(x,y) f(y) \, \mathrm{d}y
\end{align*}
Then by standard integral inequalities
\begin{align*}
\frac{\left|\Im[f](x) \right|}{|\lambda|}
\le \frac{\left\|\kappa\right\|_{\infty}}{|\lambda|}(b-a)\left\|f\right\|_{\infty} = \alpha\left\|f\right\|_{\infty} <\left\|f\right\|_{\infty},
\end{align*}
which shows that $\left\|\Im \right\|/|\lambda|\le \alpha < 1$, i.e, that $\Im/\lambda$ is a contractive linear operator on the Banach space $C_{0}^{\infty}$.
Symbolically, we can now write the equation as
\begin{align*}
(I-\Im/\lambda)f = g/\lambda.
\end{align*}
In elementary algebra, we would write $(1-x)^{-1} = \sum_{k=0}^{\infty} x^{k}$ if $|x|<1$. A similar identity holds here:
\begin{align*}
(I-\Im/\lambda)\sum_{m=0}^{k} \Im^{m}/\lambda^{m}
&=
I-(\Im/\lambda)^{k+1} \to I \text{ as } k\to \infty
\end{align*}
as $\left\|\Im\right\|_{\infty}/|\lambda| <1$.
Hence we may write
\begin{align*}
(I-\Im/\lambda)^{-1} = \sum_{m=0}^{\infty} \frac{\Im^{m}}{\lambda^{m}}
\end{align*}
Applying this operator gives
\begin{align*}
f
&= (1-\Im/\lambda)^{-1}g/\lambda
= \sum_{m=0}^{\infty} \frac{\Im^{m}}{\lambda^{m+1}}g \\
&= \sum_{m=1}^{\infty} \frac{\Im^{m-1}}{\lambda^{m}}g
\end{align*}
Now the problem is easy:
\begin{align*}
\left\|f^{*} -\sum_{m=1}^{k} \frac{\Im^{m-1}}{\lambda^{m}}g
\right\|_{\infty}
&=
\left\| \sum_{m=1}^{\infty} \frac{\Im^{m-1}}{\lambda^{m}}g -\sum_{m=1}^{k} \frac{\Im^{m-1}}{\lambda^{m}}g
\right\|_{\infty} \\
&\le
\left\|\sum_{m=k+1}^{\infty} \frac{\Im^{m-1}}{\lambda^{m-1}}\right\| \frac{\left\|g\right\|_{\infty}}{|\lambda|} \\
&\le
\sum_{m=0}^{\infty} \left(\frac{\left\|\Im \right\| }{|\lambda|}\right)^{m+k}
\frac{ \left\|g\right\|_{\infty} }{ |\lambda| } \\
&\le
\sum_{m=0}^{\infty} \alpha^{m}\alpha^{k}\frac{\left\|g\right\|_{\infty}}{|\lambda|} \\
&=
\frac{\alpha^{k}\left\|g\right\|_{\infty}}{(1-\alpha)|\lambda|}
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Vertices of an equilateral triangle Question: show that the following three points in 3D space A = <-2,4,0>, B = <1,2,-1> C = <-1,1,2> form the vertices of an equilateral triangle.
How do i approach this problem?
| Find the distance between all the pairs of points
$$|AB|,|BC|,|CA|$$
and check if
$$|AB|=|BC|=|CA|$$
For example:
$$|A B| = \sqrt{(-2-1)^2 + (4-2)^2 + (0-(-1))^2} = \sqrt{14}$$
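A quick numerical check of all three pairwise distances (a small Python sketch):

    import math

    A, B, C = (-2, 4, 0), (1, 2, -1), (-1, 1, 2)

    def dist(P, Q):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(P, Q)))

    print(dist(A, B), dist(B, C), dist(C, A))   # all three equal sqrt(14)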
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Finding eigenvalues. I'm working on the following problem:
Define $T \in L(F^n)$ (T an operator) by
$T(x_1,...,x_n) = (x_1+...+x_n,...,x_1+...+x_n)$
Find all eigenvalues and eigenvectors of $T$.
I've found that the eigenvalues of $T$ are $\lambda = 0$ and $\lambda = n$. Is there an easy way to prove that these are the only eigenvalues of $T$? Determining and solving the characteristic polynomial is messy for arbitrary $n$.
| Try this: by direct computation,
$T^2 = nT, \tag{1}$
since every entry of $T^2$ is $n$.
So $m_T(x) = x^2 - nx$ is the minimal polynomial of $T$; every eigenvalue $\lambda$ of $T$ satisfies
$m_T(\lambda) = 0, \tag{2}$
so the only possibilities are $\lambda = 0$ and $\lambda = n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Use the Chinese remainder theorem to find the general solution of $x \equiv a \pmod {2^3}, \; x \equiv b \pmod {3^2}, \; x \equiv c \pmod {11}$ Help! Midterm exam is coming, but i still unable to solve this simple problem using the Chinese remainder theorem.
$$x \equiv a \pmod {2^3}, \quad x \equiv b \pmod {3^2}, \quad x \equiv c \pmod {11}.$$
| From the condition of the equation we have:
$$x \equiv a \pmod 8 \implies x = 8k + a$$
$$x \equiv b \pmod 9 \implies x = 9n + b$$
$$x \equiv c \pmod {11} \implies x = 11m + c$$
Now we have:
$$8k+a=9n+b$$
$$8k+a\equiv b \pmod 9$$
$$8k\equiv b-a \pmod 9$$
Having actual values would be easier to get congruence relation for k modulo 9, but now we'll use:
$$k \equiv \frac{b-a + 9s}{8} \pmod 9 \implies k = 9t + \frac{b-a+9s}{8}$$
Note that adding $9s$ to the RHS does not change the congruence modulo $9$; we choose $s$ so that $8$ divides $b-a+9s$, i.e. so that the division by $8$ gives an integer.
Now substitute back we have:
$$x=8k+a = 8\left(9t + \frac{b-a+9s}{8}\right) + a = 72t + b + 9s$$
Now using this relation for $x$, we do the same thing with this relation and $x = 11m + c$. It may be clearer with an example:
$$x \equiv 3 \pmod 8 \implies x = 8k + 3$$
$$x \equiv 2 \pmod 9 \implies x = 9n + 2$$
$$x \equiv 5 \pmod {11} \implies x = 11m + 5$$
$$8k + 3 = 9n + 2$$
$$8k + 3 \equiv 2 \pmod 9$$
$$8k \equiv -1 \equiv 8 \pmod 9$$
$$k \equiv 1 \pmod 9 \implies k = 9t + 1$$
Now we substitute back:
$$x = 8k + 3 = 8(9t+1) + 3 = 72t + 8 + 3 = 72t + 11$$
Now we repeat the same procedure:
$$72t + 11 = 11m + 5$$
$$72t + 11 \equiv 5 \pmod {11}$$
$$72t \equiv 5 \equiv 720 \pmod {11}$$
$$t \equiv 10 \pmod {11} \implies t = 11s + 10$$
Now we substitue:
$$x = 72t + 11 = 72(11s + 10) + 11 = 792s + 720 + 11 = 792s + 731$$
We have a congruence relation for $x$:
$$x \equiv 731 \pmod{792}$$
You can see that $792=8\cdot 9 \cdot 11$, that's because all moduli are coprime, otherwise we would end up with the least common multiple of the moduli.
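The same answer drops out of the standard CRT recipe in a few lines; a sketch (Python 3.8+ for `pow(..., -1, m)` and `math.prod`):

    from math import prod

    def crt(residues, moduli):
        # solve x = r_i (mod m_i) for pairwise coprime moduli
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    print(crt([3, 2, 5], [8, 9, 11]))   # 731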
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/519930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is ∑ with negative value solvable? Is it possible to have a negative value in sigma?
e.g.
$y = \Sigma_{k=0}^{k=-2} k \times 10$
Will this give the result $(0 \times 10) + (-1 \times 10) + (-2 \times 10) = -30 $?
Or will it be $\infty$ because $k$ will be increased with $1$ until it equals $-2$ (which is never).
Or something else?
| The concept of Sum has three basic definitions.
*
*Sum over (part of) a sequence
Given a unilateral sequence
$$
x_{\,0} ,\,x_{\,1} ,\, \cdots ,\;x_{\,n}
$$
we define a sum over a portion of it as
$$
\sum\limits_{k = \,a}^b {x_{\,k} }
$$
where it is understood that $a$ and $b$ are integers and that $a \le b$.
Under this acception your sum does not have meaning.
Another way of writing the sum is by imposing restriction to the index
$$
\sum\limits_{a\, \le \,k\, \le \,b} {x_{\,k} }
$$
and if the condition is violated the sum is null.
But the sequence could be bi-lateral
$$
\cdots ,\;x_{\, - n} ,\; \cdots x_{\, - 1} ,x_{\,0} ,\,x_{\,1} ,\, \cdots ,\;x_{\,n} ,\; \cdots
$$
In this case you may want to write
$$
\sum\limits_{k = \,0}^b {x_{\,k} }
$$
leaving $b$ free to address any index in the sequence, and thus understanding
$$
\sum\limits_{k = \,0}^b {x_{\,k} } = \sum\limits_{k = \,b}^0 {x_{\,k} }
$$
but you shall clearly state this acception.
*Sum over a set
$$
\sum\limits_{x\, \in \,A} x
$$
and the meaning is clear.
*Indefinite sum (Antidelta)
Finally there is the concept of Indefinite Sum.
For a function $F(z)$, over the complex field in general, we define the (forward) Finite Difference as
$$
\Delta \,F(z) = F(z + 1) - F(z)
$$
If we have that
$$
\Delta \,F(z) = F(z + 1) - F(z) = f(z)
$$
then we write
$$
F(z) = \Delta ^{\, - \,1} \,f(z) = \sum\nolimits_{\;z\;} {f(z)} + c
$$
and in particular we have
$$
F(b) - F(a) = \sum\nolimits_{\;z = \,a\,}^b {f(z)} = - \sum\nolimits_{\;z = \,b\,}^a {f(z)} \quad \left| {\,a,b \in \mathbb C} \right.
$$
For example
$$
\eqalign{
& F(z) = \left( \matrix{ z \cr 2 \cr} \right)\quad \Leftrightarrow \quad
\left( \matrix{ z + 1 \cr 2 \cr} \right) - \left( \matrix{ z \cr 2 \cr} \right)
= \left( \matrix{ z \cr 1 \cr} \right) = z\quad \Leftrightarrow \cr
& \Leftrightarrow \quad \sum\nolimits_{\;z = \,a\,}^{\;b} z
= \left( \matrix{ b \cr 2 \cr} \right) - \left( \matrix{ a \cr 2 \cr} \right)
\quad \left| {\;a,b \in \mathbb C} \right.\quad \Rightarrow \cr
& \Rightarrow \quad \sum\nolimits_{\;k = \,0\,}^{\;n} k \quad \left| {\;0 \le n \in \mathbb Z} \right.\quad
= \sum\limits_{0\, \le \,k\, \le \,n - 1} k = \left( \matrix{ n \cr 2 \cr} \right)
= {{n\left( {n - 1} \right)} \over 2} \cr}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Induction proof that $n! > n^3$ for $n \ge 6$, and $\frac{(2n)!}{n! 2^n}$ is an integer for $n \ge 1$ Prove by induction that
(a) $n! > n^3$ for every $n \ge 6$.
(b) prove $\frac{(2n)!}{n!2^n}$ is an integer for every $n\geq 1$
I'm quite terrible with induction so any help would be appreciated.
| (a) $n! > n^3$ for every $n \geq 6$
For the induction base, we simply have to show that $6! > 6^3$, i.e. that $6 \cdot 5 \cdot 4 \cdot (3 \cdot 2) \cdot 1 > 6\cdot 6\cdot 6$, i.e. that $6 \cdot 6 \cdot (5 \cdot 4) > 6^3$. This is evidently true, as $5 \cdot 4 = 20 > 6$.
For the induction step, suppose $n! > n^3$ holds for some $n \geq 6$; we have to show that $(n+1)! > (n+1)^3$. Indeed, $(n+1)! = (n+1)\,n! > (n+1)n^3$.
Hence, we are done if $(n+1)n^3 \geq (n+1)^3$ for $n\geq 6$, i.e. if $n^4 + n^3 \geq n^3 + 3n^2 + 3n + 1$ for $n\geq 6$.
Rearranging yields $f(n) := n^4 - 3n^2 - 3n - 1 \geq 0$ for $n\geq 6$. The second derivative $f''(n) = 12n^2 - 6$ is positive for $n \geq 1$, so $f$ is strictly convex there. Hence, we are through if both $f(6) \geq 0$ and $f'(6) = 4\cdot 6^3 - 6\cdot 6 - 3 \geq 0$, and both are trivial to check.
(b) $\frac{(2n)!}{n! 2^n}$ is an integer for every $n\geq 1$.
The induction base is trivial: simply insert $n=1$.
For the induction step, we need to show that $\frac{(2(n+1))!}{(n+1)!\, 2^{n+1}} = \frac{(2n+2)!}{(n+1)\,n! \cdot 2 \cdot 2^n} = \frac{(2n+2)(2n+1)}{2(n+1)} \cdot \frac{(2n)!}{n!\, 2^n}$ is an integer for all $n\geq 1$.
Now, by the induction hypothesis, the right multiplicand must be an integer. Hence, it suffices to show that the left multiplicand must be an integer as well. Since $(2n+2) = 2(n+1)$, this is obviously the case.
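As a sanity check, the quotient can be computed directly for small $n$; a quick sketch, which incidentally shows it equals the odd double factorial $1\cdot3\cdots(2n-1)$:

    from math import factorial

    for n in range(1, 11):
        q, r = divmod(factorial(2 * n), factorial(n) * 2 ** n)
        print(n, q, r)   # r is always 0; q = 1, 3, 15, 105, ...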
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Little help with permutations There are 4 letters $A, B, C, D$ with repetitions permitted. These letters are used in a a 3 letter code (the order is important).
*
*Question 1:
How many different 3 letter codes can be made?
*Question 2:
If one code is chosen at random from the set of all possible codes, what is the probability it contains two A's and a D?
Now I haven't done probability in a very long time but I believe this is just a standard permutation question? Would I be correct in saying that the first question is $4^3$? So there will be 64 different codes.
| Hint: How many codes satisfying the condition that they include two A and one D are there?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Finding centre of ellipse using a tangent line? I need to determine the centre coordinates (a, b) of the ellipse given by the equation:
$$\dfrac{(x-a)^2}{9} + \dfrac{(y-b)^2}{16} = 1$$
A tangent with the equation $y = 1 - x$ passes by the point (0, 1) on the ellipse's circumference.
I'm guessing I have to find the implicit derivative first, but I'm not quite sure how to derive the part of the right. According to my calculator, the implicit differentiation is:
$\frac{-16(x-a)}{9(y-b)}$
But I'd really like to try and do this by hand. I'm just really not sure of the steps I need to take to solve this.
Thanks for any help.
| Hints: Follow, understand and prove the following
Since the point $\;(0,1)\;$ is on the ellipse then
$$\frac{a^2}9+\frac{(1-b)^2}{16}=1$$
Now differentiate implicitly:
$$\frac29(x-a)dx+\frac18(y-b)dy=0\implies \frac{dy}{dx}=-\frac{\frac29(x-a)}{\frac18(y-b)}=-\frac{16}9\frac{x-a}{y-b}$$
But we know that
$$-1=\left.\frac{dy}{dx}\right|_{x=0}=-\frac{16}9\frac{-a}{1-b}$$
Well, now solve the two variable equations you got above...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\sup[0,1] =1$ Alongside the question in the title, does it matter if the question is $(0,1)$ or $[0,1)$?
I know that it satisfies the first condition, $1$ is an upper bound but I am not sure where to go from there.
Thanks.
| Hint: To show that the supremum of $[0,1]$ is $1$, you need to show two things: 1) that it is an upper bound and 2) that no smaller number is an upper bound.
You say you've already proved (1); so, it comes down to (2). Can you show that if $x<1$, then $x$ is not an upper bound on $[0,1]$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $x_i$ be positive number satisfying $\sum x_i = 1$, what is $\sum ix_i$ Let $x_i$ be a positive number for each $i \in \{1, 2, 3 \dots \}$ such that $\sum_{i=1}^\infty x_i = 1$ is there a closed formula for
$$\sum_{i = 1}^\infty ix_i$$?
| There isn't a formula independent of the $x_i$. The series does not always converge (let $x_i=\frac{6}{\pi^{2}i^{2}}$), but can converge (let $x_{i}=\frac{1}{2^{i}}$). I would guess the only limitation on the value of the series is that it is greater than 1.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Speed of object towards a point not in the object's trajectory? Trying to study for my mid-term, but I'm having slight difficulties understanding what I'm supposed to do in this one problem:
A batter starts running towards first base at a constant speed of 6 m/s. The distance between each adjacent plate is 27.5 m. After running for 20 m, how fast is he approaching second base? At the same moment, how fast is he running away from third base? (see image below)
This is what I have so far:
*
*Let $d$ be the distance the batter has run thus far
*The distance between the batter and first base is 7.5 m
*The distance between the batter and second base is $\sqrt {27.5^2 + (27.5-d)^2}\ $, or approx. 28.5044 m when $d = 20$
*The distance between the batter and third base is $\sqrt {27.5^2 + d^2}\ $, or approx. 34.0037 m when $d = 20$
No need to hand feed me the answer, I'd just like a bit of insight on how to solve the problem.
| Both parts are the same kind of problem: find the function $f(t)$ of time that gives the distance to the base in question (second base for the first part, third base for the second), using the expressions you already have with $d=6t$. Then find the derivative at the point $t=\frac{20}{6}$, when the batter has been running for 20 metres.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What am I doing wrong when trying to find a determinant of this 4x4 I have to find the determinant of this 4x4 matrix:
$
\begin{bmatrix}
5 & -7 & 2 & 2 \\
0 & 3 & 0 & -4 \\
-5 & -8 & 0 & 3 \\
0 & -5 & 0 & -6 \\
\end{bmatrix}
$
Here is my working, which seems wrong according to the solutions. What am I doing wrong?:
And here is the solution:
| What you are doing wrong is precisely what the solution said you were doing wrong. The $2$ was alright, since that's the same as $2\cdot(-1)^{1+3},$ but the $-5$ was not, since $$-5\cdot(-1)^{2+1}=-5\cdot-1=5.$$ Keep in mind that we have an alternating sign factor as we move along a row/column, and that the starting sign depends on the row/column that we're in.
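If you want to confirm the target value while re-doing the cofactor expansion, a one-line check with numpy (a sketch):

    import numpy as np

    A = np.array([[ 5, -7, 2,  2],
                  [ 0,  3, 0, -4],
                  [-5, -8, 0,  3],
                  [ 0, -5, 0, -6]])
    print(np.linalg.det(A))   # -380, up to floating-point rounding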
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Equivalence class help I have a question that goes as follows:
Let d be a positive integer. Define the relation Rho on the integers Z as follows: for all m,n element of the integers.
m rho n if and only if d|(m-n)
Prove that rho is an equivalence relation. Then list its equivalence classes.
Now the first d that comes to mind is 1, so I proved it was an equivalence relation as follows:
Reflexive: m rho m <=> d|m-m
Symmetric: m rho n <=> d|m-n => d|n-m with n-m = -(m-n) <=> n rho m
Transitive: k is an element of integers:
m rho n and n rho k => d|m-n & d|n-k => d|(m-n) + (n-k) => d|m-k => m rho k
I am unsure if this is a sufficient proof or if my logic holds. However, I can't think of any equivalence classes for this, as d can vary. If d were 7, for example, I would think the equivalence classes would be 1 = {...,-13,-6,1,8,15,...}, 2 = {...,-12,-5,2,9,16,...} etc...
Does this relation have equivalence classes and I am missing something or?
| There is a different relation for each $d$. What is being asked is "for all $d$, is the corresponding relation an equivalence relation?"
As for your proof, it is correct, but you may want to be clearer with some of the steps, depending on how familiar the intended audience is with divisibility.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rate of change of distance from particle (on a curve) to origin A particle is moving along the curve
$y = 2\sqrt{4x + 9}$
As the particle passes through the point
(4, 10)
its x-coordinate increases at a rate of
3 units per second. Find the rate of change of the distance from the particle to the origin at this instant.
Okay. Rate of change of the distance from the particle to the origin.
So the origin is going to be the point (0,y). So:
$y = 2\sqrt{4(0)+9} = 6$. The point (0,6) is the origin, then.
Now the problem is asking us for the rate of change of the distance between these two points, we recall that:
$D = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$
So, now we have to differentiate:
$$d' = \frac{1}{2}\cdot ((x_2 - x_1)^2 + (y_2 - y_1)^2)^\frac{-1}{2} \cdot
\left[
2(x_2'-x_1') + 2(y_2'-y_1')
\right]$$
Then we have to substitute.
But I don't know if I'm even correct thus so far. Can someone help?
| The distance to the origin when the particle is at $(x,y)$ is given by $D(x,y)=\sqrt{x^2+y^2}$.
We want $\frac{dD}{dt}$ at a certain instant. I prefer to work with $D^2$. So we have
$$D^2=x^2+y^2.$$
Differentiate, using the Chain Rule. We have
$$2D\frac{dD}{dt}=2x\frac{dx}{dt}+2y\frac{dy}{dt}.\tag{1}.$$
We know that $y=2\sqrt{4x+9}$. So
$$\frac{dy}{dt}=\frac{4}{\sqrt{4x+9}}\frac{dx}{dt}.\tag{2}$$
Now "freeze" things at the instant when $x=4$. We know $\frac{dx}{dt}$ at this instant. We also know $\frac{dy}{dt}$, by (2). We also know $y$ and therefore $D$. Now we can use (1) to find $\frac{dD}{dt}$ at this instant.
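Equivalently, a computer algebra system can carry the Chain Rule for you; a sketch with sympy, parametrizing the instant by $x=4+3t$ so that $dx/dt=3$ and $t=0$ is the moment in question:

    import sympy as sp

    t = sp.symbols('t')
    x = 4 + 3 * t                        # x(0) = 4 and dx/dt = 3 at the instant in question
    y = 2 * sp.sqrt(4 * x + 9)
    D = sp.sqrt(x**2 + y**2)
    rate = sp.simplify(sp.diff(D, t).subs(t, 0))
    print(rate, float(rate))             # exact value 18/sqrt(29), about 3.34 units per second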
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A couple has 2 children. What is the probability that both are girls if the eldest is a girl? This is another question like this one. And by the same reason, the book only has the final answer, I'd like to check if my reasoning is right.
A couple has 2 children. What is the probability that both are girls if the eldest is a girl?
| An alternative viewpoint:
For the eldest child to be a girl, they must have had a girl first. Therefore the probability of there being two girls is the probability of having a second girl which is $\frac{1}{2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/520968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Independent set of formulas of the sentential logic A set $\Gamma$ of well formed formulas of the sentential logic is called independent if for each $\varphi\in\Gamma$, $\Gamma-\{\varphi\}\nRightarrow\varphi\\$.
1 when $\Gamma=\{\varphi\}$ is independent?
2 Is $\{A\rightarrow B, B\rightarrow C, C\rightarrow A\}$ independent? ($A,B,C$ are sentencial letters).
In the first case I guess that $\varphi$ should be a tautology but I get confused because $\emptyset\Rightarrow\varphi$ has a meaning?
And for the second I said no, because I can easily find valuation that make two of them true but not the third one.
If it is useful $\alpha\Rightarrow\beta$ holds iff $\alpha\rightarrow\beta$ is a tautology.
| You haven't said whether $\implies$ is semantic or syntactic entailment, but the same goes either way. I'll assume you mean semantic entailment, but you can easily adjust the answer if you meant syntactic entailment.
*
*$\{\varphi\}$ is independent [on your definition] iff $\emptyset \nvDash \varphi$, i.e. iff $\varphi$ is not a tautology. But why does $\emptyset \vDash \varphi$ say that $\varphi$ is a tautology? Recall: $\Delta \vDash \varphi$ says that any valuation which makes $\varphi$ false must make some wff in $\Delta$ false. So: $\emptyset \vDash \varphi$ says that any valuation which makes $\varphi$ false must make some wff in $\emptyset$ false. But there are no wffs in the empty set to make false, so that's equivalent to saying no valuation makes $\varphi$ false, i.e. $\varphi$ is a tautology.
*Yes, except you need three valuations, since you need to consider the three different cases where you extract in turn one of the propositions from the given set and check whether the extracted proposition follows from the remainder.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$3x\equiv7\pmod{11}, 5y\equiv9\pmod{11}$. Find the number which $x+y\pmod{11}$ is congruent to. Given that $3x\equiv7\pmod{11}, 5y\equiv9\pmod{11}$. Find the number which $x+y\pmod{11}$ is congruent to. I'm thinking $20\equiv9\pmod{11}$, but I am having trouble finding a value for $3x$ that is divisible by $3$. Is there a better way of solving this problem?
| 3x $\equiv$ 7 (mod 11) and 5y $\equiv$ 9 (mod 11)
3x $\equiv$ 18 (mod 11) by adding 11 to 7, then 5y $\equiv$ 20 (mod 11) by adding 11 to 9.
x $\equiv$ 6 (mod 11) by dividing both sides by 3 (valid since $\gcd(3,11)=1$), then y $\equiv$ 4 (mod 11) by dividing both sides by 5.
Then x + y $\equiv$ 10 (mod 11).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Symmetric Tridiagonal Matrix has distinct eigenvalues. Show that the rank of $ n\times n$ symmetric tridiagonal matrix is at least $n-1$, and prove that it has $n$ distinct eigenvalues.
| This is for tridiagonal matrices with nonzero off-diagonal elements.
Let $\lambda$ be an eigenvalue of $A\in\mathbb{R}^{n\times n}$ (which is symmetric tridiagonal with nonzero elements $a_{2,1},a_{3,2},\ldots,a_{n,n-1}$ on the subdiagonal). The submatrix constructed by deleting the first row and the last column of $A-\lambda I$ is nonsingular (since it is upper triangular and has nonzero elements on the diagonal) and hence the dimension of the nullspace of $A-\lambda I$ is 1 (because its rank cannot be smaller than $n-1$ and the nullspace must be nontrivial since $\lambda$ is an eigenvalue). It follows then that the geometric multiplicity is 1, and hence, since $A$ is symmetric and therefore diagonalizable, the algebraic multiplicity of $\lambda$ is 1 as well. This holds for any eigenvalue of $A$ and hence they are distinct.
The fact that $\mathrm{rank}(A)\geq n-1$ is just a simple consequence of that ($0$ has also multiplicity 1 if $A$ is singular).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
If $ac-bd=p$ and $ad+bc=0$, then $a^2+b^2\neq 1$ and $c^2+d^2\neq 1$? I'm trying to prove the following:
Let $a,b,c,d\in\Bbb{Z}$ and $p$ be a prime integer. If $ac-bd=p$ and $ad+bc=0$, prove that $a^2+b^2\neq 1$ and $c^2+d^2\neq 1$.
Actually I'm not even sure if this is correct. A proof or counter-example (in case this assertion is wrong) would be great.
I got the following 3 results:
*
*$p^2=(a^2+b^2)(c^2+d^2)$
*$b(c^2+d^2)=-pd$
*$a(c^2+d^2)=pc$
*$c(a^2+b^2)=pa$
*$d(a^2+b^2)=-pb$
I'm at a loss as to how to proceed beyond this. Assuming $c^2+d^2=1$ or $a^2+b^2=1$ does not seem to cause any contradictions.
Thanks in advance!
| Consider $p^2 = p^2 + 0^2 = (ac-bd)^2 + (ad+bc)^2 = (a^2+b^2)(c^2+d^2)$. Then either one of $a^2+b^2$, $c^2+d^2$ equals $p^2$ and the other equals $1$ or both equal $p$. This follows from unique factorization of $\mathbb{Z}$.
Considering the first case, suppose wlog $a^2+b^2=1$ and $a=0$. Then $c^2+d^2=p^2$. Now if both $c$ and $d$ are nonzero, then, since $b$ is also not equal to zero, $ad+bc$ cannot be zero, a contradiction. Thus suppose, again wlog, that $c=0$ and $d^2=p^2$. This leads to a counterexample, namely $a=0,b=1,c=0,d=-p$ or $a=0,b=-1,c=0,d=p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solve $4^{9x-4} = 3^{9x-4}$ I am having some trouble trying to solve
$$4^{9x-4} = 3^{9x-4}$$
I tried to make each the same base but then I'm becoming confused as to what to do next.
These are the steps I took:
$$\begin{align}
4^{9x-4} &= 3^{9x-4} \\
\log_4(4^{9x-4}) &= \log_4(3^{9x-4}) \\
\end{align}$$
Where do I go from there?
Thanks!
| So you thought @fasttouch was complex?
Adapted from Mathematica:
$$x = \frac{4\log\frac{4}{3}- 2 \pi ni}{9 \log \frac{4}{3}}, n \in \Bbb Z$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 11,
"answer_id": 6
} |
$L_2$ is of first category in $L_1$ (Rudin Excercise 2.4b) We mean here $L_2$, and $L_1$ the usual Lebesgue spaces on the unit-interval. It is excercise 2.4 from Rudin. There's several ways to show that $L_2$ is nowhere dense in $L_1$.
But in (b) they ask to show that
$$\Lambda_n(f)=\int fg_n \to 0 $$
where $g_n = n$ on $[0,n^{-3}]$ and 0 otherwise, holds for $L_2$ but not for all $L_1$.
Apparantly this implies that $L_2$ is of the first Category, but I dont know how.
Second, I can show this holds for $L_2$ but I cant find a counterexample in $L_1$.
Theorem 2.7 in Rudin says:
Let $\Lambda_n:X\to Y$ a sequence of continuous linear mappings ($X,Y$ topological vector spaces)
If $C$ is the set of all $x\in X$ for which $\{\Lambda_n x\}$ is Cauchy in $Y$, and if $C$ is of the second Category, then $C=X$.
So if we find a $f\in L_1$ such that $\Lambda_n(f)$ is not Cauchy, then we proved that $L_2\subset C \subset L_1$ is of the first category. However I dont see why showing that $\Lambda_n(f)$ does not converge to 0 for some $f\in L_1$ is enough here.
Am I missing something?
| The simplest functions in $L_1 \setminus L_2$ are $f_\alpha \colon x \mapsto x^\alpha$ with $-1 < \alpha \leqslant -\frac12$.
Computing $\int fg_n$ for such an $f_\alpha$ yields
$$\begin{align}
\int f_\alpha g_n &= n\int_0^{n^{-3}} x^\alpha\,dx \\
&= \frac{n}{1+\alpha}n^{-3(1+\alpha)}\\
&= \frac{n^{-2-3\alpha}}{1+\alpha}.
\end{align}$$
We see that the sequence of integrals does not converge to $0$ iff $-2-3\alpha \geqslant 0 \iff \alpha \leqslant -\frac23$.
Regarding the second part, choosing $-1 < \alpha < -\frac23$ gives an $f\in L_1$ with $\Lambda_n(f) \to \infty$, so $\Lambda_n(f)$ certainly is not a Cauchy sequence. Choosing $\alpha = -\frac23$ gives an $f\in L_1$ such that $\Lambda_n(f)$ is constant, hence a Cauchy sequence, but does not converge to $0$.
Now, if $L_2$ were of the second category in $L_1$, then the fact that $\Lambda_n(f) \to 0$ for all $f\in L_2$ would imply that $\Lambda_n(f) \to 0$ for all $f\in L_1$, by part $(b)$ of theorem 2.7. But picking $\alpha < -\frac23$ to get an $f\in L_1$ such that $\Lambda_n(f)$ is not a Cauchy sequence seems preferable, since it's more direct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Iteration of an operator
Let $f_0(x)$ be integrable on $[0,1]$, and $f_0(x)>0$. We define $f_n$ iteratively by
$$f_n(x)=\sqrt{\int_0^x f_{n-1}(t)dt}$$
The question is, what is $\lim_{n\to\infty} f_n(x)$?
The fixed point of the operator $\sqrt{\int_0^x\cdot\, dt}$ is $f(x)=\frac{x}{2}$, but it's a bit hard to prove this result. I have tried approximating $f(x)$ by polynomials, but it's hard to compute $f_n$ when $f_0(x)=x^n$ since the coefficients get quite complicated. Thanks!
| Note: this is not a proof that the limit exists, but a computation of the limit if we know that it exists.
We know that $f(x)>0$ for $x>0$ and $f(0)=0$. We want to solve
$$
f(x)=\sqrt{\int_0^xf(t)\,dt},\quad 0\le x\le 1,
$$
that is,
$$
(f(x))^2=\int_0^xf(t)\,dt,\quad 0\le x\le 1.
$$
Differentiate with respect to $x$ to obtain
$$
2\,f\,f'=f\implies f'(x)=1/2\implies f(x)=x/2.
$$
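To complement the fixed-point computation, here is a small numerical experiment (my own sketch, with the arbitrary choice $f_0\equiv 1$) suggesting that the iteration does converge to $x/2$:
```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
f = np.ones_like(x)                  # f_0 = 1: positive and integrable on [0, 1]
dx = x[1] - x[0]
for _ in range(40):
    # F[i] approximates the integral of f from 0 to x[i] (trapezoid rule)
    F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
    f = np.sqrt(F)
print(np.max(np.abs(f - x / 2)))     # small, up to discretization error
```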
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Show that $\lim_{\delta \to 0}(1-\lambda \delta)^{1/\delta} = e^{-\lambda}$ My professor said that
$$\lim_{\delta \to 0}(1-\lambda \delta)^{t/\delta}=e^{-\lambda t}$$
can be shown with L'Hospital's rule. I don't know what he meant. What is the best way to show this (or, more simply, $\lim_{\delta \to 0}(1-\lambda \delta)^{1/\delta} = e^{-\lambda}$)?
If I try as follows
$$\lim_{\delta \to 0}\left(1-\lambda\delta \right)^{1/\delta} = \lim_{\eta \to \infty} \frac{(\eta-\lambda)^\eta}{\eta^\eta},$$
then I'm getting led into confusion trying LHR on the last one.
| Another approach: define
$$x:=\frac1\delta\implies \delta\to 0\implies x\to\infty$$
and our limit is
$$\left[\left(1-\frac\lambda x\right)^x\right]^t\xrightarrow[x\to\infty]{}(e^{-\lambda})^t=e^{-\lambda t}$$
We used above the basic
$$\lim_{x\to\infty}\left(1\pm\frac\lambda{f(x)}\right)^{f(x)}=e^{\pm\lambda}$$
for any function $\;f(x)\;$ s.t.
$$\lim_{x\to\infty}f(x)=\infty$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to calculate conditional expectation only from the characteristic function I would like to calculate conditional expectation $E[X|A]$, where $A$ is a set, only from the characteristic function $\phi(\omega)$ of a random variable $X$. How can I do this?
Since the characteristic function determines the distribution (and hence the density, when it exists) completely, I should be able to do everything in the frequency domain, but I don't know how it can be done. If there is no conditioning, then the result follows from the derivative of the characteristic function at zero, since $E[X]=-i\phi'(0)$.
I also wonder how to calculate
$$\int_{-\infty}^A f(t)\mathrm{d}t$$
from the characteristic function $\phi(\omega)$ without going back to the density domain.
Thanks a lot...
NOTES:
I found a solution to the second part of my question from
$$F_X(x)=\frac{1}{2}+\frac{1}{2\pi}\int_0^\infty \frac{e^{iwx}\phi_X(-w)-e^{-iwx}\phi_X(w)}{iw} \mathrm{d}w$$
with $F_X(A)$
| The conditional expectation $E[X | A]$ will change depending on whether or not $A$ is independent from $X$. If independent, $E[X | A] = E[X]$, else $E[X | A]$ can have different values on $A$ and $A^c$. For example, if $X = 1_A$, then $E[1_A | A] = 1_A$, but if $B$ is a set independent from $A$, with $P(A) = P(B)$, then $E[1_B | A] = P(B)$ (deterministic). In this case, the distributions defined by $1_A$ and $1_B$ agree, and hence their characteristic functions agree. (Note that the characteristic function of $X$ at $t$ is the integral $\int e^{itx} dP_X$, and in particular depends only on the distribution induced by $X$ on the real line.)
So, since we can find two random variables, whose characteristic functions agree, but whose conditional expectations (with respect to a particular sigma algebra) are substantially different (inducing different distributions), it is impossible to determine the conditional expectation from the characteristic function alone. (At least, without additional information.)
Does it make sense?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $\sin(x^2)$ is integrable around $\infty$. I have to show that $f(x)=\sin(x^2)$ is integrable on $[1, \infty[$. This is French terminology, so "intégrable" specifically means that the integral of $|f|$ exists.
The only method I know is to compare it to functions of the form $\frac{1}{x^\alpha}$, but it's not eventually smaller or larger than any of these. I can't imagine how it could be asymptotically equivalent to anything useful either, seeing as it oscillates like crazy.
| If you mean that
$$
\lim_{N\to\infty}\int_0^N\sin(x^2)\,\mathrm{d}x
$$
exists, then change variables $x\mapsto\sqrt{x}$ and integrate by parts:
$$
\begin{align}
&\lim_{N\to\infty}\int_0^N\sin(x^2)\,\mathrm{d}x\\
&=\int_0^1\sin(x^2)\,\mathrm{d}x
+\lim_{N\to\infty}\frac12\int_1^{N^2}\frac{\sin(x)}{x^{1/2}}\,\mathrm{d}x\\
&=\int_0^1\sin(x^2)\,\mathrm{d}x
+\lim_{N\to\infty}\frac12\left[\frac{1-\cos(x)}{x^{1/2}}\right]_1^{N^2}
+\lim_{N\to\infty}\frac14\int_1^{N^2}\frac{1-\cos(x)}{x^{3/2}}\,\mathrm{d}x\\
\end{align}
$$
Now each piece has a limit as $N\to\infty$ since
$$
\int_0^1\sin(x^2)\,\mathrm{d}x
$$
is constant
$$
\lim_{N\to\infty}\frac12\left[\frac{1-\cos(x)}{x^{1/2}}\right]_1^{N^2}=\frac{\cos(1)-1}2
$$
and
$$
\left|\frac{1-\cos(x)}{x^{3/2}}\right|\le\frac2{x^{3/2}}
$$
which is integrable over $[1,\infty)$ since $\frac32\gt1$.
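As a numerical sanity check on this convergence (my addition; the limiting value is the classical Fresnel integral $\int_0^\infty\sin(x^2)\,\mathrm{d}x=\sqrt{\pi/8}$), the partial integrals approach that value with a gap of order $1/N$, matching the boundary term in the integration by parts:
```python
import numpy as np

target = np.sqrt(np.pi / 8)                 # ~0.626657, the Fresnel value
for N in (5, 10, 20):
    t = np.linspace(0.0, N, 400_001)
    y = np.sin(t**2)
    val = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))   # trapezoid rule
    print(N, round(val, 6), round(abs(val - target), 6))       # gap is O(1/N)
```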
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is it possible to get a 'closed form' for $\sum_{k=0}^{n} a_k b_{n-k}$? This came up when trying to divide series, or rather, express $\frac1{f(x)}$ as a series, knowing that $f(x)$ has a zero of order one at $x=0$, and knowing the Taylor series for $f(x)$ (that is knowing the $b_i$ 's).
I write $$1=\frac1{f(x)}f(x) = (\frac{r}{x}+a_0+a_1x+a_2x^2+...)(b_1x+b_2x^2+b_3x^3+...)$$
And comparing coefficients I get $r= \frac{1}{b_1}$ and more importantly, the equations:
$$0=b_{n+1}r + \sum_{k=0}^{n} a_k b_{n-k}.$$
Is there a closed form for this recursion, i.e. is it possible to extract $a_n = ...$ from this? In the particular case I'm looking at, $b_n = \frac{1}{n!}$.
| If you know $f$ has a zero of order $1$ you can write it as $$f(x)=x\left(a_1+a_2x+a_3x^2+\ldots\right),$$
with $a_1\neq0$.
Then $\frac{1}{f}=\frac{1}{x}\frac{1}{a_1+a_2x+a_3x^2+\ldots}$.
To compute the series of $\frac{1}{a_1+a_2x+a_3x^2+\ldots}$ just apply long division of $1$ by $a_1+a_2x+a_3x^2+...$. This is an algorithm that allows you to compute, term by term, the series of the quotient.
If we have more information on $f$ other methods could be applied. For $f$ a rational function there is a much better method.
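For concreteness, here is that term-by-term long division sketched in Python (my own illustration). For the question's case $b_n=\frac{1}{n!}$ we have $f(x)=e^x-1=x\,(c_0+c_1x+\cdots)$ with $c_k=\frac{1}{(k+1)!}$, and the output reproduces the coefficients $B_n/n!$ of $x/(e^x-1)$:
```python
from math import factorial

def inverse_series(c, n_terms):
    """Coefficients d[0..n_terms-1] of 1/(c[0] + c[1] x + ...), assuming c[0] != 0."""
    d = [0.0] * n_terms
    d[0] = 1.0 / c[0]
    for n in range(1, n_terms):
        # the coefficient of x^n in (sum_j c_j x^j) * (sum_k d_k x^k) must vanish
        s = sum(c[j] * d[n - j] for j in range(1, min(n, len(c) - 1) + 1))
        d[n] = -s / c[0]
    return d

N = 8
c = [1.0 / factorial(k + 1) for k in range(N)]   # c_k = 1/(k+1)!
d = inverse_series(c, N)                         # d = [r, a_0, a_1, ...]
print(d)   # ~[1.0, -0.5, 0.0833, 0, -0.00139, 0, 3.31e-05, 0]
```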
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What did Newton and Leibniz actually discover? Most popular sources credit Newton and Leibniz with the creation and the discovery of calculus. However there are many things that are normally regarded as a part of calculus (such as the notion of a limit with its $\epsilon$-$\delta$ definition) that seem to have been developed only much later (in this case in the late $18$th and early $19$th century).
Hence the question - what is it that Newton and Leibniz discovered?
| This is actually quite a complicated question, since it spans two whole careers.
Some say calculus was not discovered by Newton and Leibniz because Archimedes and others did it first. That's a somewhat simple-minded view. Archimedes solved a whole slew of problems that would now be done by integral calculus, and his methods had things in common with what's now taught in calculus ("now" = since about 300 years ago), but his concepts were in a number of ways different, and I don't think he had anything like the "fundamental theorem".
I'm fairly sure Leibniz introduced the "Leibniz" notation, in which $dy$ and $dx$ are corresponding infinitely small increments of $y$ and $x$, and the integral notation $\int f(x)\,dx$. I suspect Newton and Leibniz were the first to systematically exploit the fundamental theorem. And the word "systematic" is also important here: Newton and Leibniz made the computation of derivatives and integrals systematic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 6,
"answer_id": 3
} |
Solving system of equations with R Suppose I have the following function :
$f(x,y,z,a)= \cos(ax) + 12y^2 - 9az$
and I want to solve the following system of equations.
$ f(x,y,z,1)= 10 $,
$f(x,y,z,5)= 7 $,
and
$f(x,y,z,-3)= 17 $.
These are equivalent to
$\cos(x) + 12 y^2 - 9(1)\, z = 10$,
$ \cos(5x) + 12y^2 - 9(5)\, z = 7 $,
and $ \cos(-3x) + 12y^2 + 27z = 17 $.
How do I solve these equations for $x$, $y$, and $z$? I would like to solve these equations, if possible, using R or any other computer tools.
| This is a partial answer; I can't give actual code, since I'm not familiar with R.
The standard way to solve this type of problem is to reformulate it as a nonlinear root-finding (or nonlinear optimization) problem and then use existing software or packages.
*First, write your equations as follows: \begin{align}
\cos x + 12 y^2 - 9 z - 10 &= 0 \\
\cos 5 x + 12 y^2 - 45 z - 7 &= 0 \\
\cos 3 x + 12 y^2 + 27 z - 17 &= 0
\end{align} (in the last equation we have used the even symmetry of $\cos$: $\cos(-x) = \cos x$).
*Next, observe that the above equations are equivalent to the following problem: Find the root of the nonlinear function $\textbf{F} : \mathbb{R}^3 \to \mathbb{R}^3$: \begin{equation}
\textbf{F}(\textbf{x}) = \begin{bmatrix} \cos x + 12 y^2 - 9 z - 10 \\
\cos 5 x + 12 y^2 - 45 z - 7 \\
\cos 3 x + 12 y^2 + 27 z - 17 \end{bmatrix}, \end{equation} where \begin{equation} \textbf{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}; \end{equation} that is, solve the equation $$ \textbf{F}(\textbf{x}) = \textbf{0} \tag{1} $$ for $\textbf{x} \in \mathbb{R}^3$.
Unfortunately it is not immediately clear if this problem has a solution. Moreover, if it does, it is not clear precisely how many solutions it has. Nonetheless, we can proceed using a couple of packages for R that are available on CRAN: rootSolve and ucminf.
I would suggest first using rootSolve: it is a nonlinear root-finding package, and will try to solve Equation (1) assuming that a solution exists. It will require an initial guess; ideally it would be close to what you think is a solution, but to get started, feel free to try $x = 0, y = 0, z = 0$ or $x = 1, y = 1, z = 1$ or even $x, y, z =$ random numbers. When your script is running and returning output, you should test that your solution is correct: put the output back into the function $\textbf{F}(\textbf{x})$ and test that it is $\approx \textbf{0}$.
If it fails this test, then you might need to use a different initial guess, or just cut your losses and use ucminf: this package is a nonlinear optimization package. It does not assume that your function has an exact solution; instead, it tries to find $\textbf{x} \in \mathbb{R}^3$ such that $\left\| \textbf{F}(\textbf{x}) \right\|$ is minimized. If it happens that the problem has a solution, but for some reason rootSolve was not able to find it, then this nonlinear optimization problem is equivalent; otherwise it will simply be an optimal "solution".
Again, let me stress that nonlinear problems may not even have a solution, or may have several solutions, even though the output of the functions will indicate only one solution. Moreover, the output should always be checked. Solving nonlinear systems of equations is not easy. Hopefully someone with more experience in R can comment or make a post on how to actually implement this.
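Not R, but for concreteness here is the same two-step idea sketched in Python with SciPy (my own illustration; translating it to rootSolve/ucminf should be mechanical). As stressed above, inspect the residual, since an exact root may not exist:
```python
import numpy as np
from scipy.optimize import least_squares

def F(v):
    x, y, z = v
    return [np.cos(x)     + 12 * y**2 -  9 * z - 10,
            np.cos(5 * x) + 12 * y**2 - 45 * z -  7,
            np.cos(3 * x) + 12 * y**2 + 27 * z - 17]

sol = least_squares(F, x0=[1.0, 1.0, 1.0])   # minimizes ||F(v)||^2 over (x, y, z)
print(sol.x)    # candidate solution
print(sol.fun)  # residual vector at the candidate; only a true root if this is ~0
```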
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/521996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Complete and elementary proof that $(a^x - 1)/x $ converges as x goes to 0 Anybody who has taken a calculus course knows that
$$\lim_{x \to 0} \frac{a^x - 1}{x}$$
exists for any positive real number $a$, simply because the limit is by definition the derivative of the function $a^x$ at $x = 0$. However, for this argument to be non-circular one must have an independent technique for proving that $a^x$ is differentiable. The standard approach involves the following two steps:
1) Calculate the derivative of $\log_a x$ by reducing it to the calculation of
$$\lim_{h \to 0} (1 + h)^{\frac{1}{h}}$$
2) Apply the inverse function theorem.
I find this unsatisfying for two reasons. First, the inverse function theorem is not entirely trivial, even in one variable. Second, the limit in step 1 is quite difficult; in books it is often calculated along the sequence $h = \frac{1}{n}$ where $n$ runs over the positive integers, but doing the full calculation seems to be quite a bit more difficult (if one hopes to avoid circular reasoning).
So I would like a different argument which uses only the elementary theory of limits and whatever algebra is needed. For instance, I would like to avoid logarithms if their use involves an appeal to the inverse function theorem. Is this possible?
| The most common definition of $e$ is $$e:=\lim_{x\to0}\left(1+x\right)^{1/x}$$ although you often see it with $n=1/x$ and as $n\to\infty$.
Now $$\begin{aligned}\lim_{x\to0}\left(\frac{e^x-1}{x}\right)&=\lim_{x\to0}\left(\frac{\left(\lim_{y\to0}\left(1+y\right)^{1/y}\right)^x-1}{x}\right)\\
&=\lim_{x\to0}\left(\frac{\lim_{y\to0}\left(1+y\right)^{x/y}-1}{x}\right)\\
&=\lim_{x\to0}\lim_{y\to0}\left(\frac{\left(1+y\right)^{x/y}-1}{x}\right)
\end{aligned}$$
Now are you willing to believe that $$\lim_{(x,y)\to(0,0)}\left(\frac{\left(1+y\right)^{x/y}-1}{x}\right)
$$ exists? If so it equals the last line above, and it also equals $$\lim_{x\to0}\left(\frac{\left(1+x\right)^{x/x}-1}{x}\right)
$$ by tracking along the line $y=x$. This last quantity is clearly $1$. From here, you can establish that the derivative of $e^x$ is $e^x$, from which the Chain Rule gives that the derivative of $a^x$ is $a^x\ln(a)$, from which you can get that $\lim_{x\to0}\left(\frac{a^x-1}{x}\right)=\ln(a)$.
See if you can find justification for the existence of that limit as $(x,y)\to(0,0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
} |
How to prove that $a\le\frac{1}{2}$ for this function. Let
$$f(x)=\begin{cases}
x\sin{\dfrac{1}{x}}&x\neq 0\\
0&x=0
\end{cases}$$
show that there exists $M>0$ such that, for all $x,y$ with $x^2+y^2\neq 0$,
$$|F(x,y)|=\left|\dfrac{f(x)-f(y)}{|x-y|^{a}}\right|\le M \Longleftrightarrow a\le\dfrac{1}{2}$$
My try:
(1)if $a\le\dfrac{1}{2}$, then
$$\dfrac{f(x)-f(y)}{|x-y|^a}=\dfrac{x\sin{\frac{1}{x}}-y\sin{\frac{1}{y}}}{|x-y|^a}$$
then how can I prove
$$|x\sin{\frac{1}{x}}-y\sin{\frac{1}{y}}|<M|x-y|^a,a\le\dfrac{1}{2}$$
On the other hand:
if for all $x,y\in \mathbb R$ we have $$ |x\sin{\frac{1}{x}}-y\sin{\frac{1}{y}}|<M|x-y|^a,$$
then how can I prove that we must have $a\le\dfrac{1}{2}$?
I think this is a nice problem. Thank you.
By the way, while working on this problem I found this nice inequality:
$$|x\sin{\frac{1}{x}}-y\sin{\frac{1}{y}}|<2\sqrt{|x-y|}$$
but I can't prove it. Thank you.
I think this is only a partial answer: an estimate for the critical exponent $a_0$ such that the bound can hold only for $a\leq a_0$.
$ a_n=\frac{1}{2n\pi + \frac{\pi}{2}},\ b_n = \frac{1}{2n\pi - \frac{\pi}{2}}$
$f(a_n)- f(b_n) = a_n +b_n = \frac{4n\pi}{4n^2\pi^2 - (\pi/2)^2} \approx \frac{1}{n\pi}$ and $|a_n - b_n|^{a_0} \approx |\frac{\pi}{4n^2\pi^2} |^{a_0}$
Intuitively we conclude that $a_0=1/2$.
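To spell the estimate out (my addition): along these points the difference quotient behaves like
$$\frac{f(a_n)-f(b_n)}{|a_n-b_n|^{a}}\approx\frac{1/(n\pi)}{\bigl(\pi/(4n^2\pi^2)\bigr)^{a}}=4^{a}\pi^{a-1}\,n^{2a-1},$$
which stays bounded in $n$ only when $2a-1\le 0$, i.e. $a\le\frac12$. This gives the necessity direction; boundedness for $a\le\frac12$ (for instance via the inequality $|x\sin\frac1x-y\sin\frac1y|\le 2\sqrt{|x-y|}$ mentioned in the question) still has to be proved separately.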
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
In the field $\mathbb{Z}_7[x]/\langle x^4+x+1\rangle$, find the inverse of $f(x)=x^3+x+3$. In the field $\mathbb{Z}_7[x]/\langle x^4+x+1\rangle$, find the inverse of $f(x)=x^3+x+3$.
I know how to find the inverses of elements within sets, rings, and fields. I know what to do if the field was just $\mathbb{Z}_7$, but the fact that the field is $\mathbb{Z}_7[x]/\langle x^4+x+1\rangle$ confuses me. I don't know where to start.
| The computation here is the same in ${\mathbb Q}$ as in
${\mathbb Z}_7$.
You look for a solution of the form
$$
z=a+bx+cx^2+dx^3 \tag{1}
$$
You then have
$$
z(x^3+x+3)=dx^6+cx^5+(b+d)x^4+(a+c+3d)x^3+(b+3c)x^2+(a+3b)x+3a=Q(x) \tag{2}
$$
Next, divide the result by $x^4+x+1$ :
$$
Q(x)=(x^4+x+1)(dx^2+cx+b+d)+R(x) \tag{3}
$$
where the remainder $R(x)$ equals
$$
R(x)=(a+c+2d)x^3+(b+2c-d)x^2+(a+2b-c-d)x+(3a-b-d) \tag{4}
$$
Then, solve the system
$$
a+c+2d=b+2c-d=a+2b-c-d=0, \ \ 3a-b-d=1 \tag{5}
$$
This will lead you to the solution
$$
a=\frac{11}{47}, b=\frac{-8}{47}, c=\frac{1}{47}, d=\frac{-6}{47},
z=\frac{11-8x+x^2-6x^3}{47} \tag{6}
$$
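The computation above is over $\mathbb Q$; reducing modulo $7$ finishes the original question (my addition, easy to verify by multiplying out). Since $47\equiv 5\pmod 7$ and $5^{-1}\equiv 3\pmod 7$,
$$
z\equiv 3\,(11-8x+x^2-6x^3)\equiv 5+4x+3x^2+3x^3 \pmod 7,
$$
and indeed $(x^3+x+3)(3x^3+3x^2+4x+5)\equiv 1$ in $\mathbb{Z}_7[x]/\langle x^4+x+1\rangle$.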
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Differentiation of function with chain rule the following expression is part of a function I have to differentiate:
$y = \tan^3(5x^4-7)$
I tried using the chain-rule, so:
$ y' = 3\tan^2(5x^4-7)\cdot(20x^3)$
is this correct?
Set $t=5x^4-7,~~~u=\tan(t)$, so you have $$y=u^3.$$ Now use the following routine formulas:
$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dt}\cdot\frac{dt}{dx},~~~(\tan (t))'=1+\tan^2(t)$$
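Putting these together for the original function (my addition), the chain rule gives
$$y'=3\tan^{2}(5x^4-7)\cdot\bigl(1+\tan^{2}(5x^4-7)\bigr)\cdot 20x^{3},$$
so the expression proposed in the question is missing the factor $1+\tan^2(5x^4-7)=\sec^2(5x^4-7)$ coming from the derivative of $\tan$.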
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the answer of this problem? Suppose that $f(x)$ is bounded on interval $[0,1]$, and for $0 < x < 1/a$, we have $f(ax)=bf(x)$. (Note that $a, b>1$). Please calculate $$\lim_{x\to 0^+} f(x) .$$
| We get $f\left(a^{n}x\right)=b^{n}f\left(x\right)$ for $0<x<a^{-n}$.
If $f(x)$ does not tend to $0$ as $x\to 0^{+}$, then there is a sequence with $x_{n}\in(0,a^{-n})$
and $\left|f\left(x_{n}\right)\right|\geq\varepsilon>0$
for each $n$. Then $\left|f\left(a^{n}x_{n}\right)\right|=b^{n}\left|f\left(x_{n}\right)\right|\geq b^{n}\varepsilon$.
This contradicts the boundedness of $f$ since $b>1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Where is $\operatorname{Log}(z^2-1)$ Analytic? $\newcommand{\Log}{\operatorname{Log}}$
The question stands as
Where is the function $\Log(z^2-1)$ analytic
, where $\Log$ stands for the principal complex logarithm. My understanding is that
The domain of analyticity of any function $f(z) = \Log\left[g(z)\right]$, where $g(z)$ is analytic, will be the set of points $z$ such that $g(z)$ is defined and $g(z)$ does not belong to the set $\left \{z = x + iy\ |\ −\infty < x \leq 0, y = 0\right \}$.
Following this definition it would imply that the function $f(z)$ is analytic everywhere in complex plane except for the points where $-\infty<\Re(z^2-1)\leq0$ and $\Im(z^2-1)=0$. So I get $x^2-y^2-1\leq0$ and $2xy=0$. Graphically it must be analytic everywhere except on the real x axis, the imaginary y-axis and in the region inside the hyperbola $x^2-y^2=1$. The answers say
Everywhere except $\{z\in\mathbb{R}:|z|\leq1\}\bigcup\{iy:y\in\mathbb{R}\}$.
Please help correct my understanding. Thank you in advance.
| If $2xy=0$ then either (a) $x=0$, in which case the other inequality becomes $-y^2-1\leq 0$ which is satisfied by all $y\in\mathbb{R}$, or (b) $y=0$, where the other inequality becomes $x^2 - 1 \leq 0$ which is satisfied by all $|x| \leq 1$.
These inequalities must both be satisfied together. You are describing the union of the sets they are satisfied on individually, where what you really want is the intersection.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Geometric interpretation of the addition of linear equations in general form I have a very simple question: suppose I have two 2D linear equations in general form
$$ a_1x + b_1y + c_1 = 0$$
$$ a_2x + b_2y + c_2 = 0$$
I'd like to know what's the (intuitive) geometric interpretation of their addition and subtraction
$$ (a_1 + a_2)x + (b_1 + b_2)y + (c_1 + c_2) = 0$$
$$ (a_1 - a_2)x + (b_1 - b_2)y + (c_1 - c_2) = 0$$
| In general, if you have a system of two linear equations whose solution is a line $L$ in $3$-space, you can visualize the general linear combination of the equations as giving another plane containing $L$. Think of this as different positions of a revolving door, pivoting around $L$.
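For the two-dimensional question as stated, the same picture holds one dimension down (my addition): if the two lines meet at a point $(x_0,y_0)$, that point satisfies both equations and hence every linear combination
$$\lambda(a_1x+b_1y+c_1)+\mu(a_2x+b_2y+c_2)=0,$$
so the sum and the difference are just two more lines of the pencil of lines through the intersection point (assuming the combined coefficients of $x$ and $y$ do not both vanish).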
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Problem finding in simple algebra It is given,
$$x= \sqrt{3}+\sqrt{2}$$
How to find out the value of $$x^4-\frac{1}{x^4}$$/
The answer is given as $40 \sqrt{6}$, but my answer was not in square-root form.
I have done it in this way:
$$x+ \frac{1}{x}= 2 \sqrt{3}$$
Then,
$$(x^2)^2-\left(\frac{1}{x^2}\right)^2= \left(x^2 + \frac{1}{x^2}\right)^2-2$$
But this way is not working. Where am I wrong?
| The idea you're having to change it to terms of $x^2$ isn't bad, but it seems a little overfancy. (Maybe I overlooked some economy about it, but I haven't seen the benefit yet.)
Why not just calculate it directly? (Hints follow:)
$x^2=3+2+2\sqrt{6}=5+2\sqrt{6}$
$x^4=(5+2\sqrt{6})^2=25+24+20\sqrt{6}=49+20\sqrt{6}$
$\dfrac{1}{x^4}=\dfrac{1}{49+20\sqrt{6}}=\dfrac{49-20\sqrt{6}}{2401-2400}=49-20\sqrt{6}$
You can take it from here, I think.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Perpendicular line intersection issues Do not downvote questions for being 'simple' to you. What one might find trivial another may find helpful. It is not in the spirit of SE. That being said,...
I have a line with the equation $y = -2.08x - 44$, and I must find the perpendicular equation, which will be $y \approx 0.4808x - b$.
Using the given coordinates $(0,0)$ for the $\perp$ line, I get $b = 0$. Now I can set the two lines equal to each other to solve for y, since the y values must be the same at an intersection: $-2.08x - 44 = 0.4808x$. I get $-2.5608x = 44$, and I can then multiply both sides by (1 / -2.5608) = -0.3905 to get $x = 112.676$. I then insert that back into the first equation to get $y = -2.08 * 112.676 - 44 = -278.36608$, so that I have $(112.676,278.367)$ which is not the correct answer, since an online calculator states that they intersect at $(-17, -8)$.
My question is, where in this process am I mistaken? Please tell me so that I can correct my errors and understand where I went wrong.
| Up to this point, you are correct: $\require{cancel}$
$$-2.5608\;x = 44$$
Dividing both sides of the equation by $-2.5608$ to solve for $x$ yields (or multiplying both sides by $\frac{1}{-2.5608}$) $$\dfrac{\cancel{-2.5608}\;x}{\cancel{-2.5608}} = \dfrac{44}{-2.5608} \iff x \approx \dfrac {44}{-2.5608} \approx -17.1821$$
Then proceed using the same logic you used to find $y$, but this time, use the correct value for $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/522947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |