| Q (string, lengths 70–13.7k) | A (string, lengths 28–13.2k) | meta (dict) |
---|---|---|
Asymptotic expansion of the sum $ \sum\limits_{k=1}^{n} \frac{\binom{n+1}{k} B_k}{ 3^k-1 } $ The situation:
I am looking for an asymptotic expansion of the sum $\displaystyle a_n=\sum_{k=1}^{n} \frac{\binom{n+1}{k} B_k}{ 3^k-1 } $ when $n \to \infty$.
(The $ B_k $ are the Bernoulli numbers defined by $ \displaystyle \frac{z}{e^{z}-1}=\underset{n=0}{\overset{+\infty }{\sum }}\frac{B_{n}}{n!}z^{n}$).
Context:
The initial problem is that I need to calculate the radius of convergence of the power series $\displaystyle \sum_{n\ge 1} a_n z^n $. I have tried almost everything to obtain an asymptotic expansion of the $a_n$, but to no avail.
Numerical experiments show that $\displaystyle \lim_{n\to +\infty} \frac{a_{n+1}}{a_n} = 1$, that is, the radius of convergence of the series equals $1$. But I cannot prove this analytically.
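A minimal sympy sketch (exact arithmetic, function name mine) that reproduces this numerical observation:

```python
from sympy import bernoulli, binomial, Integer

def a(n):
    # a_n = sum_{k=1}^n C(n+1, k) * B_k / (3^k - 1), computed exactly
    return sum(binomial(n + 1, k)*bernoulli(k)/(Integer(3)**k - 1) for k in range(1, n + 1))

for n in (10, 20, 40, 80):
    an, an1 = a(n), a(n + 1)
    # first column: the observed ratio a_{n+1}/a_n; second: a_n/n^2 (the answer below bounds |a_n| by n^2)
    print(n, float(an1/an), float(an/n**2))
```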
My attempts to solve it:
$\displaystyle \large
\begin{align*}
a_n=\sum_{k=1}^{n} \frac{\binom{n+1}{k} B_k}{3^k-1 } &= \sum_{k=1}^{n} \frac{\binom{n+1}{k}B_k3^{-k}}{ 1- 3^{-k} } \\
&= \sum_{k=1}^{n} \binom{n+1}{k}B_k3^{-k} \sum_{p=0}^{+\infty}3^{-pk} \\
&= \sum_{p=0}^{+\infty} \sum_{k=1}^{n} \frac{\binom{n+1}{k}B_k} {3^{(p+1)k}} \\
\end{align*}
$
Using Faulhaber's formula: $\displaystyle \large \sum_{k=1}^{n} \frac{\binom{n+1}{k}B_k} {N^k} = \frac{n+1}{N^{n+1}} \sum_{k=1}^{N-1} k^n -1$
We replace $N$ by $3^{p+1}$
$\displaystyle \large \sum_{k=1}^{n} \frac{\binom{n+1}{k}B_k} {3^{(p+1)k}} = \frac{n+1}{3^{(p+1)(n+1)}} \sum_{k=1}^{3^{p+1}-1} k^n -1$
That is to say
$\displaystyle \large a_n = \sum_{p=0}^{+\infty} \left(\frac{n+1}{3^{(p+1)(n+1)}} \sum_{k=1}^{3^{p+1}-1} k^n -1\right)$
Or
$\displaystyle \large a_n = \sum_{p=0}^{+\infty} \left( \frac{n+1}{3^{p+1}} \sum_{k=1}^{3^{p+1}-1} \left( \frac{k}{3^{p+1}} \right)^n -1\right)$
If, with your help, I manage to answer this question, I will publish a new formula for the Riemann zeta function that I find elegant.
| Let $n\in\mathbb{N}_{\ge 1}$, from the identity
\begin{align}
(k+1)^{n+1}-k^{n+1}=(n+1)k^n+\sum_{\ell=0}^{n-1}\binom{n+1}{\ell}k^{\ell}
\end{align}
we have
\begin{align}
a_n&=\sum_{p\ge 1}\left(\frac{1}{3^{p(n+1)}}\sum_{k=0}^{3^{p}-1}\left((k+1)^{n+1}-k^{n+1}-\sum_{\ell=0}^{n-1}\binom{n+1}{\ell}k^{\ell}\right)-1\right)\\
&=-\sum_{p\ge 1}\frac{1}{3^{p(n+1)}}\sum_{k=0}^{3^{p}-1}\sum_{\ell=0}^{n-1}\binom{n+1}{\ell}k^{\ell}.
\end{align}
Hence,
\begin{align}
-a_n&=\sum_{p\ge 1}3^{-pn}+\sum_{\ell=1}^{n-1}\binom{n+1}{\ell}\sum_{p\ge 1}\frac{1}{3^{p(n+1)}}\sum_{k=1}^{3^{p}-1}k^{\ell}\\
&\ll 1+\sum_{\ell=1}^{n-1}\binom{n+1}{\ell}\sum_{k\ge 1}k^{\ell}\sum_{p\ge \log(k+1)/\log 3}\frac{1}{3^{p(n+1)}}\\
&\ll 1+\sum_{\ell=1}^{n-1}\binom{n+1}{\ell}\sum_{k\ge 1}k^{\ell}3^{-(\lfloor\log(k+1)/\log 3\rfloor+1)(n+1)}\\
&\ll 1+\sum_{\ell=1}^{n-1}\binom{n+1}{\ell}\sum_{k\ge 1}\frac{k^{\ell}}{(k+1)^{n+1}}\\
&\ll 1+\sum_{k\ge 1}\left(\sum_{\ell=0}^{n+1}\binom{n+1}{\ell}\frac{k^{\ell}}{(k+1)^{n+1}}-\frac{(n+1)k^{n}}{(k+1)^{n+1}}-\frac{k^{n+1}}{(k+1)^{n+1}}\right)\\
&=1+\sum_{k\ge 1}\left(1-\frac{(n+1)k^{n}}{(k+1)^{n+1}}-\frac{k^{n+1}}{(k+1)^{n+1}}\right).
\end{align}
Namely,
\begin{align}
a_{n}&\ll 1+\sum_{k\ge 1}\left(1-\frac{k^{n}}{(k+1)^{n}}\frac{n+1+k}{k+1}\right)\\
&\ll n^2+\sum_{k\ge n^3}\left(1-\frac{k^{n}}{(k+1)^{n}}\left(1+\frac{n}{k}\right)\right).
\end{align}
Note that
$$O\left(\frac{n}{k^2}\right)+\frac{1}{k} =\frac{1}{n}\log\left(1+\frac{n}{k}\right)\le \frac{1}{k}$$
for $k\ge n^2$. Thus
\begin{align}
a_{n}&\ll n^2+n\sum_{k\ge n^3}\left(1-\frac{k}{k+1}\left(1+\frac{n}{k}\right)^{\frac{1}{n}}\right)\ll n^2.
\end{align}
| {
"language": "en",
"url": "https://mathoverflow.net/questions/273001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Existence of Pillai equations with Catalan type solutions? Catalan's conjecture concerns the equation $$x^m-y^n=1,$$ which has the solutions $(x,y,m,n)=(3,2,1,1)$ and $(3,2,2,3)$.
Call $$ax^m-by^n=k$$ to be Pillai Diophantine equation.
* Is it true that no Pillai Diophantine equation has integer solutions $(x,y,m,n)$ and $(x,y,m+1,n+r)$ with $1<r$ when $a=b=1\neq k$?
* Is the same true for any fixed $a,b\in\Bbb Z$ and any fixed $k\in\Bbb Z$ with $abk\neq0$ or $|a-b|\neq |k|$?

Following Jeremy Rouse's post, the answer to both is no. His answer also works with $k>0$ for question 2 if $a=-2$ and $b=-7$, which has $b<a$.
I suspect there are no further equations with $k>0$ and $a=b=1$, or with $k>0$ and $b\geq a$. I assume this to be true (without proof).
However, his solution has $(m+1,n+r)-(m,n)=(1,2)$, and the main query I seek is whether we can get $(m+1,n+r)-(m,n)\neq (1,2)$ and $\gcd(m,n)=\gcd(m+1,n+r)=1$ at:

* $a=b=1$ and $k>0$?
* any other $a,b$ with $a,b,k>0$?
| The answers to questions 1 and 2 are both no. In the example you consider, where $m = 1$, $n=1$ and $r = 2$, you are seeking solutions to $x-y = x^{2} - y^{3} = k$. The equation $x-y = x^{2} - y^{3}$ defines an elliptic curve, and the largest integral point on this curve gives you a solution for $k = -20$, namely
$$
-20 = (-14) - 6 = (-14)^2 - 6^3.
$$
Variants of this trick work for other values of $a$ and $b$. For example, if $a = 2$ and $b = 7$ we find that
$$ -86968 = 2 \cdot (-40754) - 7 \cdot 780 = 2 \cdot (-40754)^{2} - 7 \cdot 780^{3}. $$
Indeed, there are $9$ different values of $k$ that work for $a = 2$, $b = 7$, $m = 1$, $n = 1$ and $r = 2$.
EDIT: The answers to questions 3 and 4 are also both no.
For one thing, the "trivial solution" with $x = 1$ and $y = -1$ gives $k = 2$ whenever $n$ and $n+r$ are both odd. However, there are many non-trivial solutions too.
For example, if $m = n = 1$ and $r = 4$ with $a = b = 1$, we have solutions with $k = 4$ and $k = 13$, namely
$$ 4 = 6-2 = 6^{2} - 2^{5}, \quad 13 = 16 - 3 = 16^{2} - 3^{5}. $$
This shows the answer to question 3 is no.
If we take $a = 2$ and $b = 7$, with $m = n = 1$ and $r = 4$ we find
$$ 175 = 2 \cdot 105 - 7 \cdot 5 = 2 \cdot 105^{2} - 7 \cdot 5^{5}. $$
This shows the answer to question 4 is no too.
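The displayed identities are easy to verify directly; a quick Python check (my own illustration):

```python
# each row is (k, the linear combination, the power combination) quoted above
checks = [
    (-20,    (-14) - 6,             (-14)**2 - 6**3),
    (-86968, 2*(-40754) - 7*780,    2*(-40754)**2 - 7*780**3),
    (4,      6 - 2,                 6**2 - 2**5),
    (13,     16 - 3,                16**2 - 3**5),
    (175,    2*105 - 7*5,           2*105**2 - 7*5**5),
]
print(all(k == lin == pwr for k, lin, pwr in checks))   # True
```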
| {
"language": "en",
"url": "https://mathoverflow.net/questions/280320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
For which finite projective planes can the incidence structure be written as a circulant matrix? It is well known that the projective plane of order $2$ can be represented by the circulant matrix $M_2:=circ(x,x,1,x,1,1,1)= \begin{pmatrix}
x&x&1&x&1&1&1\\
1&x&x&1&x&1&1\\
1&1&x&x&1&x&1\\
1&1&1&x&x&1&x\\
x&1&1&1&x&x&1\\
1&x&1&1&1&x&x\\
x&1&x&1&1&1&x\\
\end{pmatrix}.$
For the one of order $3$ we can take $M_3:=circ(x,x,1,1,x,1, x, 1, 1, 1,1,1,1)=
\begin{pmatrix}
x& x& 1& 1& x& 1& x& 1& 1& 1& 1& 1& 1\\
1& x& x& 1& 1& x& 1& x& 1& 1& 1& 1& 1\\
1& 1& x& x& 1& 1& x& 1& x& 1& 1& 1& 1\\
1& 1& 1& x& x& 1& 1& x& 1& x& 1& 1& 1\\
1& 1& 1& 1& x& x& 1& 1& x& 1& x& 1& 1\\
1& 1& 1& 1& 1& x& x& 1& 1& x& 1& x& 1\\
1& 1& 1& 1& 1& 1& x& x& 1& 1& x& 1& x\\
x& 1& 1& 1& 1& 1& 1& x& x& 1& 1& x& 1\\
1& x& 1& 1& 1& 1& 1& 1& x& x& 1& 1& x\\
x& 1& x& 1& 1& 1& 1& 1& 1& x& x& 1& 1\\
1& x& 1& x& 1& 1& 1& 1& 1& 1& x& x& 1\\
1& 1& x& 1& x& 1& 1& 1& 1& 1& 1& x& x\\
x& 1& 1& x& 1& x& 1& 1& 1& 1& 1& 1& x
\end{pmatrix}.$
* Can the incidence structure of a finite projective plane always be written as a circulant matrix?
* Is this known at least for the Desarguesian planes?
* Is this circulant matrix essentially unique for a given plane? (i.e. up to cyclic permutation and reflection)
The determinant of this matrix for any projective plane of order $d$ is $$\det M_d=d^{d(d+1)/2}(x-1)^{d(d+1)}[(d+1)x+d^2],$$ but the structure of this expression does not seem to give more hints.
| To answer your questions:
1) A projective plane admits a circulant incidence matrix if and only if the automorphism group contains a cyclic group acting regularly on points and regularly on blocks. Equivalently, the projective plane comes from a difference set. The order of the automorphism group of the projective plane coming from the Dickson near-field of order 9 is not divisible by 91, so that plane cannot have a circulant incidence matrix. Most likely it is the case that 'most' projective planes do not admit a circulant incidence matrix.
2) Yes, for the Desarguesian planes, a conjugacy class of such cyclic subgroups exist, they are called Singer cycles.
3) No, the circulant form for the incidence matrix is not unique. As mentioned in the comments, if one identifies the 1s in the first row of the incidence matrix with elements of the cyclic group of order $v$, then allowing the automorphism group of this cyclic group to act in the natural way gives different incidence matrices (but all come from equivalent difference sets). A single plane could have multiple inequivalent realisations as a difference set, these will give further realisations as circulant incidence matrices.
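As a concrete illustration of the difference-set point of view (my own, not part of the answer), one can check that the positions of the $x$'s in the first rows of $M_2$ and $M_3$ above are perfect difference sets, which is exactly the property behind the circulant construction:

```python
from itertools import product

def is_planar_difference_set(D, v):
    # every nonzero residue mod v should occur exactly once as a difference of two elements of D
    diffs = sorted((d1 - d2) % v for d1, d2 in product(D, repeat=2) if d1 != d2)
    return diffs == list(range(1, v))

print(is_planar_difference_set({0, 1, 3}, 7))      # x-positions in the first row of M_2
print(is_planar_difference_set({0, 1, 4, 6}, 13))  # x-positions in the first row of M_3
```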
| {
"language": "en",
"url": "https://mathoverflow.net/questions/291495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Elliptic-type integral with nested radical Let:
$$Q(A,Y) = Y^4-2 Y^2+2 A Y \sqrt{1-Y^2}-A^2+1$$
I’m curious as to whether it’s possible to find a closed-form solution for:
$$I(A)=\int_{Y_1(A)}^{Y_2(A)} \frac{1}{\sqrt{\left(1-Y^2\right) Q(A,Y)}}\,dY\\
= \int_{\sin^{-1}(Y_1(A))}^{\sin^{-1}(Y_2(A))}\frac{2 \sqrt{2}}{\sqrt{3-8 A^2+8 A \sin (2 \theta )+4 \cos (2 \theta )+\cos (4 \theta )}}\,d\theta$$
where $0 \le A \le \frac{3 \sqrt{3}}{4}$ and $-1\le Y_1(A), Y_2(A) \le 1$ are the real zeroes of $Q(A,Y)$.
I know a re-parameterisation that makes the zeroes a bit less messy than they would be in terms of $A$. If we set:
$$A(a) = \frac{\sqrt{a^3 (a+2)^3}}{2 (a+1)}$$
with $0 \le a \le 1$ then we have:
$$Y_{1,2}(a) = \frac{a^2+a\mp (a+2) \sqrt{1-a^2}}{2 (a+1)}$$
I’m not sure if there is any way to explicitly introduce these known zeroes into the integrand, as one could do if $Q(A,Y)$ were a polynomial. I do have the following relation:
$$Q(A,Y)\,Q(-A,Y)\\
= Q(A,Y)\,Q(A,-Y)\\
=\left(Y^4-2 Y^2+2 A Y \sqrt{1-Y^2}-A^2+1\right)\left(Y^4-2 Y^2-2 A Y \sqrt{1-Y^2}-A^2+1\right)\\
= P(A,Y)\,P(A,-Y)$$
where:
$$P(A,Y) = Y^4+2Y^3-2Y+A^2-1\\
P(A(a),Y)= (Y-Y_1(a))\,(Y-Y_2(a))\,\left(Y^2+(a+2)Y+\frac{a (a+2)^2+2}{2 (a+1)}\right)$$
$P(A,Y)$ has exactly the same real zeroes as $Q(A,Y)$, but has the opposite sign.
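The relation $Q(A,Y)\,Q(A,-Y)=P(A,Y)\,P(A,-Y)$ is easy to confirm symbolically; a one-line sympy check (mine):

```python
from sympy import symbols, sqrt, expand

A, Y = symbols('A Y', real=True)
Q = Y**4 - 2*Y**2 + 2*A*Y*sqrt(1 - Y**2) - A**2 + 1
P = Y**4 + 2*Y**3 - 2*Y + A**2 - 1
print(expand(Q*Q.subs(Y, -Y) - P*P.subs(Y, -Y)))   # 0
```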
Mathematica is unable to explicitly evaluate the definite integral in either form, but it returns an indefinite integral for the trigonometric form:
$$I(A,\theta) =\\
\frac{4 \sqrt{2} \sqrt{\frac{\left(r_1-r_2\right) \left(r_3-\tan (\theta )\right)}{\left(r_1-r_3\right) \left(r_2-\tan (\theta )\right)}}
\left(r_1 \cos (\theta )-\sin (\theta )\right) \left(r_4 \cos (\theta )-\sin (\theta )\right) \times \\F\left(\sin
^{-1}\left(\sqrt{\frac{\left(r_2-r_4\right) \left(r_1-\tan (\theta )\right)}{\left(r_1-r_4\right) \left(r_2-\tan (\theta
)\right)}}\right)|\frac{\left(r_2-r_3\right) \left(r_1-r_4\right)}{\left(r_1-r_3\right) \left(r_2-r_4\right)}\right)}{\left(r_1-r_4\right)
\sqrt{\frac{\left(r_1-r_2\right) \left(r_2-r_4\right) \left(r_1-\tan (\theta )\right) \left(r_4-\tan (\theta
)\right)}{\left(r_1-r_4\right){}^2 \left(r_2-\tan (\theta )\right){}^2}} \sqrt{3-8 A^2+8 A \sin (2 \theta )+4 \cos (2 \theta )+\cos (4 \theta )}}$$
where the $r_i$ are the roots of:
$$R(A,Y) = A^2 Y^4 - 2 A Y^3 + 2 A^2 Y^2 - 2 A Y + A^2 -1$$
which for $A=A(a)$ are:
$$
\begin{array}{rcl}
r_1 &=& \frac{1-(a+1)^{3/2}\sqrt{1-a}}{a^{3/2} \sqrt{a+2}}\\
&=& \tan(\sin^{-1}(Y_1(a)))\\
r_2 &=& \frac{1+(a+1)^{3/2}\sqrt{1-a}}{a^{3/2} \sqrt{a+2}}\\
& =& \tan(\sin^{-1}(Y_2(a)))\\
r_3 &=& \frac{1-i (a+1) \sqrt{a^2+4 a+3}}{\sqrt{a} (a+2)^{3/2}}\\
r_4 &=& \frac{1+i (a+1) \sqrt{a^2+4 a+3}}{\sqrt{a} (a+2)^{3/2}}
\end{array}
$$
This formula is not very helpful in itself without a deeper understanding of how it was obtained; a naive attempt to get the definite integral from it gives imaginary results, probably because of some issue with branch cuts.
But it does suggest that a human who understood the technique that produced it could perform a similar process to obtain an expression for the definite integral in terms of an elliptic integral.
The motivation here is to find a closed-form expression for the probability density function for the area of a triangle whose vertices are chosen uniformly at random from the unit circle, as discussed in this question:
Moments of area of random triangle inscribed in a circle
I believe $Prob(A) = \frac{2}{\pi^2} I(A)$, where $A$ is the area of such a triangle. Numeric integrals for this quantity give the plot below; this goes to infinity at $A=0$, and is finite and non-zero for the maximum value, $A=\frac{3 \sqrt{3}}{4}$.
| If we define:
$$\begin{array}{rcl}
g(a)&=&a^2(a+2)^2-3\\
k(a)&=&(a+1)^3\sqrt{(1-a)(a+3)}\\
\end{array}$$
then, by taking limits of the antiderivative provided by Mathematica at the endpoints of the range of integration, we obtain:
$$I_0(a) = \frac{2\sqrt{2}\,(a+1)\,K\left(\frac{2 k(a) i}{g(a) + k(a) i}\right)}{\sqrt{a(a+2)}\,\sqrt{g(a) + k(a) i}}$$
Here $K$ is a complete elliptic integral of the first kind, and the convention used is that followed by Mathematica, where the argument of $K$ appears unsquared in the defining integral. The parameter $a$ is related to the original parameter $A$ by the equation stated in the question.
When $g(a)\ge 0$, which holds for $a \gt \sqrt{1+\sqrt{3}}-1 \approx 0.652892$, $I_0(a)$ is real-valued and agrees with numerical evaluations of the integral $I(a)$.
However, when $g(a)$ crosses zero, the function $K$ has a branch cut, and $I_0(a)$ jumps discontinuously to an imaginary-valued function.
This problem can be remedied across the full range for the parameter $a$ by using the analytic continuation of $K$ across the branch cut, written as $K'$, which is discussed in detail in the answer to this question on Math StackExchange:
https://math.stackexchange.com/questions/2008090/analytical-continuation-of-complete-elliptic-integral-of-the-first-kind
$$K'(m) = \frac{1}{\sqrt{m}}\left(K\left(\frac{1}{m}\right)+i K\left(1-\frac{1}{m}\right)\right)$$
$K'$ has its own, different branch cut located in a different part of the complex plane, and the argument in this application does not cross it. So if we define:
$$I_1(a) = \frac{2\sqrt{2}\,(a+1)\,K'\left(\frac{2 k(a) i}{g(a) + k(a) i}\right)}{\sqrt{a(a+2)}\,\sqrt{g(a) + k(a) i}}$$
then $I_1(a)$ is a real-valued function for $0\lt a \le 1$, and it agrees precisely with numerical evaluations of the original integral $I(a)$.
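One way to probe this numerically is to evaluate $I_1(a)$ next to a direct quadrature of the original integral. The mpmath sketch below is mine: function names are illustrative, and all square roots and the continuation $K'$ are taken as principal branches, which is an assumption on my part; it simply prints both values so they can be compared.

```python
from mpmath import mp, mpf, sqrt, ellipk, quad, re

mp.dps = 30

def Kprime(m):
    # the continuation quoted above: K'(m) = (K(1/m) + i K(1 - 1/m)) / sqrt(m)
    return (ellipk(1/m) + 1j*ellipk(1 - 1/m))/sqrt(m)

def I1(a):
    g = a**2*(a + 2)**2 - 3
    k = (a + 1)**3*sqrt((1 - a)*(a + 3))
    z = g + 1j*k
    return 2*sqrt(2)*(a + 1)*Kprime(2j*k/z)/(sqrt(a*(a + 2))*sqrt(z))

def I_direct(a):
    A  = sqrt(a**3*(a + 2)**3)/(2*(a + 1))
    Y1 = (a**2 + a - (a + 2)*sqrt(1 - a**2))/(2*(a + 1))
    Y2 = (a**2 + a + (a + 2)*sqrt(1 - a**2))/(2*(a + 1))
    Qf = lambda Y: Y**4 - 2*Y**2 + 2*A*Y*sqrt(1 - Y**2) - A**2 + 1
    f  = lambda Y: re(1/sqrt((1 - Y**2)*Qf(Y)))   # re() guards round-off right at the endpoints
    return quad(f, [Y1, Y2])

for a in (mpf('0.3'), mpf('0.7')):
    print(float(a), complex(I1(a)), float(I_direct(a)))
```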
| {
"language": "en",
"url": "https://mathoverflow.net/questions/306100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Strings of consecutive integers divisible by 1, 2, 3, ..., N For each n, let $a_n$ be the least integer, greater than n, such that the numbers $a_n$, $a_n$+ 1, $a_n$+ 2, ..., $a_n$+ (n – 1) are divisible, in some order, by 1, 2, 3, ..., n. For example $a_{12}$ = 110.
What are the best estimates known for $a_n$?
| If $a_n=a \gt n,$ then there can only be one prime
among $a,a+1,\cdots, a+n-1.$ So the well studied topic of gaps between primes could provide upper bounds and might be the major factor.
Here are the gaps between the first few primes
$1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4, 6, 6, 2, 6, 4, 2, 6, 4, 6, 8, 4, 2, 4, 2, 4, 14, 4, \ldots$
The record gaps indicated are $14=127-113$ and $8=97-89$.
We are here most interested in the gaps between the $k$th and $(k+2)$nd primes:
$3, 4, 6, 6, 6, 6, 6, 10, 8, 8, 10, 6, 6, 10, 12, 8, 8, 10, 6, 8, 10, 10, 14, 6, 6, 6, 6, 18, 18$
The entries that are at least $13$ indicate the intervals that might perhaps contain $a_{12}$.
The $14$ seems like a tight fit. We need to use either $84$ or $96$ (which will be the multiple of $12$) along with $85\cdots 95.$ But this fails as $88$ is the only available multiple of $8$ and also of $11.$ So in fact $a_{12}$ has to be further out.
The $18$’s show more promise: either part of
$110,111,112,\mathbf{113},\cdots ,126$ or part of
$114,\cdots ,\mathbf{127},128,129,130.$ The multiple of $11$ must be $110$ or $121$ and, since $110$ works, we are done.
It turns out that $a_{10}=a_{11}=a_{12}=110.$
It is possible to have $a_{i+1} \lt a_i.$
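A brute-force sketch (mine) that recovers these values: for each candidate $a$, test whether the $n$ numbers $a,\dots,a+n-1$ can be matched bijectively to the divisors $1,\dots,n$ (a small bipartite-matching computation).

```python
def has_assignment(a, n):
    # adj[d] = offsets i in 0..n-1 such that d divides a + i
    adj = {d: [i for i in range(n) if (a + i) % d == 0] for d in range(1, n + 1)}
    match = [-1]*n                       # match[i] = divisor currently assigned to a + i

    def augment(d, seen):                # standard augmenting-path matching
        for i in adj[d]:
            if i not in seen:
                seen.add(i)
                if match[i] == -1 or augment(match[i], seen):
                    match[i] = d
                    return True
        return False

    return all(augment(d, set()) for d in range(1, n + 1))

def a_n(n):
    a = n + 1
    while not has_assignment(a, n):
        a += 1
    return a

print([a_n(n) for n in range(1, 13)])    # the list ends 110, 110, 110 = a_10, a_11, a_12
```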
| {
"language": "en",
"url": "https://mathoverflow.net/questions/326659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Diophantine equation $3(a^4+a^2b^2+b^4)+(c^4+c^2d^2+d^4)=3(a^2+b^2)(c^2+d^2)$ I am looking for positive integer solutions to the Diophantine equation $3(a^4+a^2b^2+b^4)+(c^4+c^2d^2+d^4)=3(a^2+b^2)(c^2+d^2)$ for distinct values of $(a,b,c,d)$.
There are many solutions with $a=b$ such as $(7,7,11,13)$ and $(13,13,22,23)$, and many solutions with $c=d$ such as $(3,5,7,7)$ and $(7,8,13,13)$, but it seems not so many with $4$ distinct values. The smallest distinct solution I know of is $(35,139,146,169)$.
Are there infinitely many solutions? Is there perhaps a parameterization of such solutions?
Of special interest are solutions where the quantity $(c^2+d^2)-(a^2+b^2)$ is a perfect square, such as $(a,b,c,d)=(323,392,407,713)$. Are there other solutions of this type?
This comes up in solving an unsolved problem in geometry that I can't discuss, but you might get your name on a paper if you can help.
| The equation you specify defines a surface $X$ in $\mathbb{P}^{3}$, and this surface is a K3 surface. It is conjectured that if $X$ is a K3 surface, there is a field extension $K/\mathbb{Q}$ over which the rational points on $X$ become Zariski dense. It's not clear that this happens over $\mathbb{Q}$.
However, one can find infinitely many solutions by finding genus $0$ curves on this surface. Specializing the polynomial to make $c = a+b$ it becomes the square of a quadratic, and this quadratic (which defines a conic in $\mathbb{P}^{2}$) has infinitely many points.
In particular, if $s$ and $t$ are positive integers with $s > t$, then
$a = s^{2}-t^{2}$, $b = 2st - t^{2}$, $c = s^{2} + 2st - 2t^{2}$, $d = s^{2} - st + t^{2}$ is a solution in positive integers to your equation. (The only solution in this family with $c^2+d^{2}-a^{2}-b^{2}$ a square is $a = c = d = 3$, $b = 0$. However, there easily could have been more.)
EDIT: The rational points on $X$ are Zariski dense. If $w$ is a rational number, specializing the polynomial at $c = aw + b$ gives a genus $1$ curve. This curve has a section, and using this, the surface $X$ is birational to the elliptic surface
$$
E : y^{2} = x^{3} + (6w^{4} - 30w^{2} + 24)x^{2} + (9w^{8} - 90w^{6} + 189w^{4} - 144w^{2} + 36)x.
$$
It's not too hard to find two independent points of infinite order on this elliptic surface, and the preimage on $X$ of a point on $E/\mathbb{Q}(w)$ is a rational curve. In particular $X$ has infinitely many rational curves.
One of the simplest rational curves gives rise to solutions with $a < b < c < d$. In particular, let $w = \frac{s}{t}$ with $s$ and $t$ coprime positive integers and $\frac{1}{2} < \frac{s}{t} < \frac{-1 + \sqrt{13}}{4}$. Then
\begin{align*}
a &= 32s^{5} t - 56s^{3} t^{3} + 8s^{2}t^{4} + 30st^{5} - 11t^{6}\\
b &= -16s^{6} - 16s^{5} t + 28s^{4} t^{2} + 48s^{3} t^{3} - 31s^{2} t^{4} - 32 st^{5} + 19t^{6}\\
c &= 16s^{6} - 16s^{5} t - 28s^{4} t^{2} + 56s^{3} t^{3} - s^{2} t^{4} - 43st^{5} + 19t^{6}\\
d &= -16 s^{6} - 32s^{5} t + 52s^{4} t^{2} + 40s^{3} t^{3} - 47 s^{2} t^{4} - 14st^{5} + 14t^{6}
\end{align*}
is a solution with $0 < a < b < c < d$ and with $c = (s/t)a + b$. For example setting $s = 5$ and $t = 8$, each of $a$, $b$, $c$ and $d$ above are multiples of $48$ and dividing through by that gives $(1392,2197,3067,3197)$.
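A quick sympy/arithmetic check (mine) of the $(s,t)$ family and of the quadruples quoted in the thread:

```python
from sympy import symbols, expand

s, t = symbols('s t')

def F(a, b, c, d):
    return (3*(a**4 + a**2*b**2 + b**4) + (c**4 + c**2*d**2 + d**4)
            - 3*(a**2 + b**2)*(c**2 + d**2))

# the (s, t) family from the answer should make F vanish identically
fam = (s**2 - t**2, 2*s*t - t**2, s**2 + 2*s*t - 2*t**2, s**2 - s*t + t**2)
print(expand(F(*fam)))          # 0

# explicit quadruples quoted in the question and the answer
quads = [(7, 7, 11, 13), (13, 13, 22, 23), (3, 5, 7, 7), (7, 8, 13, 13),
         (35, 139, 146, 169), (323, 392, 407, 713), (1392, 2197, 3067, 3197)]
print([F(*q) == 0 for q in quads])   # all True
```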
| {
"language": "en",
"url": "https://mathoverflow.net/questions/378229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Inequality of two variables
Let $a$ and $b$ be positive numbers. Prove that:
$$\ln\frac{(a+1)^2}{4a}\ln\frac{(b+1)^2}{4b}\geq\ln^2\frac{(a+1)(b+1)}{2(a+b)}.$$
Since the inequality is unchanged after replacing $a$ by $\frac{1}{a}$ and $b$ by $\frac{1}{b}$, and $\ln^2\frac{(a+1)(b+1)}{2(a+b)}\geq\ln^2\frac{(a+1)(b+1)}{2(ab+1)}$ for $\{a,b\}\subset(0,1],$
it's enough to assume that $\{a,b\}\subset(0,1].$
Also, $f(x)=\ln\ln\frac{(x+1)^2}{4x}$ is not convex on $(0,1]$ and it seems that Jensen and Karamata don't help here.
Thank you!
| Remarks: @Fedor Petrov's proof is very nice. Here we give an alternative proof.
Using the identity
$$\ln (1 + u) = \int_0^1 \frac{u}{1 + ut}\, \mathrm{d} t,$$
the desired inequality is written as
$$\int_0^1 \frac{(1 - a)^2}{t(1 - a)^2 + 4a}\, \mathrm{d} t
\cdot \int_0^1 \frac{(1 - b)^2}{t(1 - b)^2 + 4b}\, \mathrm{d} t \ge \left(\int_0^1 \frac{(1-a)(1-b)}{t(1-a)(1-b) + 2(a + b)}\,\mathrm{d} t\right)^2.$$
By the Cauchy-Bunyakovsky-Schwarz inequality for integrals, we have
$$\mathrm{LHS}
\ge \left(\int_0^1 \frac{|(1-a)(1-b)|}{\sqrt{[t(1-a)^2 + 4a][t(1-b)^2 + 4b]}}\,\mathrm{d} t\right)^2 \ge \mathrm{RHS}$$
where we use
$$ [t(1-a)(1-b) + 2(a + b)]^2 - [t(1-a)^2 + 4a][t(1-b)^2 + 4b] = 4(1-t)(a-b)^2 \ge 0.$$
We are done.
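The key algebraic identity in the last display can be confirmed symbolically; a short sympy check (mine):

```python
from sympy import symbols, expand

a, b, t = symbols('a b t')
lhs = (t*(1 - a)*(1 - b) + 2*(a + b))**2 - (t*(1 - a)**2 + 4*a)*(t*(1 - b)**2 + 4*b)
print(expand(lhs - 4*(1 - t)*(a - b)**2))   # 0
```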
| {
"language": "en",
"url": "https://mathoverflow.net/questions/386705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Real rootedness of a polynomial with binomial coefficients It is possible to show using diverse techniques that the following polynomial:
$$P_n(x)=1 + \binom{n}{2} x + \binom{n}{4} x^2 + \binom{n}{6} x^3 + \binom{n}{8} x^4 +\ldots + \binom{n}{2\lfloor\tfrac{n}{2}\rfloor} x^{\lfloor \frac{n}{2}\rfloor},$$
is real-rooted. For instance, it is an $s$-Eulerian polynomial, for which Savage and Visontai in [1] have proved real-rootedness.
Here I ask for a very similar polynomial, which I think might be solved with some other technique that perhaps I am overlooking.
$$Q_n(x)=1 + \left(\binom{n}{2}-n\right) x + \binom{n}{4} x^2 + \binom{n}{6} x^3 + \binom{n}{8} x^4 +\ldots + \binom{n}{2\lfloor\tfrac{n}{2}\rfloor} x^{\lfloor \frac{n}{2}\rfloor},$$
This polynomial $Q_n(x) = P_n(x) - nx$ is the Ehrhart $h^*$-polynomial of the hypersimplex $\Delta_{2,n}$. There are several conjectures regarding unimodality/log-concavity/real-rootedness for the $h^*$-polynomial of an arbitrary hypersimplex $\Delta_{k,n}$, but for $k > 2$ the formulas are much more complicated.
| Let's consider $n\geqslant 5$ case only ($n=1,2,3,4$ are straightforward). Then $Q_n$ has non-negative coefficients and we care on the number of negative roots of $Q_n$.
We have $2P_n(-x)=(1+i\sqrt{x})^n+(1-i\sqrt{x})^n$. So, we should prove that $$2Q_n(-x)=(1+i\sqrt{x})^n+(1-i\sqrt{x})^n+2nx$$ has $\lfloor n/2\rfloor$ positive roots. Denote $x=\tan^2 t$, $0<t<\pi/2$ and $$h(t):=Q_n(\tan^2 t)=\frac{\cos nt}{\cos^n t}+n\tan^2 t=\frac{\cos nt+n\sin^2 t\cos^{n-2}t}{\cos^n t}.$$
Denote $a=n/2-1$ and note that the maximal value $(1-x)x^a$ over $x\in [0,1]$ is obtained for $x=a/(a+1)$ and equals $\frac{1}{(a+1)(1+1/a)^a}<\frac{1}{2(a+1)}=\frac1n$, applying this for $x=\cos^2 t$ we see that $0\leqslant n\sin^2 t\cos^{n-2}t<1$, thus the signs of $h(t)$ at the points $\pi k/n$, $k=0, 1,\ldots, \lfloor n/2\rfloor$ interchange and $h$ has at least $\lfloor n/2\rfloor$ distinct roots on $(0,\pi/2)$, as desired.
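An exact check (mine, using sympy's real root counting) that $Q_n$ is indeed real-rooted for moderate $n$:

```python
from sympy import Poly, symbols, binomial, real_roots

x = symbols('x')

def Q(n):
    # Q_n(x) = P_n(x) - n*x with P_n(x) = sum_j C(n, 2j) x^j
    return Poly(sum(binomial(n, 2*j)*x**j for j in range(n//2 + 1)) - n*x, x)

results = []
for n in range(5, 26):
    q = Q(n)
    results.append(len(real_roots(q)) == q.degree())
print(all(results))   # True
```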
| {
"language": "en",
"url": "https://mathoverflow.net/questions/394944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Deriving a relation in a group based on a presentation Suppose I have the group presentation $G=\langle x,y\ |\ x^3=y^5=(yx)^2\rangle$. Now, $G$ is isomorphic to $SL(2,5)$ (see my proof here). This means the relation $x^6=1$ should hold in $G$. I was wondering if anyone knows how to derive that simply from the group presentation (not using central extensions, etc.). Even nicer would be an example of how software (GAP, Magma, Magnus, etc.) could automate that.
| OK, here is the derivation, based completely on the amazing information provided by Victor Miller (who I should also thank for letting me know about kbmag).
First, some identities:
(1) From $x^3=xyxy$ we get: (a) $x^2=yxy$; (b) $xyx^{-1}=y^{-1}x$; (c) $x^{-1}yx=xy^{-1}$.
(2) From $y^5=xyxy$ we get: (a) $y^4=xyx$; (b) $x^{-1}y^3=yxy^{-1}$; (c) $y^3x^{-1}=y^{-1}xy$.
(3) From (1a) and (2b) we get $(yxy)(yxy^{-1})=(x^2)(x^{-1}y^3) = xy^3$; so $xy^2xy^{-1}=y^{-1}xy^3$.
(4) From (2b) and (1b) we get $(yxy^{-1})(xyx^{-1}) = (x^{-1}y^3)(y^{-1}x) = x^{-1}y^2x$, so that $yxy^{-1}xy=x^{-1}y^2x^2$.
(5) From (2c) we get $y^2x^{-1}y^{-1}=y^{-2}x$; squaring that yields $y(yx^{-1}yx^{-1})y^{-1}=y^{-2}xy^{-2}x$. (1c), inverse, squared, shows this is the same as $yx^{-1}y^{-2}xy^{-1}=y^{-2}xy^{-2}x$.
(6) Similar to (5). From (2c) we get $y^2x^{-1}=y^{-2}xy$; squaring that yields $y^2x^{-1}y^2x^{-1}=y^{-1}(y^{-1}xy^{-1}x)y$. (1b) squared shows this is the same as $y^2x^{-1}y^2x^{-1}=y^{-1}xy^2x^{-1}y$.
OK, now consider the word $(y^{-1}xy^3)xy^{-1}xy$. From (3) this is $xy^2x(y^{-1}xy^{-1}x)y$, which from (1b) squared is $xy^2x(xy^2x^{-1})y=xy^2x^2y^2x^{-1}y$.
This word can also be written as $y^{-1}xy^2(yxy^{-1}xy)$, which from (4) is $y^{-1}xy^2(x^{-1}y^2x^2)$. So the previous two computations show
$y^2x^2y^2x^{-1}y=x^{-1}(y^{-1}xy^2x^{-1}y)yx^2$
$=x^{-1}y^2x^{-1}y^2(x^{-1}yx)x$ ...... from (6)
$=x^{-1}y^2(x^{-1}y^2x)y^{-1}x$ ....... from (1c)
$=(x^{-1}y^2x)y^{-1}xy^{-2}x$ ......... from (1c) squared
$=xy^{-1}x(y^{-2}xy^{-2}x)$ ........... from (1c) squared
$=xy^{-1}(xyx^{-1})y^{-2}xy^{-1}$ ..... from (5)
$=x(y^{-2}xy^{-2}x)y^{-1}$ .............from (1b)
$=(xyx^{-1})y^{-2}xy^{-2}$ ............ from (5)
$=y^{-1}xy^{-2}xy^{-2}$ ............... from (1b).
So $y^2x^2y^2x^{-1}y=y^{-1}xy^{-2}xy^{-2}$, or $y^2x^2y^2=y^{-1}xy^{-2}xy^{-3}x$. But
$y^{-1}xy^{-2}x(y^{-3}x)=y^{-1}xy^{-2}(xyx^{-1})y^{-1}=y^{-1}x(y^{-3}x)y^{-1}=y^{-1}(xyx^{-1})y^{-2}=y^{-2}xy^{-2}$ (through repeated application of (2b) inverse, and (1b)).
Thus $y^2x^2y^2=y^{-2}xy^{-2}$, or $y^4x^2y^4=x$, and from (2a) we get $xyx^4yx=x$, or $1=yx^4yx=(yxy)x^4=x^6$ (the second follows from $x^3$ being central and the third from (1a)).
Done!
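The derivation can at least be sanity-checked inside $SL(2,5)$: every pair of matrices satisfying the relations is a homomorphic image of the generators of $G$, so it must also satisfy $x^6=1$. A brute-force Python sketch (mine, not an automation of the abstract rewriting):

```python
from itertools import product

P, I = 5, (1, 0, 0, 1)

def mul(A, B):
    a, b, c, d = A; e, f, g, h = B
    return ((a*e + b*g) % P, (a*f + b*h) % P, (c*e + d*g) % P, (c*f + d*h) % P)

def power(A, n):
    R = I
    for _ in range(n):
        R = mul(R, A)
    return R

SL25 = [m for m in product(range(P), repeat=4) if (m[0]*m[3] - m[1]*m[2]) % P == 1]
print(len(SL25))   # 120

pairs = sixth_power_trivial = 0
for x in SL25:
    x3, x6 = power(x, 3), power(x, 6)
    for y in SL25:
        if power(y, 5) == x3 and power(mul(y, x), 2) == x3:   # x^3 = y^5 = (yx)^2
            pairs += 1
            sixth_power_trivial += (x6 == I)
print(pairs, sixth_power_trivial)   # equal counts, as forced by the derivation above
```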
| {
"language": "en",
"url": "https://mathoverflow.net/questions/15180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 0
} |
Is there any finitely-long sequence of digits which is not found in the digits of pi? I know it's likely that, given a finite sequence of digits, one can find that sequence in the digits of pi, but is there a proof that this is possible for all finite sequences? Or is it just very probable?
| This is an expansion of $\pi$ in the base-16 (hexadecimal) numeral system:
$$\pi = \sum_{k = 0}^{\infty}\frac{1}{16^k} \left( \frac{4}{8k + 1} - \frac{2}{8k + 4} - \frac{1}{8k + 5} - \frac{1}{8k + 6} \right)$$
So to get the $k$-th digit you take one term and account for a possible carry from the neighboring digits.
Thus, to find the position at which your arbitrary sequence starts, you would have to solve a system of equations for the particular digits:
$$\frac{1}{16^k} \left( \frac{4}{8k + 1} - \frac{2}{8k + 4} - \frac{1}{8k + 5} - \frac{1}{8k + 6}\right)=a_1$$
$$\frac{1}{16^{k+1}} \left( \frac{4}{8(k+1) + 1} - \frac{2}{8(k+1) + 4} - \frac{1}{8(k+1) + 5} - \frac{1}{8(k+1) + 6}\right)=a_2$$
$$\frac{1}{16^{k+2}} \left( \frac{4}{8(k+2) + 1} - \frac{2}{8(k+2) + 4} - \frac{1}{8(k+2) + 5} - \frac{1}{8(k+2) + 6}\right)=a_3$$
etc. The numbers $a_k$ are unique for any sequence you are searching for.
If the system has no solution, it is likely that your sequence does not appear in the sequence of digits of Pi.
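For reference, the way this series is used in practice is the BBP digit-extraction algorithm, which computes hexadecimal digits of $\pi$ at a given position without computing the earlier ones; a standard sketch in Python (my own transcription):

```python
def bbp_sum(j, n):
    # fractional part of sum_k 16^(n-k)/(8k + j)
    s = 0.0
    for k in range(n + 1):                       # head: modular exponentiation keeps numbers small
        s = (s + pow(16, n - k, 8*k + j)/(8*k + j)) % 1.0
    k, term = n + 1, 1.0
    while term > 1e-17:                          # rapidly convergent tail
        term = 16.0**(n - k)/(8*k + j)
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(start, count=8):
    out = ""
    for n in range(start, start + count):
        x = (4*bbp_sum(1, n) - 2*bbp_sum(4, n) - bbp_sum(5, n) - bbp_sum(6, n)) % 1.0
        out += "0123456789abcdef"[int(16*x)]
    return out

print(pi_hex_digits(0))   # 243f6a88  (pi = 3.243f6a88... in hexadecimal)
```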
| {
"language": "en",
"url": "https://mathoverflow.net/questions/18375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Eigenvectors of a certain big upper triangular matrix I'm looking at this matrix:
$$
\begin{pmatrix}
1 & 1/2 & 1/8 & 1/48 & 1/384 & \dots \\
0 & 1/2 & 1/4 & 1/16 & 1/96 & \dots \\
0 & 0 & 1/8 & 1/16 & 1/64 & \dots \\
0 & 0 & 0 & 1/48 & 1/96 & \dots \\
0 & 0 & 0 & 0 & 1/384 & \dots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
$$
The first row contains the reciprocals of the double factorials
$$
2, \qquad 2 \cdot 4, \qquad 2 \cdot 4 \cdot 6, \qquad 2 \cdot 4 \cdot 6 \cdot 8, \qquad \dots
$$
Each row is a shift of a scalar multiple of the first row, and the scalar multiple is in each case itself a reciprocal of a double factorial, so that the main diagonal is the same as the first row. A consequence is that each column is proportional to the corresponding row of Pascal's triangle. E.g. the last column shown is proportional to
$$
1, 4, 6, 4, 1.
$$
This matrix is the matrix of coefficients in the "inversion formulas" section of this rant that I wrote.
I found the first three eigenvectors:
$$
\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ \vdots \end{pmatrix},
\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \\ 0 \\ \vdots \end{pmatrix},
\begin{pmatrix} 5 \\ -14 \\ 21 \\ 0 \\ 0 \\ \vdots \end{pmatrix}
$$
Meni Rosenfeld pushed this through some software and found that up to the 40th eigenvalue, the signs of the components of the eigenvectors alternate.
Can anything of interest be said about the eigenvectors?
Can anything of interest be said about this matrix?
| Might be a wild intuition, but I'd say the eigenvalues are the entries of the first row, and that the eigenvector corresponding to the $n$th eigenvalue is obtained by appending zeroes to the eigenvector for the same eigenvalue of the leading $n\times n$ minor of the matrix.
Example:
For eigenvalue $1$ we take the matrix $\begin{pmatrix} 1 \end{pmatrix}$; the eigenvector corresponding to $1$ is $\begin{pmatrix} 1 \end{pmatrix}$, so we obtain $\begin{pmatrix} 1 \cr 0 \cr 0 \cr 0 \cr 0 \cr \vdots \end{pmatrix}$ as the first eigenvector.
For eigenvalue $1/2$ we take the matrix $\begin{pmatrix} 1& 1/2 \cr 0 & 1/2\end{pmatrix}$; the eigenvector corresponding to $1/2$ is $\begin{pmatrix} 1 \cr -1\end{pmatrix}$, so we obtain $\begin{pmatrix} 1 \cr -1 \cr 0 \cr 0 \cr 0 \cr \vdots \end{pmatrix}$ as the second eigenvector.
The eigenvector corresponding to $1/8$ for $\begin{pmatrix} 1& 1/2 & 1/8 \cr 0 & 1/2 & 1/4 \cr 0 & 0 & 1/8 \end{pmatrix}$ is $\begin{pmatrix} 5 \cr -14 \cr 21 \end{pmatrix}$; you get the idea. Also, the eigenvectors span the entire space, i.e. if a possibly infinite (but convergent) sum of eigenvectors is $\vec 0$ then the coefficients of those vectors are $0$.
Here is an explicit formula for the eigenvectors: first select $M_n$, the $n\times n$ truncation of the matrix, and calculate $M_n - v_n I$, where $v_n$ is the $n$th eigenvalue. Example:
for $n=3$, we obtain
$$\begin{pmatrix} 7/8 & 1/2 & 1/8 \cr 0 & 3/8 & 1/4 \cr 0 & 0 & 0 \end{pmatrix}.$$ Now let $S$ be the $(n-1)\times(n-1)$ truncation of that, i.e. $\begin{pmatrix} 7/8 & 1/2 \cr 0 & 3/8 \end{pmatrix}$.
Calculate $S^{-1} = \begin{pmatrix} 8/7 & -32/21 \cr 0 & 8/3 \end{pmatrix}$, and multiply $S^{-1}$ by the truncation of the last column of $M_n$, $\begin{pmatrix} 1/8 \cr 1/4 \end{pmatrix}$.
You obtain $$\begin{pmatrix} -5/21 \\ 2/3 \end{pmatrix}.$$ Concatenating $-1$ to that, you obtain $$\begin{pmatrix} -5/21 \\ 2/3 \\ -1 \end{pmatrix},$$ the third eigenvector, or the $n$th eigenvector in the general case.
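An exact-arithmetic sketch (mine) of this recipe: build the $N\times N$ truncation, construct the eigenvectors as described, and verify the eigenvalue equation and the sign alternation mentioned in the question.

```python
from fractions import Fraction
from math import comb, factorial

def M(N):
    # entry (i, j) is C(j, i) / (2^j * j!), which matches the displayed matrix
    return [[Fraction(comb(j, i), 2**j*factorial(j)) if j >= i else Fraction(0)
             for j in range(N)] for i in range(N)]

def eigvec(n, N):
    # the recipe above: fix the n-th entry to -1, solve the upper (n-1)x(n-1) system, pad with zeros
    A, lam = M(n), Fraction(1, 2**(n - 1)*factorial(n - 1))
    S = [[A[i][j] - (lam if i == j else 0) for j in range(n - 1)] for i in range(n - 1)]
    rhs = [A[i][n - 1] for i in range(n - 1)]       # with last entry -1, the last column moves to the RHS
    v = [Fraction(0)]*(n - 1)
    for i in reversed(range(n - 1)):                 # back substitution (S is upper triangular)
        v[i] = (rhs[i] - sum(S[i][j]*v[j] for j in range(i + 1, n - 1)))/S[i][i]
    return v + [Fraction(-1)] + [Fraction(0)]*(N - n)

N, A = 8, M(8)
for n in range(1, N + 1):
    v, lam = eigvec(n, N), Fraction(1, 2**(n - 1)*factorial(n - 1))
    eigen_ok = all(sum(A[i][j]*v[j] for j in range(N)) == lam*v[i] for i in range(N))
    alternating = all(v[i]*v[i + 1] < 0 for i in range(n - 1))
    print(n, eigen_ok, alternating)
```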
| {
"language": "en",
"url": "https://mathoverflow.net/questions/26389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Trigonometric identity needed for sums involving secants I am looking for a closed-form formula for the following sum:
$\displaystyle \sum_{k=0}^{N}{\frac{\sin^{2}(\frac{k\pi}{N})}{a \cdot \sin^{2}(\frac{k\pi}{N})+1}}=\sum_{k=0}^{N}{\frac{1}{a+\csc^{2}(\frac{k\pi}{N})}}$.
Is such a formula known?
| Two other references to similar sums are
Bruce C. Berndt and Boon Pin Yeap, Explicit evaluations and reciprocity theorems for finite trigonometric sums, Advances in Applied Mathematics
Volume 29, Issue 3, October 2002, Pages 358--385
and
Ira Gessel, Generating Functions and Generalized Dedekind Sums, Electronic J. Combinatorics,
Volume 4, Issue 2 (1997) (The Wilf Festschrift volume), R11.
The paper of Berndt and Yeap uses contour integration and has an extensive list of references. My paper uses elementary methods, including partial fractions.
Here are the details of the partial fraction approach:
First we convert the trigonometric sum to a sum over roots of unity.
Let $\eta_k=e^{k\pi i /N}$ and let $\zeta_k=\eta_k^2 = e^{2k\pi i/N}$.
Then
\begin{equation*}
\csc^2(k\pi/N) = \left(\frac{2i}{\eta_k -\eta_k^{-1}}\right)^2
=\frac{-4\eta_k^2}{(\eta_k^2-1)^2}
=\frac{-4\zeta_k}{(\zeta_k-1)^2}.
\end{equation*}
Thus (since the summand vanishes for $k=0$) the sum is
\begin{equation*}
\sum_{\zeta^N=1} \frac{1} {a-4\zeta/(\zeta-1)^2}
=\sum_{\zeta^N=1} \frac{(\zeta-1)^2}{a(\zeta-1)^2 - 4\zeta}.
\end{equation*}
To apply the partial fraction method, we need to find the partial fraction expansion of
\begin{equation*}
F(z)=\frac{(z-1)^2}{a(z-1)^2 - 4z}
\end{equation*}
Factoring the denominator shows that we can simplify things if we make the substitution
$a=4c/(c-1)^2$, so that
\begin{equation*}
c = \frac{a+2+2\sqrt{a+1}}{a}.
\end{equation*}
Then we have
\begin{equation*}
F(z) =\frac{(c-1)^2}{4c} +\frac{(c-1)^3}{4(c+1)}\left(\frac{1}{z-c} -\frac{1}{c(cz-1)}\right)
\end{equation*}
We have
\begin{equation*}
\sum_{\zeta^N=1} (\zeta-c)^{-1} = - \frac{Nc^{N-1}}{c^N-1}
\end{equation*}
and
\begin{equation*}
\sum_{\zeta^N=1} (c\zeta-1)^{-1} = \frac{N}{c^N-1}
\end{equation*}
So the sum is
\begin{equation*}
\sum_{\zeta^N=1} F(\zeta) =
N\frac{(c-1)^2}{4c} \left(1-\frac{(c-1)}{(c+1)}\frac{(c^N+1)}{(c^N-1)}\right).
\end{equation*}
where $c=(a+2+2\sqrt{a+1})/a$.
In terms of $a$, we can simplify this a little to
\begin{equation*}
\frac{N}{a} \left(1-\frac{1}{\sqrt{a+1}}\frac{(c^N+1)}{(c^N-1)}\right).
\end{equation*}
If you really want an expression which is rational in $a$, it's possible to write this as a quotient of polynomials in $a$ that are given by generating functions.
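A quick numerical comparison (mine) of the final closed form with the original sum:

```python
from math import sin, pi, sqrt

def direct(a, N):
    return sum(sin(k*pi/N)**2/(a*sin(k*pi/N)**2 + 1) for k in range(N + 1))

def closed(a, N):
    c = (a + 2 + 2*sqrt(a + 1))/a
    return (N/a)*(1 - (c**N + 1)/((c**N - 1)*sqrt(a + 1)))

for a, N in [(0.5, 7), (2.0, 10), (3.7, 25)]:
    print(a, N, direct(a, N), closed(a, N))   # the two columns agree
```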
| {
"language": "en",
"url": "https://mathoverflow.net/questions/122231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is $\lceil \frac{n}{\sqrt{3}} \rceil > \frac{n^2}{\sqrt{3n^2-5}}$ for all $n > 1$? An equivalent inequality for integers follows:
$$(3n^2-5)\left\lceil n/\sqrt{3} \right\rceil^2 > n^4.$$
This has been checked for n = 2 to 60000. Perhaps there is some connection to the convergents to $\sqrt{3}$.
$\lceil \frac{n}{\sqrt{3}} \rceil > \frac{n^2}{\sqrt{3n^2-5}}$
| The inequality is true for $n\geq 2$. Let $m:=\lceil n/\sqrt{3}\rceil$, then
$$ m^2-\frac{n^2}{3} = \frac{3m^2-n^2}{3} \geq \frac{2}{3}, $$
because $3m^2-n^2>0$ and $3m^2-n^2\not\equiv 1\pmod{3}$. Hence the inequality follows from
$$ \frac{2}{3}>\frac{n^4}{3n^2-5}-\frac{n^2}{3}=\frac{5n^2}{3(3n^2-5)}. $$
The latter is equivalent to $n^2>10$, hence we have a proof for $n\geq 4$. The remaining cases $n=2$ and $n=3$ can be checked by hand.
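An exact integer check (mine) of the equivalent inequality, along the lines of the verification mentioned in the question:

```python
from math import isqrt

def check(n):
    m = isqrt(n*n // 3)
    while 3*m*m < n*n:                   # exact computation of m = ceil(n/sqrt(3))
        m += 1
    return (3*n*n - 5)*m*m > n**4        # the equivalent integer inequality

print(all(check(n) for n in range(2, 100001)))   # True
```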
| {
"language": "en",
"url": "https://mathoverflow.net/questions/186419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Nontrivial solutions for $\sum x_i = \sum x_i^3 = 0$ For $x_i \in \mathbb{Z}$, let $\{x_i\}$ be a fundamental solution to the equations:
$$
\sum_{i= 1}^N x_i = \sum_{i=1}^N x_i^3 = 0
$$
if $x \in \{x_i\} \Rightarrow -x \notin \{x_i\}$.
For instance, a fundamental solution with $N=7$ is given by
$$
x_1 = 4, \quad x_2 = x_3 = x_4 = -3, \quad x_5 = x_6 = 2, \quad x_7 = 1
$$
What is the minimum $N$ for which a fundamental solution exists?
| We can get $N=6$ from $1+5+5=2+3+6$, $1^3+5^3+5^3=2^3+3^3+6^3$.
We can get $N=5$ from $2+4+10=7+9$, $2^3+4^3+10^3=7^3+9^3$.
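A small Python check (mine) of the two quoted solutions, together with a bounded search (not a proof) suggesting that nothing fundamental exists for $N\le 4$:

```python
from itertools import combinations_with_replacement

def fundamental(xs):
    return (sum(xs) == 0 and sum(x**3 for x in xs) == 0
            and 0 not in xs and all(-x not in xs for x in xs))

# the two quoted solutions, written as multisets with signs
print(fundamental((1, 5, 5, -2, -3, -6)), fundamental((2, 4, 10, -7, -9)))   # True True

# bounded search over entries in [-20, 20]: nothing fundamental shows up for N = 2, 3, 4
vals = [v for v in range(-20, 21) if v != 0]
print([xs for N in (2, 3, 4)
       for xs in combinations_with_replacement(vals, N) if fundamental(xs)])   # []
```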
| {
"language": "en",
"url": "https://mathoverflow.net/questions/203453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
Realization of numbers as a sum of three squares via right-angled tetrahedra De Gua's theorem
is a $3$-dimensional analog of the Pythagorean theorem:
The square of the area of the diagonal face of a right-angled tetrahedron
is the sum of the squares of the areas of the other three faces.
For certain tetrahedra, this provides a representation of
an integer $n$ as the sum of three integer squares.
Let the tetrahedron have vertices at
\begin{eqnarray}
& (0,0,0)\\
& (a,0,0)\\
& (0,b,0)\\
& (0,0,c)
\end{eqnarray}
If $a,b,c$ are integers, at least two of which are even,
then the squared areas of the three triangles incident to the origin
are each integer squares, and so "represent" $n=A^2$, the diagonal-face area squared.
Example.
Let $a,b,c$ be $2,3,4$ respectively.
The diagonal face-area squared is
\begin{eqnarray}
A^2 & = & \left[ (2 \cdot 3)^2 + (3 \cdot 4)^2 + (4 \cdot 2)^2 \right] \,/\, 4\\
& = & (36 + 144 +64) \,/\, 4 \\
& = & 9 + 36 + 16\\
& = & 61\\
A & = & \sqrt{61} \;.
\end{eqnarray}
So here, $61$ is represented as the sum of three squares: $9+36+16$.
Let $N_T(n)$ be the number of integers $\le n$ that can be represented
as a sum of three squares derived from deGua's tetrahedron theorem,
as above. Call these tetra-realized.
Let $N_L(n)$ be the number of integers $\le n$ that can be represented
as a sum of three squares.
$N_L$ is determined by
Legendre's three-square theorem, which
says that $n$ is the sum of three squares except when it is
of the form $n=4^a (8 b + 7)$, $a,b \in \mathbb{N}$.
I would like to know how prevalent is tetra-realization:
Q. What is the ratio of $N_T(n)$ to $N_L(n)$ as $n \to \infty$?
I would also be interested in any characterization of the tetra-realizable
$n$.
| These numbers are not very prevalent and the ratio in question goes to zero. Note first that by Legendre's theorem, a positive proportion of the numbers below $n$ may be expressed as a sum of three squares. Now consider $N_T(n)$. This amounts to counting (with parity restrictions on $x$, $y$, $z$, and all three positive) the number of distinct integers of the form $((xy)^2 + (yz)^2 +(xz)^2)/4$ lying below $n$. So we must have $xy$, $yz$, and $xz$ all lying below $2\sqrt{n}$, which means that
$$
xyz = \sqrt{(xy) (yz)(xz)} \le 2\sqrt{2} n^{\frac{3}{4}}.
$$
So the total number of possibilities for $(x,y,z)$ is bounded by the number of triples with product at most $X=2\sqrt{2}n^{\frac 34}$, and this is
$$
\sum_{xyz\le X} 1 \le \sum_{x,y\le X} \frac{X}{xy} \le X(1+\log X)^2.
$$
Thus, even if these choices for $(x,y,z)$ all led to distinct integers of
the form $(xy)^2+(yz)^2+(xz)^2$, we still would have no more than $Cn^{\frac 34}(\log n)^2$ integers up to $n$ that may be tetra-realized.
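A small empirical sketch (mine) comparing $N_T(n)$ with $N_L(n)$ for moderate $n$, consistent with the ratio tending to zero:

```python
from math import isqrt

def N_T(n):
    # integers m <= n of the form ((ab)^2 + (bc)^2 + (ca)^2)/4 with positive integers
    # a <= b <= c, at least two of them even (so the three leg-face areas are integers)
    vals = set()
    a = 1
    while 3*a**4 <= 4*n:
        b = a
        while 3*(a*b)**2 <= 4*n:
            cmax = isqrt((4*n - (a*b)**2)//(a*a + b*b))
            for c in range(b, cmax + 1):
                if (a % 2) + (b % 2) + (c % 2) <= 1:
                    vals.add(((a*b)**2 + (b*c)**2 + (c*a)**2)//4)
            b += 1
        a += 1
    return len(vals)

def N_L(n):
    def ok(m):                            # Legendre: m is a sum of three squares
        while m % 4 == 0:
            m //= 4
        return m % 8 != 7
    return sum(ok(m) for m in range(1, n + 1))

for n in (10**3, 10**4, 10**5):
    t, l = N_T(n), N_L(n)
    print(n, t, l, round(t/l, 4))
```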
| {
"language": "en",
"url": "https://mathoverflow.net/questions/226068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
Integer solutions of (x+1)(xy+1)=z^3 Consider the equation
$$(x+1)(xy+1)=z^3,$$
where $x,y$ and $z$ are positive integers with $x$ and $y$ both at least $2$ (and so $z$ is necessarily at least $3$). For every $z\geq 3$, there exists the solution
$$x=z-1 \quad \text{and} \quad y=z+1.$$
My question is, if one imposes the constraint that
$$x^{1/2} \leq y \leq x^2 \leq y^4,$$
can there be any other integer solutions (with $x,y \geq 2$)?
Moreover, $z$ may be assumed for my purposes to be even with at least three prime divisors.
Thank you in advance for any advice.
Edit: there was a typo in my bounds relating $x$ and $y$.
| There are other solutions. Try $x+1=a^3$, $xy+1=b^3$, where $b$ is chosen so that $b^2+b+1$ is divisible by $a^3-1$. Such a $b$ can always be chosen provided that $a-1$ is divisible neither by $3$ nor by any prime of the form $3k-1$. Moreover, $b$ may be replaced by its remainder modulo $a^3-1=x$; in this case $xy<xy+1=b^3<x^3$, thus $y<x^2$. If $b<a^3/2-1$, replace $b$ by $a^3-b-2$, which makes $y$ of order at least $x^2/2+O(x)$.
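A short sketch of this construction (mine, illustrative only): for $a=2$ it merely recovers the trivial family $x=z-1$, $y=z+1$, while $a=8$ (where $a-1=7$ satisfies the stated condition) produces non-trivial solutions within the required range.

```python
def solutions_from(a):
    x, out = a**3 - 1, []
    for b in range(2, a**3 - 1):
        if (b*b + b + 1) % x == 0:                      # then x divides b^3 - 1
            y, z = (b**3 - 1)//x, a*b
            if y >= 2 and (x + 1)*(x*y + 1) == z**3:
                out.append((x, y, z))
    return out

for a in (2, 8):
    for (x, y, z) in solutions_from(a):
        print(a, (x, y, z), "nontrivial:", x != z - 1,
              "constraints:", x**0.5 <= y <= x**2 <= y**4)
```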
| {
"language": "en",
"url": "https://mathoverflow.net/questions/235539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
A monotonicity proof of a sequence summation I have the following problem:
I conjecture that the following sum is monotonically increasing in $N$ and tends to $1$ as $N$ tends to $\infty$:
$$\sum_{k=0}^{2N+1}C_{2N+1}^{k}\pi^{k}(1-\pi)^{2N+1-k}\frac{(1-q)\pi^{k}(1-\pi)^{2N+1-k}}{(1-q)\pi^{k}(1-\pi)^{2N+1-k}+q(1-\pi)^{k}\pi^{2N+1-k}}$$
where $0<\pi<0.5,0<q<0.5$ and q is a certain prior probability.
This expression arises as an expectation in my problem; the fraction comes from Bayes' formula.
Matlab simulations suggest the conjecture is right, but how can it be proved?
| Let $0<b:=\frac{q}{1-q}<1$ and $0<x:=\frac{\pi}{1-\pi}<1$. So, your sum now takes the form
$$B(N):=\sum_{k=0}^{2N+1}\binom{2N+1}k\frac{\pi^k(1-\pi)^{2N+1-k}}{1+b\cdot x^{2N+1-2k}}.$$
We show monotonicity: $B(N+1)-B(N) > 0$.
Applying Pascal's recurrence twice, we get $\binom{2N+3}k=\binom{2N+1}k+2\binom{2N+1}{k-1}+\binom{2N+1}{k-2}$. After shifting indices, in the last two summations, we gather that
\begin{align} B(N+1)
&=(1-\pi)^2\sum_{k=0}^{2N+1}\binom{2N+1}k\frac{\pi^k(1-\pi)^{2N+1-k}}{1+b\cdot x^2\cdot x^{2N+1-2k}} \\
&+2\pi(1-\pi)\sum_{k=0}^{2N+1}\binom{2N+1}k\frac{\pi^k(1-\pi)^{2N+1-k}}{1+b\cdot x^{2N+1-2k}} \\
&+\pi^2\sum_{k=0}^{2N+1}\binom{2N+1}k\frac{\pi^k(1-\pi)^{2N+1-k}}{1+b\cdot x^{-2}\cdot x^{2N+1-2k}}.
\end{align}
Letting $y:=x^{2N+1-2k}$, the difference $B(N+1)-B(N)$ equals
\begin{align}
&\sum_{k=0}^{2N+1}\binom{2N+1}k\pi^k(1-\pi)^{2N+1-k}\left[
\frac{(1-\pi)^2}{1+bx^2y}+\frac{2\pi(1-\pi)-1}{1+by}+\frac{\pi^2}{1+bx^{-2}y}\right] \\
=&\sum_{k=0}^{2N+1}\binom{2N+1}k\frac{\pi^k(1-\pi)^{2N+1-k}}{(1+x)^2}\left[
\frac1{1+bx^2y}-\frac{1+x^2}{1+by}+\frac{x^2}{1+bx^{-2}y}\right] \\
&=\sum_{k=0}^{2N+1}\binom{2N+1}k\frac{\pi^k(1-\pi)^{2N+1-k}}{(1+x)^2}\left[
\frac{b^2y^2(1+x^2)(1-x^2)^2}{(1+bx^2y)(1+by)(x^2+by)}\right]>0.
\end{align}
We've proven that each quantity inside $[\cdots]$ is term-wise positive, which is stronger than saying the sum $B(N+1)-B(N)>0$.
Since $1+b\cdot x^{2N+1-2k}>1$, it is clear that your sequence
$$B(N)<\sum_{k=0}^{2N+1}\binom{2N+1}k\pi^k(1-\pi)^{2N+1-k}=(\pi+1-\pi)^{2N+1}=1.$$
So, $B(N)$ is increasing and bounded above; hence by the Monotone Convergence Theorem, this sequence has a limit on the real line $\mathbb{R}$. The analysis for $B(N)\rightarrow1$ has already been discussed by Antony Quas. However, in case you wish to see some clues, notice you can estimate the term
$$\frac1{1+b\cdot x^{2N+1-2k}}=\frac{x^{2k}}{x^{2k}+b\cdot x^{2N+1}}\sim
\frac{x^{2k}}{x^{2k}}=1$$
due to the fact that $0<x<1$ and $x^{2N+1}\rightarrow0$ as $N\rightarrow\infty$. Therefore, for large $N$ one can estimate $B(N)$ as
\begin{align} B(N)
&=\sum_{k=0}^{2N+1}\binom{2N+1}k\pi^k(1-\pi)^{2N+1-k}\frac{x^{2k}}{x^{2k}+b\cdot x^{2N+1}} \\
& \sim \sum_{k=0}^{2N+1}\binom{2N+1}k\pi^k(1-\pi)^{2N+1-k} \\
&=(\pi+1-\pi)^{2N+1}=1.
\end{align}
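A quick numerical sanity check (mine) of the monotonicity and of the limit $1$:

```python
from math import comb

def B(N, p, q):
    b, x = q/(1 - q), p/(1 - p)
    return sum(comb(2*N + 1, k)*p**k*(1 - p)**(2*N + 1 - k)/(1 + b*x**(2*N + 1 - 2*k))
               for k in range(2*N + 2))

p, q = 0.3, 0.2
vals = [B(N, p, q) for N in range(1, 60, 10)]
print(vals)                                            # increasing, approaching 1
print(all(u < v for u, v in zip(vals, vals[1:])))      # True
```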
| {
"language": "en",
"url": "https://mathoverflow.net/questions/257763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The average of reciprocal binomials This question is motivated by the MO problem here. Perhaps it is not that difficult.
Question. Here is a cute formula.
$$\frac1n\sum_{k=0}^{n-1}\frac1{\binom{n-1}k}=\sum_{k=1}^n\frac1{k2^{n-k}}.$$
I've one justification along the lines of Wilf-Zeilberger (see below). Can you provide an alternative proof? Or, any reference?
The claim amounts to $a_n=b_n$ where
$$a_n:=\sum_{k=0}^{n-1}\frac{2^n}{n\binom{n-1}k} \qquad \text{and} \qquad
b_n:=\sum_{k=1}^n\frac{2^k}k.$$
Define $F(n,k):=\frac{2^n}{n\binom{n-1}k}$ and $\,G(n,k)=-\frac{2^n}{(n+1)\binom{n}k}$. Then, it is routinely checked that
$$F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k),\tag1$$
for instance by dividing through with $F(n,k)$ and simplifying. Summing (1) over $0\leq k\leq n-1$:
\begin{align}
\sum_{k=0}^{n-1}F(n+1,k)-\sum_{k=0}^{n-1}F(n,k)
&=a_{n+1}-\frac{2^{n+1}}{n+1}-a_n, \\
\sum_{k=0}^{n-1}G(n,k+1)-\sum_{k=0}^{n-1}G(n,k)
&=-\sum_{k=1}^n\frac{2^n}{(n+1)\binom{n}k}+\sum_{k=0}^{n-1}\frac{2^n}{(n+1)\binom{n}k}=0. \end{align}
Therefore, $a_{n+1}-a_n=\frac{2^{n+1}}{n+1}$. But, it is evident that $b_{n+1}-b_n=\frac{2^{n+1}}{n+1}$. Since $a_1=b_1$, it follows $a_n=b_n$ for all $n\in\mathbb{N}$.
=====================================
Here is an alternative approach to Fedor's answer below in his elaboration of Fry's comment.
With $\frac1{(n+1)\binom{n}k}=\int_0^1x^{n-k}(1-x)^kdx$, we get $a_{n+1}=2^{n+1}\int_0^1\sum_{k=0}^nx^{n-k}(1-x)^kdx$. So,
\begin{align}
2^{n+1}\int_0^1 dx\sum_{k=0}^nx^{n-k}(1-x)^k
&=2^{n+1}\int_0^1x^ndx\sum_{k=0}^n\left(\frac{1-x}x\right)^k \\
&=2^{n+1}\int_0^1x^n\frac{\left(\frac{1-x}x\right)^{n+1}-1}{\frac{1-x}x-1}dx \\
&=\int_0^1\frac{(2-2x)^{n+1}-(2x)^{n+1}}{1-2x}\,dx:=c_{n+1}.
\end{align}
Let's take successive difference of the newly-minted sequence $c_{n+1}$:
\begin{align}
c_{n+1}-c_n
&=\int_0^1\frac{(2-2x)^{n+1}-(2-2x)^n+(2x)^n-(2x)^{n+1}}{1-2x}\,dx \\
&=\int_0^1\left[(2-2x)^n+(2x)^n\right]dx=2^{n+1}\int_0^1x^ndx=\frac{2^{n+1}}{n+1}. \end{align}
But, $b_{n+1}-b_n=\frac{2^{n+1}}{n+1}$ and hence $a_n=b_n$.
| As in the question
$$a_n:=\sum_{k=0}^{n-1}\frac{2^n}{n\binom{n-1}k} \qquad \text{and} \qquad
b_n:=\sum_{k=1}^n\frac{2^k}k.$$
It is clear that $a_1=b_1$ and $b_{n+1}-b_n=\frac{2^{n+1}}{n+1}$. But we have the same recursive relation for $a_n$ because
\begin{align}
a_n&=2^n\sum_{k=0}^{n-1}\frac{k!(n-1-k)!(k+1+n-k)}{n!(n+1)} \\
&=2^n\sum_{k=0}^{n-1}\left(\frac{(k+1)!(n-1-k)!}{(n+1)!}+\frac{k!(n-k)!}{(n+1)!}\right) \\
&=2^n\left(2\sum_{k=0}^{n}\frac{k!(n-k)!}{(n+1)!}-\frac{2}{n+1}\right)=a_{n+1}-\frac{2^{n+1}}{n+1}.
\end{align}
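A quick exact check of the identity (my own, using exact rationals):

```python
from fractions import Fraction
from math import comb

lhs = lambda n: Fraction(1, n)*sum(Fraction(1, comb(n - 1, k)) for k in range(n))
rhs = lambda n: sum(Fraction(1, k*2**(n - k)) for k in range(1, n + 1))
print(all(lhs(n) == rhs(n) for n in range(1, 41)))   # True
```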
| {
"language": "en",
"url": "https://mathoverflow.net/questions/262578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Is there an explicit expression for Chebyshev polynomials modulo $x^r-1$? This is an immediate successor of Chebyshev polynomials of the first kind and primality testing and does not have any other motivation - although original motivation seems to be huge since a positive answer (if not too complicated) would give a very efficient primality test (see the linked question for details).
Recall that the Chebyshev polynomials $T_n(x)$ are defined by $T_0(x)=1$, $T_1(x)=x$ and $T_{n+1}(x)=2xT_n(x)-T_{n-1}(x)$, and there are several explicit expressions for their coefficients. Rather than writing them down (you can find them at the Wikipedia link anyway), let me just give a couple of examples:
$$
T_{15}(x)=-15x(1-4\frac{7\cdot8}{2\cdot3}x^2+4^2\frac{6\cdot7\cdot8\cdot9}{2\cdot3\cdot4\cdot5}x^4-4^3\frac{8\cdot9\cdot10}{2\cdot3\cdot4}x^6+4^4\frac{10\cdot11}{2\cdot3}x^8-4^5\frac{12}{2}x^{10}+4^6x^{12})+4^7x^{15}
$$
$$
T_{17}(x)=17x(1-4\frac{8\cdot9}{2\cdot3}x^2+4^2\frac{7\cdot8\cdot9\cdot10}{2\cdot3\cdot4\cdot5}x^4-4^3\frac{8\cdot9\cdot10\cdot11}{2\cdot3\cdot4\cdot5}x^6+4^4\frac{10\cdot11\cdot12}{2\cdot3\cdot4}x^8-4^5\frac{12\cdot13}{2\cdot3}x^{10}+4^6\frac{14}{2}x^{12}-4^7x^{14})+4^8x^{17}
$$
It seems that $n$ is a prime if and only if all the ratios in the parentheses are integers; this is most likely well known and easy to show.
The algorithm described in the above question requires determining whether, for an odd $n$, coefficients of the remainder from dividing $T_n(x)-x^n$ by $x^r-1$, for some fairly small prime $r$ (roughly $\sim\log n$) are all divisible by $n$. In other words, denoting by $a_j$, $j=0,1,2,...$ the coefficients of $T_n(x)-x^n$, we have to find out whether the sum $s_j:=a_j+a_{j+r}+a_{j+2r}+...$ is divisible by $n$ for each $j=0,1,...,r-1$.
The question then is: given $r$ and $n$ as above ($n$ odd, $r$ a prime much smaller than $n$), is there an efficient method to find these sums $s_j$ without calculating all $a_j$? I. e., can one compute $T_n(x)$ modulo $x^r-1$ (i. e. in a ring where $x^r=1$) essentially easier than first computing the whole $T_n(x)$ and then dividing by $x^r-1$ in the ring of polynomials?
(As already said, only the question of divisibility of the result by $n$ is required; also $r$ is explicitly given (it is the smallest prime with $n$ not $\pm1$ modulo $r$). This might be easier to answer than computing the whole polynomials mod $x^r-1$.)
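To make the object concrete, here is a direct computation of $T_n(x)$ modulo $(x^r-1,\,n)$ via the three-term recurrence; this is the straightforward approach one would like to beat, not the asked-for shortcut (a Python sketch, names mine). For prime $n$ the criterion holds because $T_p(x)\equiv x^p \pmod p$; whether a suitable $r$ makes the converse true is exactly the conjecture discussed.

```python
def chebyshev_mod(n, r, m):
    # coefficients of T_n(x) modulo (x^r - 1), with integer coefficients reduced mod m (r >= 2)
    prev, cur = [1] + [0]*(r - 1), [0, 1] + [0]*(r - 2)     # T_0 = 1, T_1 = x
    for _ in range(n - 1):                                   # T_{k+1} = 2x*T_k - T_{k-1}
        nxt = [(-p) % m for p in prev]
        for j in range(r):
            nxt[(j + 1) % r] = (nxt[(j + 1) % r] + 2*cur[j]) % m
        prev, cur = cur, nxt
    return cur if n >= 1 else prev

def criterion(n, r):
    # are all coefficients of T_n(x) - x^n divisible by n, working modulo x^r - 1?
    c = chebyshev_mod(n, r, n)
    c[n % r] = (c[n % r] - 1) % n
    return all(v == 0 for v in c)

print([(n, criterion(n, 5)) for n in range(3, 40, 2)])
```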
| The coefficient of $x^j$ in $(T_n(x)\bmod (x^r-1))$ equals the coefficient of $t^{n+r-j-1}$ in
$$\frac{(1+t^2)^{r-j}}{2^{r-j}} \frac{((1+t^2)^{r-1}t - 2^{r-1}t^{r-1})}{((1+t^2)^r - 2^rt^r)}.$$
This coefficient can be explicitly computed as
$$\sum_{k\geq 0} 2^{rk-r+j} \left( \binom{r-1-j-rk}{\frac{n+r-j-2-rk}{2}} - 2^{r-1}\binom{-j-rk}{\frac{n-j-rk}{2}}\right).$$
(here the binomial coefficients are zero whenever their lower indices are noninteger or negative)
| {
"language": "en",
"url": "https://mathoverflow.net/questions/286626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 0
} |
Polynomials for which $f''$ divides $f$ Let $n \geq 2$ and let $a < b$ be real numbers. Then it is easy to see that there is a unique up to scale polynomial $f(x)$ of degree $n$ such that
$$f(x) = \frac{(x-a)(x-b)}{n(n-1)} f''(x).$$
* Have these polynomials been studied? Do they have a standard name?
* Is it true that all the roots of $f(x)$ are real numbers in $[a,b]$?
Here are the polynomials $f$, with $a=-1$ and $b=1$, for $2 \leq n \leq 10$:
-1 + x^2,
-x + x^3,
1/5 - (6 x^2)/5 + x^4,
(3 x)/7 - (10 x^3)/7 + x^5,
-(1/21) + (5 x^2)/7 - (5 x^4)/3 + x^6,
-((5 x)/33) + (35 x^3)/33 - (21 x^5)/11 + x^7,
5/429 - (140 x^2)/429 + (210 x^4)/143 - (28 x^6)/13 + x^8,
(7 x)/143 - (84 x^3)/143 + (126 x^5)/65 - (12 x^7)/5 + x^9,
-(7/2431) + (315 x^2)/2431 - (210 x^4)/221 + (42 x^6)/17 - (45 x^8)/17 + x^10
| Recording a CW-answer to take this off the unanswered list. The Gegenbauer polynomials are defined by the differential equation
$$(1-x^2) g'' - (2 \alpha+1) x g' +n(n+2 \alpha) g =0.$$
Putting $\alpha = -1/2$, we get
$$g=\frac{(x+1)(x-1)}{n(n-1)} g''.$$
If we want some other values for $a$ and $b$, we can put $f(x) = g(\ell(x))$ where $\ell$ is the affine linear function with $\ell(a) = -1$ and $\ell(b)=1$.
The Gegenbauer polynomials are orthogonal with respect to the weight $(1-x^2)^{\alpha - 1/2}$, so $(1-x^2)^{-1}$ in our case, so standard results on orthogonal polynomials tell us that the roots are in $[-1,1]$. If you are worried about the poles at $x = \pm 1$, then put $g_n(x) = (1-x^2) h_n(x)$, and the $h$'s are orthogonal with respect to $(1-x^2)$.
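A small sympy check (mine) of the differential equation and of the location of the roots for a few of the polynomials listed in the question:

```python
from sympy import symbols, simplify, Poly, real_roots, Rational

x = symbols('x')
polys = {
    2: x**2 - 1,
    4: x**4 - Rational(6, 5)*x**2 + Rational(1, 5),
    7: x**7 - Rational(21, 11)*x**5 + Rational(35, 33)*x**3 - Rational(5, 33)*x,
}
for n, f in polys.items():
    ode_ok = simplify(f - (x - 1)*(x + 1)*f.diff(x, 2)/(n*(n - 1))) == 0
    roots = real_roots(Poly(f, x))
    in_interval = len(roots) == n and all(-1 - 1e-9 <= float(r) <= 1 + 1e-9 for r in roots)
    print(n, ode_ok, in_interval)    # True True for each n
```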
| {
"language": "en",
"url": "https://mathoverflow.net/questions/292703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Elliptic-type integral with nested radical Let:
$$Q(A,Y) = Y^4-2 Y^2+2 A Y \sqrt{1-Y^2}-A^2+1$$
I’m curious as to whether it’s possible to find a closed-form solution for:
$$I(A)=\int_{Y_1(A)}^{Y_2(A)} \frac{1}{\sqrt{\left(1-Y^2\right) Q(A,Y)}}\,dY\\
= \int_{\sin^{-1}(Y_1(A))}^{\sin^{-1}(Y_2(A))}\frac{2 \sqrt{2}}{\sqrt{3-8 A^2+8 A \sin (2 \theta )+4 \cos (2 \theta )+\cos (4 \theta )}}\,d\theta$$
where $0 \le A \le \frac{3 \sqrt{3}}{4}$ and $-1\le Y_1(A), Y_2(A) \le 1$ are the real zeroes of $Q(A,Y)$.
I know a re-parameterisation that makes the zeroes a bit less messy than they would be in terms of $A$. If we set:
$$A(a) = \frac{\sqrt{a^3 (a+2)^3}}{2 (a+1)}$$
with $0 \le a \le 1$ then we have:
$$Y_{1,2}(a) = \frac{a^2+a\mp (a+2) \sqrt{1-a^2}}{2 (a+1)}$$
I’m not sure if there is any way to explicitly introduce these known zeroes into the integrand, as one could do if $Q(A,Y)$ were a polynomial. I do have the following relation:
$$Q(A,Y)\,Q(-A,Y)\\
= Q(A,Y)\,Q(A,-Y)\\
=\left(Y^4-2 Y^2+2 A Y \sqrt{1-Y^2}-A^2+1\right)\left(Y^4-2 Y^2-2 A Y \sqrt{1-Y^2}-A^2+1\right)\\
= P(A,Y)\,P(A,-Y)$$
where:
$$P(A,Y) = Y^4+2Y^3-2Y+A^2-1\\
P(A(a),Y)= (Y-Y_1(a))\,(Y-Y_2(a))\,\left(Y^2+(a+2)Y+\frac{a (a+2)^2+2}{2 (a+1)}\right)$$
$P(A,Y)$ has exactly the same real zeroes as $Q(A,Y)$, but has the opposite sign.
Mathematica is unable to explicitly evaluate the definite integral in either form, but it returns an indefinite integral for the trigonometric form:
$$I(A,\theta) =\\
\frac{4 \sqrt{2} \sqrt{\frac{\left(r_1-r_2\right) \left(r_3-\tan (\theta )\right)}{\left(r_1-r_3\right) \left(r_2-\tan (\theta )\right)}}
\left(r_1 \cos (\theta )-\sin (\theta )\right) \left(r_4 \cos (\theta )-\sin (\theta )\right) \times \\F\left(\sin
^{-1}\left(\sqrt{\frac{\left(r_2-r_4\right) \left(r_1-\tan (\theta )\right)}{\left(r_1-r_4\right) \left(r_2-\tan (\theta
)\right)}}\right)|\frac{\left(r_2-r_3\right) \left(r_1-r_4\right)}{\left(r_1-r_3\right) \left(r_2-r_4\right)}\right)}{\left(r_1-r_4\right)
\sqrt{\frac{\left(r_1-r_2\right) \left(r_2-r_4\right) \left(r_1-\tan (\theta )\right) \left(r_4-\tan (\theta
)\right)}{\left(r_1-r_4\right){}^2 \left(r_2-\tan (\theta )\right){}^2}} \sqrt{3-8 A^2+8 A \sin (2 \theta )+4 \cos (2 \theta )+\cos (4 \theta )}}$$
where the $r_i$ are the roots of:
$$R(A,Y) = A^2 Y^4 - 2 A Y^3 + 2 A^2 Y^2 - 2 A Y + A^2 -1$$
which for $A=A(a)$ are:
$$
\begin{array}{rcl}
r_1 &=& \frac{1-(a+1)^{3/2}\sqrt{1-a}}{a^{3/2} \sqrt{a+2}}\\
&=& \tan(\sin^{-1}(Y_1(a)))\\
r_2 &=& \frac{1+(a+1)^{3/2}\sqrt{1-a}}{a^{3/2} \sqrt{a+2}}\\
& =& \tan(\sin^{-1}(Y_2(a)))\\
r_3 &=& \frac{1-i (a+1) \sqrt{a^2+4 a+3}}{\sqrt{a} (a+2)^{3/2}}\\
r_4 &=& \frac{1+i (a+1) \sqrt{a^2+4 a+3}}{\sqrt{a} (a+2)^{3/2}}
\end{array}
$$
This formula is not very helpful in itself without a deeper understanding of how it was obtained; a naive attempt to get the definite integral from it gives imaginary results, probably because of some issue with branch cuts.
But it does suggest that a human who understood the technique that produced it could perform a similar process to obtain an expression for the definite integral in terms of an elliptic integral.
The motivation here is to find a closed-form expression for the probability density function for the area of a triangle whose vertices are chosen uniformly at random from the unit circle, as discussed in this question:
Moments of area of random triangle inscribed in a circle
I believe $Prob(A) = \frac{2}{\pi^2} I(A)$, where $A$ is the area of such a triangle. Numeric integrals for this quantity give the plot below; this goes to infinity at $A=0$, and is finite and non-zero for the maximum value, $A=\frac{3 \sqrt{3}}{4}$.
| If the purpose of this calculation is to test the geometry conjecture, comparing expansions in powers of $A$ or $a$ should be effective. The indefinite integral can readily be evaluated to any order in $A$, but for the definite integral I run into a difficulty. Take the zeroth order term $A=0=a$, when $Y_{1,2}=\pm 1$, $Q=Y^4-2Y^2+1$ and the integral over $Y$ is
$$I(0)=\int_{-1}^1\frac{1}{\sqrt{(1-Y^2)(Y^4-2Y^2+1)}}\,dY,$$
which has a nonintegrable singularity at the end points (the integrand diverges as $(1\pm Y)^{-3/2}$).
I have checked that the small-$a$ asymptotic of $I(a)$ is indeed a $1/\sqrt{a}$ divergence.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/306100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Does the equation $(xy+1)(xy+x+2)=n^2$ have a positive integer solution? Does there exist a positive integral solution $(x, y, n)$ to $(xy+1)(xy+x+2)=n^2$? If there doesn't, how does one prove that?
| $4n^2=4(xy+1)(xy+x+2)=(3+x+2xy)^2-(1+x)^2$.
For positive even $k$, the equation $ka^2+b^2=c^2$ in positive integers $a,b,c$ has the solutions $a=2m$, $b=km^2-1$, $c=km^2+1$, where $m$ is any positive integer.
Then $n=2m$, $1+x=4m^2-1$, $3+x+2xy=4m^2+1$, and then $2m^2=1$, i.e. $m$ is not an integer, a contradiction. This means the equation $n^2=(xy+1)(xy+x+2)$ has no positive integer solution.
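A brute-force scan (mine), consistent with the conclusion that there are no solutions, at least in the searched range:

```python
from math import isqrt

hits = [(x, y) for x in range(1, 1001) for y in range(1, 1001)
        if isqrt((x*y + 1)*(x*y + x + 2))**2 == (x*y + 1)*(x*y + x + 2)]
print(hits)   # [] in this range
```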
| {
"language": "en",
"url": "https://mathoverflow.net/questions/313339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 2
} |
Formula for a sum of product of binomials We know that the equation $$s_1+s_2+s_3=n-1 \quad \mbox{$s_1,s_2,s_3$}\geq 1$$
has $\binom{n-2}{2}$ solutions.
I want to find a good formula for the following sum:
$$\sum_{(s_1,s_2,s_3)}\prod_{i=1}^3\binom{s_i+s_{i-1}-1}{s_i}=?$$
where $s_0=1$ and $(s_1,s_2,s_3)$ runs over the solutions of the above equation.
I find the following:
$$n=4\Longrightarrow 1$$
$$n=5\Longrightarrow 5$$
$$n=6\Longrightarrow 18$$
$$n=7\Longrightarrow 57$$
$$n=8\Longrightarrow 169$$
$$n=9\Longrightarrow 482$$
* All my attempts have failed.
| The generating function is $$ \sum_{s_1,s_2, s_3} {s_1 + s_2-1 \choose s_2} {s_2+ s_3-1 \choose s_3} x^{s_1+s_2+s_3}.$$
The sum over the $s_3$ variable is $$ \sum_{s_3} {s_2+ s_3-1 \choose s_3} x^{s_3} = \left( \frac{1}{1-x}\right)^{s_2}.$$
Then the sum over the $s_2$ variable is
$$ \sum_{s_2} {s_1 + s_2-1 \choose s_2}\left( \frac{x}{1-x}\right)^{s_2} = \left( \frac{1}{1- \frac{x}{1-x}} \right)^{s_1} = \left( \frac{1-x}{1-2x} \right)^{s_1}$$
and the sum over just the $s_1$ variable is
$$ \sum_{s_1} \left( \frac{x-x^2}{1-2x}\right)^{s_1} = \frac{1}{1- \frac{x-x^2}{1-2x}}=\frac{1-2x}{1-3x+x^2}.$$
However, we need to remove the terms when $s_1,s_2$ or $s_3$ is zero. Because of the binomial coefficients, if $s_1$ vanishes then $s_2$ vanishes and if $s_2$ vanishes then $s_3$ vanishes, so it suffices to remove the terms with $s_3=0$, which are
$$ \sum_{s_1,s_2} {s_1 + s_2-1 \choose s_2} x^{s_1+s_2}= \frac{1-x}{1-2x}$$ by the same logic.
So the full generating function is $$ \frac{1-2x}{1-3x+x^2} - \frac{1-x}{1-2x}.$$
Your sum is then the coefficient of $x^{n-1}$ in this series. To get this, as EFinat-S suggests we may use partial fractions.
$$ \frac{1-2x}{1-3x+x^2} - \frac{1-x}{1-2x} = \frac{- 2 + \sqrt{5} }{1- \frac{3 + \sqrt{5}}{2} x} + \frac{-2-\sqrt{5} }{1- \frac{3 - \sqrt{5}}{2} x} - \frac{1/2}{1-2x} - \frac{1}{2} $$
which will match the expression Carlo Beenaker gave.
Moreover, this will generalize to the analogue with $s_1,\dots,s_k$, giving a rational generating function. There is a straightforward enumerative interpretation, along the lines of the OEIS reference Carlo found, as length $2(n-1)$, depth $k$ nested balanced parentheses expressions / plane trees / Dyck paths, which will thus be related to a column of OEIS A080936.
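As a sanity check of the generating function against the data in the question, here is a small script (the helper names are ad hoc; the series is expanded by naive long division).
```python
from math import comb

def direct(n):
    # the sum over s1 + s2 + s3 = n - 1 with all s_i >= 1 (the i = 1 binomial factor equals 1 since s_0 = 1)
    total = 0
    for s1 in range(1, n - 2):
        for s2 in range(1, n - 1 - s1):
            s3 = n - 1 - s1 - s2
            total += comb(s2 + s1 - 1, s2) * comb(s3 + s2 - 1, s3)
    return total

def series(num, den, N):
    # power-series coefficients of num(x)/den(x) up to x^N, assuming den[0] = 1
    c, num = [], num + [0] * (N + 1 - len(num))
    for k in range(N + 1):
        c.append(num[k] - sum(den[j] * c[k - j] for j in range(1, min(k, len(den) - 1) + 1)))
    return c

N = 12
gf = [a - b for a, b in zip(series([1, -2], [1, -3, 1], N), series([1, -1], [1, -2], N))]
print([gf[n - 1] for n in range(4, 10)])    # [1, 5, 18, 57, 169, 482]
print([direct(n) for n in range(4, 10)])    # the same list
```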
| {
"language": "en",
"url": "https://mathoverflow.net/questions/317499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Can you make an identity from this product? Start with the product
$$(1+x+x^2) (1+x^2)(1+x^3)(1+x^4)\cdots$$
(The first polynomial is a trinomial; the others are binomials.)
Is it possible by changing some of the signs to get a series all of whose coefficients are $ -1,0,$or $1$?
A simple computer search should suffice to answer the question if the answer is "no." I haven't yet done such a search myself.
This question is a takeoff on the well known partition identities like:
$$\prod_{n=1}^{\infty} (1-x^n)= 1-x-x^2+x^5+x^7-\ldots$$
| Proof of Doriano Brogloli's answer:
Call $a(n)$ the $n$th coefficient of $A(x)=(1-x)(1-x^2)\cdots$. By Euler's pentagonal theorem
we have $a(n)=0$ unless $n=m(3m\pm1)/2$ for some $m$, in which case $a(n)=(-1)^m$.
Call $b(n)$ the $n$th coefficient of $B(x)=(1-x^2)(1-x^3)...$. Since $A(x)=(1-x)B(x)$
we have $b(n)-b(n-1)=a(n)$, so $b(n)=\sum_{0\le j\le n}a(j)$. Now for any integer $n$ there exists a unique $m$ such that $(m-1)(3(m-1)+1)/2\le n<m(3m+1)/2$. If
$(m-1)(3m-2)/2\le n<m(3m-1)/2$ we thus have $b(n)=1+2\sum_{1\le j\le m-1}(-1)^j=(-1)^{m-1}$, and if $m(3m-1)/2\le n<m(3m+1)/2$ we have $b(n)=(-1)^{m-1}+(-1)^m=0$.
Finally, we have $C(x)=(1-x+x^2)(1-x^2)(1-x^3)\cdots=(1-x+x^2)B(x)=A(x)+x^2B(x)$, so its $(N+2)$-th
coefficient $c(N+2)$ is equal to $a(N+2)+b(N)$. Thus, if $(m-1)(3m-2)/2\le N<m(3m-1)/2$ but $N\ne m(3m-1)/2-2$ we have $c(N+2)=(-1)^{m-1}$, while if $N=m(3m-1)/2-2$ we have $c(N+2)=(-1)^{m-1}+(-1)^m=0$, and if $m(3m-1)/2\le N<m(3m+1)/2$ but $N\ne m(3m+1)/2-2$ we have $c(N+2)=0$, while if $N=m(3m+1)/2-2$ we have $c(N+2)=(-1)^m$.
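For the skeptical reader, a short numerical check of the sign-changed product (the truncation degree is arbitrary; coefficients up to degree $N$ are unaffected by the omitted factors).
```python
N = 3000
# coefficients of C(x) = (1 - x + x^2) * prod_{k >= 2} (1 - x^k), truncated at degree N
c = [0] * (N + 1)
c[0], c[1], c[2] = 1, -1, 1
for k in range(2, N + 1):
    for d in range(N, k - 1, -1):   # multiply by (1 - x^k) in place
        c[d] -= c[d - k]
print(sorted(set(c)))               # [-1, 0, 1]
```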
| {
"language": "en",
"url": "https://mathoverflow.net/questions/333548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
} |
Solutions of $y^2=\binom{x}{0}+2\binom{x}{1}+4\binom{x}{2}+8\binom{x}{3}$ for positive integers $x$ and $y$ I was interested in creating and solving a Diophantine equation similar to one proposed in section D3 of [1]. I would like to know what theorems or
techniques can be applied to prove or refute that the Diophantine equation of the title has a finite number of solutions; I don't have the intuition to know it. Our equation is given as $$y^2=2^0\binom{x}{0}+2^1\binom{x}{1}+2^2\binom{x}{2}+2^3\binom{x}{3},$$
thus, using the definition of binomial coefficients, we are interested in solving this equation for integers $x\geq 0$ and $y\geq0$
$$3y^2=4x^3-6x^2+8x+3.$$
Computational fact. I got up to $10^4$ that the only solutions $(x,y)$ for positive integers $x,y\geq 0$ are $(x,y)=(0,1)$,$(2,3)$, $(62,557)$ and $(144,1985)$. For example our third solution is $$3\cdot 557^2=930747=4\cdot(62)^3-6\cdot (62)^2+8\cdot(62)+3.$$
Question. Does the equation $$y^2=\binom{x}{0}+2\binom{x}{1}+4\binom{x}{2}+8\binom{x}{3}$$ have a finite number of solutions for positive integers $x,y\geq0$ ? If it is very difficult to solve, what work can be done? Many thanks.
References:
[1] Richard K. Guy, Unsolved Problems in Number Theory, Problem Books in Mathematics, Unsolved Problems in Intuitive Mathematics Volume I, Springer-Verlag (1994).
| The curve $y^2 = 1+\frac{8}{3} x - 2 x^2 + \frac{4}{3} x^3$ is elliptic. Siegel's theorem says it has only finitely many integral points.
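To tie this back to the question's computation, a minimal search script (a finite check only, in line with Siegel's theorem guaranteeing finiteness but giving no bound here without more work):
```python
from math import isqrt

found = []
for x in range(0, 10001):
    v = 4 * x**3 - 6 * x**2 + 8 * x + 3    # always divisible by 3
    y = isqrt(v // 3)
    if 3 * y * y == v:
        found.append((x, y))
print(found)   # [(0, 1), (2, 3), (62, 557), (144, 1985)], as reported in the question
```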
| {
"language": "en",
"url": "https://mathoverflow.net/questions/337036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Difference of two integer sequences: all zeros and ones? Suppose that $c$ is a nonnegative integer and $A_c = (a_n)$ and $B_c = (b_n)$ are strictly increasing complementary sequences satisfying
$$a_n = b_{2n} + b_{4n} + c,$$
where $b_0 = 1.$ Can someone prove that the sequence $A_1-A_0$ consists entirely of zeros and ones?
Notes:
$$
A_0 = (2, 10, 17, 23, 31, 38, 44, 52, 59, 65, 73, 80, 86, \ldots ) \\
A_1 = (3, 11, 17, 24, 31, 39, 45, 53, 59, 66, 74, 80, 87, \ldots )
$$
The sequence $A_0$ satisfies the linear recurrence $a_n = a_{n-1} + a_{n-3} - a_{n-4}$.
It may help to watch $A_1$ get started. Since $b_0=1$ we have $a_0=1+1+1=3$, and since $A_1$ and $B_1$ are complementary, we have $b_1=2$. Next, $a_1=b_2+b_4+1 \geq 4+6+1=11$, so that $b_2=4, b_3=5,\ldots,b_8=10$, and $a_1=11$. Then $a_2=b_4+b_8+1 \geq 17$, and so on.
| The same method as in this answer to a previous question of yours works as well.
As for $(A_0)$. Starting with a guess
$$
7n+2\leq a_n\leq 7n+3,
$$
and trying to prove it inductionally, we arrive at $b_{6n+2}\geq 7n+4$ and $b_{6n}\leq 7n+1$, hence
$$
t+\left\lfloor\frac{t+4}6\right\rfloor+1\leq b_t\leq t+\left\lceil\frac t6\right\rceil+1, \qquad(**_0)
$$
which agree for all $t=6k+2,\dots,6k+6$. So for all even $t$ we have $b_t=t+\lceil t/6\rceil+1$, which yields even an exact formula for $a_n$:
$$
a_n=7n+\begin{cases}2,& n\mod 3=0; \\ 3,& n\mod 3\neq 0.\end{cases}
$$
As for $(A_1)$.
Starting with a guess
$$
7n+3\leq a_n\leq 7n+4,
$$
and trying to prove it inductionally, we arrive at $b_{6n+3}\geq 7n+5$ and $b_{6n+1}\leq 7n+2$, hence
$$
t+\left\lfloor\frac{t+3}6\right\rfloor+1\leq b_t\leq t+\left\lceil\frac{t-1}6\right\rceil+1,
\qquad(**_1)
$$
which agree for all $t=6k+3,\dots,6k+7$. So we have $b_t=t+\lceil (t-1)/6\rceil+1$, except for $7k+3\leq b_{6k+2}\leq 7k+4$. This yields that
$$
a_n=7n+\begin{cases}3,& n\mod 3=0; \\ 3\text{ or }4,& n\mod 3\neq 0.\end{cases}
$$
The required conclusion follows.
In fact, that conclusion could be also derived directly from $(**_0)$ and $(**_1)$.
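A quick empirical confirmation, only a sketch: it builds both pairs by a naive fixed-point iteration (B = complement of A, then recompute A), which is assumed, and observed, to converge here; the helper name is ad hoc.
```python
def build(c, N=200, limit=3000, rounds=100):
    # iterate: B = complement of A in {1,2,3,...} (so b_0 = 1), then a_n = b_{2n} + b_{4n} + c
    A = []
    for _ in range(rounds):
        inA = set(A)
        B = [v for v in range(1, limit) if v not in inA]
        newA = [B[2 * n] + B[4 * n] + c for n in range(N)]
        if newA == A:
            break
        A = newA
    return A

A0, A1 = build(0), build(1)
print(A0[:13])   # 2, 10, 17, 23, 31, 38, 44, 52, 59, 65, 73, 80, 86, as in the question
print(A1[:13])   # 3, 11, 17, 24, 31, 39, 45, 53, 59, 66, 74, 80, 87
print(set(a1 - a0 for a0, a1 in zip(A0, A1)))                                # {0, 1}
print(all(a == 7 * n + (2 if n % 3 == 0 else 3) for n, a in enumerate(A0)))  # exact formula for A_0
```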
| {
"language": "en",
"url": "https://mathoverflow.net/questions/342867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
an identity between two elliptic integrals I would like a direct change of variable proof of the identity
$$\int_0^{\arctan\frac{\sqrt{2}}{\sqrt{\sqrt{3}}}} \frac{1}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^2\phi }}d\phi=\int_0^{\arctan\frac{1}{\sqrt{\sqrt{3}}}}\frac{1}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^22\phi }}d\phi\,.$$
I need it as part of a paper on Legendre's proof of the "third singular modulus."
| Since the bountied question has changed substantially, now asking for the application of an identity in Legendre's Traite des fonctions elliptiques, I am starting a new answer. Legendre defines
\begin{align}
&F(\phi,k)=\int_0^{\phi}\frac{d\phi'}{\sqrt{1-k^2\sin^2\phi'}},\\
&\sin\phi=\frac{\sin(\theta/2)}{\sqrt{\tfrac{1}{2}+\tfrac{1}{2}\Delta(\theta)}},\;\;\Delta(\theta)=\sqrt{1-k^2\sin^2\theta},
\end{align}
and then derives the identity
$$F(\phi,k)=\tfrac{1}{2}F(\theta,k).$$
Now we apply this to $k^2=\frac{2+\sqrt{3}}{4}$, $\theta=2 \arctan 3^{-1/4}$ and find
$$\sin\phi=\frac{2 \sin (\theta/2)}{\sqrt{\sqrt{4-\left(\sqrt{3}+2\right) \sin ^2\theta}+2}}=\sqrt{3}-1,$$
and thus $\phi=\arcsin(\sqrt{3}-1)=\arctan\left(3^{-1/4}\sqrt{2}\right)$. Hence, Legendre's identity gives
$$F\left(\arctan\left(3^{-1/4}\sqrt{2}\right),\frac{2+\sqrt{3}}{4}\right)=\frac{1}{2}F\left(2 \arctan 3^{-1/4},\frac{2+\sqrt{3}}{4}\right)$$
or equivalently
$$\int_0^{\arctan\left(3^{-1/4}\sqrt{2}\right)}\frac{d\phi'}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^2 \phi'}}=\int_0^{\arctan 3^{-1/4}}\frac{d\phi'}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^2 2\phi'}},$$
which is the identity in the OP.
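A direct numerical confirmation by quadrature (independent of any convention for elliptic integrals):
```python
from mpmath import mp, quad, sqrt, sin, atan, mpf

mp.dps = 30
m = (2 + sqrt(3)) / 4              # k^2 = (2 + sqrt(3))/4
q = mpf(3) ** (mpf(1) / 4)         # 3^(1/4)
lhs = quad(lambda p: 1 / sqrt(1 - m * sin(p) ** 2), [0, atan(sqrt(2) / q)])
rhs = quad(lambda p: 1 / sqrt(1 - m * sin(2 * p) ** 2), [0, atan(1 / q)])
print(lhs - rhs)                   # zero to working precision
```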
| {
"language": "en",
"url": "https://mathoverflow.net/questions/349272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Cancellation in a particular sum In an attempt to compute cycle counts in a certain number-theoretic graph, the following estimate was needed.
It is true that
$$\bigg|\sum_{a,b,c\in \mathbb{Z}/p\mathbb{Z}}\bigg(\sum_{d=1}^{p-1}\bigg(\frac{d}{p}\bigg)w^{-d-(a^2+b^2+c^2)/d}\bigg)^3\bigg| = o(p^{9/2}).$$
$w$ here is $\exp\big(\frac{2\pi i}{p}\big)$. Is such an estimate known? The fact that the sum in question is $O(p^{9/2})$ follows by combining the bound $O(p^{4})$ for the second moment with the bound $O(p^{1/2})$ for the inner sum, the latter using Weil's work on RH for curves.
| We can actually get an explicit formula for the whole sum (I will assume $p \ne 2,3$ throughout). We start with the Salié sum:
$$\sum_{d=1}^{p-1}\bigg(\frac{d}{p}\bigg)w^{-d-(a^2+b^2+c^2)/d} = \bigg(\sum_d \bigg(\frac{d}{p}\bigg)w^{-d}\bigg) \sum_{x^2 \equiv 4(a^2+b^2+c^2)} w^x.$$
The first factor is a Gauss sum, and has absolute value $\sqrt{p}$, and in fact the exact value is either $\sqrt{p}$ or $-i\sqrt{p}$ depending on whether $p$ is $1$ or $3$ modulo $4$. Write $G$ for this Gauss sum. Then the whole sum becomes
$$G^3\bigg(\sum_{x \not\equiv 0} \sum_{4(a^2+b^2+c^2) \equiv x^2} \frac{1}{2}(w^x + w^{-x})^3 + \sum_{4(a^2+b^2+c^2) \equiv 0} 1\bigg).$$
So we just need to compute the number of ways $N_x$ to write $x^2$ as a sum of three squares, for each $x$. By rescaling, we see that $N_x = N_1$ for $x \not\equiv 0 \pmod{p}$, so the sum becomes
$$G^3(N_0 + N_1\sum_{x \not\equiv 0}\frac{1}{2}(w^{3x} + 3w^x+3w^{-x} + w^{-3x})) = G^3(N_0 - 4N_1).$$
To compute $N_1$, we can use stereographic projection from the point $(1,0,0)$ to the plane $(0,x,y)$, to see that $N_1$ is $p^2$ minus the number of pairs $x,y$ with $x^2+y^2 \equiv -1$, plus the number of pairs $b,c$ with $b^2+c^2 \equiv 0$. Another stereographic projection argument (and the fact that every number is a sum of two squares modulo $p$) shows that there are exactly $p-(\frac{p}{4})$ ways to choose $x,y$ with $x^2+y^2 \equiv -1$. The number of pairs $b,c$ with $b^2+c^2 \equiv 0$ is $p+(p-1)(\frac{p}{4})$. Thus $N_1 = p^2 + p(\frac{p}{4})$.
To compute $N_0$, we can rescale to reduce to counting the number of pairs $x,y$ with $x^2 + y^2 \equiv -1$ (times a factor of $p-1$) or $b^2+c^2 \equiv 0$, and we find that $N_0 = p^2$. Thus the full sum comes out to
$$G^3\bigg(-3p^2-4p\big(\frac{p}{4}\big)\bigg),$$
and the absolute value is $3p^{7/2} \pm 4p^{5/2}$.
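For small primes the whole sum can be brute-forced, which lets one compare the magnitude with the claimed $3p^{7/2}\pm 4p^{5/2}$; this is only an ad hoc check, not part of the argument.
```python
import cmath

def total_sum(p):
    w = cmath.exp(2j * cmath.pi / p)
    inv = [0] + [pow(d, p - 2, p) for d in range(1, p)]
    leg = [0] + [1 if pow(d, (p - 1) // 2, p) == 1 else -1 for d in range(1, p)]
    # the inner sum only depends on q = a^2 + b^2 + c^2 mod p
    inner = [sum(leg[d] * w ** ((-d - q * inv[d]) % p) for d in range(1, p)) for q in range(p)]
    return sum(inner[(a * a + b * b + c * c) % p] ** 3
               for a in range(p) for b in range(p) for c in range(p))

for p in [5, 7, 11, 13]:
    print(p, abs(total_sum(p)), 3 * p**3.5 - 4 * p**2.5, 3 * p**3.5 + 4 * p**2.5)
```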
| {
"language": "en",
"url": "https://mathoverflow.net/questions/356759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Alternative proofs sought after for a certain identity Here is an identity for which I outlined two different arguments. I'm collecting further alternative proofs, so
QUESTION. can you provide another verification for the problem below?
Problem. Prove that
$$\sum_{k=1}^n\binom{n}k\frac1k=\sum_{k=1}^n\frac{2^k-1}k.$$
Proof 1. (Induction). The case $n=1$ is evident. Assume the identity holds for $n$. Then,
\begin{align*} \sum_{k=1}^{n+1}\binom{n+1}k\frac1k-\sum_{k=1}^n\binom{n}k\frac1k
&=\frac1{n+1}+\sum_{k=1}^n\left[\binom{n+1}k-\binom{n}k\right]\frac1k \\
&=\frac1{n+1}+\sum_{k=1}^n\binom{n}{k-1}\frac1k \\
&=\frac1{n+1}+\frac1{n+1}\sum_{k=1}^n\binom{n+1}k \\
&=\frac1{n+1}\sum_{k=1}^{n+1}\binom{n+1}k=\frac{2^{n+1}-1}{n+1}.
\end{align*}
It follows, by induction assumption, that
$$\sum_{k=1}^{n+1}\binom{n+1}k\frac1k=\sum_{k=1}^n\binom{n}k\frac1k+\frac{2^{n+1}-1}{n+1}=\sum_{k=1}^n\frac{2^k-1}k+\frac{2^{n+1}-1}{n+1}
=\sum_{k=1}^{n+1}\frac{2^k-1}k.$$
The proof is complete.
Proof 2. (Generating functions) Start with $\sum_{k=1}^n\binom{n}kx^{k-1}=\frac{(x+1)^n-1}x$ and integrate both sides: the left-hand side gives
$\sum_{k=1}^n\binom{n}k\frac1k$. For the right-hand side, let $f_n=\int_0^1\frac{(x+1)^n-1}x\,dx$ and denote the generating function
$G(q)=\sum_{n\geq0}f_nq^n$ so that
\begin{align*} G(q)&=\sum_{n\geq0}\int_0^1\frac{(x+1)^n-1}x\,dx\,q^n =\int_0^1\sum_{n\geq0}\frac{(x+1)^nq^n-q^n}x\,dx \\
&=\int_0^1\frac1x\left[\frac1{1-(x+1)q}-\frac1{1-q}\right]dx=\frac{q}{1-q}\int_0^1\frac{dx}{1-(x+1)q} \\
&=\frac{q}{1-q}\left[\frac{\log(1-(1+x)q)}{-q}\right]_0^1=\frac{\log(1-q)-\log(1-2q)}{1-q} \\
&=\frac1{1-q}\left[-\sum_{m=1}^{\infty}\frac1mq^m+\sum_{m=1}^{\infty}\frac{2^m}mq^m\right]=\frac1{1-q}\sum_{m=1}^{\infty}\frac{2^m-1}m\,q^m \\
&=\sum_{n\geq1}\sum_{k=1}^n\frac{2^k-1}k\,q^n.
\end{align*}
Extracting coefficients we get $f_n=\sum_{k=1}^n\frac{2^k-1}k$ and hence the argument is complete.
| \begin{align*}
\sum_{k=1}^{n} \frac{2^k-1}{k}
&=\sum_{k=1}^{n} \frac{1}{k}\left(\sum_{j=1}^{k} \binom{k}{j}\right) \\
&=\sum_{j=1}^{n} \sum_{k=j}^{n} \binom{k}{j}\frac{1}{k} \\
&=\sum_{j=1}^{n} \frac{1}{j}\left(\sum_{k=j}^{n} \binom{k-1}{j-1}\right) \\
&=\sum_{j=1}^{n} \frac{1}{j} \binom{n}{j}.
\end{align*}
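The identity is also trivial to confirm in exact arithmetic for small $n$:
```python
from fractions import Fraction
from math import comb

for n in range(1, 40):
    lhs = sum(Fraction(comb(n, k), k) for k in range(1, n + 1))
    rhs = sum(Fraction(2**k - 1, k) for k in range(1, n + 1))
    assert lhs == rhs
print("identity verified exactly for n = 1..39")
```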
| {
"language": "en",
"url": "https://mathoverflow.net/questions/379248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 4
} |
Inequality of two variables
Let $a$ and $b$ be positive numbers. Prove that:
$$\ln\frac{(a+1)^2}{4a}\ln\frac{(b+1)^2}{4b}\geq\ln^2\frac{(a+1)(b+1)}{2(a+b)}.$$
Since the inequality is not changed after replacing $a$ by $\frac{1}{a}$ and $b$ by $\frac{1}{b}$, and $\ln^2\frac{(a+1)(b+1)}{2(a+b)}\geq\ln^2\frac{(a+1)(b+1)}{2(ab+1)}$ for $\{a,b\}\subset(0,1],$
it's enough to assume that $\{a,b\}\subset(0,1].$
Also, $f(x)=\ln\ln\frac{(x+1)^2}{4x}$ is not convex on $(0,1]$ and it seems that Jensen and Karamata don't help here.
Thank you!
| $$
\ln\frac{(a+1)^2}{4a}\ln\frac{(b+1)^2}{4b}
=\ln\left(1-\left(\frac{a-1}{a+1}\right)^2\right)\ln\left(1-\left(\frac{b-1}{b+1}\right)^2\right)\\=
\left(\sum_{n=1}^\infty\frac1n \left(\frac{a-1}{a+1}\right)^{2n}\right)\times
\left(\sum_{n=1}^\infty\frac1n \left(\frac{b-1}{b+1}\right)^{2n}\right)\\
\geqslant
\left(\sum_{n=1}^\infty\frac1n \left(\frac{(a-1)(b-1)}{(a+1)(b+1)}\right)^{n}\right)^2\\
=\ln^2\frac{(a+1)(b+1)}{2(a+b)}.
$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/386705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Proving a binomial sum identity
QUESTION. Let $x>0$ be a real number or an indeterminate. Is this true?
$$\sum_{n=0}^{\infty}\frac{\binom{2n+1}{n+1}}{2^{2n+1}\,(n+x+1)}=\frac{2^{2x}}{x\,\binom{2x}x}-\frac1x.$$
POSTSCRIPT. I like to record this presentable form by Alexander Burstein:
$$\sum_{n=0}^{\infty}\frac{\binom{2n}n}{2^{2n}(n+x)}=\frac{2^{2x}}{x\binom{2x}x}.$$
| Let $f(x)=\frac{\sqrt{\pi}}{\Gamma(x+\frac{1}{2})}$. The values of $f$ at the points $x=-1,-2,\dots,-N$ are $y_{-n}=f(-n)=\frac{n!\binom{2n}{n}}{(-4)^n}$.
Now, for $N+1$ points $n=0,-1,-2,-3,..,-N$ we define $F_N(x):=\frac{\sqrt{\pi}}{\Gamma(x+\frac{1}{2})}N^x$ and $y_N(-n)=\frac{n!\binom{2n}{n}N^{-n}}{(-4)^n}$.
Hence, from Lagrange's interpolation:
$W(x)\sum_{n=0}^{N} \frac{1}{(n+x)(-1)^nn!(N-n)!}y_N(-n)≈F_N(x)$
[ Here, $W(x)=\prod_{n=0}^{N} (x+n)$ ]
or, $\frac{W(x)}{N!}\sum_{n=0}^{N} \frac{1}{n+x}\cdot\frac{\binom{2n-1}{n}}{2^{2n-1}}\,N(N-1)\cdots(N-n+1)\,N^{-n}≈F_N(x)$
[$2\binom{2n-1}{n}:=1$ as we get from the previous step]
Now, tending $N \to \infty$, and using $\lim\limits_{N \to \infty} \frac{W(x)}{N!N^x}=\frac{1}{\Gamma(x)}$
Hence, we get $\frac{1}{\Gamma(x)}\left(\frac{1}{x}+\sum_{n=0}^{\infty} \frac{\binom{2n+1}{n+1}}{2^{2n+1}(n+1+x)}\right)=\frac{\sqrt{\pi}}{\Gamma(x+\frac{1}{2})}$
The error term would go to zero as $N$ is increased when $x>-N$. This proves the identity.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/398317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Upper bound on double series We consider the sum
$$ \sum_{m \in \mathbb Z^2} \frac{1}{(3 m_1^2+3m_2^2+3(m_1+m_1m_2+m_2)+1)^2}. $$
Numerically, it is not particularly hard to see that the value of this series is well below $4$; indeed one gets a numerical estimate of roughly $3.43$.
I wonder if there is analytically a quick argument that the value of this double sum is less than $4$?
EDIT: One could observe that
$$ \sum_{m_1=-1}^1\sum_{m_2=-1}^1 \frac{1}{(3 m_1^2+3m_2^2+3(m_1+m_1m_2+m_2)+1)^2} = \frac{40545}{12544}. $$
So it could suffice to show that the remaining terms are sufficiently small.
| As noted in the comment by Beni Bogosel, the sum in question is
\begin{equation}
s:=\sum_{x=-\infty}^\infty\sum_{y=-\infty}^\infty\frac1{f(x,y)^2},
\end{equation}
where
\begin{equation}
f(x,y):=\frac32\, ((x + 1)^2 + (y + 1)^2 + (x + y)^2) - 2.
\end{equation}
Note that
\begin{equation}
f(x,y)\ge x^2+y^2+2\ge2\sqrt{x^2+1}\sqrt{y^2+1}\text{ if } \max(|x|,|y|)\ge3.
\end{equation}
So,
\begin{equation}
s\le s_9+r_{10},
\end{equation}
where
\begin{equation}
s_9:=\sum_{x=-9}^9\sum_{y=-9}^9\frac1{f(x,y)^2}<3.42256
\end{equation}
and
\begin{equation}
\begin{aligned}
r_k&:=4\sum_{x\ge k}\sum_{y=-\infty}^\infty\frac1{4(x^2+1)(y^2+1)} \\
&=\sum_{x\ge k}\frac1{x^2+1}\sum_{y=-\infty}^\infty\frac1{y^2+1} \\
&\le\int_{k-1/2}^\infty\frac{dx}{x^2}\,\Big(1+2\int_{1/2}^\infty\frac{dy}{y^2}\Big)
=\frac5{k-1/2}<0.52632
\end{aligned}
\end{equation}
for $k=10$.
Thus,
\begin{equation}
s<3.42256 + 0.52632<4,
\end{equation}
as desired.
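The two numerical constants are easy to reproduce exactly; a minimal check:
```python
from fractions import Fraction

def f(x, y):
    return Fraction(3, 2) * ((x + 1)**2 + (y + 1)**2 + (x + y)**2) - 2

s9 = sum(1 / f(x, y)**2 for x in range(-9, 10) for y in range(-9, 10))
print(float(s9))               # ~3.4225..., indeed < 3.42256
print(float(s9) + 5 / 9.5)     # adding the tail bound r_10 <= 5/(10 - 1/2) keeps the total < 4
```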
| {
"language": "en",
"url": "https://mathoverflow.net/questions/418629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Upper bound for a+b+c in terms of ab+bc+ac I am given a triple of positive integers $a,b,c$ such that $a \geq 1$ and $b,c \geq 2$.
I would like to find an upper bound for $a+b+c$ in terms of $n = ab+bc+ac$. Clearly $a+b+c < ab+bc+ac = n$.
Is there any sharper upper bound that could be obtained (perhaps asymptotically)?
| The bound $n/2 + 1$ is tight. First, it is a bound because
$\frac{n}{2} + 1 - (a + b + c) = \frac{a(b-2) + b(c-2) + c(a-2)}{2} + 1$, which is at least $1$ when $a \ge 2$ (all three numerator terms are then nonnegative), and which equals $\frac{(b-1)(c-1)-1}{2} \ge 0$ when $a = 1$ (since $b, c \ge 2$).
Equality is achieved when $a = 1$ and $b = c = 2$, since then we get $a + b + c = 5$ and $n/2 + 1 = (2+4+2)/2 + 1 = 5$.
Note that there might be other tight bounds that are also functions of $n$.
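A small exhaustive check over a box (range arbitrary) of both the bound and its equality case:
```python
eq_cases = []
for a in range(1, 41):
    for b in range(2, 41):
        for c in range(2, 41):
            n = a * b + b * c + a * c
            assert 2 * (a + b + c) <= n + 2          # a + b + c <= n/2 + 1
            if 2 * (a + b + c) == n + 2:
                eq_cases.append((a, b, c))
print(eq_cases)   # [(1, 2, 2)]
```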
| {
"language": "en",
"url": "https://mathoverflow.net/questions/21088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
The difference of two sums of unit fractions I had this question bothering me for a while, but I can't come up with a meaningful answer.
The problem is the following:
Let integers $a_i,b_j\in${$1,\ldots,n$} and $K_1,K_2\in$ {$1,\ldots,K$}, then how small (as a function of $K$ and $n$), but strictly positive, can the following absolute difference be.
$\biggl|(\sum_{i=1}^{K_1} \frac{1}{a_i})-(\sum_{j=1}^{K_2} \frac{1}{b_j})\biggr|$
As an example, for $K_1=K_2=1$, choosing $a_1=n$, $b_1=n-1$ gives the smallest positive absolute difference, namely $\frac{1}{n(n-1)}$. What could the general case be?
| We can prove Christian Elsholtz's conjecture using only linear functions.
Let $m_1, ..., m_{2K}$ and $\alpha_1, ..., \alpha_{2K}$ be integers satisfying $\sum_i \frac{\alpha_i^j}{m_i} = 0$ for $j = 0, ..., 2K-2$. Then for large $x$ we have
$\sum_i \frac{1}{m_ix - m_i\alpha_i} = \frac{1}{x}\sum_i\frac{1}{m_i}\sum_j\frac{\alpha_i^j}{x^j} = \sum_j \frac{1}{x^{j+1}} \sum_i \frac{\alpha_i^j}{m_i} \approx \frac{C}{x^{2K}}$,
where $C = \sum_i \frac{\alpha_i^{2K-1}}{m_i}$.
For instance, we can pick $\alpha_i = i-1, m_i = \frac{(-1)^iD}{\binom{2K-1}{i-1}}$, where $D$ is a common multiple of the $\binom{2K-1}{i-1}$. Then we will have the same number of positive and negative $m_i$s, so for large $x$ half of the fractions will be positive and half will be negative.
Examples:
$\frac{1}{3x} + \frac{1}{x-2} - \frac{1}{x-1} - \frac{1}{3x-9} = \frac{-2}{x(x-1)(x-2)(x-3)}$, and
$\frac{1}{10x} + \frac{1}{x-2} + \frac{1}{2x-8} - \frac{1}{2x-2} - \frac{1}{x-3} - \frac{1}{10x-50} = \frac{-12}{x(x-1)(x-2)(x-3)(x-4)(x-5)}$.
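Both examples are quickly confirmed symbolically (assuming SymPy is available; `cancel` puts each expression over a common denominator):
```python
import sympy as sp

x = sp.symbols('x')
e1 = 1/(3*x) + 1/(x - 2) - 1/(x - 1) - 1/(3*x - 9) + 2/(x*(x - 1)*(x - 2)*(x - 3))
e2 = (1/(10*x) + 1/(x - 2) + 1/(2*x - 8) - 1/(2*x - 2) - 1/(x - 3) - 1/(10*x - 50)
      + 12/(x*(x - 1)*(x - 2)*(x - 3)*(x - 4)*(x - 5)))
print(sp.cancel(e1), sp.cancel(e2))   # 0 0
```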
| {
"language": "en",
"url": "https://mathoverflow.net/questions/40819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Perfect numbers $n$ such that $2^k(n+1)$ is also perfect The smallest two perfect numbers $n=6$ and $m=28$ satisfy
$$
\frac{m}{n+1} = 2^k
$$
with $k=2.$
Question: Are there more pairs of perfect numbers $n,m$ with $n < m$
and such that
$$
\frac{m}{n+1} = 2^k
$$
for some positive integer $k>0.$
Observe that the perfect number $n$, the smaller of $n,m$, may also be an odd number.
| If $m$ is odd, it's clearly impossible.
If $m$ is even and $n$ is odd, I don't know.
So suppose $m$, $n$ both even. Then $m=2^{r-1}p$ where $p=2^r-1$ is prime, and $n=2^{s-1}q$ where $q=2^s-1$ is prime, and $s\lt r$.
The equation becomes $$2^k(n+1)=2^k(2^{s-1}q+1)=2^{k+s-1}q+2^k=2^{r-1}p$$ Now $2^k$ divides the second last term, so it divides the last term, so $2^{s-1}q+1=2^{r-k-1}p$. If $s\gt1$ this forces $r-k-1=0$, so $2^{s-1}q+1=p=2^r-1$. Then $2^r-2^{s-1}q=2$, so either $r\le1$ or $s\le2$. But $s\lt r$, so we reject $r\le1$, so $s=2$, $q=3$, and there's only the one solution.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/64340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
antiderivative involving modified bessel function This little integral has been holding me up for weeks. Has anyone come across a similar integral in their work?
$\int {\frac{d x}{c-I_0(x)}} $
| I doubt that there is a closed-form formula. Neither Maple nor Mathematica could find one. They don't even get formulas for the case $c=0$. Would a series expansion in $x$ be of any use? It is
$$
\frac{1}{c-1} x + \frac{1}{12 (c-1)^2} x^{3} + \frac{c+3}{320 (c-1)^3} x^{5} +
$$
$$
+ \frac{c^2+16 c+19}{16128 (c-1)^4} x^{7} + \frac{c^3+65 c^2+299 c+211}{1327104 (c-1)^5} x^{9} +
$$
$$
+ \frac{c^4+246 c^3+3156 c^2+7346 c+3651}{162201600 (c-1)^6} x^{11} + O(x^{13})$$
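The expansion can be reproduced with a computer algebra system; here is a sketch assuming SymPy's series expansion of `besseli` (which recent versions handle):
```python
import sympy as sp

x, c = sp.symbols('x c')
ser = sp.series(1/(c - sp.besseli(0, x)), x, 0, 8).removeO()
anti = sp.expand(sp.integrate(ser, x))         # term-by-term antiderivative of the truncated series
print(sp.factor(anti.coeff(x, 1)))             # 1/(c - 1)
print(sp.factor(anti.coeff(x, 3)))             # 1/(12*(c - 1)**2)
print(sp.factor(anti.coeff(x, 5)))             # (c + 3)/(320*(c - 1)**3)
```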
| {
"language": "en",
"url": "https://mathoverflow.net/questions/68529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of height-limited rational points on a circle Consider origin-centered circles $C(r)$ of radius $r \le 1$.
I am seeking to learn how many rational points might lie on $C(r)$,
where each rational point coordinate has height $\le h$.
For example, these are the rationals in $[0,1]$ with $h \le 5$:
$$
\left(
0,\frac{1}{5},\frac{1}{4},\frac{1}{3},\frac{2}{5},\frac{1}{2},\frac{3}{5},
\frac{2}{3},\frac{3}{4},\frac{4}{5},1
\right)
$$
Rational points of height $\le h$ have both coordinates from this list,
multiplied by $\pm 1$.
Q. What is the growth rate of the maximum number of rational points
of height $\le h$ on $C(r)$, $r \le 1$, as a function of $h$?
Here is a bit of data up to $h=20$:
For example, for $h=7$, $C(\frac{5}{7})$
passes through
these $12$ points:
$$
\left(
-\tfrac{4}{7} , -\tfrac{3}{7}
\right),
\left(
-\tfrac{3}{7} , -\tfrac{4}{7}
\right),
\left(
0 , -\tfrac{5}{7}
\right),
\left(
\tfrac{3}{7} , -\tfrac{4}{7}
\right),
\left(
\tfrac{4}{7} , -\tfrac{3}{7}
\right),
\left(
\tfrac{5}{7} , 0
\right),
$$
$$
\left(
\tfrac{4}{7} , \tfrac{3}{7}
\right),
\left(
\tfrac{3}{7} , \tfrac{4}{7}
\right),
\left(
0 , \tfrac{5}{7}
\right),
\left(
-\tfrac{3}{7} , \tfrac{4}{7}
\right),
\left(
-\tfrac{4}{7} , \tfrac{3}{7}
\right),
\left(
-\tfrac{5}{7} , 0
\right)
$$
If I've calculated correctly, no circle passes through more than $12$
points of height $\le 7$.
Circles that achieve these maxima are illustrated below.
Background points are the rational points of height $h \le 20$.
Added. Since the radii that achieve the maxima I found
for $13 \le h \le 20$ are all exactly $1$,
it may be that the question can be reduced to counting the
number of rational points of height $\le h$ on just specifically $C(1)$.
Answered. Lucia's answer matches even the small-$h$ data I gathered:
| I'll content myself with counting the number of points on $C(1)$ (which should surely be close to the maximum) -- the answer is quite nice, it is about $ \frac{4}{\pi } h$.
To see this, note that we are counting essentially Pythagorean triples $u^2-v^2, 2uv, u^2+v^2$, with $u^2+v^2\le h$ and we may suppose that $u$ and $v$ are non-negative, that $u$ and $v$ are coprime, and that $u^2+v^2$ is odd. The lattice point count we need is four times this number, since we must also count the lattice point $(2uv/(u^2+v^2),(u^2-v^2)/(u^2+v^2))$ (in addition to $((u^2-v^2)/(u^2+v^2),2uv/(u^2+v^2))$, and we must also allow the $2uv/(u^2+v^2)$ coordinate to be negative).
Thus to summarize we want
$$
4 \sum_{n\le h, n \text{ odd }} R(n),
$$
where $R(n)$ is the number of ways of writing $n$ as $u^2+v^2$ with both $u$ and $v$ non-negative and coprime (taking care to set $R(1)$ to be $1$).
A little number theory, going back to Fermat, gives that $R(n)$ is a multiplicative function with $R(2^k)=0$
(so we don't have to worry about $n$ odd anymore), $R(p^k)=2$ for $p\equiv 1\pmod 4$ and $k\ge 1$, and $R(p^k)=0$ if $p\equiv 3\pmod 4$. For example if $h=20$, then $R(1)=1$, $R(5)=2$, $R(13)=2$, and $R(17)=2$ and the rest are zero, and the number here is $28$ as in the numerics.
From here a standard argument (or one can do this via counting lattice points in a circle) leads to the asymptotic
$$
4 \sum_{n\le h} R(n) \sim 4 \frac{1}{2} \prod_{p\equiv 1 \pmod 4} \Big(1+\frac{2}{p}+\frac{2}{p^2}+\ldots \Big) \Big(1-\frac 1p\Big)
\prod_{p\equiv 3\pmod 4} \Big(1-\frac 1p\Big) h,
$$
and the above simplifies (using $1-1/3+1/5-1/7+\ldots =\pi/4$ and $1/1^2+1/3^2+1/5^2+\ldots = \pi^2/8$) to give
$$
\sim 2 \frac{\pi/4}{\pi^2/8} h = \frac{4}{\pi} h.
$$
One should be able to refine this to count lattice points on other circles as well, and thus show that radius $1$ does achieve the maximum.
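The count on $C(1)$ is easy to reproduce and compare with $\frac{4}{\pi}h$; a small brute-force script (slow but transparent; the function name is ad hoc):
```python
from math import gcd, isqrt, pi

def count_on_unit_circle(h):
    # rational points (a/c, b/c) in lowest terms on x^2 + y^2 = 1 with denominators <= h
    count = 4                                    # (+-1, 0) and (0, +-1)
    for c in range(2, h + 1):
        for a in range(1, c):
            b = isqrt(c * c - a * a)
            if b > 0 and b * b == c * c - a * a and gcd(a, c) == 1:
                count += 4                       # sign choices of (+-a/c, +-b/c); the swapped point is counted when a takes the value b
    return count

for h in [20, 200, 2000]:
    n = count_on_unit_circle(h)
    print(h, n, n / (4 * h / pi))                # h = 20 gives 28, and the ratio tends to 1
```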
| {
"language": "en",
"url": "https://mathoverflow.net/questions/219465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 0
} |
Integer solutions of (x+1)(xy+1)=z^3 Consider the equation
$$(x+1)(xy+1)=z^3,$$
where $x,y$ and $z$ are positive integers with $x$ and $y$ both at least $2$ (and so $z$ is necessarily at least $3$). For every $z\geq 3$, there exists the solution
$$x=z-1 \quad \text{and} \quad y=z+1.$$
My question is, if one imposes the constraint that
$$x^{1/2} \leq y \leq x^2 \leq y^4,$$
can there be any other integer solutions (with $x,y \geq 2$)?
Moreover, $z$ may be assumed for my purposes to be even with at least three prime divisors.
Thank you in advance for any advice.
Edit: there was a typo in my bounds relating $x$ and $y$.
| Here is another family of solutions.
For natural $q$:
$x=3,y=144 q^{3} + 360 q^{2} + 300 q + 83,z=12q+10$.
Yours is linear in $y$, so solutions are positive integers
of the form $y=\frac{z^{3} - x - 1}{x^{2} + x}$.
Plugging in small values of $x$, more solutions may come from integer values of the following expressions:
x= 2 y= (1/6) * (z^3 - 3)
x= 3 y= (1/12) * (z^3 - 4)
x= 4 y= (1/20) * (z^3 - 5)
x= 5 y= (1/30) * (z^3 - 6)
x= 6 y= (1/42) * (z^3 - 7)
x= 7 y= (1/56) * (z - 2) * (z^2 + 2*z + 4)
x= 8 y= (1/72) * (z^3 - 9)
x= 9 y= (1/90) * (z^3 - 10)
x= 10 y= (1/110) * (z^3 - 11)
x= 11 y= (1/132) * (z^3 - 12)
x= 12 y= (1/156) * (z^3 - 13)
x= 13 y= (1/182) * (z^3 - 14)
x= 14 y= (1/210) * (z^3 - 15)
| {
"language": "en",
"url": "https://mathoverflow.net/questions/235539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is there an explicit expression for Chebyshev polynomials modulo $x^r-1$? This is an immediate successor of Chebyshev polynomials of the first kind and primality testing and does not have any other motivation - although original motivation seems to be huge since a positive answer (if not too complicated) would give a very efficient primality test (see the linked question for details).
Recall that the Chebyshev polynomials $T_n(x)$ are defined by $T_0(x)=1$, $T_1(x)=x$ and $T_{n+1}(x)=2xT_n(x)-T_{n-1}(x)$, and there are several explicit expressions for their coefficients. Rather than writing them down (you can find them at the Wikipedia link anyway), let me just give a couple of examples:
$$
T_{15}(x)=-15x(1-4\frac{7\cdot8}{2\cdot3}x^2+4^2\frac{6\cdot7\cdot8\cdot9}{2\cdot3\cdot4\cdot5}x^4-4^3\frac{8\cdot9\cdot10}{2\cdot3\cdot4}x^6+4^4\frac{10\cdot11}{2\cdot3}x^8-4^5\frac{12}{2}x^{10}+4^6x^{12})+4^7x^{15}
$$
$$
T_{17}(x)=17x(1-4\frac{8\cdot9}{2\cdot3}x^2+4^2\frac{7\cdot8\cdot9\cdot10}{2\cdot3\cdot4\cdot5}x^4-4^3\frac{8\cdot9\cdot10\cdot11}{2\cdot3\cdot4\cdot5}x^6+4^4\frac{10\cdot11\cdot12}{2\cdot3\cdot4}x^8-4^5\frac{12\cdot13}{2\cdot3}x^{10}+4^6\frac{14}{2}x^{12}-4^7x^{14})+4^8x^{17}
$$
It seems that $n$ is a prime if and only if all the ratios in the parentheses are integers; this is most likely well known and easy to show.
The algorithm described in the above question requires determining whether, for an odd $n$, coefficients of the remainder from dividing $T_n(x)-x^n$ by $x^r-1$, for some fairly small prime $r$ (roughly $\sim\log n$) are all divisible by $n$. In other words, denoting by $a_j$, $j=0,1,2,...$ the coefficients of $T_n(x)-x^n$, we have to find out whether the sum $s_j:=a_j+a_{j+r}+a_{j+2r}+...$ is divisible by $n$ for each $j=0,1,...,r-1$.
The question then is: given $r$ and $n$ as above ($n$ odd, $r$ a prime much smaller than $n$), is there an efficient method to find these sums $s_j$ without calculating all $a_j$? I. e., can one compute $T_n(x)$ modulo $x^r-1$ (i. e. in a ring where $x^r=1$) essentially easier than first computing the whole $T_n(x)$ and then dividing by $x^r-1$ in the ring of polynomials?
(As already said, only the question of divisibility of the result by $n$ is required; also $r$ is explicitly given (it is the smallest prime with $n$ not $\pm1$ modulo $r$). This might be easier to answer than computing the whole polynomials mod $x^r-1$.)
| There's a rapid algorithm to compute $T_n(x)$ modulo $(n,x^r-1)$. Note that
$$
\pmatrix{T_n(x) \\ T_{n-1}(x)} = \pmatrix { 2x & -1 \\ 1&0} \pmatrix{T_{n-1}(x) \\ T_{n-2}(x)} = \pmatrix { 2x & -1 \\ 1&0}^{n-1} \pmatrix{ x\\ 1}.
$$
Now you can compute these matrix powers all modulo $(n, x^{r}-1)$ rapidly by repeated squaring. Clearly $O(\log n)$ multiplications (of $2\times 2$ matrices) are required, and the matrices have entries that are polynomials of degree at most $r$ and coefficients bounded by $n$. So the complexity is a polynomial in $r$ and $\log n$.
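For concreteness, here is a minimal sketch of that scheme (the helper names are mine): $2\times2$ matrices with entries in $\mathbb{Z}[x]/(n,x^r-1)$, raised to the power $n-1$ by repeated squaring.
```python
def poly_mul(a, b, r, n):
    # product of two length-r coefficient lists, reduced mod x^r - 1 (indices wrap) and mod n
    c = [0] * r
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                k = i + j
                if k >= r:
                    k -= r
                c[k] = (c[k] + ai * bj) % n
    return c

def poly_add(a, b, n):
    return [(u + v) % n for u, v in zip(a, b)]

def mat_mul(A, B, r, n):
    return [[poly_add(poly_mul(A[i][0], B[0][j], r, n),
                      poly_mul(A[i][1], B[1][j], r, n), n) for j in range(2)]
            for i in range(2)]

def chebyshev_mod(n, r):
    # T_n(x) modulo (n, x^r - 1), r >= 2, via [[2x, -1], [1, 0]]^(n-1) applied to (x, 1)
    zero = [0] * r
    one = [1 % n] + [0] * (r - 1)
    x = [0, 1 % n] + [0] * (r - 2)
    M = [[[0, 2 % n] + [0] * (r - 2), [(-1) % n] + [0] * (r - 1)], [one, zero]]
    P = [[one, zero], [zero, one]]
    e = n - 1
    while e:
        if e & 1:
            P = mat_mul(P, M, r, n)
        M = mat_mul(M, M, r, n)
        e >>= 1
    return poly_add(poly_mul(P[0][0], x, r, n), P[0][1], n)   # first component of P.(x, 1)

# Since T_p(x) = x^p mod p for p prime, a prime n must return the monomial x^(n mod r).
for n, r in [(31, 5), (1009, 7), (91, 11), (341, 7)]:
    target = [0] * r
    target[n % r] = 1 % n
    print(n, chebyshev_mod(n, r) == target)   # True for the primes; composites are expected to fail
```
The polynomial multiplication here is the naive $O(r^2)$ one; with faster multiplication the total cost matches the bound stated above.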
| {
"language": "en",
"url": "https://mathoverflow.net/questions/286626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 2,
"answer_id": 1
} |
Bounding an elliptic-type integral Let $K>L>0$. I would like to find a good upper bound for the integral
$$\int_0^L \sqrt{x \left(1 + \frac{1}{K-x}\right)} \,dx.$$
An explicit expression for the antiderivative would have to involve elliptic functions; thus, if we want to stick to simple expressions, a bound is the best we can do.
One obvious approach goes as follows:
$$\begin{aligned}
\int_0^L \sqrt{x \left(1 + \frac{1}{K-x}\right)} \, dx &\leq
\int_0^L \sqrt{x} \left(1 + \frac{1/2}{K-x}\right)\, dx\\
&\leq \frac{2}{3} L^{3/2} + \frac{1}{2} \sqrt{\int_0^L x \,dx \cdot \int_0^L \frac{dx}{(K-x)^2}}\\
&= \frac{2}{3} L^{3/2} + \frac{1}{\sqrt{8}} \frac{L^{3/2}}{\sqrt{K (K-L)}}
\end{aligned}$$
The second inequality (Cauchy-Schwarz) does not seem too bad, though one can easily improve on the constant $1/\sqrt{8}$ by proceeding more directly.
I dislike the first step, however, as it makes the integral diverge as $L\to K^-$, whereas the original integral did not.
What other simple approximations are there? Is it obvious that a bound of type $(2/3) L^{3/2} + O(L^{3/2}/K)$, say, could not be valid?
| In general one has the following bound on your integral
$$
\frac{2}{3}L^{3/2}+ \frac{L^{1/2}}{2\sqrt{2}}\ln \left(\frac{1+2K}{2+4(K-L)}\right)\leq \mathrm{integral}\leq \frac{2}{3}L^{3/2}+ L^{1/2} \ln\left(\frac{3+4K}{1+4(K-L)}\right)
$$
Let us make change of variables $x=Ly$. Then we want to estimate from above
$$
L^{3/2}\int_{0}^{1}\sqrt{y \left(1+ \frac{1}{K-Ly}\right)} - \sqrt{y}\; dy
$$
We have
$$
\int_{0}^{1}\sqrt{y}\left(\frac{\sqrt{K-Ly+1}-\sqrt{K-Ly}}{\sqrt{K-Ly}} \right)\; dy =
\frac{1}{K}\times \int_{0}^{1}\sqrt{y}\left(\frac{1}{\sqrt{1-(L/K)y}(\sqrt{1-(L/K)y+(1/K)}+\sqrt{1-(L/K)y})} \right)\; dy \leq
\frac{1}{K} \times \int_{0}^{1}\left(\frac{1}{\sqrt{1-(L/K)y}\sqrt{1-(L/K)y+(1/K)}} \right)\; dy = \frac{1}{L}\ln\left( \frac{1+2K+2\sqrt{K}\sqrt{K+1}}{1+2(K-L)+2\sqrt{K-L}\sqrt{K-L+1}}\right)\leq \frac{1}{L}\ln\left(\frac{3+4K}{1+4(K-L)}\right)
$$
Update (lower bound):
$$
\int_{0}^{1}\sqrt{y}\left(\frac{1}{\sqrt{1-(L/K)y}(\sqrt{1-(L/K)y+(1/K)}+\sqrt{1-(L/K)y})} \right)\; dy \geq \frac{1}{\sqrt{2}}\int_{1/2}^{1}\frac{dy}{\sqrt{1-(L/K)y}\,\big(\sqrt{1-(L/K)y+(1/K)}+\sqrt{1-(L/K)y}\big)} \geq \frac{1}{2\sqrt{2}}\int_{1/2}^{1}\frac{dy}{\sqrt{1-(L/K)y}\sqrt{1-(L/K)y+(1/K)}} = \frac{1}{2\sqrt{2}}\frac{K}{L}\int_{1-\frac{L}{K}}^{1-\frac{L}{2K}} \frac{dz}{\sqrt{z}\sqrt{z+\frac{1}{K}}}
$$
(Here we used $\sqrt{y}\geq\tfrac{1}{\sqrt{2}}$ on $[1/2,1]$ and $\sqrt{z+\tfrac1K}+\sqrt{z}\leq 2\sqrt{z+\tfrac1K}$.) Now let us use the fact that
$$
\int \frac{dz}{\sqrt{z^{2}+\frac{z}{K}}} = \ln \left(1+ 2Kz +2\sqrt{Kz}\sqrt{Kz+1}\right) - \ln K, \quad z>0
$$
which can be checked by direct differentiation (but I will skip it); then we can continue our chain of inequalities
$$
=\frac{1}{\sqrt{2}}\frac{K}{L}\,\ln \frac{1+2(K-\frac{L}{2}) +2\sqrt{K-\frac{L}{2}}\sqrt{K-\frac{L}{2}+1}}{1+2(K-L) +2\sqrt{K-L}\sqrt{K-L+1}}\geq
\frac{1}{\sqrt{2}}\frac{K}{L}\,\ln \frac{1+4(K-\frac{L}{2})}{2+2(K-L)}\geq
\frac{1}{\sqrt{2}}\frac{K}{L}\,\ln \frac{1+2K}{2+2(K-L)}
$$
which is pretty much the same.
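A quick numerical check of the two bounds displayed at the top of this answer, on a few sample pairs:
```python
from mpmath import mp, quad, sqrt, log, mpf

mp.dps = 25

def check(K, L):
    K, L = mpf(K), mpf(L)
    I = quad(lambda x: sqrt(x * (1 + 1 / (K - x))), [0, L])
    lower = 2 * L**mpf(1.5) / 3 + sqrt(L) / (2 * sqrt(2)) * log((1 + 2*K) / (2 + 4*(K - L)))
    upper = 2 * L**mpf(1.5) / 3 + sqrt(L) * log((3 + 4*K) / (1 + 4*(K - L)))
    return bool(lower <= I <= upper)

print([check(K, L) for K, L in [(2, 1), (10, 9.5), (100, 99), (5, 4.999)]])   # [True, True, True, True]
```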
| {
"language": "en",
"url": "https://mathoverflow.net/questions/299025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Number of numbers in $n$th difference sequence Suppose that $r$ is an irrational number with fractional part between $1/3$ and $2/3$. Let $D_n$ be the number of distinct $n$th differences of the sequence $(\lfloor{kr}\rfloor)$. It appears that
$$D_n=(2,3,3,5,4,7,5,9,6,11,7,13,8,\ldots),$$
essentially A029579. Can someone verify that what appears here is actually true?
Example: for $r=(1+\sqrt{5})/2$, we find
$$(\lfloor{kr}\rfloor)=(1,3,4,6,8,9,11,12,14,16,17,\ldots) = A000201 = \text{ lower Wythoff sequence,}$$
\begin{align*}
\Delta^1 =&(2,1,2,2,1,2,1,2,2,1,2,2,1,\ldots), & D_1=2, \\
\Delta^2 =&(1,-1,1,0,-1,1,-1,1,0,-1,1,\ldots), & D_2=3, \\
\Delta^3 =&(-2,2,-1,-1,2,-2,2,-1,-1,2,\ldots), & D_3=3.
\end{align*}
| It's easy to see that $D_n\leq \texttt{A029579}(n)$. Indeed, $\Delta^1$ is a Sturmian word, which is known to have exactly $n+1$ factors of length $n$.
Now, $\Delta^n$ is formed by values of the $(n-1)$-th difference operator on the factors of $\Delta^1$ of length $n$, i.e.,
$$\Delta^n = \left(\sum_{i=0}^{n-1} \binom{n-1}{i} (-1)^{n-1-i} \Delta^1_{k+i}\quad \big|\quad k=1,2,\dots\right).$$
For even $n$, we immediately have $D_n\leq n+1 = \texttt{A029579}(n)$.
For odd $n$, we additionally notice (I did not verify this carefully) that (i) the reverse of a Sturmian factor is a factor itself, (ii) values of the operator on a factor and its reverse are the same, and (iii) there are exactly two symmetric factors. Hence, here we have
$$D_n\leq \frac{n+1-2}2 + 2 = \frac{n+3}2 = \texttt{A029579}(n).$$
It remains to prove that, besides the aforementioned cases, the operator values on factors of length $n$ are distinct.
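Empirically the bound is attained; a quick check with exact integer arithmetic for $r=(1+\sqrt5)/2$ (a finite window, so only an approximation of $D_n$):
```python
from math import isqrt

def D(n, N=4000):
    # distinct n-th differences of floor(k*(1+sqrt(5))/2), k = 1..N, computed exactly
    seq = [(k + isqrt(5 * k * k)) // 2 for k in range(1, N + 1)]
    for _ in range(n):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return len(set(seq))

a029579 = lambda n: n + 1 if n % 2 == 0 else (n + 3) // 2
print([D(n) for n in range(1, 14)])          # 2, 3, 3, 5, 4, 7, 5, 9, 6, 11, 7, 13, 8
print([a029579(n) for n in range(1, 14)])    # the same values
```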
| {
"language": "en",
"url": "https://mathoverflow.net/questions/331724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
an identity between two elliptic integrals I would like a direct change of variable proof of the identity
$$\int_0^{\arctan\frac{\sqrt{2}}{\sqrt{\sqrt{3}}}} \frac{1}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^2\phi }}d\phi=\int_0^{\arctan\frac{1}{\sqrt{\sqrt{3}}}}\frac{1}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^22\phi }}d\phi\,.$$
I need it as part of a paper on Legendre's proof of the "third singular modulus."
| Not yet an answer, but a bit too long for a comment. The Legendre normal form of these elliptic integrals might be a first step, at least by introducing simpler integration limits:
\begin{align}
&I_1=\int_0^{\arctan\frac{\sqrt{2}}{\sqrt{\sqrt{3}}}} \frac{d\phi}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^2\phi }}=\int_0^{\sqrt{3}-1}\frac{dt}{\sqrt{(1-t^2)(1-\frac{2+\sqrt{3}}{4}t^2)}}, \\
&I_2=\int_0^{\arctan\frac{1}{\sqrt{\sqrt{3}}}}\frac{d\phi}{\sqrt{1-\frac{2+\sqrt{3}}{4}\sin^22\phi }}=\frac{1}{2}\int_0^{3^{1/4}(\sqrt{3}-1)}\frac{dt}{\sqrt{(1-t^2)(1-\frac{2+\sqrt{3}}{4}t^2)}}.
\end{align}
| {
"language": "en",
"url": "https://mathoverflow.net/questions/349272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to show that $x_{k+1}+x_{k+2} + \cdots + x_n < 2m$? Let $k \le n$ be positive integers and let $m$ be a positive integer. Assume that $x_1, \ldots, x_n$ are non-negative integers and
\begin{align}
& x_1^2 + x_2^2 + \cdots + x_n^2 - (k-2) m^2=2, \\
& x_1 + \cdots + x_n = k m, \\
& x_1 \ge x_2 \ge \cdots \ge x_n.
\end{align}
How to show that $x_{k+1}+x_{k+2} + \cdots + x_n < 2m$?
It is easy to see that the result is true for $k=1,2$.
In the case of $k=3$, we have
\begin{align}
& x_1^2 + x_2^2 + \cdots + x_n^2 = m^2 + 2, \\
& x_1 + \cdots + x_n = 3 m, \\
& x_1 \ge x_2 \ge \cdots \ge x_n.
\end{align}
We have to estimate the solutions of the above equations. Is there some method to do this? Thank you very much.
| I am afraid it is not true. Test the situation when $x_2=x_3=\ldots=1$, equations read as $x_1^2+(n-1)=(k-2)m^2+2$, $x_1+(n-1)=km$, that gives $x_1^2-x_1=(k-2)m^2-km+2$. Let's think that both $k$ and $m$ are large (say greater than 1000). Then $x_1$ is something like $\sqrt{k}m$, $x_{k+1}+\ldots+x_n=n-k=km+1-k-x_1$ is something like $(k+o(k))m\gg 2m$.
It remains to find the solution of $x_1^2-x_1=(k-2)m^2-km+2$ with large $k$ and $m$. It reads as $x_1^2-x_1+m^2-2=k(m^2-m)$. For fixed $m$ it is solvable if $x^2-x+m^2-2$ can be divisible by $m^2-m$ (then it is divisible for arbitrarily large $x$, which makes $k$ large as well). Modulo $m$ we may choose $x\equiv 2\pmod m$. Modulo $m-1$ we need $x^2-x-1\equiv 0$, so just take $m-1=a^2-a-1$ for some large $a$. Now combine solutions modulo $m$ and $m-1$ by the Chinese remainder theorem.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/351162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Coefficients of $(2+x+x^2)^n$ from trinomial coefficients I would like to be able to express the coefficients of $(2+x+x^2)^n$ in terms of the trinomial coefficients studied by Euler, ${n \choose \ell}_2 = [x^\ell](1+x+x^2)^n$ where $[x^\ell]$ denotes the coefficient of $x^\ell$. The triangle of these numbers is given in OEIS A027907 and begins
\begin{matrix}
1 \\
1 & 1 & 1 \\
1 & 2 & 3 & 2 & 1 \\
1 & 3 & 6 & 7 & 6 & 3 & 1\\
1 & 4 & 10 & 16 & 19 & 16 & 10 & 4 & 1
\end{matrix}
The triangle $t(n,\ell) = [x^\ell](2+x+x^2)^n$ I want to relate to the ${n \choose \ell}_2$ begins
\begin{matrix}
1 \\
2 & 1 & 1 \\
4 & 4 & 5 & 2 & 1 \\
8 & 12 & 18 & 13 & 9 & 3 & 1\\
16 & 32 & 56 & 56 & 49 & 28 & 14 & 4 & 1
\end{matrix}
I'm hoping for a general result of the form $t(n,\ell) = \left(\text{function of ${m \choose k}_2$}\right)$ with $m \le n$ and $k \le \ell$. I see patterns for certain columns and diagonals, and recurrence relations within the triangle, but not yet a general expression in terms of trinomial coefficients.
One note: The trinomial coefficients can be worked out in terms of binomial coefficients, but I'd like an expression in ${n \choose \ell}_2$ instead, as this is the first step in a larger program: Eventually I want to relate the coefficients of $(2+x+\cdots+x^k)^n$ to ${n \choose \ell}_k = [x^\ell](1+x+\cdots+x^k)^n$.
| Using Abdelmalek's tip in the comments, here's a solution to a more general version of the "larger program" mentioned at the end. For an arbitrary constant $c$,
\begin{align}
[x^\ell](c+x+\cdots+x^k)^n & = [x^\ell] \left((c-1) + (1+x+\cdots+x^k)\right)^n \\
& = \sum_{m=0}^n {n \choose m}(c-1)^{n-m}[x^\ell](1+x+\cdots+x^k)^m \\
& = \sum_{m=0}^n {n \choose m}(c-1)^{n-m}{m \choose \ell}_k
\end{align}
where we use the binomial theorem in the second line.
In the case of the original question, $c=2$ means the $(c-1)^{n-m}$ factor is always 1. You can think of the row $t(4,\ell)$ coming from dot products of $(1,4,6,4,1)$ with each column in the first five rows of the ${n \choose k}_2$ triangle:
\begin{gather}
(1,4,6,4,1)\cdot(1,1,1,1,1) = 16,\\
(1,4,6,4,1)\cdot(0,1,2,3,4) = 32,\\
(1,4,6,4,1)\cdot(0,1,3,6,10) = 56,\\
(1,4,6,4,1)\cdot(0,0,2,7,16) = 56,\\
(1,4,6,4,1)\cdot(0,0,1,6,19) = 49,\\
(1,4,6,4,1)\cdot(0,0,0,3,16) = 28,\\
(1,4,6,4,1)\cdot(0,0,0,1,10) = 14,\\
(1,4,6,4,1)\cdot(0,0,0,0,4) = 4,\\
(1,4,6,4,1)\cdot(0,0,0,0,1) = 1.
\end{gather}
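The general formula is easy to test mechanically; a small script for the $(2+x+x^2)^4$ row (the helper names are mine):
```python
from math import comb

def poly_coeffs(base, n, k):
    # coefficients of (base + x + ... + x^k)^n by repeated convolution
    p = [1]
    factor = [base] + [1] * k
    for _ in range(n):
        q = [0] * (len(p) + k)
        for i, a in enumerate(p):
            for j, b in enumerate(factor):
                q[i + j] += a * b
        p = q
    return p

def knomial(m, l, k=2):
    row = poly_coeffs(1, m, k)
    return row[l] if 0 <= l < len(row) else 0

n, k, c = 4, 2, 2
lhs = poly_coeffs(c, n, k)
rhs = [sum(comb(n, m) * (c - 1)**(n - m) * knomial(m, l, k) for m in range(n + 1))
       for l in range(k * n + 1)]
print(lhs)   # [16, 32, 56, 56, 49, 28, 14, 4, 1]
print(rhs)   # the same row
```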
Thanks for putting up with what ended up being an elementary question.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/362041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The edge precoloring extension problem for complete graphs Consider coloring the edges of a complete graph on even order. This can be seen as the completion of an order $n$ symmetric Latin square except the leading diagonal. My question pertains to whether we can always complete the edge coloring in $n-1$ colors given a certain set of colors. The number of colors I fix is exactly equal to $\frac{k(k+2)}{2}$, where $k=\frac{n}{2}$, and the fixed entries form the last four consecutive subdiagonals (and, by symmetry, superdiagonals) in the partial Latin square.
For example, in the case of $K_8$, I fix the following colors:
\begin{bmatrix}X&&&&1&3&7&4\\&X&&&&2&4&1\\&&X&&&&3&5\\&&&X&&&&6\\1&&&&X&&&\\3&2&&&&X&&\\7&4&3&&&&X&\\4&1&5&6&&&&X\end{bmatrix}
A completion to a proper edge coloring in this case would be:
\begin{bmatrix}X&5&6&2&1&3&7&4\\5&X&7&3&6&2&4&1\\6&7&X&4&2&1&3&5\\2&3&4&X&7&5&1&6\\1&6&2&7&X&4&5&3\\3&2&1&5&4&X&6&7\\7&4&3&1&5&6&X&2\\4&1&5&6&3&7&2&X\end{bmatrix}
Can the above be always done if the colors I fix follow the same pattern for all even order complete graphs? Note that the pattern followed in the precoloring consists of two portions-
i) the last $k-1$ subdiagonals are actually taken from a canonical $n$-edge coloring of the complete graph on $n-1$ vertices, where $n$ is even. By canonical, I mean the commutative idempotent 'anti-circulant' latin square. Like in the example above, the canonical coloring of the complete graph on $7$ vertices is
\begin{bmatrix}1&5&2&6&3&7&4\\5&2&6&3&7&4&1\\2&6&3&7&4&1&5\\6&3&7&4&1&5&2\\3&7&4&1&5&2&6\\7&4&1&5&2&6&3\\4&1&5&2&6&3&7\end{bmatrix}
ii)The $k$-th subdiagonal just consists of entries in the pattern $1-2-3-$ so on and takes into account the previous entries to create an appropriate entry. Like in the example above the last diagonal I took was $1-2-3-6$. It could also have been $1-2-3-7$.
And, if the completion exists, would the completion be unique? Any hints? Thanks beforehand.
| Assuming that you mean to precolour $k$ subdiagonals and have no further constraints on the precolouring, the answer to both of your questions is no.
For every $n$ there is a precolouring which cannot be extended: choose colours $1, \dots n/2$ in the first row and colours $n/2+1, \dots, n-1$ in the second row (and thus the second column). Then there is no valid colour for the entry in the first row/second column, so we cannot complete the colouring.
If we can complete the colouring, then the completion is not necessarily unique: note that we can always give a valid precolouring only using colours $1 \dots k$. Thus in any completion of this precolouring we can permute the colours $k+1, \dots, n-1$ to obtain a different completion.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/366312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Looking for a combinatorial proof for a Catalan identity Let $C_n=\frac1{n+1}\binom{2n}n$ be the familiar Catalan numbers.
QUESTION. Is there a combinatorial or conceptual justification for this identity?
$$\sum_{k=1}^n\left[\frac{k}n\binom{2n}{n-k}\right]^2=C_{2n-1}.$$
| Let, $$\sum_{k=1}^{n} (\frac{k}{n}\binom{2n}{n-k})^2 =A_{n}$$
Now, using the fact that $(n+k)^2+(n-k)^2=2(n^2+k^2)$ and $\binom{2n}{n-k}=\binom{2n}{n+k}$, we get the following
$$\frac{1}{n^2}(\sum_{a=0}^{2n} a^2\binom{2n}{a}^2 -n^2\binom{2n}{n}^2)=\frac{1}{n^2}[\sum_{k=1}^n (n-k)^2\binom{2n}{n-k}^2+(n+k)^2\binom{2n}{n+k}^2]=(\sum_{k=0}^{2n} \binom{2n}{k}^2- \binom{2n}{n}^2)+2A_{n}$$ $\cdots (1)$
Now, $$\frac{1}{n^2}(\sum_{a=0}^{2n} a^2\binom{2n}{a}^2-n^2\binom{2n}{n}^2)=4\binom{4n-2}{2n-1}-\binom{2n}{n}^2$$
and $$(\sum_{k=0}^{2n} \binom{2n}{k}^2- \binom{2n}{n}^2)=\binom{4n}{2n}-\binom{2n}{n}^2$$
Hence, from equation (1), $$A_{n}=2\binom{4n-2}{2n-1}-\frac{4n-1}{2n}\binom{4n-2}{2n-1}$$
$$=\frac{1}{2n}\binom{4n-2}{2n-1}=C_{2n-1}$$
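The identity is easily confirmed in exact arithmetic for small $n$:
```python
from fractions import Fraction
from math import comb

catalan = lambda m: comb(2 * m, m) // (m + 1)
for n in range(1, 40):
    lhs = sum((Fraction(k, n) * comb(2 * n, n - k)) ** 2 for k in range(1, n + 1))
    assert lhs == catalan(2 * n - 1)
print("verified for n = 1..39")
```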
| {
"language": "en",
"url": "https://mathoverflow.net/questions/383314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
The devil's playground On the $\mathbb{R}^2$ plane, the devil has trapped the angel in an equilateral triangle of firewalls.
The devil
*
*starts at the apex of the triangle.
*can move at speed $1$ to leave a trajectory of firewall behind, as this
*
*can teleport from one point to another along the firewall.
The angel
*
*can teleport to any point that is not completely separated by firewalls from her current position.
The devil catches the angel if their distance is $0$.
Question 1: how should the devil move to catch the angel in the shortest amount of time?
Question 2: if the devil is given a fixed length of firewalls to enclose the angel in the beginning, what shape maximizes the survival time of the angel? (The devil always starts on the firewall)
| Here is an upper bound for the triangle of side length $1$.
First, divide the triangle in half into two triangles with angles $\frac{\pi}{6},\frac{\pi}{3}, \frac{\pi}{2}$ and side lengths $1, \frac{\sqrt{3}}{2},\frac{1}{2}$. This requires drawing an edge of length $\frac{\sqrt{3}}{2}$. It doesn't matter which one the angel goes into.
Next, divide each of these triangles into two triangles with angles $\frac{\pi}{6},\frac{\pi}{3}, \frac{\pi}{2}$. The angel wisely goes into the larger one. Repeat this process.
Starting from a triangle with side lengths $a, \frac{\sqrt{3}}{2} a, \frac{1}{2}a$, this produces a triangle of side lengths $\frac{\sqrt{3}}{2} a, \frac{3}{4}a, \frac{\sqrt{3}}{4}a$, after drawing an edge of length $\frac{\sqrt{3}}{4}a$.
So the total length drawn is
$$ \frac{\sqrt{3}}{2} + \frac{ \sqrt{3}}{4} + \frac{ \sqrt{3}}{2} \frac{ \sqrt{3}}{4} +\left( \frac{ \sqrt{3}}{2}\right)^2 \frac{ \sqrt{3}}{4} + \dots = \frac{\sqrt{3}}{2} + \frac{ \frac{\sqrt{3}}{4}}{1- \frac{\sqrt{3}}{2}}= \frac{\sqrt{3}}{2} + \frac{ \sqrt{3} }{4 - 2 \sqrt{3} }=\frac{3 \sqrt{3} -3}{4 - 2 \sqrt{3}} =4.098\dots$$
which is probably not optimal.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/403396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
An inequality involving binomial coefficients and the powers of two I came across the following inequality, which should hold for any integer $k\geq 1$:
$$\sum_{j=0}^{k-1}\frac{(-1)^{j}2^{k-1-j}\binom{k}{j}(k-j)}{2k+1-j}\leq
\frac{1}{3}.$$
I have been struggling with this statement for a while. It looks valid for small $k$, but a formal proof seems out of reach with my tools. Any suggestions on how to approach this?
| For $j=0,\dots,k-1$,
\begin{equation*}
\frac1{2k+1-j}=\int_0^1 x^{2k-j}\,dx.
\end{equation*}
So,
\begin{equation*}
\begin{aligned}
s:=&\sum_{j=0}^{k-1}\frac{(-1)^{j}2^{k-1-j}\binom{k}{j}(k-j)}{2k+1-j} \\
&=\int_0^1 dx\,\sum_{j=0}^{k-1}(-1)^{j}2^{k-1-j}\binom{k}{j}(k-j)x^{2k-j} \\
&=\int_0^1 dx\,kx^{k+1}(2x-1)^{k-1}=I_1+I_2,
\end{aligned}
\tag{1}\label{1}
\end{equation*}
where
\begin{equation*}
I_1:=\int_0^{1/2} dx\,kx^{k+1}(2x-1)^{k-1}=(-1)^{k-1}\frac{k!\,(k+1)!}{2^{k+2}\,(2k+1)!}\le
\frac1{2^{2k+2}}\le\frac1{16k^2}, \tag{2}\label{2}
\end{equation*}
\begin{equation*}
I_2:=\int_{1/2}^1 dx\,ke^{g(x)},
\end{equation*}
\begin{equation*}
g(x):=(k+1)\ln x+(k-1)\ln(2x-1).
\end{equation*}
Next, $g(1)=0$, $g'(1)=3k-1$, and, for $x\in(1/2,1)$,
\begin{equation*}
g''(x)=-\frac{k+1}{x^2}-\frac{4(k-1)}{(2x-1)^2}\le-(k+1)-4(k-1)=3-5k
\end{equation*}
and hence $g(x)\le h(x):=
(3k-1)(x-1)+(3-5k)(x-1)^2/2$.
So,
\begin{equation*}
I_2\le\int_{-\infty}^1 dx\,ke^{h(x)}=J(k):=\sqrt{\frac{\pi }{2}} e^{\frac{(1-3 k)^2}{10 k-6}} k\,
\frac{\text{erf}\left(\frac{1-3 k}{\sqrt{10 k-6}}\right)+1}{\sqrt{5 k-3}}.
\tag{3}\label{3}
\end{equation*}
Let
\begin{equation*}
H(k):=\text{erf}\left(\frac{1-3 k}{\sqrt{10 k-6}}\right)+1
-\left(\frac{1}{3}-\frac{1}{16k^2}\right)\frac{\sqrt{\frac{2}{\pi }} e^{-\frac{(1-3 k)^2}{10 k-6}} \sqrt{5 k-3} }{k}.
\end{equation*}
Then
\begin{equation*}
H'(k)=\frac{e^{-\frac{(1-3 k)^2}{10 k-6}} \left(160 k^4-647 k^3+75 k^2+456 k-162\right)}{48 \sqrt{2 \pi } k^4 (5 k-3)^{3/2}}>0
\end{equation*}
for $k\ge4$ and $H(k)\to0$ as $k\to\infty$. So, for $k\ge4$ we have $H(k)<0$ or, equivalently,
\begin{equation*}
J(k)<\frac{1}{3}-\frac{1}{16k^2}.
\end{equation*}
Therefore and in view of \eqref{1}, \eqref{2}, and \eqref{3}, for $k\ge4$ we have
\begin{equation*}
s<\frac13,
\end{equation*}
as desired. Checking the latter inequality for $k=1,2,3$ is straightforward.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/417283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Perfect numbers $n$ such that $2^k(n+1)$ is also perfect The smallest two perfect numbers $n=6$ and $m=28$ satisfy
$$
\frac{m}{n+1} = 2^k
$$
with $k=2.$
Question: Are there more pairs of perfect numbers $n,m$ with $n < m$
and such that
$$
\frac{m}{n+1} = 2^k
$$
for some positive integer $k>0.$
Observe that the perfect number $n$, the smaller of $n,m$, may also be an odd number.
| [After typing out this attempt at a "partial" answer, I realized that the details have already been worked out by Luis, Gerhard and Todd. I am posting it as an answer for anybody else who might be interested in how the final result is obtained. - Arnie]
Suppose $m$ is even and $n$ is odd.
Then if $m$ and $n$ are perfect numbers, we have the forms
$$m = 2^{p-1}({2^p} - 1),$$
where $p$ and ${2^p} - 1$ are primes, and
$$n = {q^r}{s^2},$$
where $q$ is prime with $q \equiv r \equiv 1 \pmod 4$ and $\gcd(q,s) = 1$.
Now, from the additional constraint
$$\frac{m}{n + 1} = 2^k,$$
where $k > 0$ is an integer, we obtain the equation
$$m = {2^k}(n + 1).$$
Writing out this last equation in full by plugging in the respective forms for $m$ and $n$ as before, we get
$$2^{p-1}({2^p} - 1) = {2^k}({q^r}{s^2} + 1).$$
By divisibility considerations, since we can assume without loss of generality that $p \geq 2$, $k \geq 1$ and $r \geq 1$, and since $n \equiv 1 \pmod 4$, we get
$$\gcd(2^{p-1}, {q^r}{s^2} + 1) = 2 \Longrightarrow 2^{p-2} \mid 2^k.$$
Similarly, $\gcd({2^k},{2^p} - 1) = 1$ and $({q^r}{s^2} + 1) = n + 1 \equiv 2 \pmod 4 \Longrightarrow 2^{k+1} \mid 2^{p-1} \Longrightarrow 2^k \mid 2^{p-2}$.
Thus,
$$2^k = 2^{p - 2}.$$
This gives
$$k = p - 2.$$
Consequently, we get
$$\frac{m}{n + 1} = 2^k = 2^{p-2} = \frac{2^{p-1}({2^p} - 1)}{{q^r}{s^2} + 1},$$
which yields
$$\frac{1}{2} = \frac{2^p - 1}{{q^r}{s^2} + 1}.$$
This implies
$${q^r}{s^2} + 1 = 2^{p+1} - 2.$$
This finally gives (as Gerhard, Todd and Luis had already noted)
$${q^r}{s^2} = n = 2^{p+1} - 3.$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/64340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Trigonometric identity needed for sums involving secants I am looking for a closed-form formula for the following sum:
$\displaystyle \sum_{k=0}^{N}{\frac{\sin^{2}(\frac{k\pi}{N})}{a \cdot \sin^{2}(\frac{k\pi}{N})+1}}=\sum_{k=0}^{N}{\frac{1}{a+\csc^{2}(\frac{k\pi}{N})}}$.
Is such a formula known?
| This may not be of much (any!) help, but Mathematica 7 gives a closed-form solution in terms of QPolyGamma functions:
$\frac{\psi _{e^{-\frac{2 i \pi }{n}}}^{(0)}\left(1-\frac{\log
\left(\frac{a-2 \sqrt{a+1}+2}{a}\right)}{\log \left(e^{-\frac{2 i
\pi }{n}}\right)}\right)-\psi _{e^{-\frac{2 i \pi
}{n}}}^{(0)}\left(n-\frac{\log \left(\frac{a-2
\sqrt{a+1}+2}{a}\right)}{\log \left(e^{-\frac{2 i \pi
}{n}}\right)}+1\right)+\sqrt{a+1} n \log \left(e^{-\frac{2 i \pi
}{n}}\right)}{a \sqrt{a+1} \log \left(e^{-\frac{2 i \pi
}{n}}\right)}$
$+$
$\frac{\psi _{e^{-\frac{2 i \pi }{n}}}^{(0)}\left(n-\frac{\log
\left(\frac{a+2 \sqrt{a+1}+2}{a}\right)}{\log \left(e^{-\frac{2 i
\pi }{n}}\right)}+1\right)-\psi _{e^{-\frac{2 i \pi
}{n}}}^{(0)}\left(1-\frac{\log \left(\frac{a+2
\sqrt{a+1}+2}{a}\right)}{\log \left(e^{-\frac{2 i \pi
}{n}}\right)}\right)}{a \sqrt{a+1} \log \left(e^{-\frac{2 i \pi
}{n}}\right)}$
$\psi^{(0)}_q$ is the q-PolyGamma function.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/122231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Weierstrass Form How to convert this to Weierstrass form?
$x^{2}y^{2}-2\left( 1+2\rho \right) xy^{2}+y^{2}-x^{2}-2\left( 1+2\rho
\right) x-1=0$
| You can rewrite the form as
\begin{equation*}
y^2=\frac{x^2+2(2\rho+1)x+1}{x^2-2(2\rho+1)x+1}
\end{equation*}
so, for rational solutions (which I presume you want), there exists $z \in \mathbb{Q}$ with
\begin{equation*}
z^2=(x^2+2(2\rho+1)x+1)(x^2-2(2\rho+1)x+1)=x^4-2(8\rho^2+8\rho+1)x^2+1
\end{equation*}
This quartic can be transformed to an equivalent elliptic curve using the method described by Mordell in his book Diophantine Equations. We get (after some fiddling)
\begin{equation*}
v^2=u(u+4\rho^2+4\rho)(u+4\rho^2+4\rho+1)
\end{equation*}
with $x=v/u$.
For example, $\rho=11$ gives a rank $1$ elliptic curve with point $(147,8190)$, giving $x=8190/147$ and $y=\pm 527/163$.
This elliptic curve could be transformed to Weierstrass form if you want, but the above form is probably more useful.
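A quick check of the $\rho=11$ example in exact arithmetic:
```python
from fractions import Fraction

rho, u, v = 11, 147, 8190
a = 4 * rho**2 + 4 * rho
print(v**2 == u * (u + a) * (u + a + 1))                      # the point lies on the curve: True

x = Fraction(v, u)
y2 = (x**2 + 2*(2*rho + 1)*x + 1) / (x**2 - 2*(2*rho + 1)*x + 1)
print(y2 == Fraction(527, 163)**2)                            # recovers y = +-527/163: True
```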
| {
"language": "en",
"url": "https://mathoverflow.net/questions/130893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many solutions to $2^a + 3^b = 2^c + 3^d$? A few weeks ago, I asked on math.stackexchange.com how many quadruples of non-negative integers $(a,b,c,d)$ satisfy the following equation:
$$2^a + 3^b = 2^c + 3^d \quad (a \neq c)$$
I found 5 quadruples: $5 = 2^2 + 3^0 = 2^1 + 3^1$, $11 = 2^3 + 3^1 = 2^1 + 3^2$, $17 = 2^4 + 3^0 = 2^3 + 3^2$, $35 = 2^5 + 3^1 = 2^3 + 3^3$, $259 = 2^8 + 3^1 = 2^4 + 3^5$
I didn't get an answer, but only a link to an OEIS sequence (no more quadruples below $10^{4000}$), so I'm asking the same question here.
Is there a way to prove that they are [not] infinite?
And, more generally, are there known tuples for which the following equation:
$$p_{i_1}^{a_1} + p_{i_2}^{a_2} + ... + p_{i_n}^{a_n}=p_{i_1}^{b_1} + p_{i_2}^{b_2} + ... + p_{i_n}^{b_n}$$
holds for infinitely many (or holds only for finitely many) $a_i,b_i$?
| Here is a proof of finiteness, along the lines of Jordan's answer. Assume without loss that $a>c$ and $d>b$. Then
$$2^a-2^c=3^d-3^b,$$
or equivalently
$$\frac{1-3^{b-d}}{1-2^{c-a}}=\frac{2^a}{3^d}.$$
Since $a>d$, Matveev's explicit bound for linear forms in logarithms implies that
$$\Bigl\lvert\frac{2^{c-a}-3^{b-d}}{1-2^{c-a}}\Bigr\rvert=\lvert 2^a 3^{-d}-1\rvert \ge (ea)^{-R}$$
where
$$
R:=e\cdot 2^{3.5}30^5\log 3.$$
Next, $2^c$ is the largest power of $2$ which divides $3^{d-b}-1$; if $2^i$ denotes the largest power of $2$ which divides $d-b$, then either $[i=0\,\text{ and }\,c=1]$ or $[i>0\,\text{ and }\,c=i+2]$. Likewise, $b=0$ if and only if $a\not\equiv c\pmod{2}$; and if $a\equiv c\pmod{2}$ then $3^{b-1}$ is the largest power of $3$ which divides $a-c$. So in any case we have $c\le 2+\log_2(d-b)$ and
$b\le 1+\log_3(a-c)$. For any fixed value of $d$ there are only finitely many possibilities for $a$ (since $2^{a-1}\le 2^a-2^c<3^d$), and hence also for $b$ and $c$. So assume that $d$ (and hence $a$) is sufficiently large; then $c\le 2+\log_2(d)<2+\log_2(a)$ and $b\le 1+\log_3(a)$ while $\log_3(2^{a-1})<d$, so $b\le\log_3(d)+r$ for some absolute constant $r$. Now
$$\Bigl\lvert\frac{2^{c-a}-3^{b-d}}{1-2^{c-a}}\Bigr\rvert$$
is at most roughly $2^{-a}$, which is smaller than $(ea)^{-R}$ whenever $a$ is sufficiently large.
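A brute-force search over bounded exponents reproduces exactly the five quadruples listed in the question and nothing else in that range; a sketch (the bound $B$ is arbitrary):
```python
from collections import defaultdict

B = 40                       # search exponents 0 <= a, b < B
reps = defaultdict(list)
for a in range(B):
    for b in range(B):
        reps[2**a + 3**b].append((a, b))

# keep values with two representations whose powers of 2 differ (a != c)
for n, ps in sorted(reps.items()):
    if len({a for a, _ in ps}) > 1:
        print(n, ps)         # prints 5, 11, 17, 35, 259
```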
| {
"language": "en",
"url": "https://mathoverflow.net/questions/164624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Combinatorial identity involving the square of $\binom{2n}{n}$ Is there any closed formula for
$$
\sum_{k=0}^n\frac{\binom{2k}{k}^2}{2^{4k}}
$$
?
This sum of is made out of the square of terms $a_{k}:=\frac{\binom{2k}{k}}{2^{2k}}$
I have been trying to verify that $$
\lim_{n\to\infty} (2n+1)\left[\frac{\pi}{4}-\sum_{k=0}^{n-1}\frac{\left(\sum_{j=0}^k a^2_{j}\right)}{(2k+1)(2k+2)}\right] -\frac{1}{2}{\sum_{k=0}^na^2_{k}}=\frac{1}{2\pi},
$$
which seems to be true numerically using Mathematica.
The question above is equivalent to finding some formula for $$b_{n}:=\frac{1}{2^{2n}}\sum_{j=0}^n\frac{\binom{2n+1}{j}}{2n+1-j}.$$ This is because one can verify that
$$(2n+1)b_n=2nb_{n-1}+a_n,\qquad a_{n+1}=\frac{2n+1}{2n+2}a_n,$$
and combining these two we get
$$(2n+2)a_{n+1}b_{n}-(2n)a_nb_{n-1}=a_n^2$$
Summing we get
$$\sum_{k=0}^na_k^2=(2n+1)a_nb_n.$$
I also know that
$$
\frac{\binom{2n}{n}}{2^{2n}}=\binom{-1/2}{n},
$$
so that
$$
\sum_{k=0}^{\infty}\frac{\binom{2k}{k}}{2^{2k}}x^k=(1-x)^{-1/2},\quad |x|<1.
$$
I have also seen the identity
$$
\sum_{k=0}^n\frac{\binom{2k}{k}}{2^{2k}}=\frac{2n+1}{2^{2n}}\binom{2n}{n}.
$$
| The limit I want to verify is
$$
\lim_{n\to\infty} (2n+1)\left[\frac{\pi}{4}-\sum_{k=0}^{n-1}\frac{\left(\sum_{j=0}^k a^2_{j}\right)}{(2k+1)(2k+2)}\right] -\frac{1}{2}{\sum_{k=0}^na^2_{k}}=\frac{1}{2\pi}
$$
For this it is sufficient to prove that the above expression under the limit is bounded above. I know this is true because of the original problem this limit is coming from, but I do not have a short proof. Then, given that this expression is bounded, we can argue as follows: using summation by parts we see that $$
\sum_{k=0}^{n-1}\left(\frac{1}{2k+1}-\frac{1}{2k+3}\right)\sum_{j=0}^ka^2_{j}=\sum_{k=0}^n\frac{a^2_{k}}{2k+1}-\frac{1}{2n+1}\sum_{k=0}^na^2_{k}
$$ and so we can write the limit as
\begin{align*}
\lim_{n\to\infty} (2n+1)\left[\frac{\pi}{4}-\frac{1}{2}\sum_{k=0}^n\frac{a^2_{k}}{2k+1}-\sum_{k=0}^{n-1}\frac{\sum_{j=0}^ka^2_{j}}{(2k+1)(2k+2)(2k+3)}\right]
\end{align*}
Since the limit of the bracket must be zero in order for the whole expression to remain bounded above, $\pi/4$ must equal the series (sum from $k=0$ up to $\infty$) and since
\begin{align*}
a_{n}:=\frac{1}{2^{2n}}&\binom{2n}{n}= \frac{\Gamma(1/2)\Gamma(n+1/2)}{\pi\Gamma(n+1)}= \frac{1+O(1/n)}{\sqrt{\pi n}}
\end{align*}
the limit becomes
\begin{align*}
&\lim_{n\to\infty}(2n+1)\left[\frac{1}{2}\sum_{k=n+1}^\infty\frac{a^2_{k}}{2k+1}+\sum_{k=n}^{\infty}\frac{\sum_{j=0}^ka^2_{j}}{(2k+1)(2k+2)(2k+3)}\right]\\
=&\lim_{n\to\infty} \frac{(2n+1)}{2\pi}\sum_{k=n+1}^\infty\frac{1+O(1/k)}{k(2k+1)}+ (2n+1)O\left(\sum_{k=n}^{\infty}\frac{\sum_{j=0}^k\frac{1}{j}}{(2k+1)(2k+2)(2k+3)}\right)\\
=&\frac{1}{2\pi}.
\end{align*}
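To complement this, the limit is easy to check in floating point; a sketch (convergence is slow, so only a few digits of $1/(2\pi)\approx 0.15915$ should be expected at this $N$):
```python
import math

N = 200000
a2 = 1.0          # a_k^2, starting from a_0^2 = 1
S = a2            # running sum of a_j^2 for j = 0..k
T = 0.0           # sum_{k=0}^{N-1} (sum_{j<=k} a_j^2) / ((2k+1)(2k+2))
for k in range(N):
    T += S / ((2*k + 1) * (2*k + 2))
    a2 *= ((2*k + 1) / (2*k + 2))**2      # a_{k+1}^2
    S += a2

print((2*N + 1) * (math.pi/4 - T) - S/2, 1/(2*math.pi))
```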
| {
"language": "en",
"url": "https://mathoverflow.net/questions/167355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
Number of zeros of a polynomial in the unit disk Suppose $m$ and $n$ are two nonnegative integers. What is the number of zeros of the polynomial $(1+z)^{m+n}-z^n$ in the unit ball $|z|<1$?
Some calculations for small values of $m$ and $n$ suggests the following formula:
$$n-1+2\left\lfloor\frac{m-n+5}{6} \right\rfloor$$
Does this formula work for all values of $m$ and $n$?
| Here is my solution of this problem. We have the following equation for the zeros:
$$
(1+z)^{n+m}=z^n
$$
We can modify it like that:
$$
\Bigl(1+\frac{1}{z}\Bigr)^n \Bigl(1+z\Bigr)^m = 1
$$
Next we denote the first factor by $re^{i\varphi}$, so the equation splits into two:
$$
\Bigl(1+z\Bigr)^m = re^{i\varphi} \\
\Bigl(1+\frac{1}{z}\Bigr)^n = \frac{1}{r}e^{-i\varphi}
$$
Let's consider the first equation. We have to extract the root:
$$
1 + z = r^{\frac{1}{m}} e^{i\frac{\varphi + 2 \pi k}{m}}
$$
Next we subtract $1$ and write down the modulus of $z$, using the condition $|z|<1$:
$$
|z|^2 = |r^{\frac{1}{m}} e^{i\frac{\varphi + 2 \pi k}{m}} - 1|^2 = r^{\frac{2}{m}} - 2 r^{\frac{1}{m}} \cos \frac{\varphi + 2 \pi k}{m} + 1 < 1
$$
Finally, we divide by $r^{\frac{1}{m}}$, so:
$$
r^{\frac{1}{m}} < 2 \cos \frac{\varphi + 2 \pi k}{m}
$$
At the same time we know that $r^{\frac{1}{m}}<1$, so we have to understand which inequality is stronger. Let us write $z=\rho e^{i\psi}$ and consider the following inequality:
$$
\Bigl|1 + \frac{1}{z}\Bigr|^2 = \rho^{-2} + 2\rho^{-1} \cos \psi + 1 > 1
$$
Next,
$$
\rho \cos \psi + 1 > \frac{1}{2}
$$
It's clear that the left part equals $\Re \sqrt[m]{re^{i\varphi}}$, so:
$$
r^{\frac{1}{m}} \cos \frac{\varphi + 2 \pi k}{m} > \frac{1}{2}
$$
Because of $r^{\frac{1}{m}} < 1$ we have:
$$
\cos \frac{\varphi + 2 \pi k}{m} > \frac{1}{2}
$$
Therefore $r^{\frac{1}{m}} < 1$ is stronger and we have $\cos \frac{\varphi + 2 \pi k}{m} > \frac{1}{2}$. Next, we perform the same procedure for the second equation, and finally obtain another inequality:
$$
\cos \frac{-\varphi + 2 \pi k'}{n} < \frac{1}{2}
$$
Let's transform these inequalities:
$$
-\frac{m}{6} < \frac{\varphi}{2\pi} + k < \frac{m}{6} \\
\frac{n}{6} < -\frac{\varphi}{2\pi} + k' < \frac{5n}{6}
$$
We have to eliminate $\varphi$, so:
$$
-\frac{m}{6} - \frac{\varphi}{2\pi} < k < \frac{m}{6} - \frac{\varphi}{2\pi}
$$
And:
$$
-\frac{m}{6} + \frac{n}{6} - k' < k < \frac{m}{6} + \frac{5n}{6} - k'
$$
So, finally:
$$
\frac{n-m}{6} < \ell < \frac{5n+m}{6}
$$
where $\ell = k + k'$. The number of integers $\ell$ satisfying this inequality gives the number of zeros in the unit disk.
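The conjectured count is easy to test numerically for small $m,n$ by computing the roots directly; a sketch (when $m \equiv n \pmod 6$ some roots, e.g. $z=e^{\pm 2\pi i/3}$, lie exactly on $|z|=1$, so the comparison there depends on the tolerance):
```python
import numpy as np
from math import comb

def inside_unit_disk(m, n):
    N = m + n
    coeffs = [comb(N, N - i) for i in range(N + 1)]   # (1+z)^(m+n), highest degree first
    coeffs[N - n] -= 1                                # subtract z^n
    return int(np.sum(np.abs(np.roots(coeffs)) < 1 - 1e-9))

for m in range(1, 9):
    for n in range(1, 9):
        formula = n - 1 + 2 * ((m - n + 5) // 6)
        print(m, n, inside_unit_disk(m, n), formula)
```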
| {
"language": "en",
"url": "https://mathoverflow.net/questions/171895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 2,
"answer_id": 0
} |
Number of intervals needed to cross, Brownian motion Let $B_t$ be a standard Brownian motion. Let $E_{j, n}$ denote the event$$\left\{B_t = 0 \text{ for some }{{j-1}\over{2^n}} \le t \le {j\over{2^n}}\right\},$$and let$$K_n = \sum_{j = 2^n + 1}^{2^{2n}} 1_{E_{j,n}},$$where $1$ denotes indicator function. I have three questions.
*
*What is $\lim_{n \to \infty} \mathbb{P}\{K_n = 0\}$?
*What is $\lim_{n \to \infty} 2^{-n} \mathbb{E}[K_n]$?
*Does there exist $\rho > 0$ such that $\mathbb{P}\{K_n \ge \rho2^{n}\} \ge \rho$ for all $n$?
| I'll address the second question on the expected value of the sum $K_n$.
Let $\phi(x)$ and $\Phi(x)$ be the probability density function and cumulative distribution functions for a standard normal distribution.
Let $h(a,b)$ be the probability that a Brownian motion without drift returns to $0$ at some time in $[a,b]$. Let $h(b)=h(1,b)$. Then by rescaling, $h(a,b)=h(1,b/a)=h(b/a)$. We can calculate this exactly.
For $x \gt 0$ let $f(x,t)$ be the probability that a Brownian motion released at position $x$ will hit $0$ by time $t$. Let $f(x) = f(x,1)$. By rescaling, $f(x,t)= f(\frac{x}{\sqrt{t}},1) = f(\frac{x}{\sqrt{t}})$. By reflection, $f(x) = 2 \Phi(-x) = 2-2\Phi(x)$.
$$\begin{eqnarray}h(b) &=& 2 \int_0^\infty \phi(x) f(x,b-1)~dx \newline &=&2 \int_0^\infty \phi(x)\cdot 2 \Phi\left(\frac{-x}{\sqrt{b-1}}\right)~dx\end{eqnarray}$$
We can use differentiation under the integral sign. $h(1)=0$ and $h(b) = \int_1^b h'(t) dt$.
$$\begin{eqnarray} h'(b) &=& 4 \int_0^\infty \phi(x) \phi\left(\frac{x}{\sqrt{b-1}}\right) \left(\frac{1}{2} \frac{x}{(b-1)^{3/2}}\right) dx \newline &=&\frac{2}{(b-1)^{3/2}}\int_0^\infty x \phi(x) \phi\left( \frac{x}{\sqrt{b-1}}\right) dx \newline &=& \frac{2}{(b-1)^{3/2}} \int_0^\infty \frac{x}{2 \pi} e^{-x^2 \cdot \left(\frac{1}{2} + \frac{1}{2(b-1)}\right)}dx \newline &=& \frac{1}{\pi b \sqrt{b-1}}\end{eqnarray}$$
So, $h(b) = \int_1^b \frac{dy}{\pi y \sqrt{y-1}} = 1-\frac{2}{\pi} \arcsin \frac{1}{\sqrt{b}}$.
$\mathbb{P}(E_{j+1,n})$ is the probability that the Brownian motion returns to $0$ on $\left[\frac{j}{2^n},\frac{j+1}{2^n}\right]$ which is $h(\frac{j}{2^n},\frac{j+1}{2^n}) = h(1 + \frac{1}{j}) = 1-\frac{2}{\pi} \arcsin \frac{1}{\sqrt{1+1/j}}.$ That can be simplified to $1-\frac{2}{\pi}(\frac{\pi}{2} - \arctan \frac{1}{\sqrt{j}}) = \frac{2}{\pi}\arctan \frac{1}{\sqrt{j}}$. For large $j$, this is approximately $\frac{2}{\pi} \frac{1}{\sqrt{j}}$.
$$\mathbb{E}(K_n) \sim \sum_{j=2^n}^{2^{2n}-1} \frac{2}{\pi} \frac{1}{\sqrt{j}} \approx \frac{2}{\pi} \int_{2^n}^{2^{2n}} \frac{1}{\sqrt{x}} dx =\frac{4}{\pi}(2^n -2^{n/2}) \sim \frac{4}{\pi} 2^n.$$
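Since $\mathbb{P}(E_{j+1,n})=\frac{2}{\pi}\arctan\frac{1}{\sqrt j}$ is explicit, the asymptotics are easy to check numerically; a sketch:
```python
import math

for n in range(4, 12):
    EK = sum(2/math.pi * math.atan(1/math.sqrt(j)) for j in range(2**n, 2**(2*n)))
    print(n, EK / 2**n, 4/math.pi * (1 - 2**(-n/2)))
```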
| {
"language": "en",
"url": "https://mathoverflow.net/questions/221115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
Number of all different $n\times n$ matrices where sum of rows and columns is $3$ For a given positive integer $n$, I need to learn the number of $n\times n$ matrices of nonnegative integers with the following restrictions:
*
*The sum of each row and column is equal to $3$.
*Two matrices are considered equal if one can be obtained by permuting rows and/or columns.
For example, for $n=2$ there are two different matrices as follows:
$$M_1=\begin{pmatrix}
1&2\\2&1
\end{pmatrix},\qquad M_2=\begin{pmatrix}
3&0\\0&3
\end{pmatrix}.$$
For $n=3$ there are five different matrices as follows:
$$\begin{pmatrix}
1&1&1\\1&1&1\\1&1&1
\end{pmatrix},\quad \begin{pmatrix}
0&1&2\\1&2&0\\2&0&1
\end{pmatrix}, \quad \begin{pmatrix}
0&1&2\\1&1&1\\2&1&0
\end{pmatrix}, \quad \begin{pmatrix}
1&2&0\\2&1&0\\0&0&3
\end{pmatrix}, \quad \begin{pmatrix}
3&0&0\\0&3&0\\0&0&3
\end{pmatrix}.$$
| This problem was solved by Ron Read in his PhD thesis (University of London, 1958). Without requirement 2 there is a summation which isn't too horrible. With requirement 2 added as well, Read's solution needs the cycle index polynomial of a group and you could reasonably call it a solution in principle.
The problem is easier to handle in the asymptotic sense. Without requirement 2, the number is asymptotically
$$ \frac{(3n)!}{6^{2n}}\exp\Bigl(2 - \frac{2}{9n} + O(n^{-2})\Bigr),$$
see this paper.
With requirement 2, counting equivalence classes, the key thing to note is that asymptotically almost all of them have only trivial symmetry group, so asymptotically you can divide by $(n!)^2$. See this paper.
For small sizes, starting with $n=1$, I get 1, 2, 5, 12, 31, 103, 383, 1731, 9273, 57563, 406465. This sequence is OEIS A232215 but it's a bit hard to see that given the non-combinatorial description.
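For the first few terms the counts of equivalence classes can be confirmed by brute force; a sketch (the canonical form is the minimum, over column permutations, of the sorted rows, and $n\le 4$ keeps the enumeration small):
```python
from itertools import product, permutations

def rows_summing_to(total, parts):
    if parts == 1:
        return [(total,)]
    return [(f,) + r for f in range(total + 1)
            for r in rows_summing_to(total - f, parts - 1)]

def count_classes(n):
    rows = rows_summing_to(3, n)
    classes = set()
    for mat in product(rows, repeat=n):
        if all(sum(col) == 3 for col in zip(*mat)):
            canon = min(tuple(sorted(tuple(r[j] for j in cp) for r in mat))
                        for cp in permutations(range(n)))
            classes.add(canon)
    return len(classes)

print([count_classes(n) for n in range(1, 5)])   # [1, 2, 5, 12]
```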
| {
"language": "en",
"url": "https://mathoverflow.net/questions/251916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Efficiently lifting $a^2+b^2 \equiv c^2 \pmod{n}$ to coprime integers Let $n$ be integer with unknown factorization. Assume factoring $n$
is inefficient.
Let $a,b,c$ satisfy $a^2+b^2 \equiv c^2 \bmod{n}, 0 \le a,b,c \le n-1$.
Is it possibly to lift the above
congruence to coprime integers in $O(\mathsf{polylog}(n))$ time with probability at least $O\Big(\frac1{\mathsf{polylog}(n)}\Big)$?
i.e., find coprime integers $A,B,C$ satisfying $$A^2+B^2=C^2, A \equiv a \bmod{n},B \equiv b \bmod{n},C \equiv c \bmod{n}$$
If we drop the coprime constraint, the problem is easy.
I do not know if this is equivalent to factoring $n$.
Explaining per comments.
Write $A=a+a'n, B=b+b'n, C=c+b'n$ for unknown integers $a',b'$.
Then $A^2+B^2-C^2=0$ is linear in $b'$.
Then $b'=-\frac{a'^2 n^2 + 2a a' n + a^2 + b^2 - c^2}{2(b - c)n}$.
Since $n$ divides $a^2+b^2-c^2$, $b'= -\frac{a'^2 n+2a a'+ (a^2+b^2-c^2)/n}{2(b-c)}$.
If we can trial factor $b-c$ (it is prime with probability $1/\log{n}$),
we try to solve $(a'^2n+2a a'+ (a^2+b^2-c^2)/n)=0$ modulo $2(b-c)$ for $a'$.
If solution exists, we know $a',b'$ and the lift.
If we can't factor $b-c$ or solution doesn't exist, replace $b$
with $b+b''n$, this doesn't change the congruence and we will hit
primes/numbers we can trial factor in the arithmetic progression
$b-c + b''n$.
The problem with this approach is that the lift is not necessarily coprime.
| Unless I'm mistaken, if we had an efficient algorithm we would be able to factor integers efficiently. We may suppose $n$ is odd.
Randomly choose
coprime integers $X, Y$, not both odd, in some large interval. Then $A = X^2 - Y^2$, $B = 2 X Y$,
$C = X^2 + Y^2$ are a primitive Pythagorean triple. Now compute reduced residues mod $n' = 2n$, $a, b, c$, of $A, B, C$ respectively, and give them to the algorithm (with $n$ replaced by $n'$),
obtaining $A', B', C'$ forming a primitive Pythagorean triple with
$A' \equiv a \equiv A \mod n'$, $B' \equiv b \equiv B \mod n'$, $C' \equiv c \equiv C \mod n'$. In particular, $B'$ is even. Thus there are coprime integers $X'$, $Y'$ not both odd, with
$A' = X'^2 - Y'^2$, $B' = 2 X' Y'$, $C' = X'^2 + Y'^2$, and these
are efficiently computable from $A', B', C'$ by $X' = \sqrt{(A' + C')/2}$, $Y' = B'/(2X')$. We then have $2 X'^2 = A' + C' \equiv A + C = 2 X^2 \mod 2n$ so that $X'^2 \equiv X^2 \mod n$. Now if $n$ is composite, we should have with probability bounded away from $0$, $X' \not \equiv \pm X \mod n$, e.g. if $n = pq$ we might have taken $X'', Y''$ instead of $X, Y$, where $X'' \equiv X \mod 2p$, $Y'' \equiv Y \mod 2p$, $X'' \equiv -X \mod q$, $Y'' \equiv Y\mod q$, obtaining the same $a,b,c$.
But if so, $\gcd(X'-X,n)$ and $\gcd(X'+X,n)$ gives us a nontrivial factorization of $n$.
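To illustrate the last step, here is a toy sketch (Python 3.8+) in which we play the role of the hypothetical lifting algorithm ourselves: using a secretly known factorization $n=pq$ we build, via CRT, an $X'$ with $X'\equiv X \bmod p$ and $X'\equiv -X \bmod q$, and the two gcds then recover the factors. All names and the sample primes are purely illustrative.
```python
from math import gcd

p, q = 1009, 2003          # secret factors, used only to simulate the oracle
n = p * q
X = 123

# CRT: X' = X mod p and X' = -X mod q
Xp = ((X % p) * q * pow(q, -1, p) + ((-X) % q) * p * pow(p, -1, q)) % n

assert (Xp*Xp - X*X) % n == 0
assert Xp not in (X % n, (-X) % n)
print(gcd(Xp - X, n), gcd(Xp + X, n))   # recovers 1009 and 2003
```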
| {
"language": "en",
"url": "https://mathoverflow.net/questions/252523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A four-variable maximization problem We let function
\begin{equation}
\begin{aligned}
f(x_1,~x_2,~x_3,~x_4) ~&=~ \sqrt{(x_1+x_2)(x_1+x_3)(x_1+x_4)} \\
&+ \sqrt{(x_2+x_1)(x_2+x_3)(x_2+x_4)} \\
&+ \sqrt{(x_3+x_1)(x_3+x_2)(x_3+x_4)} \\
&+ \sqrt{(x_4+x_1)(x_4+x_2)(x_4+x_3)},
\end{aligned}
\end{equation}
where variables $x_1,~x_2,~x_3,~x_4$ are positive and satisfy $x_1+x_2+x_3+x_4 ~=~ 1$. We want to prove that $f(x_1,~x_2,~x_3,~x_4)$ attains its global maximum when $x_1=x_2=x_3=x_4=\frac{1}{4}$.
This looks like a difficult problem even though it is stated at a high-school level. Any clues? Your ideas are greatly appreciated.
| Using the inequality of the means for 2 variables, $(\sqrt{ab}\leq\frac{a+b}{2})$ for positive $a$ and $b$, equality only when $a=b$ we have
\begin{equation}
\begin{aligned}
&\sqrt{(x_1+x_2)(x_1+x_3)(x_1+x_4)} + \sqrt{(x_2+x_1)(x_2+x_3)(x_2+x_4)} \\&=\sqrt{(x_1+x_2)}(\sqrt{(x_1+x_3)(x_1+x_4)}+\sqrt{(x_2+x_3)(x_2+x_4)})\\ &\leq \sqrt{(x_1+x_2)}(\frac{(x_1+x_3)+(x_1+x_4)}{2}+\frac{(x_2+x_3)+(x_2+x_4)}{2})\\&=\sqrt{(x_1+x_2)}(x_1+x_2+x_3+x_4)\\&=\sqrt{(x_1+x_2)}
\end{aligned}
\end{equation}
with equality attained only when $x_3=x_4$.
Similarly or by swapping $x_1$ and $x_3$, $x_2$ and $x_4$ we have
\begin{equation}
\begin{aligned}
&\sqrt{(x_3+x_1)(x_3+x_2)(x_3+x_4)} + \sqrt{(x_4+x_1)(x_4+x_2)(x_4+x_3)} \\&\leq\sqrt{(x_3+x_4)}
\end{aligned}
\end{equation}
with equality attained only when $x_1=x_2$.
Adding the two inequalities we obtain
\begin{equation}
\begin{aligned}
f(x_1,~x_2,~x_3,~x_4) ~&\leq \sqrt{(x_1+x_2)}+\sqrt{(x_3+x_4)}
\end{aligned}
\end{equation}
By Jensen's Inequality since $-\sqrt{x}$ is convex for $0\leq x \leq 1$ we have
\begin{equation}
\begin{aligned}
\frac{\sqrt{(x_1+x_2)}+\sqrt{(x_3+x_4)}}{2}\leq \sqrt{\frac{(x_1+x_2)+(x_3+x_4)}{2}}=\sqrt{\frac{1}{2}}
\end{aligned}
\end{equation}
with equality attained only when $(x_1+x_2)=(x_3+x_4)=\frac{1}{2}$.
Hence
\begin{equation}
\begin{aligned}
f(x_1,~x_2,~x_3,~x_4) ~&\leq \sqrt{2}
\end{aligned}
\end{equation}
with equality only attained if $x_1=x_2$, $x_3=x_4$ and $(x_1+x_2)=(x_3+x_4)=\frac{1}{2}$ which is equivalent to $x_1=x_2=x_3=x_4=\frac{1}{4}.$
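A crude random-sampling check over the simplex (a sketch; the sample size is arbitrary) is consistent with this: no sample exceeds $\sqrt 2$, and the value at the centre attains it.
```python
import math, random

def f(x):
    return sum(math.sqrt(math.prod(x[i] + x[j] for j in range(4) if j != i))
               for i in range(4))

best = 0.0
for _ in range(200_000):
    c = sorted(random.random() for _ in range(3))
    x = (c[0], c[1] - c[0], c[2] - c[1], 1 - c[2])   # uniform point on the simplex
    best = max(best, f(x))

print(best, f((0.25,) * 4), math.sqrt(2))
```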
| {
"language": "en",
"url": "https://mathoverflow.net/questions/346297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit of alternated row and column normalizations Let $E_0$ be a matrix with non-negative entries.
Given $E_n$, we apply the following two operations in sequence to produce $E_{n+1}$.
A. Divide every entry by the sum of all entries in its column (to make the matrix column-stochastic).
B. Divide every entry by the sum of all entries in its row (to make the matrix row-stochastic).
For example:
$E_0=\begin{pmatrix}
\frac{2}{5} & \frac{1}{5} & \frac{2}{5} & 0 & 0\\
\frac{1}{5} & 0 & \frac{7}{10} & \frac{1}{10} & 0\\
0 & 0 & 0 & \frac{3}{10} & \frac{7}{10}
\end{pmatrix}\overset{A}{\rightarrow}\begin{pmatrix}
\frac{2}{3} & 1 & \frac{4}{11} & 0 & 0\\
\frac{1}{3} & 0 & \frac{7}{11} & \frac{1}{4} & 0\\
0 & 0 & 0 & \frac{3}{4} & 1
\end{pmatrix}\overset{B}{\rightarrow}\begin{pmatrix}
\frac{22}{67} & \frac{33}{67} & \frac{12}{67} & 0 & 0\\
\frac{44}{161} & 0 & \frac{12}{23} & \frac{33}{161} & 0\\
0 & 0 & 0 & \frac{3}{7} & \frac{4}{7}
\end{pmatrix}=E_1$
What is the limit of $E_n$ as $n \to \infty$?
Additional remarks.
In my problem, the matrix has $c\in \{1,2,\dots,5\}$ rows and $r=5$ columns (note that the two letters are reversed, but in the original context of this problem these letters $r$ and $c$ do not actually stand for rows and columns). So $E_0$ can be $1\times 5$, $2\times 5$, ... or $5\times 5$.
We denote with $(e_n)_{ij}$ the entries of $E_{n}$; hence $(e_n)_{ij}\in[0;1]$ and $\forall i \sum_{j=1}^{r}(e_n)_{ij}=1$ for $n>0$.
I managed to express $(e_{n+1})_{ij}$ as a function of $(e_{n})_{ij}$ :
$$(e_{n+1})_{ij}=\frac{\frac{(e_{n})_{ij}}{\sum_{k=1}^{c}(e_n)_{kj}}}{\sum_{l=1}^{r}\frac{(e_n)_{il}}{\sum_{k=1}^{c}(e_n)_{kl}}}$$
What I can't seem to find now is an expression $(e_{n})_{ij}$ as a function of $(e_{0})_{ij}$, to be able to calculate $\underset{n \to +\infty }{lim}(e_n)_{ij}$
I wrote code to compute this iteration; when I ran it with the previous example $E_0$, I found out that:
$E_0=\begin{pmatrix}
\frac{2}{5} & \frac{1}{5} & \frac{2}{5} & 0 & 0\\
\frac{1}{5} & 0 & \frac{7}{10} & \frac{1}{10} & 0\\
0 & 0 & 0 & \frac{3}{10} & \frac{7}{10}
\end{pmatrix}\overset{n \rightarrow+\infty}{\rightarrow}E_n=\begin{pmatrix}
\frac{7}{25} & \frac{3}{5} & \frac{3}{25} & 0 & 0\\
\frac{8}{25} & 0 & \frac{12}{25} & \frac{1}{5} & 0\\
0 & 0 & 0 & \frac{2}{5} & \frac{3}{5}
\end{pmatrix}$
Not only do the row sums equal $1$, but the column sums equal $\frac{3}{5}$: it seems that in this process column sums converge to $\frac{c}{r}$.
I'm not a mathematician so I was looking for a simple inductive proof. I tried to express $E_2$ (and so on) as a function of $E_0$, but it quickly gets overwhelming, starting from $E_2$...
| When $E_0$ is square (i.e., $r = c$) this procedure is called Sinkhorn iteration or the Sinkhorn-Knopp algorithm (see this Wikipedia page). You can find a wealth of results by Googling those terms, the most well-known of which is that if $E_0$ has strictly positive entries (and again, is square) then the limit of $E_n$ indeed exists and is doubly stochastic.
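For completeness, here is the iteration from the question run directly on the example $E_0$ (a small numpy sketch); it reproduces the limit matrix reported there, with all column sums tending to $c/r=3/5$:
```python
import numpy as np

E = np.array([[2/5, 1/5, 2/5,  0,    0   ],
              [1/5, 0,   7/10, 1/10, 0   ],
              [0,   0,   0,    3/10, 7/10]])

for _ in range(1000):
    E = E / E.sum(axis=0, keepdims=True)   # step A: columns sum to 1
    E = E / E.sum(axis=1, keepdims=True)   # step B: rows sum to 1

print(np.round(E, 6))   # approx [[0.28, 0.6, 0.12, 0, 0], [0.32, 0, 0.48, 0.2, 0], [0, 0, 0, 0.4, 0.6]]
print(E.sum(axis=0))    # each column sum approaches 0.6 = c/r
```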
| {
"language": "en",
"url": "https://mathoverflow.net/questions/349274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
$\varepsilon$-net of a $d$-dimensional unit ball formed by power set of $V = \{+1, 0 -1\}^d$ I have a set of $d$-dimensional vectors $V = \{+1, 0, -1\}^d $. Then $P(V)$ constitutes the power set of $V$. I now construct a set of unit vectors $V_{\mathrm{sum}}$ from the power set $P(V)$ such that
$$
V_{\mathrm{sum}} = \left\{\frac{\bar{v}}{\|\bar{v}\|} \quad \Bigg| \quad \bar{v} = \sum_{v \in S} v, \quad \forall S \in P(V)\right\}
$$
That is, each subset $S \in P(V)$ contributes to a vector in $V_{\mathrm{sum}}$ formed as a sum of all the vectors in the subset $S$ and then taking the unit vector in that direction.
Note that there could be duplicates. For example, for $d = 3$, the vector $(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}})$ can be formed as a sum of vectors of any of the following subsets $$S_1 = \{(1,0,0),(0,1,0),(0,0,1)\},\\ S_2 = \{(1,1,0
),(1,0,1),(0,1,1)\},\\ S_3 = \{(1,1,1)\}.$$
and many more possibilities.
Now I want to find the maximal isolation of a vector from $\,V_{\mathrm{sum}}\,$ from the remaining vectors of $\,V_{\mathrm{sum}},\,$ i.e. the maximum of Euclidean distance between any vector in $V_{\mathrm{sum}}$ to its closest vector in $V_{\mathrm{sum}}$. Is there an easy way to upper bound this max distance?
In other words, if I consider $V_{\mathrm{sum}}$ to be an $\varepsilon$-net to the surface of the unit ball in $d$-dimensions, then I want to find an upper bound on $\varepsilon$. Any weak upper bound on $\varepsilon$ should suffice. The goal is to show that $V_{\mathrm{sum}}$ forms a better $\varepsilon$-net than the unit vectors formed from the vectors in $V$.
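For very small $d$ the set $V_{\mathrm{sum}}$ and this max-min separation can be computed by brute force; a rough sketch for $d=2$ (the rounding is a crude way to merge duplicates; larger $d$ is out of reach this way since $|P(V)|=2^{3^d}$):
```python
import itertools, math

d = 2
V = list(itertools.product((1, 0, -1), repeat=d))

V_sum = set()
for r in range(1, len(V) + 1):
    for S in itertools.combinations(V, r):
        s = [sum(col) for col in zip(*S)]
        norm = math.sqrt(sum(c * c for c in s))
        if norm > 0:
            V_sum.add(tuple(round(c / norm, 12) for c in s))

pts = list(V_sum)
max_min = max(min(math.dist(p, q) for q in pts if q != p) for p in pts)
print(len(pts), max_min)
```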
| Since the notation quickly becomes cumbersome for any $S \in 2^{V}$ define
\begin{equation}
v_S \overset{\text{def}}{=} \sum_{v \in S} v,
\end{equation}
and let
\begin{equation}
\hat v_S \overset{\text{def}}{=} \frac{v_S}{\|v_S\| },
\end{equation}
If we fix a $v_S \in \text{span}(V)$
then the goal is to find/bound
\begin{equation}
\min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T| =
\end{equation}
\begin{equation}
= \min_{w \in V_\text{sum}}\left|\frac{1}{\|w_T \| \|v_S \| } \right|\left| \| w_T \| v_S - \|v_S \| w_T \right|
\end{equation}
and then use that to find/bound the value of
\begin{equation}
\max_{v_S \in V_\text{sum}}\min_{w_T \in V_\text{sum}} |v_S-w_T| .
\end{equation}
To that end notice that
*
*if $S = \{0\}$ then $\hat v_S$ doesn't make sense so that we can
always assume that $\exists v' \in S$ such that $|v'_i|=1 $ (as a matter of fact we can further assume $0 \not\in S$ since it has no effect)
*furthermore if $\forall v' \in S$ we have that $-v' \in S$ then we have that $v_S = 0$ so that we can also assume that $\exists v' \in S$ such that $-v' \not\in S$
*taking this idea even further we have that if $ v \in S$ and $- v \in S $ then $v_S = v_{S \setminus \{ v, -v\}}$ and therefore we can assume that $v \in S \implies -v \notin S$
*generalizing this concept further we have that if $ T \subset S$ and $v_T = 0 $ then $v_S = v_{S \setminus T}$ and therefore we can assume that $(\not\exists T \subset S)(w_T = 0 )$
Therefore if we define the support of $v$ as the following
\begin{equation}
\text{supp}(v) = \{i \in [n] \ | \ v_i \neq 0\},
\end{equation}
we can use the preceding claims to deduce the following:
Lemma $(\forall v_S \in \text{span}(V))(\exists m \in [n])$ such that both
*
*$(v_S)_m = \min\{ |(v_S)_i| \ | \ i \in [n] \}$
*either $e_m \not\in S$ or $-e_m \not\in S$
where \begin{equation}
(e_m)_i = \begin{cases} 1 & \text{if } i = m \\
0 & \text{o.w.}\end{cases} .
\end{equation}
(Proof): By the previous claims we can assume W.L.O.G. that $v_S$ is reduced; i.e. \begin{equation}
(\not\exists T \subset S)(w_T = 0 ).
\end{equation} Let $m$ satisfy $(v_S)_m = \min\{ |(v_S)_i| \ | \ i \in [n] \}$; then by assumption either $e_m \not\in S$ or $-e_m \not\in S$. QED
Therefore W.L.O.G. assume that $S$ satisfies the properties above and let $m \in [n]$ the index that satisfies the properties of the lemma and define
\begin{equation}
T = S \cup \{e_m \},
\end{equation}
so that
\begin{equation}
(w_T)_i = \begin{cases} v_i \pm 1 & \text{if } i = m \\
v_i & \text{o.w.}\end{cases} .
\end{equation}
Notice that if $\| v_S \|= \sqrt k$ then $\|w_T \|= \sqrt{k \pm \epsilon}$ for some $\epsilon \leq |2v_m + 1|$; therefore we have that
\begin{equation}
\min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T| \leq \frac{1}{\sqrt k \sqrt{k \pm \epsilon} } \left| \sqrt{k \pm \epsilon} v_S - \sqrt{k} w_T \right|
\end{equation}
\begin{equation}
= \frac{1 }{ \sqrt k \sqrt{k \pm \epsilon} } \sqrt{ \left(\sqrt{k} v_m - \sqrt{k \pm \epsilon}(w_T )_m \right)^2+ ( \sqrt{k \pm \epsilon} - \sqrt{k})^2 \sum_{i \neq m } v_i^2 }
\end{equation}
\begin{equation}
= \frac{1 }{ \sqrt k \sqrt{k \pm \epsilon} } \sqrt{ \left(\sqrt{k} v_m - \sqrt{k \pm \epsilon}v _m \pm \sqrt{k \pm \epsilon} \right)^2+ ( \sqrt{k \pm \epsilon} - \sqrt{k})^2 \sum_{i \neq m } v_i^2 }.
\end{equation}
But notice that
\begin{equation}
\left(\sqrt{k} v_m - \sqrt{k \pm \epsilon}v _m \pm \sqrt{k \pm \epsilon} \right)^2 =
\end{equation}
\begin{equation}
= \left(\sqrt{k} v_m - \sqrt{k \pm \epsilon}v _m \pm \sqrt{k \pm \epsilon} \right)^2 -\left(\sqrt{k} v_m - \sqrt{k \pm \epsilon}v _m \right)^2 + \left(\sqrt{k} v_m - \sqrt{k \pm \epsilon}v _m \right)^2
\end{equation}
\begin{equation}
= 2\left(\sqrt{k} - \sqrt{k \pm \epsilon} \right) \left(k \pm \epsilon \right)v_m + \left(\sqrt{k} - \sqrt{k \pm \epsilon} \right)^2 v_m^2;
\end{equation}
and therefore have that
\begin{equation}
\min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T| \leq
\end{equation}
\begin{equation}
\leq \frac{1 }{ \sqrt k \sqrt{k \pm \epsilon} } \sqrt{ 2\left(\sqrt{k} - \sqrt{k \pm \epsilon} \right) \left(k \pm \epsilon \right)v_m + \left(\sqrt{k} - \sqrt{k \pm \epsilon} \right)^2 v_m^2
+ ( \sqrt{k \pm \epsilon} - \sqrt{k})^2 \sum_{i \neq m } v_i^2 }
\end{equation}
\begin{equation}
= \frac{1 }{ \sqrt k \sqrt{k \pm \epsilon} } \sqrt{ 2\left(\sqrt{k} - \sqrt{k \pm \epsilon} \right) \left(k \pm \epsilon \right)v_m
+ ( \sqrt{k \pm \epsilon} - \sqrt{k})^2 \sum_{i \in [n]} v_i^2 }
\end{equation}
\begin{equation}
= \frac{1 }{ \sqrt k \sqrt{k \pm \epsilon} } \sqrt{ 2\left(\sqrt{k} - \sqrt{k \pm \epsilon} \right) \left(k \pm \epsilon \right)v_m
+ ( \sqrt{k \pm \epsilon} - \sqrt{k})^2 k }
\end{equation}
Since $x \geq 0 \land y \geq 0 \implies \sqrt {x+y} \leq \sqrt x + \sqrt y $ we further get that
\begin{equation}
\min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T| \leq \frac{ \sqrt{\left(\sqrt{k} - \sqrt{k \pm \epsilon} \right) \left(k \pm \epsilon \right)} }{ \sqrt k \sqrt{k \pm \epsilon} }\sqrt{2v_m} + \frac{\left| \sqrt{k \pm \epsilon} - \sqrt{k}\right| }{ \sqrt k \sqrt{k \pm \epsilon} } \sqrt{k},
\end{equation}
which W.L.O.G., after possibly relabeling $k \leftarrow k-\epsilon$, we have
\begin{equation}
\max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T| \leq \frac{ \sqrt{\left(\sqrt{k} - \sqrt{k + \epsilon} \right) \left(k + \epsilon \right)} }{ \sqrt k \sqrt{k + \epsilon} }\sqrt{2v_m } + \frac{ \sqrt{k + \epsilon} - \sqrt{k} }{ \sqrt{k + \epsilon} }
\end{equation}
\begin{equation}
= \left(\frac{\sqrt{\epsilon} }{\sqrt 2 k^{\frac{7}{4}}} + \mathcal{O} \left(\frac{1}{k^{\frac{11}{4}}} \right) \right)\sqrt{2v_m } + \left(\frac{\epsilon}{2k^{\frac{3}{2}}} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}} \right)
\right)
\end{equation}
\begin{equation}
= \frac{\sqrt{\epsilon} }{\sqrt 2 k^{\frac{7}{4}}} \sqrt{2v_m } + \frac{\epsilon}{2k^{\frac{3}{2}}} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right) = \frac{\sqrt{\epsilon} }{ k^{\frac{7}{4}}} \sqrt{v_m } + \frac{\epsilon}{2k^{\frac{3}{2}}} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right).
\end{equation}
by expanding the Puiseux series.
But recall that either $\| v_S \|= k^{\frac{1}{2}}$ or $\| v_S \|= (k+\epsilon)^{\frac{1}{2}}$ (depending on whether we relabeled) by definition so that
\begin{equation}
|v_m| \leq \frac{1}{|\text{supp}(v_S)|}k^{\frac{1}{2}}
\end{equation}
by the pigeon-hole principle and therefore
\begin{equation}
\epsilon < 2|v_m|+1 \leq \frac{2}{|\text{supp}(v_S)|}k^{\frac{1}{2}}+1
\end{equation}
and therefore
\begin{equation}
\max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T|
= \frac{\sqrt{\epsilon} }{ k^{\frac{7}{4}}} \sqrt{v_m } + \frac{\epsilon}{2k^{\frac{3}{2}}} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right)
\end{equation}
\begin{equation}
= \frac{\sqrt{\epsilon} }{ \sqrt{|\text{supp}(v_S)|} k^{\frac{7}{4}}} k^{\frac{1}{4}} + \frac{k^{\frac{1}{2}}}{|\text{supp}(v_S)| k^{\frac{3}{2}}} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right)
\end{equation}
\begin{equation}
= \frac{\sqrt{\epsilon} }{ \sqrt{|\text{supp}(v_S)|} k^{\frac{3}{2}}} + \frac{1}{|\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right)
\end{equation}
\begin{equation}
= \left( \frac{2}{|\text{supp}(v_S)|}k^{\frac{1}{2}}+1\right)^{\frac{1}{2}} \frac{1 }{ \sqrt{|\text{supp}(v_S)|} k^{\frac{3}{2}}} + \frac{1}{|\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right) ,
\end{equation}
and once again applying the rule $x \geq 0 \land y \geq 0 \implies \sqrt {x+y} \leq \sqrt x + \sqrt y $ we get that
\begin{equation}
\max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S- \hat w_T|
=
\end{equation}
\begin{equation}
\left( \frac{2}{|\text{supp}(v_S)|}k^{\frac{1}{2}}\right)^{\frac{1}{2}} \frac{1 }{ \sqrt{|\text{supp}(v_S)|} k^{\frac{3}{2}}} +\frac{1 }{ \sqrt{|\text{supp}(v_S)|} k^{\frac{3}{2}}} +\frac{1 }{ |\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right)
\end{equation}
\begin{equation}
= \frac{\sqrt{2}k^{\frac{1}{4}} }{ |\text{supp}(v_S)| k^{\frac{3}{2}}} +\frac{1 }{ |\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{3}{2}}}
\right)
\end{equation}
\begin{equation}
= \frac{\sqrt{2}}{ |\text{supp}(v_S)| k^{\frac{5}{4}}} +\frac{1 }{ |\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{2}}}
\right)
\end{equation}
\begin{equation}
= \frac{1 }{ |\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{4}}}
\right)
\end{equation}
Therefore the bound you are looking for is \begin{equation} \max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S-
\hat w_T| = \frac{1 }{ |\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{4}}}
\right)
\end{equation}
In particular since $|\text{supp}(v_S)|$ is an integer and $|\text{supp}(v_S)|> 1$ we can weaken this to\begin{equation} \max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S-
\hat w_T| = \frac{1 }{ k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{4}}}
\right)
\end{equation}
Or recalling that $k = \| v_S \|^2$ we can equivalently write this as \begin{equation} \max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S-
\hat w_T| = \frac{1 }{ |\text{supp}(v_S)| \| v_S \|^2} + \mathcal{O} \left(\frac{1}{\| v_S \|^{\frac{5}{2}}}
\right)
\end{equation}
and\begin{equation} \max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S-
\hat w_T| = \frac{1 }{ \| v_S \|^2 } + \mathcal{O} \left(\frac{1}{\| v_S \|^{\frac{5}{2}}}
\right)
\end{equation}
But most importantly we have that the vectors in $V_\text{sum}$ get arbitrarily close for large $n$; i.e. by choosing say $S = \{e_i \ | \ i \in [n]\}$ we have that
\begin{equation} \lim_{n \to \infty }\max_{v_S \in V_\text{sum}} \min_{w_T \in V_\text{sum}} |\hat v_S-
\hat w_T| =
\end{equation}
\begin{equation} =\lim_{n \to \infty } \frac{1 }{ |\text{supp}(v_S)| k} + \mathcal{O} \left(\frac{1}{k^{\frac{5}{4}}} \right) \leq \lim_{n \to \infty } \frac{1 }{ |n| n} + \mathcal{O} \left(\frac{1}{n^{\frac{5}{4}}} \right) = 0
\end{equation}
| {
"language": "en",
"url": "https://mathoverflow.net/questions/360487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Looking for a combinatorial proof for a Catalan identity Let $C_n=\frac1{n+1}\binom{2n}n$ be the familiar Catalan numbers.
QUESTION. Is there a combinatorial or conceptual justification for this identity?
$$\sum_{k=1}^n\left[\frac{k}n\binom{2n}{n-k}\right]^2=C_{2n-1}.$$
| Expanding my previous comment into an answer at the OP's request. We can write
$$
\frac{k}{n}\binom{2n}{n-k}=A_k-B_k,
$$
where
$$
A_k=\binom{2n-1}{n-k}=\binom{2n-1}{n+k-1}, \qquad B_k=\binom{2n-1}{n+k}=\binom{2n-1}{n-k-1}.
$$
Then
$$
\sum_{k=1}^{n}\left(\frac{k}{n}\binom{2n}{n-k}\right)^2=\sum_{k=1}^{n}(A_k^2+B_k^2)-\sum_{k=1}^{n}(A_kB_k+B_kA_k).
$$
The first sum on the right is
$$
\sum_{k=1}^{n}\left(\binom{2n-1}{n-k}\binom{2n-1}{n+k-1}+\binom{2n-1}{n+k}\binom{2n-1}{n-k-1}\right)=\\
=\left(\sum_{k=0}^{2n-1}\binom{2n-1}{k}\binom{2n-1}{2n-1-k}\right)-\binom{2n-1}{n}\binom{2n-1}{n-1}=\binom{4n-2}{2n-1}-\binom{2n-1}{n}^2,
$$
and the second sum on the right is
$$
\sum_{k=1}^{n}\left(\binom{2n-1}{n-k}\binom{2n-1}{n+k}+\binom{2n-1}{n+k}\binom{2n-1}{n-k}\right)=\\
=\left(\sum_{k=0}^{2n-1}\binom{2n-1}{k}\binom{2n-1}{2n-k}\right)-\binom{2n-1}{n}^2=\binom{4n-2}{2n}-\binom{2n-1}{n}^2,
$$
so
$$
\sum_{k=1}^{n}\left(\frac{k}{n}\binom{2n}{n-k}\right)^2=\binom{4n-2}{2n-1}-\binom{4n-2}{2n}=C_{2n-1}.
$$
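The identity is also easy to confirm exactly for small $n$; a sketch with exact rational arithmetic:
```python
from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2*n, n) // (n + 1)

for n in range(1, 13):
    lhs = sum((Fraction(k, n) * comb(2*n, n - k))**2 for k in range(1, n + 1))
    assert lhs == catalan(2*n - 1)
print("checked n = 1..12")
```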
| {
"language": "en",
"url": "https://mathoverflow.net/questions/383314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 2
} |
Asymptotic analysis of $x_{n+1} = \frac{x_n}{n^2} + \frac{n^2}{x_n} + 2$
Problem: Let $x_1 = 1$ and $x_{n+1} = \frac{x_n}{n^2} + \frac{n^2}{x_n} + 2, \ n\ge 1$.
Find the third term in the asymptotic expansion of $x_n$.
I have posted it in MSE six months ago without solution for the third term
https://math.stackexchange.com/questions/3801405/the-limit-and-asymptotic-analysis-of-a-n2-n-from-a-n1-fraca-nn.
We have $\lim_{n\to \infty} (x_n - n) = \frac{1}{2}$ (see [1]; I also give a solution with the help of computer in the link above).
So the first two terms in the asymptotic expansion of $x_n$ are $x_n \sim n + \frac{1}{2}$.
Edit: In [1], the authors proved that $\frac{1}{4n-2} \le x_n - n - \frac{1}{2} \le \frac{2}{2n-3}$ for all $n\ge 3$.
For the third term, @Diger in MSE said $x_n \sim n + \frac{1}{2} + \frac{5}{8n}$ (see @Diger's answer in the link above).
However, I did some numerical experiment which does not support this result.
I am not convinced of the numerical evidence due to finite precision arithmetic.
I hope to prove or disprove it analytically.
Numerical Experiment: If $x_n \sim n + \frac{1}{2} + \frac{5}{8n}$,
then it should hold $16n(x_{2n} - 2n - \frac{1}{2}) \approx 5$ and $8(2n+1)(x_{2n+1} - (2n+1) - \frac{1}{2}) \approx 5$
for large $n$. When $n=1500$, I use Maple to get
$16n(x_{2n} - 2n - \frac{1}{2}) \approx 4.368$ and $8(2n+1)(x_{2n+1} - (2n+1) - \frac{1}{2}) \approx 5.642$.
When $n$ is larger (e.g., $n=10000$), the numerical result seems unreliable.
I ${\color{blue}{\textbf{GUESS}}}$ that
$$x_{2n} \sim 2n + \frac{1}{2} + \frac{q_1}{2n},$$
$$x_{2n+1} \sim (2n+1) + \frac{1}{2} + \frac{q_2}{2n+1}$$
where $q_1 + q_2 = \frac{5}{4}$ and $q_1 \ne q_2$ (if $q_1 = q_2$, then it is $x_n \sim n + \frac{1}{2} + \frac{5}{8n}$).
(Some numerical experiment shows $q_1 \approx \frac{61}{112}, q_2 \approx \frac{79}{112}$. But I am not convinced of it.)
Edit: I give more analysis for my guess as an answer.
Any comments and solutions are welcome and appreciated.
Reference
[1] Yuming Chen, Olaf Krafft and Martin Schaefer, “Variation of a Ukrainian Olympiad Problem: 10982”,
The American Mathematical Monthly, Vol. 111, No. 7 (Aug. - Sep., 2004), pp. 631-632
| Consider the substitutions
\begin{equation*}
x_n=n+1/2+y_n/n,\quad y_n=u_n+5/8.
\end{equation*}
Then $u_1=-9/8$ and
\begin{equation*}
u_{n+1}=f_n(u_n)
\end{equation*}
for $n\ge1$, where
\begin{equation*}
f_n(u):=\frac{-64 n^4 u-8 n^3 (4 u-13)+n^2 (56 u+115)+n (96 u+76)+4 (8 u+5)}{8 n^2 \left(8 n^2+4 n+8 u+5\right)}.
\end{equation*}
Define $c_n(u)$ by the identity
\begin{equation*}
f_n(u)=-u+\frac{13}{8n}+\frac{c_n(u)}{n^2},
\end{equation*}
so that
\begin{equation*}
c_n(u)=\frac{n^2 \left(64 u^2+96 u+63\right)+n (11-8 u)+4 (8 u+5)}{8 \left(8 n^2+4 n+8 u+5\right)}.
\end{equation*}
Then for $n\ge1$
\begin{equation*}
u_{n+1}+u_n=\frac{13}{8n}+\frac{c_n(u_n)}{n^2} \tag{1}
\end{equation*}
and for $n\ge2$
\begin{equation*}
u_{n+1}=f_n(f_{n-1}(u_{n-1}))=u_{n-1}-\frac{13}{8n(n-1)}+\frac{c_n(u_n)}{n^2}
-\frac{c_{n-1}(u_{n-1})}{(n-1)^2}. \tag{2}
\end{equation*}
Note that
\begin{equation*}
u_{101}=-0.54\ldots,\quad u_{102}=0.56\ldots, \tag{3}
\end{equation*}
and
\begin{equation*}
0\le c_n(u)\le3
\end{equation*}
if $n\ge10$ and $u\in[-6/10,8/10]$. Therefore and because for natural $m\ge102$ we have
\begin{equation*}
\sum_{n=m}^\infty\Big(\frac{13}{8n(n-1)}+\frac3{(n-1)^2}\Big)<\frac5{m-2}\le0.05,
\end{equation*}
it follows from (2) and (3) by induction that for all $n\ge101$ we have $u_n\in[-6/10,8/10]$ and hence $0\le c_n(u_n)\le3$. So, again by (2), the sequences $(u_{2m})$ and $(u_{2m+1})$ are Cauchy-convergent and hence convergent. Moreover, by (1), $u_{n+1}+u_n\to0$.
Thus, indeed
\begin{equation*}
y_{n+1}+y_n\to5/4,
\end{equation*}
and the sequences $(y_{2m})$ and $(y_{2m+1})$ are convergent. (The limits of these two sequences can in principle be found numerically with any degree of accuracy -- controlled by (2), say.)
| {
"language": "en",
"url": "https://mathoverflow.net/questions/384047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Decoupling a double integral I came across this question while making some calculations.
QUESTION. Can you find some transformation to "decouple" the double integral as follows?
$$\int_0^{\frac{\pi}2}\int_0^{\frac{\pi}2}\frac{d\alpha\,d\beta}{\sqrt{1-\sin^2\alpha\sin^2\beta}}
=\frac14\int_0^{\frac{\pi}2}\frac{d\theta}{\sqrt{\cos\theta\,\sin\theta}}\int_0^{\frac{\pi}2}\frac{d\omega}{\sqrt{\cos\omega\,\sin\omega}}.$$
| (Thanks go to Etanche and Jandri)
\begin{align}J&=\int_0^{\frac{\pi}{2}}\int_0^{\frac{\pi}{2}} \frac{1}{\sqrt{1-\sin^2(\theta)\sin^2 \varphi}}d\varphi d\theta\\
&\overset{z\left(\varphi\right)=\arcsin\left(\sin(\theta)\sin \varphi\right)}=\int_0^{\frac{\pi}{2}} \left(\int_0^ \theta\frac{1}{\sqrt{\sin(\theta-z)\sin(\theta+ z)}}dz\right)d\theta\tag1\\
&=\frac{1}{2}\int_0^{\frac{\pi}{2}} \left(\int_{u}^{\pi-u}\frac{1}{\sqrt{\sin u\sin v}}dv\right)du \tag2\\
&=\frac{1}{2}\int_0^{\frac{\pi}{2}} \left(\int_{u}^{\frac{\pi}{2}}\frac{1}{\sqrt{\sin u\sin v}}dv\right)du+\underbrace{\frac{1}{2}\int_0^{\frac{\pi}{2}} \left(\int_{\frac{\pi}{2}}^{\pi-u}\frac{1}{\sqrt{\sin u\sin v}}dv\right)du}_{w=\pi-v}\\
&=\int_0^{\frac{\pi}{2}} \left(\int_{u}^{\frac{\pi}{2}}\frac{1}{\sqrt{\sin u\sin v}}dv\right)du\\
&\overset{u\longleftrightarrow v}=\int_0^{\frac{\pi}{2}} \left(\int_{0}^{u}\frac{1}{\sqrt{\sin u\sin v}}dv\right)du\\
&=\boxed{\frac{1}{2}\int_0^{\frac{\pi}{2}} \int_0^{\frac{\pi}{2}}\frac{1}{\sqrt{\sin u\sin v}}dudv}
\end{align}
and to obtain the form in the OP, finally substitute $u=2\theta $, $v=2\omega $.
$(1)$: $\displaystyle dz=\dfrac{\sqrt{\sin^2\theta-\sin^2 z}}{\sqrt{1-\sin^2 z}}d\varphi$, $\sin^2 a-\sin^2 b=\sin(a-b)\sin(a+b)$
$(2)$: Change of variable $u=\theta-z,v=\theta+z$
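Both sides are easy to check numerically; a sketch with mpmath, whose tanh-sinh quadrature copes with the integrable endpoint singularities (both values come out near $3.4378$):
```python
from mpmath import mp, quad, sin, cos, sqrt, pi

mp.dps = 15
lhs = quad(lambda a, b: 1/sqrt(1 - sin(a)**2 * sin(b)**2), [0, pi/2], [0, pi/2])
I   = quad(lambda t: 1/sqrt(cos(t) * sin(t)), [0, pi/2])
print(lhs, I**2 / 4)
```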
| {
"language": "en",
"url": "https://mathoverflow.net/questions/384145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 2
} |
A counterexample to: $\frac{1-f(x)^2}{1-x^2}\le f'(x)$ — revisited Can we find a counterexample to the following assertion?
Assume that $f:[-1,1]\to [-1,1]$ is an odd function of class $C^3$, and assume that $f$ is a concave increasing diffeomorphism of $[0,1]$ onto itself. Then my examples say that $$\frac{1-f(x)^2}{1-x^2}\le f'(x),\; x\in (0,1).$$
| I complete the reformulation given by Andrea Marino and give another counterexample.
First, the inequality of the beginning can be written
$$\forall x \in (0,1), \quad \frac{f'(x)}{1-f^2(x)} \ge \frac{1}{1-x^2}$$
and means that the function $x \mapsto \arg\tanh(f(x))-\arg\tanh(x)$ is non-decreasing on $(0,1)$, or equivalently (composing with $\tanh$) that the function
$$g : x \mapsto \frac{f(x)-x}{1-xf(x)}$$
is non-decreasing on $(0,1)$.
Now, let
$$f(x) := \frac{1}{2} [x+1-(1-x)^3].$$
The function $f$ thus defined satisfies the assumptions. Let us compute the corresponding function $g$.
\begin{eqnarray*}
g(x) &=& \frac{[x+1-(1-x)^3]-2x}{2-x[x+1-(1-x)^3]} \\
&=& \frac{1-x-(1-x)^3}{2-x-x^2+x(1-x)^3} \\
&=& \frac{1-(1-x)^2}{2+x+x(1-x)^2} \\
&=& \frac{2x-x^2}{2+2x-2x^2+x^3} \\
\end{eqnarray*}
The quantities $u(x):=2x-x^2$ and $v(x):=2+2x-2x^2+x^3$ are positive on $[0,1]$ (since $x \ge x^2$ on $[0,1]$), and $u'(1)=2-2=0$ whereas $v'(1)=2-4+3=1$. Hence $g'(1)=\frac{u'(1)v(1)-u(1)v'(1)}{v(1)^2}=-\frac19<0$, so $g$ is decreasing in a neighborhood of $1$.
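A quick numerical check of this counterexample (a sketch): the last printed column is $\frac{1-f^2}{1-x^2}-f'$, which is positive at these points, and $g$ is visibly decreasing towards $1$.
```python
f  = lambda x: 0.5 * (x + 1 - (1 - x)**3)
fp = lambda x: 0.5 * (1 + 3 * (1 - x)**2)            # f'
g  = lambda x: (f(x) - x) / (1 - x * f(x))

for x in (0.90, 0.95, 0.99, 0.999):
    print(x, g(x), (1 - f(x)**2) / (1 - x**2) - fp(x))
```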
| {
"language": "en",
"url": "https://mathoverflow.net/questions/421985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Parametrization of integral solutions of $3x^2+3y^2+z^2=t^2$ and rational solutions of $3a^2+3b^2-c^2=-1$ 1/ Is it known the parameterisation over $\mathbb{Q}^3$ of the solutions of
$3a^2+3b^2-c^2=-1$
2/ Is it known the parameterisation over $\mathbb{Z}^4$ of the solutions of
$3x^2+3y^2+z^2=t^2$
References, articles or books are welcome
Sincerely, John
| For #1, we can take a particular solution such as $(a_0,b_0,c_0)=(0,0,1)$, and search for a parametric solution of the form $(a,b,c)=(a_0+\alpha t, b_0+\beta t, c_0+t)$. Plugging it into the equation and solving for $t\ne 0$, we get:
$$t = \frac{2}{3\alpha^2 + 3\beta^2 - 1}.$$
So, we get rational parametrization with parameters $\alpha,\beta\in\mathbb Q$:
$$(a,b,c) = \bigg(\frac{2\alpha}{3\alpha^2 + 3\beta^2 - 1},\ \frac{2\beta}{3\alpha^2 + 3\beta^2 - 1},\ 1 + \frac{2}{3\alpha^2 + 3\beta^2 - 1}\bigg).$$
For #2, we can similarly parametrize $3\left(\frac{x}{t}\right)^2 + 3\left(\frac{y}{t}\right)^2 + \left(\frac{z}{t}\right)^2 = 1$ and then explicitly expand parameters as fractions, and set $t$ be the common denominator. This way we get parametrization:
$$(x,y,z,t) = \frac{p}{q}\bigg(-2uw,\ -2vw,\ 3u^2 + 3v^2 - w^2,\ 3u^2 + 3v^2 + w^2\bigg),$$
where parameters $u,v,w\in\mathbb Z$, and parameters $p,q\in\mathbb Z$ allow to scale the variables, with the requirement that $q$ represents a common divisor of the variables.
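Both parametrizations are easy to verify with exact arithmetic; a sketch over a few random parameter choices:
```python
from fractions import Fraction as F
import random

for _ in range(100):
    alpha = F(random.randint(-9, 9), random.randint(1, 9))
    beta  = F(random.randint(-9, 9), random.randint(1, 9))
    D = 3*alpha**2 + 3*beta**2 - 1
    if D != 0:
        a, b, c = 2*alpha/D, 2*beta/D, 1 + 2/D
        assert 3*a**2 + 3*b**2 - c**2 == -1

    u, v, w = (random.randint(1, 20) for _ in range(3))
    x, y, z, t = -2*u*w, -2*v*w, 3*u**2 + 3*v**2 - w**2, 3*u**2 + 3*v**2 + w**2
    assert 3*x**2 + 3*y**2 + z**2 == t**2

print("both identities hold for all sampled parameters")
```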
| {
"language": "en",
"url": "https://mathoverflow.net/questions/422125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Irreducibility measure of integer polynomials (Partial) Question in short: Let $p$ be a monic integer polynomial of degree $n$. Is there a natural number $k$ with $0 \leq k \leq n$ such that $p+k$ is irreducible over the integers?
Longer version:
Let $p$ be a monic polynomial over the integers. Define the irreducibility measure $d(p)$ of $p$ as the smallest integer $k \geq 0$ such that $p+k$ is irreducible over the integers.
Define $M_n:=$ sup $\{ d(p) | deg(p)=n \}$ for $n \geq 2$. Here $deg(p)$ is the degree of $p$.
Question: Is it true that $M_n \leq n$? (Answer no, by Joachim König). Is there a good bound for $M_n$ ?
The question is based on some small computer experiments.
edit: Sorry I forgot the condition that the polynomials are monic (I did all computer experiments with that assumption. The answer by Joachim König gives a counterexample in the non-monic case).
| For $f=x^6 - 3x^5 - 2x^4 + 10x^3 + x^2 - 8x - 5$, one has
$$f=(x^3 - 2x^2 - 2x + 5)(x^3 - x^2 - 2x - 1),$$
$$f+1=(x-2)(x^5 - x^4 - 4x^3 + 2x^2 + 5x + 2),$$
$$f+2=(x^2-x-1)(x^4 - 2x^3 - 3x^2 + 5x + 3),$$
$$f+3=(x^2-2)(x^4 - 3x^3 + 4x + 1),$$
$$f+4=(x+1)(x^5 - 4x^4 + 2x^3 + 8x^2 - 7x - 1),$$
$$f+5=x(x^5 - 3x^4 - 2x^3 + 10x^2 + x - 8),$$
$$f+6=(x-1)(x^5 - 2x^4 - 4x^3 + 6x^2 + 7x - 1).$$
(Found more or less by brute force.)
PS: In order to not cause unnecessary confusion, the "answer" giving a counterexample for the ``non-monic case", currently mentioned in the OP, was $f(x)=6x^2+7x$; this was removed as a separate answer after the OP had been altered.
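The seven factorizations are quickly re-checked with a computer algebra system; a sympy sketch:
```python
from sympy import symbols, factor

x = symbols('x')
f = x**6 - 3*x**5 - 2*x**4 + 10*x**3 + x**2 - 8*x - 5
for k in range(7):
    print(k, factor(f + k))   # each f + k, 0 <= k <= 6, splits over the integers
```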
| {
"language": "en",
"url": "https://mathoverflow.net/questions/441141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
What are fixed points of the Fourier Transform The obvious ones are 0 and $e^{-x^2}$ (with annoying factors), and someone I know suggested hyperbolic secant. What other fixed points (or even eigenfunctions) of the Fourier transform are there?
| $\bf{1.}$ A more complete list of particular self-reciprocal Fourier functions of the first kind, i.e. eigenfunctions of the cosine Fourier transform $\sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\cos ax dx=f(a)$:
$1.$ $\displaystyle e^{-x^2/2}$ (more generally $e^{-x^2/2}H_{2n}(x)$, $H_n$ is Hermite polynomial)
$2.$ $\displaystyle \frac{1}{\sqrt{x}}$ $\qquad$ $3.$ $\displaystyle\frac{1}{\cosh\sqrt{\frac{\pi}{2}}x}$ $\qquad$ $4.$ $\displaystyle \frac{\cosh \frac{\sqrt{\pi}x}{2}}{\cosh \sqrt{\pi}x}$ $\qquad$$5.$ $\displaystyle\frac{1}{1+2\cosh \left(\sqrt{\frac{2\pi}{3}}x\right)}$
$6.$ $\displaystyle \frac{\cosh\frac{\sqrt{3\pi}x}{2}}{2\cosh \left( 2\sqrt{\frac{\pi}{3}} x\right)-1}$ $\qquad$ $7.$ $\displaystyle \frac{\cosh\left(\sqrt{\frac{3\pi}{2}}x\right)}{\cosh (\sqrt{2\pi}x)-\cos(\sqrt{3}\pi)}$ $\qquad$ $8.$ $\displaystyle \cos\left(\frac{x^2}{2}-\frac{\pi}{8}\right) $
$9.$ $\displaystyle\frac{\cos \frac{x^2}{2}+\sin \frac{x^2}{2}}{\cosh\sqrt{\frac{\pi}{2}}x}$ $\qquad$ $10.$ $\displaystyle \sqrt{x}J_{-\frac{1}{4}}\left(\frac{x^2}{2}\right)$ $\qquad$ $11.$ $\displaystyle \frac{\sqrt[4]{a}\ K_{\frac{1}{4}}\left(a\sqrt{x^2+a^2}\right)}{(x^2+a^2)^{\frac{1}{8}}}$
$12.$ $\displaystyle \frac{x e^{-\beta\sqrt{x^2+\beta^2}}}{\sqrt{x^2+\beta^2}\sqrt{\sqrt{x^2+\beta^2}-\beta}}$$\qquad$ $13.$ $\displaystyle \psi\left(1+\frac{x}{\sqrt{2\pi}}\right)-\ln\frac{x}{\sqrt{2\pi}}$, $\ \psi$ is digamma function.
Examples $1-5,8-10$ are from the chapter about self-reciprocal functions in Titschmarsh's book "Introduction to the theory of Fourier transform". Examples $11$ and $12$ can be found in Gradsteyn and Ryzhik. Examples $6$ and $7$ are from this question What are all functions of the form $\frac{\cosh(\alpha x)}{\cosh x+c}$ self-reciprocal under Fourier transform?. Some other self-reciprocal functions composed of hyperbolic functions are given in Bryden Cais's paper On the transformation of infinite series. Discussion of $13$ can be found in Berndt's article.
$\bf{2.}$ Self-reciprocal Fourier functions of the second kind, i.e. eigenfunctions of the sine Fourier transform $\sqrt{\frac{2}{\pi}}\int_0^\infty f(x)\sin ax dx=f(a)$:
$1.$ $\displaystyle \frac{1}{\sqrt{x}}$ $\qquad$ $2.$ $\displaystyle xe^{-x^2/2}$ (and more generally $e^{-x^2/2}H_{2n+1}(x)$)
$3.$ $\displaystyle \frac{1}{e^{\sqrt{2\pi}x}-1}-\frac{1}{\sqrt{2\pi}x}$ $\qquad$ $4.$ $\displaystyle \frac{\sinh \frac{\sqrt{\pi}x}{2}}{\cosh \sqrt{\pi}x}$ $\qquad$ $5.$ $\displaystyle \frac{\sinh\sqrt{\frac{\pi}{6}}x}{2\cosh \left(\sqrt{\frac{2\pi}{3}}x\right)-1}$
$6.$ $\displaystyle \frac{\sinh(\sqrt{\pi}x)}{\cosh \sqrt{2\pi} x-\cos(\sqrt{2}\pi)}$ $\qquad$ $7.$ $\displaystyle \frac{\sin \frac{x^2}{2}}{\sinh\sqrt{\frac{\pi}{2}}x}$ $\qquad$ $8.$ $\displaystyle \frac{xK_{\frac{3}{4}}\left(a\sqrt{x^2+a^2}\right)}{(x^2+a^2)^{\frac{3}{8}}}$
$9.$ $\displaystyle \frac{x e^{-\beta\sqrt{x^2+\beta^2}}}{\sqrt{x^2+\beta^2}\sqrt{\sqrt{x^2+\beta^2}+\beta}}$$\qquad$ $10.$ $\displaystyle \sqrt{x}J_{\frac{1}{4}}\left(\frac{x^2}{2}\right)$$\qquad$ $11.$ $\displaystyle e^{-\frac{x^2}{4}}I_{0}\left(\frac{x^2}{4}\right)$
$12.$ $\displaystyle \sin\left(\frac{3\pi}{8}+\frac{x^2}{4}\right)J_{0}\left(\frac{x^2}{4}\right) $$\qquad$ $13.$ $\displaystyle \frac{\sinh \sqrt{\frac{2\pi}{3}}x}{\cosh \sqrt{\frac{3\pi}{2}}x}$
Examples $1-5,7$ can be found in Titschmarsh's book cited above. $8-12$ can be found in Gradsteyn and Ryzhik. $13$ is from Bryden Cais, On the transformation of infinite series, where more functions of this kind are given.
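Any entry in these lists can be checked numerically; for instance, a sketch verifying item 3 of the first list, $1/\cosh\sqrt{\pi/2}\,x$, under the cosine transform:
```python
from mpmath import mp, quad, sqrt, cos, cosh, pi, inf

mp.dps = 20
f = lambda x: 1 / cosh(sqrt(pi/2) * x)
F = lambda a: sqrt(2/pi) * quad(lambda x: f(x) * cos(a*x), [0, inf])

for a in (0.3, 1.0, 2.5):
    print(a, F(a), f(a))      # the transform agrees with the function itself
```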
| {
"language": "en",
"url": "https://mathoverflow.net/questions/12045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66",
"answer_count": 4,
"answer_id": 1
} |
Product of sine For which $n\in \mathbb{N}$, can we find (resp. find explicitly) $n+1$ integers $0 < k_1 < k_2 <\cdots < k_n < q<2^{2n}$
such that
$$\prod_{i=1}^{n} \sin\left(\frac{k_i \pi}{q} \right) =\frac{1}{2^n} $$
P.S.: $n=2$ is obvious answer, $n=6 $ is less obvious but for instance we have $k_1 = 1$, $k_2 = 67$, $k_3 = 69$, $k_4 = 73$, $k_5 = 81$, $k_6 = 97$, and $q=130$.
| Consider the identity (as quoted by drvitek):
$$\prod_{k=1}^{n} \sin \left(\frac{(2k-1) \pi}{2n}\right) = \frac{2}{2^n},$$
This is completely correct but doesn't quite answer the question because the RHS is
not $1/2^n$ $\text{---}$ this is an issue related to the fact that $\zeta - \zeta^{-1}$ is not
a unit if $\zeta$ is a root of unity of prime power order.
Replace $n$ by $3n$, and take the ratio of the corresponding
products. Then one finds that
$$\prod_{(k,6) = 1}^{k < 6m} \sin \left(\frac{k \pi}{6m}\right)=
\sin \left(\frac{\pi}{6m}\right)
\sin \left(\frac{5 \pi}{6m}\right)
\sin \left(\frac{7 \pi}{6m}\right) \cdots
\sin \left(\frac{(6m-1) \pi}{6m}\right) = \frac{1}{2^{2m}}.$$
This provides the identity you request for $n = 2m$, since
$q = 6m < 2^{4m} = 2^{2n}$ is true for all $m \ge 1$.
There are other "obvious" identities that can be written down, but they tend to have length
$\phi(r)$ for some integer $r$, and $\phi(r)$ is always even (if $r > 2$).
For odd $n$, note the "exotic" identity:
$$\sin \left(\frac{2 \pi}{42}\right)
\sin \left(\frac{15 \pi}{42}\right)
\sin \left(\frac{16 \pi}{42}\right) = \frac{1}{8}.$$
Since $42 < 64$, this is an identity of the required form for $n = 3$.
On the other hand, none of the rational numbers
$1/21$, $5/14$, $8/21$ can be written in the form $k/6m$
where $(k,6) = 1$.
Hence
$$\sin \left(\frac{2 \pi}{42}\right)
\sin \left(\frac{15 \pi}{42}\right)
\sin \left(\frac{16 \pi}{42}\right)
\prod_{(k,6) = 1}^{k < 6m} \sin \left(\frac{k \pi}{6m}\right) = \frac{1}{2^{2m+3}},$$
when written under the common denominator $q = \mathrm{lcm}(42,6m)$, consists of distinct fractional multiples $k_i/q$ of $\pi$ with $0 < k_i < q$, and is thus
an identity of the required form for $n = 2m + 3$, after checking that
$$q = \mathrm{lcm}(42,6m) \le 42m \le 2^{4m+6} = 2^{2n}.$$
Thus the answer to your question is that such an identity holds for all $n > 1$.
(It trivially does not hold for $n = 1$.)
The first few identities constructed in this way are:
$$\sin \left(\frac{\pi}{6}\right) \sin \left(\frac{5 \pi}{6}\right) =
\frac{1}{4},$$
$$\sin \left(\frac{2 \pi}{42}\right)
\sin \left(\frac{15 \pi}{42}\right)
\sin \left(\frac{16 \pi}{42}\right) = \frac{1}{8},$$
$$\sin \left(\frac{\pi}{12}\right) \sin \left(\frac{5 \pi}{12}\right)
\sin \left(\frac{7 \pi}{12}\right) \sin \left(\frac{11 \pi}{12}\right) =
\frac{1}{16},$$
$$\sin \left(\frac{2 \pi}{42}\right) \sin \left(\frac{7 \pi}{42}\right)
\sin \left(\frac{15 \pi}{42}\right)
\sin \left(\frac{16 \pi}{42}\right)
\sin \left(\frac{35 \pi}{42}\right) = \frac{1}{32},$$
&c. &c.
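These products are easy to confirm numerically; a sketch checking the "exotic" $n=3$ identity and the general $\prod_{(k,6)=1}$ identity for a few $m$:
```python
import math

def prod_sin(ks, q):
    return math.prod(math.sin(k * math.pi / q) for k in ks)

print(prod_sin([2, 15, 16], 42), 1/8)              # the "exotic" n = 3 identity

for m in (1, 2, 3, 5):
    ks = [k for k in range(1, 6*m) if math.gcd(k, 6) == 1]
    print(m, prod_sin(ks, 6*m), 0.25**m)           # should equal 1/2^(2m)
```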
| {
"language": "en",
"url": "https://mathoverflow.net/questions/44211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Binary representation of powers of 3 I asked this question at Mathematics Stack Exchange but since I didn't get a satisfactory answer I decided to ask it here as well.
We write a power of 3 in bits in binary representation as follows.
For example $3=(11)$, $3^2=(1001)$ which means that we let the $k$-th bit from the right be $1$ if the binary representation of this power of 3 contains $2^{k-1}$, and $0$ otherwise.
*
*Prove that the highest power of 3 that has a palindromic binary representation is $3^3 = (11011)$.
*Prove that $3 = (11)$ is the only power of 3 with a periodic binary representation (in the sense that it consists of a finite sequence of $1$s and $0$s repeated two or more times, like "$11$" consists of two repetitions of the bitstring "$1$").
| Here is an extended hint for proving 2 (an almost complete proof is in the Update below). If $3^s$ base 2 is periodic, then you can represent it as $u(1+2^m+2^{2m}+...+2^{(t-1)m})=u\frac{2^{tm}-1}{2^m-1}$ for some numbers $u,m,t$. Therefore if $q_n(x)$ denotes the $n$-th cyclotomic polynomial, then $q_{tm}(2)$ must be a power of 3 (as a divisor of $3^s$). But it looks like $q_n(2)$ is never 0 modulo 9 (this should be possible to prove rigorously but I do not have time). Hence $q_{tm}(2)$ must be equal to 3 or 1 which gives a bound on $s$.
Update Since $2^m-1=0 \mod 9$ only when $m=0 \mod 6$, it is enough to consider $q_{6k}(2)$. Since $2^3=-1 \mod 9$, we only need remainders of cyclotomic polynomials modulo $x^3+1$ with coefficients modulo 9. Here are all 50 of them: $$ \begin{array}{l} 1,3,{x}^{2},8x,8{x}^{2},1+8x,5+5{x}^{2},x+1,x+8,x+8
{x}^{2},\\\ 4x+5{x}^{2},{x}^{2}+1,1+8{x}^{2}+8x,2+3x+2{x}^{2},
2+5x+4{x}^{2},2+8x+{x}^{2},2+{x}^{2}+7x,\\\ 2+2{x}^{2}+8x,2+6
{x}^{2}+7x,3+x+8{x}^{2},3+5x+3{x}^{2},\\\ 3+6x+2{x}^{2},3+2
{x}^{2}+7x,4+6x+3{x}^{2},4+7x+4{x}^{2},4+2{x}^{2}+5x,4
+3{x}^{2}+5x,\\\ 4+4{x}^{2}+6x,5+2x+7{x}^{2},5+4x+4{x}^{2}
,5+4{x}^{2}+5x, \\\ 6+2x+6{x}^{2}, \\\ 6+4x+5{x}^{2},6+7x+2{x}^{
2},6+5{x}^{2}+3x,6+6{x}^{2}+4x,\\\ 6+7{x}^{2}+3x,7+2x+6{x}
^{2},7+4x+7{x}^{2},\\\ 7+6{x}^{2}+3x,8+2x+7{x}^{2},8+6x+8{
x}^{2},8+8x+{x}^{2},
\\\ 8+7{x}^{2}+x,8+8{x}^{2}+2x,{x}^{2}+x+1,{x}
^{2}+8x+1 \end{array}$$ None of these polynomials become 0 when evaluated at $2 \mod 9$.
As for the last claim about the size of $q_{mt}(2)$: every root of the cyclotomic polynomial is on the unit circle, so the values at 2 grow with the degree.
This almost completes the proof of 2 (one needs to show that the set of 50 polynomials is complete, but that can be done by induction).
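The key claim that $q_n(2)$ is never divisible by $9$ can be tested directly for many $n$; a sympy sketch:
```python
from sympy import cyclotomic_poly

bad = [n for n in range(1, 1001) if cyclotomic_poly(n, 2) % 9 == 0]
print(bad)   # expected: [] (no n <= 1000 with 9 | q_n(2))
```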
| {
"language": "en",
"url": "https://mathoverflow.net/questions/92397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Distance between the product of marginal distributions and the joint distribution Given a joint distribution $P(A,B,C)$, we can compute various marginal distributions. Now suppose:
\begin{align}
P1(A,B,C) &= P(A) P(B) P(C) \\
P2(A,B,C) &= P(A,B) P(C) \\
P3(A,B,C) &= P(A,B,C)
\end{align}
Is it true that $d(P1,P3) \geq d(P2,P3)$ where $d$ is the total variation distance?
In other words, is it provable that $P(A,B) P(C)$ is a better approximation of $P(A,B,C)$ than $P(A) P(B) P(C)$ in terms of the total variation distance? Intuitively I think it's true but could not find out a proof.
| I just found the following counterexample. Suppose $A,B,C$ are discrete variables. $A,B$ can each take two values while $C$ can take three values.
The joint distribution $P(A,B,C)$ is:
\begin{array}{cccc}
A & B & C & P(A,B,C) \\
1 & 1 & 1 & 0.1/3 \\
1 & 1 & 2 & 0.25/3 \\
1 & 1 & 3 & 0.25/3 \\
1 & 2 & 1 & 0.4/3 \\
1 & 2 & 2 & 0.25/3 \\
1 & 2 & 3 & 0.25/3 \\
2 & 1 & 1 & 0.4/3 \\
2 & 1 & 2 & 0.25/3 \\
2 & 1 & 3 & 0.25/3 \\
2 & 2 & 1 & 0.1/3 \\
2 & 2 & 2 & 0.25/3 \\
2 & 2 & 3 & 0.25/3 \\
\end{array}
So the marginal distribution $P(A,B)$ is:
\begin{array}{ccc}
A & B & P(A,B) \\
1 & 1 & 0.2 \\
1 & 2 & 0.3 \\
2 & 1 & 0.3 \\
2 & 2 & 0.2 \\
\end{array}
The marginal distributions $P(A), P(B)$ and $P(C)$ are uniform.
So we can compute that:
\begin{align}
d(P1,P3) &= 0.1 \\
d(P2,P3) &= 0.4/3
\end{align}
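These two distances can be computed exactly from the table; a sketch (using the convention $d(P,Q)=\tfrac12\sum|P-Q|$, which matches the numbers above):
```python
from fractions import Fraction as F
from itertools import product

w = {(1, 1): F(1, 10), (1, 2): F(4, 10), (2, 1): F(4, 10), (2, 2): F(1, 10)}
P = {}
for a, b in product((1, 2), repeat=2):
    P[(a, b, 1)] = w[(a, b)] / 3
    P[(a, b, 2)] = F(1, 4) / 3
    P[(a, b, 3)] = F(1, 4) / 3

PA  = {a: sum(v for (i, j, k), v in P.items() if i == a) for a in (1, 2)}
PB  = {b: sum(v for (i, j, k), v in P.items() if j == b) for b in (1, 2)}
PC  = {c: sum(v for (i, j, k), v in P.items() if k == c) for c in (1, 2, 3)}
PAB = {(a, b): sum(v for (i, j, k), v in P.items() if (i, j) == (a, b))
       for a in (1, 2) for b in (1, 2)}

tv = lambda Q: sum(abs(Q[key] - P[key]) for key in P) / 2
P1 = {key: PA[key[0]] * PB[key[1]] * PC[key[2]] for key in P}
P2 = {key: PAB[key[0], key[1]] * PC[key[2]] for key in P}
print(tv(P1), tv(P2))   # 1/10 and 2/15 (= 0.4/3)
```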
| {
"language": "en",
"url": "https://mathoverflow.net/questions/138031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Impossible Range for Minkowski-Like Sum of Squares Given coprime positive integers M,N, and a corresponding integer z outside of the range (for all integers x,a,b,c) of $Mx^2-N(a^2+b^2+c^2)$, is there any such z which is "deceptive", meaning that it is modularly inside the range of the smaller components, i.e., inside the range of both $Mx^2$ mod N and $-N(a^2+b^2+c^2)$ mod M? Feel free to replace $a^2+b^2+c^2$ with "All integers except for $4^a(8b+7)$" per Legendre.
| There are no deceptive integers for indefinite forms in at least 4 variables, this begins with Siegel (1951). I have the 1961 article by Martin Kneser, it is famous for a variety of reasons. This article is by J. W.S. Cassels, presented in the International Congress in Singapore in 1981, the article is pages 9-26 in the book. Below is page 18, then page 19.
Watson 1955 is Representations of integers by indefinite quadratic forms, Mathematika, volume 2, pages 32-38. I think it is covered on page 114 of his 1960 book, but he is maddeningly terse.
The first examples of deceptive integers were found by Jones and Pall in 1939, see at TERNARY. The shortest one to write would be $$ 3 x^2 + 4 y^2 + 9 z^2 \neq w^2, $$ whenever all (positive) prime factors of $w$ are congruent to $1 \pmod 3.$
Let me give a proof for Siegel's 1951 indefinite ternary example. The main lemma from binary forms is that, given a (positive) prime $q \equiv \pm 3 \pmod 8,$ if $q | (a^2 - 2 b^2),$ then $q|a,b,$ and in fact $q^2 | (a^2-2 b^2).$ By induction there is some $l$ such that $q^{2l} \parallel (a^2 - 2 b^2).$ The double vertical bar means that we are showing the highest exponent of the prime such that the indicated power still divides the expression on the other side.
Now, suppose we have some $t$ such that all (positive) prime factors $p$ of $t$ satisfy $p \equiv \pm 1 \pmod 8.$ We will prove that $(2x+z)^2 - 2 y^2 + 16 z^2 \neq t^2. $ For, if we had $(2x+z)^2 - 2 y^2 + 16 z^2 = t^2, $ we would need to have odd $z$ because $t$ is odd. Next $$(2x+z)^2 - 2 y^2 = t^2 - 16 z^2 = (t+4z)(t-4z). $$
Both $t+4z, \; t- 4 z \equiv \pm 3 \pmod 8$ because $z$ is odd. There is some (positive) prime $q \equiv \pm 3 \pmod 8$ with $q^{2j+1} \parallel t+4z.$ However, from the earlier lemma, as $q^{2l} \parallel ( (2x+z)^2 - 2 y^2),$ we find that $q^{2k+1} \parallel (t-4z).$ That is, $q | t+4z$ and $q | t - 4 z,$ so that $q | t,$ which contradicts the hypotheses on $t.$
To show explicitly that Siegel's second form does represent all $q^2$ for prime $q \equiv \pm 3 \pmod 8,$ start with
$$ \left( \alpha^2 - 2 \beta^2 + 4 \gamma^2 \right)^2 = \left( \alpha^2 + 2 \beta^2 - 4 \gamma^2 \right)^2 - 2 \left( 2 \alpha \beta \right)^2 + 4 \left( 2 \alpha \gamma \right)^2$$
For $q \equiv 3 \pmod 8,$ we may take $\gamma = \beta, \; \; q = \alpha^2 + 2 \beta^2$ with both $\alpha,\beta$ odd, then
$$ q^2 = \left( \alpha^2 - 2 \beta^2 \right)^2 - 2 \left( 2 \alpha \beta \right)^2 + 16 \left( \alpha \beta \right)^2 $$ with
both $\alpha^2 - 2 \beta^2$ and $ \alpha \beta$ odd. So, we have written $$ q^2 = \zeta^2 - 2 \eta^2 + 16 \theta^2, \; \; \zeta \equiv \theta \pmod 2. $$
For $q \equiv 5 \pmod 8,$ we may take $ \beta = 0, \; \; q = \alpha^2 + 4 \gamma^2$ with both $\alpha,\gamma$ odd, then
$$ q^2 = \left( \alpha^2 - 4 \gamma^2 \right)^2 - 2 \left( 0 \right)^2 + 16 \left( \alpha \gamma \right)^2 $$ with
both $\alpha^2 - 4 \gamma^2$ and $ \alpha \gamma$ odd. So, we have written $$ q^2 = \zeta^2 - 2 \eta^2 + 16 \theta^2, \; \; \zeta \equiv \theta \pmod 2. $$
In either case, we may write $q^2 = (2x+z)^2 - 2 y^2 + 16 z^2$ with $$ x = (\zeta - \theta)/2, \; \; y = \eta, \; \; z = \theta. $$
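A quick sanity check of the algebraic identity above, and of the resulting representation for $q=3$ (a minimal SymPy/Python sketch):

```python
import sympy as sp

al, be, ga = sp.symbols('alpha beta gamma')
lhs = (al**2 - 2*be**2 + 4*ga**2)**2
rhs = (al**2 + 2*be**2 - 4*ga**2)**2 - 2*(2*al*be)**2 + 4*(2*al*ga)**2
assert sp.expand(lhs - rhs) == 0

# q = 3 = alpha^2 + 2*beta^2 with alpha = beta = 1 (and gamma = beta)
alpha, beta = 1, 1
zeta, eta, theta = alpha**2 - 2*beta**2, 2*alpha*beta, alpha*beta
x, y, z = (zeta - theta)//2, eta, theta
assert (2*x + z)**2 - 2*y**2 + 16*z**2 == 3**2
```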
On the other hand, Siegel's two forms agree on all other numbers except the specific squares mentioned. On page 19 below, Cassels is mostly talking about primitive representations, which has its own distinctions.
[Images of pages 18 and 19 of Cassels's article appeared here.]
| {
"language": "en",
"url": "https://mathoverflow.net/questions/138426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
State of knowledge of $a^n+b^n=c^n+d^n$ vs. $a^n+b^n+c^n=d^n+e^n+f^n$ As far as I understand, both of the Diophantine equations
$$a^5 + b^5 = c^5 + d^5$$
and
$$a^6 + b^6 = c^6 + d^6$$
have no known nontrivial solutions, but
$$24^5 + 28^5 + 67^5 = 3^5+54^5+62^5$$
and
$$3^6+19^6+22^6 = 10^6+15^6+23^6$$
among many other solutions are known, when the number of summands is increased
from $2$ to $3$.
My information here is at least a decade out of date,
and I am wondering if the resolution of Fermat's Last Theorem has clarified
this situation,
with respect to sums of an equal number of powers...?
| Label the equation,
$$x_1^k+x_2^k+\dots+x_m^k = y_1^k+y_2^k+\dots+y_n^k$$
as a $(k,m,n)$. Let type of primitive solutions be polynomial identity $P(n)$, or elliptic curve $E$. Then results (mostly) for the balanced case $m=n$ are,
I. Table 1
$$\begin{array}{|c|c|c|}
(k,m,n)& \text{# of known solutions}& \text{Type}\\
3,2,2& \infty&P(n)\\
4,2,2& \infty&P(n),E\\
5,3,3& \infty&P(n),E\\
6,3,3& \infty&P(n),E\\
7,4,4& \text{many} &E\,?\\
7,4,5& \infty&P(n)+E\\
8,4,4& 1&-\\
9,5,5& \text{many}&-\\
9,6,6& \infty & E\\
10,5,5& 0&-\\
\end{array}$$
Note: For $(7,4,5)$, see this MSE answer.
II. Table 2. (For multi-grades)
$$\begin{array}{|c|c|c|}
(k,m,n)& \text{# of known solutions}&\text{Type}\\
5,4,4& \infty&P(n),E\\
6,4,4& \infty&P(n),E\\
7,5,5& \infty&P(n),E\\
8,5,5& \infty&E\\
9,6,6& \text{many}&E\,?\\
10,6,6& \infty&E\\
11,7,7& 0&-\\
12,7,7& 0&-\\
\end{array}$$
Note: A multi-grade is simultaneously valid for multiple $k$. For example, the $(9,6,6)$ is for $k=1,3,5,7,9$ while the $(10,6,6)$ is for $k=2,4,6,8,10$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/150428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 1
} |
A Problem Concerning Odd Perfect Number Briefly, prove that every odd number having only three distinct prime factors cannot be a perfect number.
I know there are results much stronger than the one above, but I am looking for an answer without computer cracking (which means the computation can be carried out by a person), thanks.
| Here is a proof, perhaps not the simplest. Let $n$ be an odd perfect number with three distinct prime factors. As observed in the comments, $n$ is of the form $3^a5^bp^c$, where $p\in\{7,11,13\}$.
It is a simple fact (observed by Euler) that exactly one of the exponents $a$, $b$, $c$ is odd. Let $q^r\parallel n$ be the corresponding prime power (i.e. $r$ is odd), then $q+1$ divides $\sigma(q^r)$, hence also $\sigma(n)=2n$. Therefore $(q+1)/2$ divides $n/q^r$, which forces $q\not\in\{3,7,11,13\}$, i.e. $q=5$. Hence $b$ is odd, while $a$ and $c$ are even. Now we can rewrite the equation $\sigma(n)=2n$ as
$$ 3^{a-1}5^bp^c=(1+3+\dots + 3^a)(1+5^2+\dots + 5^{b-1})(1+p+\dots+p^c),$$
where the middle sum on the right hand side only contains even powers of $5$. In particular, $5$ divides $1+3+\dots + 3^a$ or $1+p+\dots+p^c$. The first possibility is excluded by $a$ even. Similarly, the second possibility is excluded for $p\in\{7,13\}$ by $c$ even, whence $p=11$. Now the condition becomes
$$ 3^{a-1}5^b11^c=(1+3+\dots + 3^a)(1+5^2+\dots + 5^{b-1})(1+11+\dots+11^c).$$
Looking at this equation modulo $4$, we get $3\equiv (b+1)/2\pmod{4}$, i.e. $b\equiv 5\pmod{8}$. In particular, $b\geq 5$. Combining this with
$$ \frac{5}{3}=\left(1+\frac{1}{3}+\dots+\frac{1}{3^a}\right)\left(1+\frac{1}{5^2}+\dots+\frac{1}{5^{b-1}}\right)\left(1+\frac{1}{11}+\dots+\frac{1}{11^c}\right),$$
we get that
$$ \frac{5}{3}>\left(1+\frac{1}{3}+\dots+\frac{1}{3^a}\right)\left(1+\frac{1}{5^2}+\frac{1}{5^4}\right)\left(1+\frac{1}{11}+\frac{1}{11^2}\right).$$
This shows that $a<4$, i.e. $a=2$. Hence $\sigma(3^a)=13$ divides $\sigma(n)=2n=2\cdot 3^a5^b11^c$, which is a contradiction.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/179504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Failing of heuristics from circle method The heuristic from the circle method for integral points on diagonal cubic surfaces $x^3+y^3+z^3=a$ ($a$ a cube-free integer) seems to fit well with numerical computations by Andreas-Stephan Elsenhans and Jörg Jahnel. The only known exception is the surface $x^3+y^3+z^3=2$. The circle method predicts that the number of integral points $(x,y,z)$ with $\max(\vert x\vert,\vert y \vert,\vert z\vert)<N$ is $\approx 0.16\log N$. But this misses the parametric family of solutions $(1+6t^3,1-6t^3,-6t^2)$, which by itself already contributes on the order of $N^{1/3}$ such points.
Question: Why the heuristic fails for $x^3+y^3+z^3=2$?
| Note as a caveat that the heuristics can fail in the other direction as well: Dietmann and Elsholtz ( http://www.math.tugraz.at/~elsholtz/WWW/papers/papers26de08.pdf ) have shown that if $p\equiv 7\pmod{8}$ is prime, then $p^2$ cannot be represented as $x^2+y^2+z^4$. This strongly contradicts the heuristic of the circle method, since $\frac{1}{2}+\frac{1}{2}+\frac{1}{4}>1$ should imply that every sufficiently large integer $n$ is either representable as $n=x^2+y^2+z^4$, or there is some local constraint, i.e. a modulus $q$ such that $n\equiv x^2+y^2+z^4\pmod{q}$ is unsolvable.
This theorem is one of those cases where formulating the theorem is much more difficult than proving it. Write $x^2+y^2+z^4=p^2$ as $x^2+y^2=(p-z^2)(p+z^2)$. Now $p-z^2$ is $3\pmod{4}$ or $6\pmod{8}$, hence divisible by some prime $q\equiv 3\pmod{4}$ to the first power. But a sum of two squares cannot be divisible by such a prime with odd exponent, thus $q$ divides $p+z^2$ as well. Now $q|(p-z^2, p+z^2)$, thus $q$ divides $2p$ and $2z^2$, hence $p=q$ and $p|z$. But then $z^4$ is larger than $p^2$.
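A brute-force sanity check of the statement for a few small primes $p\equiv 7\pmod 8$ (Python sketch; representations with $z\ge 1$ and $x,y\ge 0$ are searched):

```python
from math import isqrt

def representable(n):
    """Is n = x^2 + y^2 + z^4 with integers x, y >= 0 and z >= 1?"""
    for z in range(1, isqrt(isqrt(n)) + 1):
        r = n - z**4
        for x in range(isqrt(r) + 1):
            y2 = r - x*x
            if isqrt(y2)**2 == y2:
                return True
    return False

for p in [7, 23, 31, 47, 71, 79, 103]:   # primes congruent to 7 mod 8
    assert not representable(p*p), p

assert representable(25)                 # e.g. 25 = 0^2 + 3^2 + 2^4, so the search itself works
```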
| {
"language": "en",
"url": "https://mathoverflow.net/questions/180300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
Gradual monotonic morphing between two natural numbers Let $a < b$ be two natural numbers. I will use these as an example:
\begin{align*}
a & = 2^5 \cdot 3^2 \cdot 5^2 = 7200\\\
b & = 2^3 \cdot 3^5 \cdot 7^1 = 13608
\end{align*}
I seek to "morph" $a$ to $b$ via $a{=}n_0,n_1,n_2,\ldots,n_k{=}b$
such that
*
*Each step is upward: $n_{i-1} < n_i < n_{i+1}$ (monotonic).
*$\textrm{gcd}(a,b) \mid n_i$. (So the core common factors are retained.)
*$n_i \mid \textrm{lcm}(a,b)$. (So no other prime factors may be introduced.)
*$k$ is maximized, i.e., the number of steps is maximized. (This is the sense of "gradual.")
So in the case of the example, we need to multiply $a$
by $2^{-2} \cdot 3^3 \cdot 5^{-2} \cdot 7^1 = 189/100$ to reach $b$, in discrete,
increasing steps.
We can always achieve this in a single $k{=}1$ upward step.
Try 1: Starting with $2^{-1} \cdot 3^1=3/2$, and repeating $2^{-1} \cdot 3^1$, seems promising,
but that leaves $3^1 \cdot 5^{-2} \cdot 7^1 = 21/25 < 1$, and so the 3rd step is downward.
Try 2: Another try is to start with $5^{-1} \cdot 7^1=7/5$, and then $3^2 \cdot 5^{-1}=9/5$,
but then what remains is $2^{-2} \cdot 3^1 = 3/4 < 1$, again a downward move.
Try 3,4: It appears there are two $2$-step solutions:
\begin{align*}
2^{-1} \cdot 3^1 = 3/2 > 1 &\;\textrm{followed by}\; 2^{-1} \cdot 3^2 \cdot 5^{-2} \cdot 7^1 = 63/50 > 1\\\
2^{-2} \cdot 7^1 = 7/4 > 1 &\;\textrm{followed by}\; 3^3 \cdot 5^{-2} = 27/25 > 1
\end{align*}
The corresponding "morphs" are
\begin{align*}
7200 \to & 10800 \to 13608\\\
2^5 \cdot 3^2 \cdot 5^2 \to & 2^4 \cdot 3^3 \cdot 5^2 \to 2^3 \cdot 3^5 \cdot 7^1\\\
7200 \to & 12600 \to 13608\\\
2^5 \cdot 3^2 \cdot 5^2 \to & 2^3 \cdot 3^2 \cdot 5^2 \cdot 7^1 \to 2^3 \cdot 3^5 \cdot 7^1
\end{align*}
Q. Given factorizations of $a$ & $b$, can an optimal (maximum number of
intermediate $n_i$'s) morph be directly calculated from the factorizations, or
must one resort to a combinatorial optimization?
There is a sense in which I seek a particular path in a graph, but I am not seeing
that clearly...
| If I understand the question right, then $k$ is simply the number of divisors $d$ of $\text{lcm}(a,b)$ such that $a\le d\le b$ and $\gcd(a,b)\mid d$. (So in finding a longest chain, we may assume that $a$ and $b$ are relatively prime.)
In your example, a longest chain would be
\begin{equation}
7200\to 7560\to 7776\to 9072\to 9720\to 10080\to 10800\to 12600\to 12960\to 13608
\end{equation}
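A quick check of this count for the example $a=7200$, $b=13608$ (Python sketch):

```python
from math import gcd

a, b = 7200, 13608
g = gcd(a, b)
l = a*b//g                     # lcm(a, b)
chain = [d for d in range(a, b + 1) if d % g == 0 and l % d == 0]
print(len(chain), chain)
# 10 admissible values: [7200, 7560, 7776, 9072, 9720, 10080, 10800, 12600, 12960, 13608]
```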
I doubt that the length can be determined without some sort of actually computing the divisors. One can interpret these divisors (via their exponents) as lattice points in a polytope: Suppose that $a$ and $b$ are relatively prime, and $p_1,p_2,\dots,p_r$ are the primes dividing $ab$. Then $k$ is the number of integral $e_i$ such that
\begin{equation}
\log a\le\sum_{i=1}^re_i\log p_i\le\log b.
\end{equation}
Taking the volume of the polytope as an approximation of the number of the lattice points in this polytope, we see that $k$ is about (in a vague sense of course)
\begin{equation}
\frac{1}{r!\prod\log p_i}((\log b)^r-(\log a)^r).
\end{equation}
| {
"language": "en",
"url": "https://mathoverflow.net/questions/224300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
An inequality of a cyclic polygon I am looking for a proof of the inequality as follows:
Let $A_1A_2....A_n$ be the regular polygon inscribed in a circle $(O)$ with radius $R$. Let $B_1B_2....B_n$ be a polygon inscribed in the circle $(O)$. We let $x_{ij}=A_iA_j$ and $y_{ij}=B_{i}B_{j}$. Let $f(x)=x^m$ (where $m=1$ or $m=2$); I conjecture that:
$$\sum_{i<j} f(x_{ij}) \ge \sum_{i<j} f(y_{ij})$$
The case $ m = 1 $ was proved in our paper here
Example:
*
*$n=3$, let $ABC$ be a triangle with sidelength $a, b, c$ then we have the inequality as follows:
$$a^2+b^2+c^2 \le (\sqrt{3}R)^2+(\sqrt{3}R)^2+(\sqrt{3}R)^2=9R^2$$
and
$$a+b+c \le 3\sqrt{3}R$$
*
*$n=4$, let $ABCD$ be a cyclic quadrilateral with sidelength $a=AB$, $b=BC$, $c=CD$, $d=DA$, $e=AC$, $f=BD$ then we have the inequality as follows:
$$a^2+b^2+c^2+d^2+e^2+f^2 \le (\sqrt{2}R)^2+(\sqrt{2}R)^2+(\sqrt{2}R)^2+(\sqrt{2}R)^2+(2R)^2+(2R)^2=16R^2$$
and
$$a+b+c+d+e+f \le 4(\sqrt{2}+1)R$$
See also:
*
*Strengthened version of Isoperimetric inequality with n-polygon
| As for the sum of squares (m=2), denoting the vectors $\overline{OB_i}=b_i$ we get $\sum_{i<j} y_{ij}^2=\sum_{i<j} (b_i-b_j)^2=n\sum_i b_i^2-(\sum b_i)^2=n^2R^2-(\sum b_i)^2$ that is maximal if and only if $\sum b_i=0$ --- so, in particular, for a regular polygon.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/304456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$(x + y + z)....(x + y\omega_n^{n-1} + z\omega_n) = x^n + y^n + z^n - P(x,y,z)$ To find $P$ $$(x + y + z)(x + y\omega_n + z\omega_n^{n-1})(x + y\omega_n^2 + z\omega_n^{n-2})....(x + y\omega_n^{n-1} + z\omega_n) = x^n + y^n + z^n - P(x,y,z)$$ where $\omega_n$ is an nth root of unity.
The question is to find the polynomial $P$.
I have tried to manually multiply the terms of LHS and then equate the coefficients to get the polynomial but that's too cumbersome:
$(x + y + z)(x + y\omega_n + z\omega_n^{n-1})(x + y\omega_n^2 + z\omega_n^{n-2})....(x + y\omega_n^{n-1} + z\omega_n) = x^n(1 + [Y + Z])(1 + [Y\omega_n + Z\omega_n^{n-1}])(1 + [Y\omega_n^2 + Z\omega_n^{n-2}])....(1 + [Y\omega_n^{n-1} + Z\omega_n])$ where $Y=\frac {y}{x}$ and $Z=\frac {z}{x}$
Hence, we can apply the formula:
$(1+\alpha)(1+\beta)(1+\gamma)...... = 1 + [\alpha + \beta + \gamma + ...] + [\alpha\beta + \beta\gamma + ....] + ....$
I hope someone can help.
| As suggested in the first comment, computationally it looks like $$P\equiv P_n=\dfrac{x^n}{t^n}L_n(t)-x^n=\dfrac{x^n}{t^n}(L_n(t)-t^n), $$
where $L_n$ is the $n$th Lucas polynomial in $t:=\dfrac{{ix}}{\sqrt{yz^{\phantom l}}}$.
E.g. for $n=6$, $$P=6x^4yz-9x^2y^2z^2+2y^3z^3$$ while $$L_6(t)=t^6+6t^4+9t^2+2.$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/313350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Asymptotic expansion of hypergeometric function near $z=1$ Given the hypergeometric function $_2F_1[a,b,c,z]$
in the interval $z\in(1,\infty)$. What is the proper asymptotic expansion of the aforesaid function near $z=1$, as one approaches from $z>1$?
| Mathematica says:
$$\left(\frac{\pi \csc ((-a-b+c) \pi ) \Gamma (c)}{\Gamma (a+b-c+1) \Gamma (c-a) \Gamma
(c-b)}-\frac{a b \pi \csc ((-a-b+c) \pi ) \Gamma (c) (z-1)}{\Gamma (a+b-c+2) \Gamma
(c-a) \Gamma (c-b)}+\frac{a (a+1) b (b+1) \pi \csc ((-a-b+c) \pi ) \Gamma (c) (z-1)^2}{2
\Gamma (a+b-c+3) \Gamma (c-a) \Gamma (c-b)}+O\left((z-1)^3\right)\right)+(z-1)^{-a-b+c}
e^{2 i \pi (-a-b+c) \left\lfloor -\frac{\arg (z-1)}{2 \pi }\right\rfloor }
\left(\frac{(-1)^{-a-b+c+1} \pi \csc ((-a-b+c) \pi ) \Gamma (c)}{\Gamma (a) \Gamma (b)
\Gamma (-a-b+c+1)}+\frac{(-1)^{-a-b+c+1} (a-c) (c-b) \pi \csc ((-a-b+c) \pi ) \Gamma (c)
(z-1)}{\Gamma (a) \Gamma (b) \Gamma (-a-b+c+2)}-\frac{\left((-1)^{-a-b+c} (a-c-1) (a-c)
(c-b) (-b+c+1) \pi \csc ((-a-b+c) \pi ) \Gamma (c)\right) (z-1)^2}{2 (\Gamma (a) \Gamma
(b) \Gamma (-a-b+c+3))}+O\left((z-1)^3\right)\right)$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/323608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Conjecture: $a^n+b^n+c^n\ge x^n+y^n+z^n$ Let $a,b,c,x,y,z>0$ be such that $x+y+z=a+b+c$, $abc=xyz$, and $a>\max\{x,y,z\}$.
I conjecture $$a^n+b^n+c^n\ge x^n+y^n+z^n,\forall n\in N^{+}$$
Maybe this kind of thing has been studied; I found something related [Schur convexity and Schur multiplicative convexity for a class of symmetric functions], but not this exact statement.
I also found something possibly related: the sum of squared logarithms conjecture.
| Yes, it is true. Without loss of generality, $a\geq b \geq c$. Let $a+b+c=s,\ abc=p$ then $b=\frac{(s \ - \ a) \ + \ d}{2}, \ c=\frac{(s \ - \ a) \ - \ d}{2}$ with $d=b-c=\sqrt{(s-a)^2- \frac{4p}{a}}$.
If we consider $b,c$ as variables in $a,s,p$, we find that
$$
\begin{eqnarray}
\frac{\partial (a^n+b^n+c^n)}{\partial a} &=& na^{n-1}+nb^{n-1}\frac{1}{2}\left(-1+\frac{\partial d}{\partial a}\right)+nc^{n-1}\frac{1}{2}\left(-1-\frac{\partial d}{\partial a}\right)\\
&=& \frac{n}{2}\left(\left(2a^{n-1}-b^{n-1}-c^{n-1}\right)+\frac{\partial d}{\partial a}\left(b^{n-1}-c^{n-1}\right) \right).
\end{eqnarray}
$$
Hence it is enough to prove the following:
$$
\frac{\partial d}{\partial a}+\frac{2a^{n-1}-b^{n-1}-c^{n-1}}{b^{n-1}-c^{n-1}}\geq 0
$$
We have
$$
\frac{\partial d}{\partial a}=\frac{a-s+2p/a^2}{d}=\frac{-b-c+2bc/a}{b-c}=\frac{2bc(b^{n-2}+b^{n-3}c+\dots)/a-(b+c)(b^{n-2}+b^{n-3}c+\dots)}{b^{n-1}-c^{n-1}} \
$$
Hence,
$$
\frac{\partial d}{\partial a}+\frac{2a^{n-1}-b^{n-1}-c^{n-1}}{b^{n-1}-c^{n-1}}
=\frac{2bc(b^{n-2}+b^{n-3}c+\cdots)/a+2a^{n-1}-(b+c)(b^{n-2}+b^{n-3}c+\dots)-(b^{n-1}+c^{n-1})}{b^{n-1}-c^{n-1}}
$$
Taking the derivative with respect to $a$ yields $$\frac{\partial^2 d}{\partial a^2} = \frac{2(n-1)a^{n-2} - \frac{2bc}{a^2(b^{n-2} \ + \ b^{n-3}c \ + \ \cdots)}}{b^{n-1} - c^{n-1}}$$ which is greater than or equal to zero. Hence the expression above is minimal if we choose $a$ to be minimal, i.e. $a=b$. In this case, we have: $$\frac{\partial d}{\partial a}=-1=-\frac{2a^{n-1}-b^{n-1}-c^{n-1}}{b^{n-1}-c^{n-1}}$$.
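A randomized numerical sanity check of the statement (Python sketch; triples $(x,y,z)$ with the same sum and product are generated by solving the quadratic for $y,z$):

```python
import random

random.seed(1)
checked = 0
for _ in range(20000):
    a, b, c = sorted((random.uniform(0.1, 5.0) for _ in range(3)), reverse=True)
    s, p = a + b + c, a*b*c
    x = random.uniform(0.05, a)
    disc = (s - x)**2 - 4*p/x
    if disc < 0:
        continue
    y, z = ((s - x) + disc**0.5)/2, ((s - x) - disc**0.5)/2
    if not a > max(x, y, z):                 # hypothesis of the conjecture
        continue
    checked += 1
    for n in range(1, 8):
        lhs, rhs = a**n + b**n + c**n, x**n + y**n + z**n
        assert lhs >= rhs - 1e-9*max(1.0, rhs)
print("verified on", checked, "random admissible instances")
```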
| {
"language": "en",
"url": "https://mathoverflow.net/questions/327017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
A four-variable maximization problem We let function
\begin{equation}
\begin{aligned}
f(x_1,~x_2,~x_3,~x_4) ~&=~ \sqrt{(x_1+x_2)(x_1+x_3)(x_1+x_4)} \\
&+ \sqrt{(x_2+x_1)(x_2+x_3)(x_2+x_4)} \\
&+ \sqrt{(x_3+x_1)(x_3+x_2)(x_3+x_4)} \\
&+ \sqrt{(x_4+x_1)(x_4+x_2)(x_4+x_3)},
\end{aligned}
\end{equation}
where variables $x_1,~x_2,~x_3,~x_4$ are positive and satisfy $x_1+x_2+x_3+x_4 ~=~ 1$. We want to prove that $f(x_1,~x_2,~x_3,~x_4)$ attains its global maximum when $x_1=x_2=x_3=x_4=\frac{1}{4}$.
This looks like a difficult problem even though it is stated at high-school level. Any clues? Your ideas are greatly appreciated.
| By Cauchy-Schwarz for sums (with $x_i=\sqrt{a+b}$ and $y_i=\sqrt{a+c}\sqrt{a+d}$, also, I am using cyclic sum notation):
\begin{split}
f(a,b,c,d)&=\sum_{\text{cyc}} \sqrt{(a+b)(a+c)(a+d)}\\&\le \left(\left(\sum_{\text{cyc}} a+b\right) \cdot \left(\sum_{\text{cyc}} (a+c)\cdot (a+d)\right)\right)^{\frac12}\\
&=\sqrt{\big(2(a+b+c+d)\big)\cdot\big((a+b+c+d)^2\big)}\\
&=\sqrt{2}.
\end{split}
Equality occurs if and only if $(a+b,b+c,c+d,d+a)$ is a constant multiple of $((a+c)\cdot (a+d),\dots)$ which, together with $a+b+c+d=1$ implies $a=b=c=d=\frac14$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/346297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Limit of alternated row and column normalizations Let $E_0$ be a matrix with non-negative entries.
Given $E_n$, we apply the following two operations in sequence to produce $E_{n+1}$.
A. Divide every entry by the sum of all entries in its column (to make the matrix column-stochastic).
B. Divide every entry by the sum of all entries in its row (to make the matrix row-stochastic).
For example:
$E_0=\begin{pmatrix}
\frac{2}{5} & \frac{1}{5} & \frac{2}{5} & 0 & 0\\
\frac{1}{5} & 0 & \frac{7}{10} & \frac{1}{10} & 0\\
0 & 0 & 0 & \frac{3}{10} & \frac{7}{10}
\end{pmatrix}\overset{A}{\rightarrow}\begin{pmatrix}
\frac{2}{3} & 1 & \frac{4}{11} & 0 & 0\\
\frac{1}{3} & 0 & \frac{7}{11} & \frac{1}{4} & 0\\
0 & 0 & 0 & \frac{3}{4} & 1
\end{pmatrix}\overset{B}{\rightarrow}\begin{pmatrix}
\frac{22}{67} & \frac{33}{67} & \frac{12}{67} & 0 & 0\\
\frac{44}{161} & 0 & \frac{12}{23} & \frac{33}{161} & 0\\
0 & 0 & 0 & \frac{3}{7} & \frac{4}{7}
\end{pmatrix}=E_1$
What is the limit of $E_n$ as $n \to \infty$?
Additional remarks.
In my problem, the matrix has $c\in \{1,2,\dots,5\}$ rows and $r=5$ columns (note that the two letters are reversed, but in the original context of this problem these letters $r$ and $c$ do not actually stand for rows and columns). So $E_0$ can be $1\times 5$, $2\times 5$, ... or $5\times 5$.
We denote with $(e_n)_{ij}$ the entries of $E_{n}$; hence $(e_n)_{ij}\in[0;1]$ and $\forall i \sum_{j=1}^{r}(e_n)_{ij}=1$ for $n>0$.
I managed to express $(e_{n+1})_{ij}$ as a function of $(e_{n})_{ij}$ :
$$(e_{n+1})_{ij}=\frac{\frac{(e_{n})_{ij}}{\sum_{k=1}^{c}(e_n)_{kj}}}{\sum_{l=1}^{r}\frac{(e_n)_{il}}{\sum_{k=1}^{c}(e_n)_{kl}}}$$
What I can't seem to find now is an expression for $(e_{n})_{ij}$ as a function of $(e_{0})_{ij}$, to be able to calculate $\lim_{n \to +\infty }(e_n)_{ij}$
I wrote code to compute this iteration; when I ran it with the previous example $E_0$, I found out that:
$E_0=\begin{pmatrix}
\frac{2}{5} & \frac{1}{5} & \frac{2}{5} & 0 & 0\\
\frac{1}{5} & 0 & \frac{7}{10} & \frac{1}{10} & 0\\
0 & 0 & 0 & \frac{3}{10} & \frac{7}{10}
\end{pmatrix}\overset{n \rightarrow+\infty}{\rightarrow}E_n=\begin{pmatrix}
\frac{7}{25} & \frac{3}{5} & \frac{3}{25} & 0 & 0\\
\frac{8}{25} & 0 & \frac{12}{25} & \frac{1}{5} & 0\\
0 & 0 & 0 & \frac{2}{5} & \frac{3}{5}
\end{pmatrix}$
Not only do the row sums equal $1$, but the column sums equal $\frac{3}{5}$: it seems that in this process column sums converge to $\frac{c}{r}$.
I'm not a mathematician so I was looking for a simple inductive proof. I tried to express $E_2$ (and so on) as a function of $E_0$, but it quickly gets overwhelming, starting from $E_2$...
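For reference, a minimal NumPy sketch of the iteration described above:

```python
import numpy as np

def iterate(E, steps=200):
    E = np.asarray(E, dtype=float)
    for _ in range(steps):
        E = E / E.sum(axis=0, keepdims=True)   # step A: make columns sum to 1
        E = E / E.sum(axis=1, keepdims=True)   # step B: make rows sum to 1
    return E

E0 = np.array([[0.4, 0.2, 0.4, 0.0, 0.0],
               [0.2, 0.0, 0.7, 0.1, 0.0],
               [0.0, 0.0, 0.0, 0.3, 0.7]])
E = iterate(E0)
print(E)               # numerically matches [[7/25, 3/5, 3/25, 0, 0], [8/25, 0, 12/25, 1/5, 0], [0, 0, 0, 2/5, 3/5]]
print(E.sum(axis=0))   # column sums all appear to converge to c/r = 3/5
```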
| The paragraph below applies to a different problem, where row normalization is split out from column normalization, so I have an "$E_{n+1/2}$" which will sometimes be different from both $E_n$ and $E_{n+1}$. (If the starting matrix "looks like" an order $k$ square stochastic matrix, then it will be invariant under both normalizations.)
Note that some cases will not converge to a single limit. Given a c by r matrix of all ones, normalizing by column (of r entries) results in all entries being 1/r, while normalizing by rows gives all entries being 1/c, so the sequence will fluctuate between these two. Except when the entries are all zero, I would expect a similar oscillation with any other starting nonzero binary matrix. You might be able to establish oscillation for matrices with more distinct values.
Getting back to the posted problem, the transformations have an invariance under permutations of rows, similarly of columns. Thus if the input looks like an upper two by two diagonal block matrix and a lower two by three nonzero matrix, the upper block may converge on a stochastic two by two block, while the lower will be influenced (if it converges at all) by a ratio of 3/2. So the block structure will influence the results, and the ratio c/r might not apply.
Gerhard "Goes This Way And That" Paseman, 2019.12.28.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/349274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Perfect squares between certain divisors of a number Let $n$ be a positive integer. We will call a divisor $d(<\sqrt{n})$ of $n$ special if there is no perfect square between $d$ and $\frac{n}{d}$. Prove that $n$ can have at most one special divisor.
My progress: I boiled down the problem to the following:
Suppose $k^2\le a,b,c,d\le (k+1)^2$, then $ab=cd\implies \{a,b\}=\{c,d\}$. But I can't seem to prove this.
Arriving here isn't difficult so I am omitting any further details(one more reason being I am not sure if I am on the correct path).
| Here is an elementary approach. We want to rule out finding distinct $a,b,c,d$ in short interval (specifically $[n^2+1,n^2+2n]$) with $ad=bc.$ We may assume that $a$ is the smallest and that $b<c$. Then $a<b<c<d.$ I will show that $a+2\sqrt{a}+1\leq d$ so that if $n^2 \leq a$ then $(n+1)^2 \leq d.$
Claim: There are integers $u<v$ and $x<y$ with $$a,b,c,d=ux,uy,vx,vy.$$
Proof: Let $u=\gcd(a,b)$ so $a=ux$ and $b=uy$ with $\gcd(x,y)=1.$ Then $uxd=uyc$ so $xd=yc$ and thus (since $x$ and $y$ are co-prime) there is $v$ with $c=xv$ and $d=yv.$
ASIDE We might as well use $v=u+1$ and $y=x+1$ since $$a,b',c',d'=ux,u(x+1),(u+1)x,(u+1)(x+1)$$ give $ad'=b'c'$ with $a <b',c',d'\leq d.$ I'll comment a bit more about this at the end.
So we want to show
if $ad=bc$ with $n^2<a<b<c<d$ then $d>(n+1)^2.$
From the claim above, $n^2<a=ux$ and $d\geq (u+1)(x+1)=ux+u+x+1.$ But given $ux=a>n^2,$ we know $u+x\geq 2\sqrt{a}>2n.$ Thus $ux+u+x+1>n^2+2n+1$ as desired.
Consider this problem: Given $a$, find $a<b<c<d$ with $d$ minimal such that $ad=bc.$ The work above shows that the solution is to have $$a,b,c,d=ux,u(x+1),(u+1)x,(u+1)(x+1)$$ with $|u-x|$ minimal and that $d>a+2\sqrt{a}+1.$ If we allow $b=c$ then $$n^2\cdot (n+1)^2=(n^2+n)\cdot (n^2+n).$$ If we want $b < c$ then $(n^2-n)\cdot(n^2+n)=(n^2-1)\cdot(n^2).$ With $d-a\approx 2\sqrt{a}.$
If $ad=bc$ then $abcd$ is a perfect square. However this property is weaker and there are solutions such as $$a,b,c,d=2\cdot 120^2,3\cdot 98^2,30\cdot 31^2,5\cdot 76^2=$$ $$28800, 28812, 28830, 28880$$ with $d<a+\frac12\sqrt{a}$ and all four factors between $28561=169^2$ and $28900=170^2.$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/355600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Maximal (minimal) value of $S=x_1^2x_2+x_2^2x_3+\cdots+x_{n-1}^2x_n+x_n^2x_1$ on condition that $x_1^2+x_2^2+\cdots+x_n^2=1$ since $x_1^2+x_2^2+\cdots+x_n^2=1$ is sphere,a compact set,so $S$ has a maximal(minimal) value. But when I try to solve it using the Lagrangian multiplier method, I don't know how to solve these equations. Clearly $x_1=x_2=x_3=\cdots = x_n=\frac{1}{\sqrt{n}}$ is an extremal point, but I don't know if it's the maximal(minimal) value.
I want to know how to solve the problem. Also, could this problem be solved by elementary methods, like some inequality techniques?
| For $n=3$ we need to find $$\max_{a^2+b^2+c^2=1}(a^2b+b^2c+c^2a).$$
Indeed, let $\{|a|,|b|,|c|\}=\{x,y,z\}$, where $x\geq y\geq z\geq0$.
Thus, by Rearrangement and AM-GM we obtain:
$$\sum_{cyc}a^2b\leq|a|\cdot(|a||b|)+|b|\cdot(|b||c|)+|c|\cdot(|c||a|)\leq$$
$$\leq x\cdot xy+y\cdot xz+z\cdot yz=y(x^2+xz+z^2)\leq y\left(x^2+\frac{x^2+z^2}{2}+z^2\right)=$$
$$=\frac{3}{2}y(1-y^2)=\frac{3}{2\sqrt2}\sqrt{2y^2(1-y^2)^2}\leq\frac{3}{2\sqrt2}\sqrt{\left(\frac{2y^2+2-2y^2}{3}\right)^3}=\frac{1}{\sqrt3}.$$
The equality occurs for $a=b=c=\frac{1}{\sqrt3}$, which says that we got a maximal value.
For $n=4$ we need to find $$\max_{\sum\limits_{cyc}a^2=1}\sum_{cyc}a^2b.$$
Indeed, by C-S and AM-GM we obtain:
$$\sum_{cyc}a^2b\leq\sqrt{\sum_{cyc}a^2\sum_{cyc}a^2b^2}=\sqrt{(a^2+c^2)(b^2+d^2)}\leq\frac{1}{2}(a^2+c^2+b^2+d^2)=\frac{1}{2}.$$
The equality occurs for $a=b=c=d=\frac{1}{2},$ which says that we got a maximal value.
For $n\geq5$ we can use the Lagrange Multipliers method, but it does not give nice numbers.
For example, for $n=5$ the maximum occurs, when $(x_1,x_2,x_3,x_4,x_5)||(0.79...,3.24...,3.78...,2.48...,1),$ which gives a value $0.45...$
The following inequality is also true.
Let $a$, $b$ and $c$ be real numbers such that $a^2+b^2+c^2=1$. Prove that:
$$a^3b^2+b^3c^2+c^3a^2\leq\frac{1}{3\sqrt3}.$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/383431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
In search of a $q$-analogue of a Catalan identity Let $C_n=\frac1{n+1}\binom{2n}n$ be the all-familiar Catalan numbers. Then, the following identity has received enough attention in the literature (for example, Lagrange Inversion: When and How):
\begin{equation}
\label1
\sum_{k=0}^n\binom{2n-2k}{n-k}C_k=\binom{2n+1}n \qquad \iff \qquad
\sum_{i+j=n}\binom{2i}iC_j=\binom{2n+1}n. \tag1
\end{equation}
I like to ask
QUESTION. Is there a $q$-analogue of \eqref{1}? Possibly, a combinatorial proof of \eqref{1} would shed some light into this.
| Decided to make a cw post: it is sort of amusing.
Let $C_n(q)$ be defined by
$$\sum_{k=0}^n\binom{2n-2k}{n-k}_qC_k(q)q^{2n-2k}=\binom{2n+1}n_q,\qquad n=0,1,2,\dotsc.$$
Then
\begin{multline*}
C_n(q)=1+q+q^2+q^3+2 q^4+3 q^5+3 q^6+3 q^7+4 q^8+6 q^9+\dotsb\\\dotsb-7q^{(n+1)^2-6}-5q^{(n+1)^2-5}-3q^{(n+1)^2-4}-2q^{(n+1)^2-3}-q^{(n+1)^2-2}-q^{(n+1)^2-1}
\end{multline*}
where the “tail” is made from the partition numbers $1,1,2,3,5,7,11,15,22,30,42,\dotsc$ while the “head” satisfies
\begin{multline*}
1+q+q^2+q^3+2 q^4+3 q^5+3 q^6+3 q^7+4 q^8+6 q^9+7 q^{10}+6 q^{11}+6 q^{12}+8 q^{13}+\dotsb\\
=1/(1-q-q^4+q^6+q^{11}-q^{14}-q^{21}+q^{25}+q^{34}-q^{39}-q^{50}+q^{56}+\dotsb).
\end{multline*}
Cf.
\begin{align*}
&\qquad\qquad1+q-q^4-q^6+q^{11}+q^{14}-q^{21}-q^{25}+q^{34}+q^{39}-q^{50}-q^{56}+\dotsb\\
&=q^{-1}(1-\prod_{n\geqslant1}(1-q^n)).
\end{align*}
Have no idea how to prove these, or what happens in between ….
| {
"language": "en",
"url": "https://mathoverflow.net/questions/400793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Parametrization of integral solutions of $3x^2+3y^2+z^2=t^2$ and rational solutions of $3a^2+3b^2-c^2=-1$ 1/ Is there a known parametrization over $\mathbb{Q}^3$ of the solutions of
$3a^2+3b^2-c^2=-1$
2/ Is there a known parametrization over $\mathbb{Z}^4$ of the solutions of
$3x^2+3y^2+z^2=t^2$
References, articles or books are welcome
Sincerely, John
| on the second question $3x^2 + 3 y^2 + z^2 = t^2$ in integers:
One description is this: if $x,y$ are integers and not both odd, then $3 (x^2 + y^2) \neq 2 \pmod 4.$ As a result it may be expressed in a few ways as a difference of squares $t^2 - z^2 = (t+z)(t-z).$ When $N = 3(x^2+y^2)$ is odd, we may take $t-z$ to be any factor of $N,$ because $t+z$ will then have the same parity. When $N$ is divisible by $4,$ we take only divisors $d$ where both $d$ and $N/d$ are even. Further conditions make the quadruple primitive.
As far as integer parametrization, it turned out that two evident recipes were enough. Those (primitive) quadruples with $x,y$ even appear as ( we need $j+k+l+m$ odd )
$$ x = 2(jl - km) \; , \; \; y = 2 (jm +kl) $$
$$ z = 3 j^2 + 3 k^2 - l^2 - m^2 \; , \; \; t = 3 j^2 + 3 k^2 + l^2 + m^2 $$
Those with $x+y$ odd appear as
$$ x = j^2 + k^2 - l^2 - m^2 \; , \; \; y = 2 (jl-km) $$
$$ z = 4(jm+kl) + (j^2 + k^2 + l^2 + m^2 ) $$
$$ t = 2(jm+kl) + 2(j^2 + k^2 + l^2 + m^2 ) $$
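Both recipes are easy to confirm symbolically; a minimal SymPy sketch:

```python
import sympy as sp

j, k, l, m = sp.symbols('j k l m')

def on_cone(x, y, z, t):
    return sp.expand(3*x**2 + 3*y**2 + z**2 - t**2) == 0

# first recipe (x, y even)
assert on_cone(2*(j*l - k*m), 2*(j*m + k*l),
               3*j**2 + 3*k**2 - l**2 - m**2, 3*j**2 + 3*k**2 + l**2 + m**2)

# second recipe (x + y odd)
S = j**2 + k**2 + l**2 + m**2
assert on_cone(j**2 + k**2 - l**2 - m**2, 2*(j*l - k*m),
               4*(j*m + k*l) + S, 2*(j*m + k*l) + 2*S)
```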
The closest comparison I have is for ternary quadratic forms: Mordell, in Diophantine Equations, page 47, Theorem 4, says the the isotropic vectors of an integer ternary come as a finite set of recipes, each one of type $x = a_1 p^2 + b_1 pq + c_1 q^2 \; , \; \; \; $ $y = a_2 p^2 + b_2 pq + c_2 q^2 \; , \; \; \; $ $z = a_3 p^2 + b_3 pq + c_3 q^2 \; , \; \; \; $
The interesting bit is that quaternary forms need recipes with four variables. A famous example is Pythagorean Quadruples , $x^2 + y^2 + z^2 = m^2$. There is a nice writeup by Robert Spira in 1962; he says the first correct proof (that all quadruples come from the given parametrization ) was Dickson in 1920.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/422125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Analogue of Fermat's "little" theorem Let $p$ be a prime, and consider $$S_p(a)=\sum_{\substack{1\le j\le a-1\\(p-1)\mid j}}\binom{a}{j}\;.$$
I have a rather complicated (15 lines) proof that $S_p(a)\equiv0\pmod{p}$. This must be
extremely classical: is there a simple direct proof ?
| Here's a straightforward proof, using generating functions, though it's not as elegant as Ofir's.
We have
$$
\sum_{a=0}^\infty\binom{a}{j}x^a =\frac{x^j}{(1-x)^{j+1}}.
$$
Setting $j=(p-1)k$ and summing on $k$ gives
$$
\sum_{a=0}^\infty x^a \sum_{p-1\mid j}\binom{a}{j} = \frac{(1-x)^{p-1}}{(1-x)^p -x^{p-1}(1-x)}.
$$
We subtract $\sum_{a=0}^\infty x^a \binom{a}{0} = 1/(1-x)$ and
$$\sum_{\substack{p-1\mid a\\ a>0}}x^a\binom{a}{a} = \frac{x^{p-1}}{1-x^{p-1}}$$
to get
$$
\sum_{a=0}^\infty S_p(a) x^a = \frac{(1-x)^{p-1}}{(1-x)^p -x^{p-1}(1-x)}
-\frac{1}{1-x} -\frac{x^{p-1}}{1-x^{p-1}}.
$$
Modulo $p$, we may replace $(1-x)^{p-1}$ with $(1-x^p)/(1-x)$ and $(1-x)^p$ with $1-x^p$, obtaining
\begin{align*}
\sum_{a=0}^\infty S_p(a) x^a &\equiv \frac{(1-x^p)/(1-x)}{1-x^p -x^{p-1}(1-x)}
-\frac{1}{1-x} -\frac{x^{p-1}}{1-x^{p-1}}\pmod{p}\\
&=0.
\end{align*}
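A direct numerical check of the congruence $S_p(a)\equiv 0\pmod p$ (Python sketch):

```python
from math import comb

def S(p, a):
    return sum(comb(a, j) for j in range(1, a) if j % (p - 1) == 0)

assert all(S(p, a) % p == 0 for p in [3, 5, 7, 11, 13] for a in range(1, 200))
```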
| {
"language": "en",
"url": "https://mathoverflow.net/questions/424694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 3,
"answer_id": 2
} |
Is there a simple expression for $\sum_{k =1}^{(p-3)/2} \frac{1\cdot 3\cdots (2k-1)}{2\cdot 4 \cdots 2k\cdot(2k+1)} \bmod p$? Let $p \equiv 1 \pmod 4$ be a prime and $E_n$ denote the $n$-th Euler number. While investigating $E_{p-1} \pmod{p^2}$ I have encountered this summation (modulo $p$)
\begin{align*}
\sum_{k =1}^{\frac{p-3}{2}} \frac{a_k}{2k+1} \pmod p.
\end{align*}
where $a_k = \frac{1\cdot 3\cdots (2k-1)}{2\cdot 4 \cdots 2k}$.
Another way to express this sum (this is how I first saw it) is
\begin{align}
\sum_{k = 1}^{\frac{p-3}{2}} {2k \choose k} \frac{1}{4^k(2k + 1)}.
\end{align}
Like the questions says, I am wondering if a simple expression is known for this sum, or if sums like it have been studied before. Any references or insights would be appreciated!
Some observations: Using Wilson's theorem, we have
\begin{align}
\frac{1\cdot 3\cdots (p-2)}{2\cdot 4 \cdots (p-1)} \equiv \frac{-1}{(2\cdot 4 \cdots (p-1))^2} \equiv \frac{-1}{2^{p-1}\left(\left(\frac{p-1}{2}\right)!\right)^2} \equiv 1 \pmod p
\end{align}
($p \equiv 1 \pmod 4 \implies \left(\left(\frac{p-1}{2}\right)!\right)^2\equiv -1 \pmod p$). Hence
\begin{align}
\frac{1\cdot 3\cdots (2k-1)}{2\cdot 4 \cdots 2k} \equiv \frac{1\cdot 3\cdots (p-2(k+1))}{2\cdot 4 \cdots (p-(2(k+1)-1))} \pmod p
\end{align}
then $a_k \equiv a_{\frac{p-1}{2}-k} \pmod p$
and the sum collapses a bit,
\begin{align*}
\sum_{k =1}^{\frac{p-3}{2}} \frac{a_k}{2k+1} \equiv \sum_{k =1}^{\frac{p-3}{4}} \frac{a_k}{2k+1} - \sum_{k =1}^{\frac{p-3}{4}} \frac{a_{\frac{p-1}{2}-k}}{2k} \equiv -\sum_{k =1}^{\frac{p-3}{4}} \left(\frac{a_k}{(2k)(2k+1)} \right) \pmod p
\end{align*}
but this doesn't seem to help. The symmetry of $a_k$ described above gives $\sum_{k =1}^{\frac{p-3}{2}}(-1)^k a_k \equiv 0 \pmod p$, which may be of interest to others.
| It is known that $$\sum_{k=0}^\infty\frac{\binom{2k}k}{(2k+1)4^k}=\frac{\pi}2.$$ Motivated by this, I proved in my paper On congruences related to central binomial coefficients [J. Number Theory 131(2011), 2219-2238] the following result (as part (i) of Theorem 1.1 in the paper):
Let $p$ be any odd prime. Then
$$\sum_{k=0}^{(p-3)/2}\frac{\binom{2k}k}{(2k+1)4^k}\equiv(-1)^{(p+1)/2}\frac{2^{p-1}-1}p\pmod{p^2}$$
and
$$\sum_{k=(p+1)/2}^{p-1}\frac{\binom{2k}k}{(2k+1)4^k}\equiv pE_{p-3}\pmod{p^2},$$
where $E_0,E_1,E_2,\ldots$ are the Euler numbers.
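The first congruence is easy to test numerically with modular inverses mod $p^2$ (Python sketch):

```python
from math import comb

def check(p):
    m = p*p
    lhs = sum(comb(2*k, k) * pow(pow(4, k, m), -1, m) * pow(2*k + 1, -1, m)
              for k in range((p - 1)//2)) % m          # k = 0 .. (p-3)/2
    fq = (2**(p - 1) - 1)//p                           # the integer (2^(p-1)-1)/p
    return lhs == ((-1)**((p + 1)//2) * fq) % m

assert all(check(p) for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31])
```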
| {
"language": "en",
"url": "https://mathoverflow.net/questions/428183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Partition of $(2^{n+1}+1)2^{2^{n-1}+n-1}-1$ into parts with binary weight equals $2^{n-1}+n$ Let $\operatorname{wt}(n)$ be A000120, i.e., number of $1$'s in binary expansion of $n$ (or the binary weight of $n$).
Let $a(n,m)$ be the sequence of numbers $k$ such that $\operatorname{wt}(k)=m$. In other words, $a(n,m)$ is the $n$-th number with binary weight equal to $m$.
I conjecture that
$$a(1,2^{n-1}+n)+\sum\limits_{i=1}^{2^{n-1}}a(i+2,2^{n-1}+n)=(2^{n+1}+1)2^{2^{n-1}+n-1}-1$$
I also conjecture that numbers of the form $(2^{n+1}+1)2^{2^{n-1}+n-1}-1$ have only $2$ partitions into parts with binary weight equal to $2^{n-1}+n$: namely, the number itself and the sum above.
Is there a way to prove it?
| Notice that for $i\in\{0,1,\dots,2^{n-1}+n\}$ we have
$$a(i+1,2^{n-1}+n) = 2^{2^{n-1}+n+1} - 1 - 2^{2^{n-1}+n-i}.$$
Then the sum in question can be easily computed:
\begin{split}
& a(1,2^{n-1}+n)+\sum_{i=1}^{2^{n-1}}a(i+2,2^{n-1}+n)\\
&= \sum_{i=0}^{2^{n-1}+1} a(i+1,2^{n-1}+n) - (2^{2^{n-1}+n+1} - 1 - 2^{2^{n-1}+n-1}) \\
&=(2^{n-1}+1)(2^{2^{n-1}+n+1} - 1) - 2^{n-1}(2^{2^{n-1}+2}-1) + 2^{2^{n-1}+n-1} \\
&= (2^{n+1}+1)2^{2^{n-1}+n-1} - 1.
\end{split}
| {
"language": "en",
"url": "https://mathoverflow.net/questions/430476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Transforming matrix to off-diagonal form I wonder if one can write the following matrix in the form $A = \begin{pmatrix} 0 & B \\ B^* & 0 \end{pmatrix}.$
The matrix I have is of the form
$$ C = \begin{pmatrix} 0 & a & b & 0 & 0 & 0 \\
\bar a & 0 & 0 &b & 0& 0\\
\bar b & 0 & 0 & a & f & 0 \\
0 & \bar b & \bar a & 0 & 0 &f \\
0 & 0 & \bar f & 0 & 0 & a\\
0 & 0 & 0 & \bar f & \bar a & 0
\end{pmatrix}.$$
The reason I believe it should be possible is that the eigenvalues of $C$ are symmetric with respect to zero: $\pm \vert a \vert, \pm \sqrt{ \vert a \vert^2+ \vert b \vert^2 + \vert f \vert^2 \pm \vert a \vert^2( \vert b \vert^2 + \vert f \vert^2)}$, where in the latter case all sign combinations are allowed.
Hence, I wonder if there exists $U$ such that
$$A = UCU^{-1}$$
| The $C$ is a particular block matrix, $C\in \mathbb{M}_3(\mathbb{M}_2(\mathbb{C}))$. For $V$ unitary let $V\begin{pmatrix}0&a\\\bar a&0\end{pmatrix}V^*=\begin{pmatrix}s&0\\0&-s\end{pmatrix}$, $P$ the perfect shuffle matrix (permutation), $D=\text{diag}(1,1,1,1,-1,1)$ and $U=\frac{1}{\sqrt{2}}\begin{pmatrix}I&I\\-I&I\end{pmatrix}\in \mathbb{M}_2(\mathbb{M}_3(\mathbb{C}))$.
Applying $DP\begin{pmatrix}V&0&0\\0&V&0\\0&0&V\end{pmatrix}C \begin{pmatrix}V&0&0\\0&V&0\\0&0&V\end{pmatrix}^*P^*D$ gives $\begin{pmatrix}s&b&0&0&0&0\\\bar b&s&f&0&0&0\\0&\bar f&s&0&0&0\\0&0&0&-s&-b&0\\0&0&0&-\bar b&-s&-f\\0&0&0&0&-\bar f&-s\end{pmatrix}=G$, $UGU^*$ has the required form.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/436391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is it possible to solve this integral? I can't manage to solve this integral. Does it have an analytical solution?
$$\int\left(\frac{e^{x}(a-1)-1+\frac{1}{a}}{e^{x}(1-b)+1-\frac{1}{a}}\right)e^{-\frac{(x-(\mu-\frac{\sigma^{2}}{2})t)^{2}}{2t\sigma^{2}}}dx$$
| $$\int\left(\frac{e^{x}(a-1)-1+\frac{1}{a}}{e^{x}(1-b)+1-\frac{1}{a}}\right)e^{-\frac{(x-c)^2}{d^2}}\,dx$$
If $b=1$, the integral is
$$-\frac{ \sqrt{\pi }}{2} \, d \left(a e^{c+\frac{d^2}{4}} \text{erf}\left(\frac{2
c+d^2-2 x}{2 d}\right)+\text{erf}\left(\frac{x-c}{d}\right)\right)$$
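One can confirm this antiderivative by differentiating it and comparing with the $b=1$ integrand; a short numerical SymPy spot check:

```python
import sympy as sp

x, a, c, d = sp.symbols('x a c d')
b = 1
integrand = (sp.exp(x)*(a - 1) - 1 + 1/a) / (sp.exp(x)*(1 - b) + 1 - 1/a) \
            * sp.exp(-(x - c)**2/d**2)
F = -sp.sqrt(sp.pi)/2 * d * (a*sp.exp(c + d**2/4)*sp.erf((2*c + d**2 - 2*x)/(2*d))
                             + sp.erf((x - c)/d))
residual = sp.diff(F, x) - integrand
for vals in [(0.3, 2.0, 0.5, 1.3), (1.7, 0.8, 0.4, 2.1)]:
    assert abs(residual.subs(dict(zip((x, a, c, d), vals))).evalf()) < 1e-9
```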
If $b$ is "close" to $1$
$$\frac{e^{x}(a-1)-1+\frac{1}{a}}{e^{x}(1-b)+1-\frac{1}{a}}=\sum_{n=0}^\infty \left(\frac{a}{a-1}\right)^n\, e^{nx}\,(a e^x-1)\,(b-1)^n$$
$$I_n=\int e^{nx}\,(a e^x-1)\,e^{-\frac{(x-c)^2}{d^2}}\,dx$$
$$I_n=\frac{\sqrt{\pi }}{2} \, d \,e^{c n+\frac{d^2 n^2}{4}}\left(\text{erf}\left(\frac{2 c+d^2 n-2 x}{2 d}\right)-a e^{c+\frac{1}{4} d^2
(2 n+1)} \text{erf}\left(\frac{2 c+d^2 (n+1)-2 x}{2 d}\right)\right)$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/440112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Fibonacci identity using generating function There are many nice ways of showing that $f_0^2+f_1^2+\cdots+f_n^2=f_{n+1}f_n$. I was wondering if there is a way of showing this using the generating function $F(x)=\frac{1}{1-x-x^2}=\sum_{i\geq0}f_ix^i$. In other words, is there any operation (perhaps the Hadamard product) that can be applied to $F(x)$ that would yield the identity above?
What about other identities that involve sums and squares, like $f_1f_2+\cdots +f_nf_{n+1}=f_{n+1}^2$ for $n$ odd?
| Here are the details on proving these identities using Hadamard products of generating functions. (You can find explanations of how to compute Hadamard products of rational functions here.)
I'll write $U(x)*V(x)$ for the Hadmard product of $U(x)$ and $V(x)$:
$$\sum_{n=0}^\infty u_n x^n *\sum_{n=0}^\infty v_n x^n
=\sum_{n=0}^\infty u_n v_n x^n.$$
Let
$$F=\sum_{n=0}^\infty f_n x^n =\frac{1}{1-x-x^2}$$
and let
$$F_1 = \sum_{n=0}^\infty f_{n+1}x^n = (F-1)/x = \frac{1-x}{1-x-x^2}.$$
Then we have
$$
\sum_{n=0}^\infty f_{n}^2x^n=F*F=\frac{1-x}{1-2x-2x^2+x^3}
$$
and
$$
\sum_{n=0}^\infty f_{n}f_{n+1}x^n = F*F_1 = \frac{1}{1-2x-2x^2+x^3}.
$$
Thus $(F*F)/(1-x) =F*F_1$.
This proves the first identity $f_0^2 + \cdots + f_n^2 = f_{n+1}f_n$.
The second identity is stated incorrectly. It should be
$$f_0f_1 + f_1f_2 +\cdots +f_n f_{n+1}=
\begin{cases}
f_{n+1}^2,&\text{if $n$ is even}\\
f_{n+1}^2 -1,&\text{if $n$ is odd}.
\end{cases}
$$
The generating function for the left side is
$$\frac{F*F_1}{1-x}=\frac{1}{(1-x)(1-2x-2x^2+x^3)}.$$
We have
$$\sum_{n=0}^\infty f_{n+1}^2 x^n = F_1*F_1= \frac{1+2x-x^2}{1-2x-2x^2+x^3}$$
and we find that
$$\frac{F*F_1}{1-x} -F_1*F_1 = -\frac{x}{1-x^2},$$
which proves the corrected second identity.
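All of these rational generating functions can be double-checked in a few lines of SymPy:

```python
import sympy as sp

x = sp.symbols('x')
N = 12
f = [sp.fibonacci(n + 1) for n in range(N + 2)]       # f_0 = 1, f_1 = 1, f_2 = 2, ...

def coeffs(expr):
    s = sp.series(expr, x, 0, N + 1).removeO()
    return [s.coeff(x, n) for n in range(N + 1)]

assert coeffs((1 - x)/(1 - 2*x - 2*x**2 + x**3)) == [f[n]**2 for n in range(N + 1)]
assert coeffs(1/(1 - 2*x - 2*x**2 + x**3)) == [f[n]*f[n + 1] for n in range(N + 1)]
assert coeffs((1 + 2*x - x**2)/(1 - 2*x - 2*x**2 + x**3)) == [f[n + 1]**2 for n in range(N + 1)]
assert sp.simplify((1/(1 - 2*x - 2*x**2 + x**3))/(1 - x)
                   - (1 + 2*x - x**2)/(1 - 2*x - 2*x**2 + x**3) + x/(1 - x**2)) == 0
```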
| {
"language": "en",
"url": "https://mathoverflow.net/questions/11972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there a good reason why $a^{2b} + b^{2a} \le 1$ when $a+b=1$? The following problem is not from me, yet I find it a big challenge to give a nice (in contrast to 'heavy computation') proof. The motivation for me to post it lies in its concise content.
If $a$ and $b$ are nonnegative real numbers such that $a+b=1$, show that $a^{2b} + b^{2a}\le 1$.
| Maybe there is a small trick yet.
For $a + b = 1$ we can write the sum as
$$a^{2(1-a)} + b^{2(1-b)} = (\frac{a}{a^a})^2+ (\frac{b}{b^b})^2 $$.
Obviously the sum is 1 for $(a, b) = (0,1), (\frac{1}{2},\frac{1}{2}), (1, 0)$. The question is whether at $(\frac{1}{2},\frac{1}{2})$ there is a maximum or a minimum, i.e. whether the second derivative of
$$(\frac{x}{x^x})^2+ (\frac{1-x}{(1-x)^{1-x}})^2$$
is positive or negative. (By symmetry only minimum or maximum can occur there.)
Using the fact that, for $x = \frac{1}{2}$,
$$\frac{d^2}{dx^2}\left(\frac{1-x}{(1-x)^{1-x}}\right)^2 = \frac{d^2}{(-dx)^2}\left(\frac{x}{x^x}\right)^2 = \frac{d^2}{dx^2}\left(\frac{x}{x^x}\right)^2$$
it is sufficient to prove
$$2\frac{d^2}{dx^2}(\frac{x}{x^x})^2 < 0$$
for $x = \frac{1}{2}$ which in fact is easily demonstrated.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/17189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 8,
"answer_id": 6
} |
Diophantine question This came up when I did a brand-new (or maybe it's just "birdtracks" in disguise :-)
graph-based construction of the E8 family. $x,y,z$ are dimensions and thus integers. $x<0$ actually doesn't hurt. (For $y=x$, $z=(3 (-2 + o) o)/(10 + o)$ the standard E8 setup results; $o$ must now divide 360, etc.) So here is the equation:
$3 x (2 + x) (-2 + x + x^2 - 2 y) y (-x + x^2 - 2 z) z=q^2$
With rational y or z I wouldn't pester MO - I'd solve it on the spot. But here some nasty division properties are involved, and I'm lousy in number theory. Is there a finite, easy describable solution list?
| Let $P(x,y,z)=3 x (2 + x) (-2 + x + x^2 - 2 y) y (-x + x^2 - 2 z) z$ be the product which we wish to be a square. As noted,
*
*$P(1,y,z)=\left(6yz\right)^2.$
A nice parametric solution is
*
*$P(2r,3r,r+1)=\left(12r(r+1)(2r^2-2r-1)\right)^2$
Also, for fixed $x,y$ or $x,z$ we are left with a Pell Equation.
The parametric solution arises from first setting $y=x+z-1$ to make the two quadratic factors equal $P(x,x+z-1,z)=(x^2-x-2z)^23x(x+2)z(x+z-1).$ There are many other parametric solutions such as
*
*$P(r^2,r^2+2,3)=\left(3r(r^2+2)^2(r^2-3)\right)^2$ and
*$P(3r^2-2,3r^2-2,1)=\left(3r(3r^2-4)(3r^2-2)(3r^2-1)\right)^2.$
Although none that I know of with the right-hand side the square of a fourth degree polynomial in $r$.
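The parametric identities above are quick to confirm in SymPy:

```python
import sympy as sp

x, y, z, r = sp.symbols('x y z r')
P = 3*x*(2 + x)*(-2 + x + x**2 - 2*y)*y*(-x + x**2 - 2*z)*z

def is_square_of(expr, root):
    return sp.expand(expr - root**2) == 0

assert is_square_of(P.subs(x, 1), 6*y*z)
assert is_square_of(P.subs({x: 2*r, y: 3*r, z: r + 1}), 12*r*(r + 1)*(2*r**2 - 2*r - 1))
assert is_square_of(P.subs({x: r**2, y: r**2 + 2, z: 3}), 3*r*(r**2 + 2)**2*(r**2 - 3))
assert is_square_of(P.subs({x: 3*r**2 - 2, y: 3*r**2 - 2, z: 1}),
                    3*r*(3*r**2 - 4)*(3*r**2 - 2)*(3*r**2 - 1))
```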
Most solutions to $P=\Box$ do not have $y=x+z-1$. There are $1442$ choices of $2 \le x \le 2000,\ 1 \le z \le 2000$ which make $P(x,x+z-1,z)$ a square. Of them $1000$ have $x=2(z-1).$
For fixed $x$ and $z \gt \frac{x^2-x}2$ we have $P(x,y,z)=Ay(y-B)$ for some constants. If $A \gt 0$ has square-free part $\alpha \gt 1$ then we are left with a potentially solvable Pell equations. We see that $\gcd(y,y-B)$ divides $B$ so we seek solutions $$\{y,y-B\}=\{m\alpha_1u^2,m\alpha_2v^2\}$$ where $m\cdot \gcd(\alpha_1,\alpha_2)$ divides $B$ and $\alpha_1\alpha_2=\alpha.$
For example $x,z=3,5$ reduces to having $P(3,y,5)=2\cdot30^2\cdot y\cdot(y-5) $ a square.
There are no solutions to $\{y-5,y\}=\{u^2,2v^2\}$ (in either order) but $\{y-5,y\}=\{5u^2,10v^2\}$ means considering $P(3,5w,5)=2\cdot150^2 \cdot (w-1)\cdot w.$ We need $(w-1)w=2\Box$ with familiar solutions $(w-1)w=1\cdot2,8\cdot 9,49 \cdot 50,288 \cdot 289\cdots.$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/121017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is the $n$-th prime $p_n$ expressible as the difference of coprime $A, B$ such that the set of prime divisors of $AB$ is $\{p_1, \dots, p_{n-1}\}$? We define recursively
$p_1=2,p_2=3$
and
$$p_{n}= \min_{(A,B)\in F_{n-1}}|A-B| $$
Where
$$
\begin{split}
F_n=\{(A,B) |&\gcd (A,B)=1,\quad |A-B| \not =1,
\\\
&\text{both $A$ and $B$ are products of powers of $p_i$ for $i\le n$},
\\\
&\text{for each $i\le n$, either $p_i |A$ or $p_i |B$}\}
\end{split}
$$
Is always $p_n$ the $n-th$ prime?
| Barry Cipra has already computed the first few values.
The next couple of numbers $p_n$ are
$13 = 5 \cdot 11 - 2 \cdot 3 \cdot 7$,
$17 = 2 \cdot 7 \cdot 13 - 3 \cdot 5 \cdot 11$,
$19 = 2^2 \cdot 3 \cdot 5 \cdot 17 - 7 \cdot 11 \cdot 13 $,
$23 = 7 \cdot 11 \cdot 17^2 - 2 \cdot 3^2 \cdot 5 \cdot 13 \cdot 19$,
$29 = 3 \cdot 11 \cdot 13^2 \cdot 19 \cdot 23 - 2^{12} \cdot 5 \cdot 7 \cdot 17$.
I don't find such an expression for 31 in numbers $\leq 10^{12}$.
The smallest I find is $47 = 3 \cdot 7 \cdot 19^2 \cdot 23 \cdot 29 -
2^5 \cdot 5 \cdot 11 \cdot 13^2 \cdot 17$.
If the abc conjecture is true, there are at most finitely many ways to express the $n$-th
prime as a difference of coprime numbers which are divisible only by the first $n-1$
primes.
Addendum: For a more extensive table, see http://www.fermatquotient.com/DiverseMinimas/S=M-N_min (found by Google'ing for the numbers obtained above).
The data supports the assumption that the OP's assertion is unlikely to be true.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/124071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
A question of terminology regarding integer partitions I am wondering if there is a standard notation and name for the following. Let $\lambda$ be a partition $\lambda_1\geq \lambda_2\geq\cdots\geq \lambda_r\geq 1$ of $n$ into $r$ parts. Then we can form a partition $\mu$ of $r$ by keeping track of how many times each integer occurs in $\lambda$. For example the partition (3,3,2,2,2,1,1) of 14 into 7 parts has associated partition $\mu=(3,2,2)$ of $7$ because one integer appears 3 times and 2 integers appear twice. If $\mu$ is a partition of $r$, let $C_{n,\mu}$ be the number of partitions $\lambda$ of $n$ into $r$ parts such the partition associated to $\lambda$ is $\mu$.
Question. What is $C_{n,\mu}$ called and what is the standard notation? Is the some formula or combinatorial rule to compute it?
| Let $\mu=(\mu_1,\dots,\mu_m)$ and $\gamma_i$ be the number of parts equal $i$ in $\mu$. Then
$$\sum_{i=1}^r \gamma_i = m\quad\text{and}\quad\sum_{i=1}^r i\cdot\gamma_i = r.$$
Then $C_{n,\mu}$ equals $\frac{1}{\gamma_1!\cdots \gamma_r!}$ times the number of solutions to
$$(\star)\qquad \mu_1\cdot y_1 + \dots + \mu_m\cdot y_m = n$$
in pairwise distinct positive integers $y_1,\dots, y_m$.
Without the distinctness requirement, the number of solutions to $(\star)$ has generating function:
$$F(\mu,x) = \frac{x^{\mu_1+\dots+\mu_m}}{(1-x^{\mu_1})\cdots (1-x^{\mu_m})}.$$
To enforce the distinctness, one can use inclusion-exclusion.
For any unordered partition $P=(P_1,\dots,P_k)$ of the set $[m]$, we define $\mu_P$ as "contraction" of $\mu$, where parts of $\mu$ with indices from the same part of $P$ are summed up and represent a single element of $\mu_P$.
By inclusion-exclusion, we have $C_{n,\mu}$ equal the coefficient of $x^n$ in
$$\frac{1}{\gamma_1!\cdots \gamma_r!} \sum_{P} (-1)^{m-k}\cdot (|P_1|-1)!\cdots (|P_k|-1)!\cdot F(\mu_P,x).$$
Example. For $\mu=(3,2,2)$, we have $m=3$, $r=3+2+2=7$, $\gamma=(0,2,1,0,0,0,0,0)$. The have the following set partitions of $[3]=\{1,2,3\}$ and the corresponding contracted partitions $\mu_P$:
$P=\{\{1,2,3\}\}\qquad \mu_P=(7)$
$P=\{\{1,2\},\{3\}\}\qquad \mu_P=(5,2)$
$P=\{\{1,3\},\{2\}\}\qquad \mu_P=(5,2)$
$P=\{\{2,3\},\{1\}\}\qquad \mu_P=(4,3)$
$P=\{\{1\},\{2\},\{3\}\}\qquad \mu_P=(3,2,2)$
So, the generating function for $C_{n,(3,2,2)}$ is
$$\frac{1}{2} (2\cdot F((7),x) - F((5,2),x) - F((5,2),x) - F((4,3),x) + F((3,2,2),x))$$
$$={\frac {{x}^{13} \left( 3\,{x}^{6}+6\,{x}^{5}+8\,{x}^{4}+8\,{x}^{3}+6\,{x}^{2}+3\,x+1 \right) (1-x)^2}{ (1-x^3) (1-x^4) (1-x^5) (1-x^7) }}$$
$$=x^{13} + x^{14} + 2\cdot x^{15} + x^{16} + 2\cdot x^{17} + 2\cdot x^{18} + 3\cdot x^{19} + 3\cdot x^{20} + 6\cdot x^{21} + 3\cdot x^{22} + \dots.$$
For example, the coefficient of $x^{21}$ enumerates the following six partitions of 21: $(7,7,2,2,1,1,1)$, $(6,6,3,3,1,1,1)$, $(5,5,4,4,1,1,1)$, $(5,5,3,3,3,1,1)$, $(4,4,3,3,3,2,2)$, and $(5,5,5,2,2,1,1)$.
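The coefficient $6$ of $x^{21}$ (and the single partition of $14$ from the question) can be confirmed by brute force:

```python
from itertools import combinations_with_replacement
from collections import Counter

def count_partitions(n, mu, r):
    """Partitions of n into r parts whose multiplicity profile is mu."""
    target = sorted(mu, reverse=True)
    total = 0
    for parts in combinations_with_replacement(range(1, n + 1), r):
        if sum(parts) == n and sorted(Counter(parts).values(), reverse=True) == target:
            total += 1
    return total

assert count_partitions(21, (3, 2, 2), 7) == 6
assert count_partitions(14, (3, 2, 2), 7) == 1   # only (3,3,2,2,2,1,1)
```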
| {
"language": "en",
"url": "https://mathoverflow.net/questions/203748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find all solution $a,b,c$ with $(1-a^2)(1-b^2)(1-c^2)=8abc$ Two years ago, I made a conjecture on stackexchange:
Today, I tried to find all solutions in integers $a,b,c$ to
$$(1-a^2)(1-b^2)(1-c^2)=8abc,\quad a,b,c\in \mathbb{Q}^{+}.$$
I have found some solutions, such as
$$(a,b,c)=(5/17,1/11,8/9),(1/7,5/16,9/11),(3/4,11/21,1/10),\cdots$$
$$(a,b,c)=\left(\dfrac{4p}{p^2+1},\dfrac{p^2-3}{3p^2-1},\dfrac{(p+1)(p^2-4p+1)}{(p-1)(p^2+4p+1)}\right),\quad\text{for $p>2+\sqrt{3}$ and $p\in\mathbb {Q}^{+}$}.$$
Here is another simple solution:
$$(a,b,c)=\left(\dfrac{p^2-4p+1}{p^2+4p+1},\dfrac{p^2+1}{2p^2-2},\dfrac{3p^2-1}{p^3-3p}\right).$$
My question is: are there solutions of another form (or have we found all solutions)?
| @Allan's method is nice! Here is my answer:
since
$$\left(\dfrac{1-a^2}{2a}\right)\left(\dfrac{1-b^2}{2b}\right)\left(\dfrac{1-c^2}{2c}\right)=1$$
so let
$$\dfrac{x}{y}=\dfrac{1-a^2}{2a},\;\dfrac{y}{z}=\dfrac{1-b^2}{2b},\;\dfrac{z}{x}=\dfrac{1-c^2}{2c}$$
and solving for $a,b,c$,
$$a = \frac{-x+\sqrt{x^2+y^2}}{y},\;\;b = \frac{-y+\sqrt{y^2+z^2}}{z},\;\;
c = \frac{-z+\sqrt{x^2+z^2}}{x}$$
it is easy to see that
$x^2+y^2,y^2+z^2,z^2+x^2$ must all be squares, so we use the Euler brick solution
$$x=u|4v^2-w^2|,y=v|4u^2-w^2|,z=4uvw$$
then it is not hard to obtain the solution
$$(a,b,c)=\left(\dfrac{p^2-4p+1}{p^2+4p+1},\dfrac{p^2+1}{2p^2-2},\dfrac{3p^2-1}{p^3-3p}\right).$$
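Both parametric families (the one above and the earlier one from the question) can be verified symbolically; a SymPy sketch:

```python
import sympy as sp

p = sp.symbols('p')

def satisfies(a, b, c):
    return sp.simplify((1 - a**2)*(1 - b**2)*(1 - c**2) - 8*a*b*c) == 0

assert satisfies(4*p/(p**2 + 1), (p**2 - 3)/(3*p**2 - 1),
                 (p + 1)*(p**2 - 4*p + 1)/((p - 1)*(p**2 + 4*p + 1)))
assert satisfies((p**2 - 4*p + 1)/(p**2 + 4*p + 1), (p**2 + 1)/(2*p**2 - 2),
                 (3*p**2 - 1)/(p**3 - 3*p))
```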
| {
"language": "en",
"url": "https://mathoverflow.net/questions/208485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 0
} |
Tricky two-dimensional recurrence relation I would like to obtain a closed form for the recurrence relation
$$a_{0,0} = 1,~~~~a_{0,m+1} = 0\\a_{n+1,0} = 2 + \frac 1 2 \cdot(a_{n,0} + a_{n,1})\\a_{n+1,m+1} = \frac 1 2 \cdot (a_{n,m} + a_{n,m+2}).$$
Even obtaining a generating function for that seems challenging. Is there a closed form for the recurrence relation or at least for the generating function? Alternatively, is there a closed form for $a_{n,0}$?
| For $n\geqslant0$ let $F_n(t)=\sum_{m\in\mathbb Z}a_{n,m}t^m$, where we are going to define $a_{n,m}$ for negative $m$ in such a way that $a_{n+1,m}=\frac{a_{n,m-1}+a_{n,m+1}}2$ for all $n\geqslant0$ and all $m\in\mathbb Z$ (so that $F_{n+1}(t)=\frac{t+t^{-1}}2F_n(t)$) and moreover the remaining requirements $a_{0,0}=1$, $a_{0,m}=0$ for $m>0$ and $a_{n+1,0}=2+\frac12(a_{n,0}+a_{n,1})$ hold. The latter are equivalent to $a_{n,-1}=4+a_{n,0}$ for all $n$. Eliminating all variables in favor of $a_{0,m}$ with $m<0$ then gives $a_{0,-1}=5$ and $a_{0,-m}=8m-4$ for $m>1$, so that $F_0(t)=\frac{(1+t^{-1})^3}{(1-t^{-1})^2}$. Then
$$
F_n(t)=\left(\frac{t+t^{-1}}2\right)^n\frac{(1+t^{-1})^3}{(1-t^{-1})^2}.
$$
Expanding into powers of $t^{-1}$ gives (for $m\geqslant0$)
$$
a_{n,m}=2^{-n}\left((3-2(-1)^{n+m})\binom n{\lfloor\frac{n-m}2\rfloor}+4\sum_{k=1}^{\lfloor\frac{n-m}2\rfloor}(4k-(-1)^{n+m})\binom n{\lfloor\frac{n-m}2\rfloor-k}\right).
$$
Must be summable, giving in particular the expressions by Per Alexandersson. At any rate, the generating function for $a_{n,0}$ is given by
\begin{multline*}
\sum_{n=0}^\infty a_{n,0}t^n=\frac{1+t}{1-t}\left(\sqrt{\frac{1+t}{1-t}}-1\right)/t\\=(3\cdot1-2)+(9\cdot\frac12-2)t+(11\cdot\frac12-2)t^2+(17\frac{1\cdot3}{2\cdot4}-2)t^3+(19\frac{1\cdot3}{2\cdot4}-2)t^4\\+(25\frac{1\cdot3\cdot5}{2\cdot4\cdot6}-2)t^5+(27\frac{1\cdot3\cdot5}{2\cdot4\cdot6}-2)t^6\\+...+\left((8n+1)\frac{1\cdot3\cdot...\cdot(2n-1)}{2\cdot4\cdot...\cdot2n}-2\right)t^{2n-1}+\left((8n+3)\frac{1\cdot3\cdot...\cdot(2n-1)}{2\cdot4\cdot...\cdot2n}-2\right)t^{2n}+...
\end{multline*}
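A consistency check of this closed form against the original recurrence (SymPy sketch):

```python
import sympy as sp

N = 12
a = {(0, m): (1 if m == 0 else 0) for m in range(N + 2)}
for n in range(N):
    a[(n + 1, 0)] = 2 + sp.Rational(1, 2)*(a[(n, 0)] + a[(n, 1)])
    for m in range(N - n):
        a[(n + 1, m + 1)] = sp.Rational(1, 2)*(a[(n, m)] + a[(n, m + 2)])

t = sp.symbols('t')
G = (1 + t)/(1 - t)*(sp.sqrt((1 + t)/(1 - t)) - 1)/t
s = sp.series(G, t, 0, N + 1).removeO()
assert all(sp.simplify(s.coeff(t, n) - a[(n, 0)]) == 0 for n in range(N + 1))
```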
| {
"language": "en",
"url": "https://mathoverflow.net/questions/235041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
For which $n$ is there a permutation such that the sum of any two adjacent elements is a prime? For which $n$ is it possible to find a permutation of the numbers from $1$ to $n$ such that the sum of any two adjacent elements of the permutation is a prime?
For example: For $n=4$ the permutation $(1,2,3,4)$ has sums $1+2=3$, $2+3=5$, $3+4=7$,and $4+1=5$.
For $n=16$ the permutation $(1,2,3,4,7,6,5,14,15,16,13,10,9,8,11,12)$ has sums of adjacent elements $(3,5,7,11,13,11,19,29,31,29,23,19,17,19,23,13)$.
| Here are constructions with few sums.
The permutation $(12, 1, 10, 3, 8, 5, 6, 7, 4, 9, 2, 11)$ has adjacent sums equal to $11$, $13$, or $23$.
The permutation $(12, 1, 10, 3, 8, 5, 6, 7, 4, 9, 2, 11, 18, 13, 16, 15, 14, 17)$ has adjacent sums equal to $11$, $13$,$29$, or $31$.
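A quick check of these two cyclic arrangements (Python sketch):

```python
from sympy import isprime

def adjacent_sums(perm):
    return {perm[i] + perm[(i + 1) % len(perm)] for i in range(len(perm))}

p1 = (12, 1, 10, 3, 8, 5, 6, 7, 4, 9, 2, 11)
p2 = (12, 1, 10, 3, 8, 5, 6, 7, 4, 9, 2, 11, 18, 13, 16, 15, 14, 17)

assert sorted(p1) == list(range(1, 13)) and sorted(p2) == list(range(1, 19))
assert adjacent_sums(p1) == {11, 13, 23}
assert adjacent_sums(p2) == {11, 13, 29, 31}
assert all(isprime(s) for s in adjacent_sums(p1) | adjacent_sums(p2))
```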
If you have two pairs of twin primes $p$, $p+2$, $q$, $q+2$ with $q \gt 2p$ then the permutation of $q-p$
$$(p+1, 1, p-1, 3, ... p-2, 2, p, \\ q-p, p+2, q-p-2, p+4, ... p+5, q-p-3, p+3, q-p-1 )$$
has sums in $\{p,p+2,q,q+2\}$.
Heuristically, all but finitely many even numbers $n$ [Edit: that are divisible by $6$, as Aaron Meyerowitz and Mario Carneiro pointed out] should be expressible as a difference between a twin prime $p$ and a twin prime $q$ so that $q \gt 2p$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/241569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 5,
"answer_id": 2
} |
Determinant of a specific $4 \times 4$ symmetric matrix In a recent research work, I have come across the following nice identity, where the entries $a,b,x$ belong to an arbitrary commutative unital ring:
$$\begin{vmatrix}
2 & a & b & ab-x \\
a & 2 & x & b \\
b & x & 2 & a \\
ab-x & b & a & 2
\end{vmatrix}=(x^2-abx+a^2+b^2-4)^2.$$
Note that if the ring has characteristic $2$ then the formula is an obvious application of the Pfaffian.
The only way I have been able to check this identity is through a tedious computation (of course, running any formal computing software will do).
My question: Is there any elegant way to prove it?
| Here's a method for calculating the determinant, explaining at least why it ends up as a product. I don't know if there's any significance to your determinant being a square.
Define
$$H=
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 & 0 & 0 \\
1 & -1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 1 & -1 \\
\end{pmatrix}.
$$
(The tensor product of a one-dimensional Hadamard matrix with a two-by-two identity matrix.)
Then $\det H=1$ and for any $a,b,c,d,e,f,g,h$,
$$
H
\begin{pmatrix}
a & b & c & d \\
b & a & d & c \\
e & f & g & h \\
f & e & h & g
\end{pmatrix}
H\\
=\begin{pmatrix}
a+b & 0 & c+d & 0 \\
0 & a-b & 0 & c-d \\
e+f & 0 & g+h & 0 \\
0 & e-f & 0 & g-h
\end{pmatrix}$$
which is (similar to)
$$\begin{pmatrix}
a+b & c+d \\
e+f & g+h \\
\end{pmatrix}
\oplus
\begin{pmatrix}
a-b & c-d \\
e-f & g-h \\
\end{pmatrix}.
$$
Plugging in a rotated version of your matrix gives
$$\begin{vmatrix}
2 & x & b & a \\
x & 2 & a & b \\
b & a & 2 & ab-x \\
a & b & ab-x & 2
\end{vmatrix}
\\=
\begin{vmatrix}
2+x & a+b \\
a+b & 2+ab-x \\
\end{vmatrix}
\cdot
\begin{vmatrix}
2-x & b-a \\
b-a & 2-ab+x \\
\end{vmatrix}
\\
=(4-x^2+abx-a^2-b^2)(4-x^2+abx-a^2-b^2).
$$
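The whole computation is easy to confirm directly (SymPy sketch):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')
C = sp.Matrix([[2, a, b, a*b - x],
               [a, 2, x, b],
               [b, x, 2, a],
               [a*b - x, b, a, 2]])
R = sp.Matrix([[2, x, b, a],               # the "rotated" matrix used above
               [x, 2, a, b],
               [b, a, 2, a*b - x],
               [a, b, a*b - x, 2]])
assert sp.expand(C.det() - R.det()) == 0
assert sp.expand(C.det() - (x**2 - a*b*x + a**2 + b**2 - 4)**2) == 0
```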
| {
"language": "en",
"url": "https://mathoverflow.net/questions/247898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Upper triangular $2\times2$-matrices over a Baer *-ring Let $A$ be a Baer $*$-ring. Let us denote $B$ by the space of all upper triangular matrices
$\left(\begin{array}{cc}
a_1& a_2 \\
0 & a_4
\end{array}\right)$ where $a_i$'s are in $A$. Is $B$ a Baer *-ring too?
As for the involution on $B$, I mean any involution under which the mapping $$a\to \left(\begin{array}{cc}
a& 0 \\
0 & 0
\end{array}\right)$$ forms an embedding.
| There exist two involutions on the ring $B$, defined as follows:
$\begin{pmatrix}
a&b\\
0&c
\end{pmatrix}^{\ast_1}= \begin{pmatrix}
c^{\ast}&b^{\ast}\\
0&a^{\ast}
\end{pmatrix},$
and
$\begin{pmatrix}
a&b\\
0&c
\end{pmatrix}^{\ast_2}= \begin{pmatrix}
c^{\ast}&-b^{\ast}\\
0&a^{\ast}
\end{pmatrix}$.
An involution is proper if $aa^{\ast}=0$ implies $a=0$. If the $\ast$-ring $R$ is a Baer $\ast$-ring, then $\ast$ is a proper involution.
The involutions $\ast_1$ and $\ast_2$ are not proper, because
$\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}^{\ast_1}=\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}^{\ast_2}=\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}\begin{pmatrix}
0&0\\
0&1
\end{pmatrix}=0$ and $\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}\not=0$.
Hence $B=T_2(A)$ is not a Baer $\ast$-ring.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/307440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |