Math Challenge - September 2021


Discussion Overview

This thread presents a series of mathematical challenges and problems across various topics including the gamma function, combinatorics, stochastic processes, semisimple modules, topological groups, metric spaces, and inequalities. Participants engage in solving these problems, providing proofs and exploring different mathematical concepts.

Discussion Character

  • Homework-related
  • Mathematical reasoning
  • Technical explanation
  • Exploratory

Main Points Raised

  • Some participants propose that the gamma function can be defined through properties of a function ##f## that is positive and satisfies certain conditions, leading to the conclusion that ##f(x)=\Gamma(x)## for all ##x>0##.
  • Others discuss a sequence of real numbers and establish bounds on the size of subsets defined by distances between elements, suggesting that for any integer ##r>1##, the size ##|T_r|## is less than a linear function of ##|T_1|##.
  • A challenge is raised regarding the probability of the difference between two independent random variables being bounded, with participants providing inequalities relating probabilities for different bounds.
  • Participants explore the properties of finite groups and fields, discussing conditions under which the group algebra ##\mathbb{F}G## is semisimple.
  • There is a discussion on the continuity of operations in topological groups, with proofs provided for various properties of such groups.
  • Some participants present results concerning the density of certain sets in complete metric spaces, emphasizing the relationship between sequences of functions and their images.
  • Several inequalities are proposed and proved, including bounds related to random variables and their expectations.
  • Participants engage in combinatorial problems involving prime factorization and the distribution of primes, with various claims about the order of primes in factorials being discussed.
  • There are discussions about the properties of Haar spaces in the context of continuous functions and their zeros, with conditions specified for simple and double zeros.
  • Lastly, a problem involving closed sets under multiplication is presented, with participants tasked to show that at least one of two disjoint subsets must be closed under multiplication.

Areas of Agreement / Disagreement

Participants generally present individual solutions and proofs without reaching a consensus on all points. Multiple competing views and approaches exist, particularly in the context of combinatorial and probabilistic problems.

Contextual Notes

Some claims depend on specific definitions and assumptions that are not universally agreed upon. The proofs and results presented may have limitations based on the conditions specified in each problem.

Who May Find This Useful

This discussion may be of interest to students and enthusiasts of mathematics, particularly those engaged in problem-solving and exploring advanced topics in combinatorics, probability, and algebra.

  • #91
MathematicalPhysicist said:
Is there a prize for the highest grades?
Well, we would have to ask @Greg Bernhardt. Maybe we could create a badge for it.
MathematicalPhysicist said:
Otherwise I wouldn't care if I got graded or not.
I have written a TeX-file and created a pdf listing all of my problems and solutions up to the end of the year. It has 530 pages. Additionally, we have had many problems from other members. This means: the counting has to be done by somebody else! I suggest an Excel sheet. :cool:
 
  • #92
fresh_42 said:
15. For a partition ##\pi## of ##N := \{1, 2, ..., 9\}##, let ##\pi(x)## be the number of elements in the part containing ##x##. Prove that for any two partitions ##\pi_1## and ##\pi_2##, there are two distinct numbers ##x## and ##y## in ##N## such that ##\pi_j(x) = \pi_j(y)## for ##j=1,2##.

Claim 1: Suppose a partition ##\pi_1## of ##N = \{1,2,...,9\}## has 4 distinct numbers, say ##a_1, a_2, a_3, a_4##, such that ##\pi_1(a_1) = \pi_1(a_2) = \pi_1(a_3) = \pi_1(a_4)##. Then there cannot be another partition ##\pi_2## of ##N## such that ##\pi_2(x) \neq \pi_2(y)## for all distinct ##x, y \in N## with ##\pi_1(x) = \pi_1(y)##. In other words, for any other partition ##\pi_2## of ##N##, we can find distinct ##x, y \in N## such that ##\pi_j(x) = \pi_j(y)## for ##j=1,2##.

Proof by contradiction: Suppose there exists a partition ##\pi_2## of the form claimed to not exist above. Then we must have ##\pi_2(a_i) \neq \pi_2(a_j)## for all ##i, j \in \{1,2,3,4\}## with ##i \neq j##. Hence ##\pi_2(a_1), \pi_2(a_2), \pi_2(a_3), \pi_2(a_4)## must all be distinct, i.e. each of ##a_1, a_2, a_3, a_4## must belong to a different part of ##\pi_2##, and these 4 parts must have pairwise distinct cardinalities. Thus the parts of ##\pi_2## must together contain at least ##1+2+3+4 = 10## elements, which contradicts the fact that ##\pi_2## is a partition of ##N## and hence covers exactly 9 elements. This proves that any other partition ##\pi_2## of ##N## must be such that there exist at least 2 distinct numbers ##x, y \in \{a_1, a_2, a_3, a_4\}## with ##\pi_2(x) = \pi_2(y)##. And since, by the definition of the ##a_i##'s, ##\pi_1(x) = \pi_1(y)##, we have found distinct ##x, y## such that ##\pi_j(x) = \pi_j(y)## for ##j=1,2##.


Now consider the following cases for ##\pi_1##.

Case 1: ##\pi_1## has at least one part with a cardinality of 4 or more (i.e. a part that is a subset of ##N## with at least 4 elements), or has at least two parts of the same cardinality where that common cardinality is 2 or more.

In this case, it is easy to see that there are at least 4 distinct numbers in ##N##, say ##a_1, a_2, a_3, a_4##, such that ##\pi_1(a_1) = \pi_1(a_2) = \pi_1(a_3) = \pi_1(a_4)##. If some part has 4 or more elements, we simply take any 4 numbers from that part. Otherwise, there are two distinct parts of the same cardinality ##c \geq 2##; taking 2 numbers from each of these parts gives 4 numbers ##x## with ##\pi_1(x) = c##. Either way, this ##\pi_1## satisfies the condition of claim 1.

Case 2: All parts in ##\pi_1## have a cardinality of 3 or less, and no two parts of cardinality 2 or more have the same size, i.e. ##\pi_1## has at most one part of size 2 and at most one part of size 3.

In this case, since ##\pi_1## has at most one part of size 2, at most one part of size 3, and no larger parts, the parts of size greater than 1 contain at most ##2+3=5## elements in total, so the remaining at least ##9-5=4## numbers must be covered by single-element parts. Hence ##\pi_1## has at least 4 singleton parts, and the 4 numbers in them all satisfy ##\pi_1(x)=1##. Here again, ##\pi_1## clearly satisfies the condition of claim 1.

Cases (1) and (2) cover all possible ways to form a partition ##\pi_1## of ##N##: if neither condition of Case 1 holds, then every part has at most 3 elements and no two parts of size 2 or more share a cardinality, which is exactly Case 2. Since in both cases ##\pi_1## meets the condition stated in claim 1, it follows that for any other partition ##\pi_2## of ##N## we can find distinct ##x, y \in N## such that ##\pi_j(x) = \pi_j(y)## for ##j=1,2##. Hence proved.
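
A quick numerical sanity check of the statement (just a minimal sketch; the random-partition generator and the helper names are ad hoc choices, not part of the solution):

[CODE=python]
import random

# Spot-check problem 15 on random pairs of partitions of N = {1,...,9}.
N = list(range(1, 10))

def random_partition(elements):
    # Assign each element to one of len(elements) random bins; the nonempty bins form the parts.
    # (Not uniform over all set partitions, but good enough for a spot check.)
    bins = {}
    for x in elements:
        bins.setdefault(random.randrange(len(elements)), []).append(x)
    return list(bins.values())

def block_sizes(partition):
    # Map each element x to pi(x), the size of the part containing x.
    return {x: len(part) for part in partition for x in part}

for _ in range(10_000):
    p1 = block_sizes(random_partition(N))
    p2 = block_sizes(random_partition(N))
    found = any(p1[x] == p1[y] and p2[x] == p2[y]
                for i, x in enumerate(N) for y in N[i + 1:])
    assert found, (p1, p2)

print("No counterexample found among 10,000 random pairs of partitions.")
[/CODE]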
 
  • #93
The well-known Maschke's theorem for Problem 4 :)
 
  • #94
nuuskur said:
The well-known Maschke's theorem for Problem 4 :)
But Maschke is easy enough to prove, fulfills the requirement to learn something, and, last but not least, doesn't provide a counterexample for ##\operatorname{char}\mathbb{F}\,\mid\,|G|.##
 
  • #95
IIRC, the result is that the group algebra is semisimple if and only if ##\operatorname{char}\mathbb{F} \nmid |G|##.
 
  • #96
nuuskur said:
IIRC, the result is that the group algebra is semisimple if and only if ##\operatorname{char}\mathbb{F} \nmid |G|##.
Sure. Otherwise, there wouldn't be a counterexample.
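
For reference, the standard counterexample in the case ##\operatorname{char}\mathbb{F}\,\mid\,|G|##: take ##\mathbb{F}=\mathbb{F}_2## and ##G=C_2=\{e,g\}##. In ##\mathbb{F}G## we get
$$
(e+g)^2 = e^2 + eg + ge + g^2 = 2(e+g) = 0,
$$
so ##\mathbb{F}_2\,(e+g)## is a nonzero nilpotent ideal and ##\mathbb{F}G## is not semisimple.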
 
  • #97
It's funny: number 3 is just a continuous version of number 2 (if your random variables take discrete values with probabilities that are in integer ratios to each other, then #3 turns into #2, I think), but somehow #3 seemed obviously true to me and #2 seemed like a crazy result when I first read it. Anyway, here's a proof for #3.

Edit: ahh my dog posted this when I started typing. I'll remove this when I finish writing the proof.

It suffices to show, for each integer ##k>1##, that ##P(k-1 < |X-Y| \leq k) \leq 2P(|X-Y| \leq 1)##, since then
$$P(|X-Y| \leq r) = P(| X-Y| \leq 1) + \sum_{k=2}^r P(k-1 < |X-Y| \leq k) \leq P(| X-Y| \leq 1) + 2(r-1)P(| X-Y| \leq 1).$$
 
  • #98
Office_Shredder said:
It's funny: number 3 is just a continuous version of number 2 (if your random variables take discrete values with probabilities that are in integer ratios to each other, then #3 turns into #2, I think), but somehow #3 seemed obviously true to me and #2 seemed like a crazy result when I first read it. Anyway, here's a proof for #3.

Edit: ahh my dog posted this when I started typing. I'll remove this when I finish writing the proof.

It suffices to show, for each integer ##k>1##, that ##P(k-1 < |X-Y| \leq k) \leq 2P(|X-Y| \leq 1)##, since then
$$P(|X-Y| \leq r) = P(| X-Y| \leq 1) + \sum_{k=2}^r P(k-1 < |X-Y| \leq k) \leq P(| X-Y| \leq 1) + 2(r-1)P(| X-Y| \leq 1).$$
Yes, and it's a bit short. I'll post my solution, which is a bit more detailed.

Fix an integer ##m,## and let ##S:=(x_1,\ldots,x_m)## be a random sequence of ##m## elements, where each ##x_i## is chosen, randomly and independently, according to the distribution of ##X.## By the previous statement (problem #2)
$$
|S_r| <(2r-1)|S_1|.
$$
Here ##S_b## denotes the set of ordered pairs ##(i,j)## with ##|x_i-x_j|\leq b## (the analogue of problem #2 for the sequence ##S##), and we write ##p_b:=\operatorname{prob}(|X-Y|\leq b).## Therefore, the expectation of ##|S_r|## is smaller than that of ##(2r-1)|S_1|.## On the other hand, by the linearity of expectation, the expectation of ##|S_b|## is precisely ##m+m(m-1)p_b## for every ##b>0##: there are ##m## diagonal pairs ##(i,i)##, and each of the ##m(m-1)## ordered pairs with ##i\neq j## lies in ##S_b## with probability ##p_b.## Thus
$$
m+m(m-1)p_r< (2r-1)\left(m+m(m-1)p_1\right),
$$
implying that for every integer ##m,##
$$
p_r< (2r-1)p_1+\dfrac{2r-2}{m-1} \stackrel{m\to \infty }{\longrightarrow } (2r-1)p_1
$$

In the limit, this gives ##p_r\leq (2r-1)p_1##. It can even be proven that the strict inequality
$$\operatorname{prob}\left(|X-Y|\leq 2\right) < 3\cdot \operatorname{prob}\left(|X-Y|\leq 1\right)$$
holds; this result is known as the 1-2-3 theorem.
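
For those who like to see numbers, here is a minimal Monte Carlo sketch of ##p_r<(2r-1)p_1##; the exponential distribution, its rate, and the sample size are arbitrary choices made purely for illustration:

[CODE=python]
import random

def estimate_p(b, samples=200_000, rate=0.3):
    # Estimate p_b = P(|X - Y| <= b) for X, Y i.i.d. exponential(rate) by sampling.
    hits = 0
    for _ in range(samples):
        x, y = random.expovariate(rate), random.expovariate(rate)
        hits += abs(x - y) <= b
    return hits / samples

p1 = estimate_p(1)
for r in (2, 3, 5):
    pr = estimate_p(r)
    print(f"r = {r}:  p_r ≈ {pr:.4f},  (2r-1)·p_1 ≈ {(2 * r - 1) * p1:.4f}")
[/CODE]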
 
  • #99
Problem #8:

We have

\begin{align*}
P \left( \frac{1}{n} \sum_{k=1}^n X_k \geq a \right) & = P \left( \sum_{k=1}^n X_k \geq na \right)
\nonumber \\
& = P \left( e^{t \sum_{k=1}^n X_k} \geq e^{tna} \right) \qquad (\text{for } t > 0)
\nonumber \\
& \leq e^{- tna} E (e^{t \sum_{k=1}^n X_k}) \qquad \qquad (\text{Chernoff bound})
\nonumber \\
& = e^{- tna} \prod_{k=1}^n E (e^{t X_k}) \qquad (\text{independent random variables}) \quad (*)
\end{align*}

Assume ##E(X_k) = 0##, ##E(X_k^2) = \sigma_k^2 \leq \sigma^2##, and ##| X_k | \leq B##. For ##i > 2## we have ##E (X_k^i) = E (X_k^{i-2} X_k^2) \leq B^{i-2} \sigma_k^2##, so that

\begin{align*}
R_k & = \sum_{i=2}^\infty \frac{t^{i-2} E(X_k^i)}{i! \sigma_k^2} \leq \sum_{i=2}^\infty \frac{t^{i-2} B^{i-2} \sigma_k^2}{i! \sigma_k^2}
\nonumber \\
& = \frac{1}{(t B)^2} \sum_{i=2}^\infty \frac{t^i B^i}{i!}
\nonumber \\
& = \frac{e^{tB} - 1 - tB}{(t B)^2}
\end{align*}

Then

\begin{align*}
E (e^{t X_k}) & = E \left( 1 + t X_k + \sum_{i=2}^\infty \frac{t^i X_k^i}{i!} \right)
\nonumber \\
& = 1 + t^2 \sigma_k^2 R_k
\nonumber \\
& \leq e^{t^2 \sigma_k^2 R_k}
\nonumber \\
& = \exp \left\{ t^2 \sigma_k^2 \frac{e^{tB} - 1 - tB}{(t B)^2} \right\}
\end{align*}

Using this in ##(*)##

\begin{align*}
P \left( \frac{1}{n} \sum_{k=1}^n X_k \geq a \right) & \leq e^{- tna} \prod_{k=1}^n \exp \left\{ t^2 \sigma_k^2 \frac{e^{tB} - 1 - tB}{(t B)^2} \right\}
\nonumber \\
& = e^{- tna} \exp \left\{ n t^2 \tilde{\sigma}^2 \frac{e^{tB} - 1 - tB}{(tB)^2}
\right\} \quad (**)
\end{align*}

(where we have defined ##\tilde{\sigma}^2 = \frac{1}{n} \sum_{k=1}^n \sigma_k^2 \leq \sigma^2##). We minimise the RHS of ##(**)## with respect to ##t##:

\begin{align*}
& \frac{d}{dt} \exp \left\{ n t^2 \tilde{\sigma}^2 \frac{e^{tB} - 1 - tB}{(tB)^2} - tna \right\}
\nonumber \\
& = \exp \left\{ n t^2 \tilde{\sigma}^2 \frac{e^{tB} - 1 - tB}{(tB)^2} - tna \right\} n \left( \tilde{\sigma}^2 \left( \frac{B e^{tB} - B}{B^2} \right) - a \right) = 0
\end{align*}

We have a tighter bound when ##t = \frac{1}{B} \log \left( \frac{aB}{\tilde{\sigma}^2} + 1 \right)##. Using this in ##(**)##:

\begin{align*}
P \left( \frac{1}{n} \sum_{k=1}^n X_k \geq a \right) & \leq \exp \left\{ n \tilde{\sigma}^2 \frac{ (\frac{aB}{\tilde{\sigma}^2} + 1) - 1 - \log \left( \frac{aB}{\tilde{\sigma}^2} + 1 \right)}{B^2} - \frac{na}{B} \log \left( \frac{aB}{\tilde{\sigma}^2} + 1 \right) \right\}
\nonumber \\
& = \exp \left\{ \frac{n \tilde{\sigma}^2}{B^2} \left[ \frac{aB}{\tilde{\sigma}^2} - \log \left( \frac{aB}{\tilde{\sigma}^2} + 1 \right) - \frac{aB}{\tilde{\sigma}^2} \log \left( \frac{aB}{\tilde{\sigma}^2} + 1 \right) \right] \right\}
\nonumber \\
& = \exp \left\{ - \frac{n \tilde{\sigma}^2}{B^2} f \left( \frac{aB}{\tilde{\sigma}^2} \right) \right\}
\end{align*}

where ##f(u) = (1+ u) \log (1+ u) - u \geq 0## for ##u \geq 0## (as ##f(0) = 0## and ##f'(u) = \log (1 + u) \geq 0##). We can use the inequality ##u - (1+u) \log (1+ u) \leq - \dfrac{3u^2}{2(u + 3)}## (##u > -1##), corresponding to problem #7, to obtain

\begin{align*}
P \left( \frac{1}{n} \sum_{k=1}^n X_k \geq a \right) & \leq \exp \left\{ - \frac{a^2}{\dfrac{2 \tilde{\sigma}^2}{n} + \dfrac{2Ba}{3n}}
\right\}
\nonumber \\
& \leq \exp \left\{ - \frac{a^2}{\dfrac{2 \sigma^2}{n} + \dfrac{2Ba}{3n}} \right\} \qquad (\sigma^2 \geq \tilde{\sigma}^2)
\end{align*}

Put

\begin{align*}
a = \sqrt{ \frac{2 \sigma^2 T}{n}} + \frac{2BT}{3n}
\end{align*}

then

\begin{align*}
a^2 = \frac{2 \sigma^2 T}{n} + 2 \sqrt{ \frac{2 \sigma^2}{n}} \frac{2B}{3n} T^{3/2} + \left( \frac{2B}{3n} \right)^2 T^2
\end{align*}

and

\begin{align*}
\frac{2 \sigma^2}{n} + \frac{2B}{3n} a = \frac{2 \sigma^2}{n} + \sqrt{ \frac{2 \sigma^2}{n}} \frac{2B}{3n} \sqrt{T} + \left( \frac{2B}{3n} \right)^2 T
\end{align*}

so that

\begin{align*}
- \frac{a^2}{\frac{2 \sigma^2}{n} + \frac{2B}{3n} a} \leq - \frac{a^2}{\frac{2 \sigma^2}{n} + \sqrt{ \frac{2 \sigma^2}{n}} \frac{2B}{3n} \sqrt{T} + \frac{2B}{3n} a} = - T
\end{align*}

So finally

\begin{align*}
P \left( \frac{1}{n} \sum_{k=1}^n X_k \geq \sqrt{ \frac{2 \sigma^2 T}{n}} + \frac{2BT}{3n} \right) \leq e^{-T} .
\end{align*}
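
As a numerical sanity check of the final bound, here is a minimal simulation sketch; the choice of ##X_k## uniform on ##[-1,1]## (so ##B=1##, ##\sigma^2=1/3##) as well as the values of ##n##, ##T## and the number of trials are arbitrary:

[CODE=python]
import math
import random

# Compare the empirical tail P( (1/n) * sum X_k >= sqrt(2 sigma^2 T / n) + 2BT/(3n) )
# with the bound e^{-T}, for X_k i.i.d. uniform on [-1, 1].
n, T, trials = 200, 2.0, 20_000
B, sigma2 = 1.0, 1.0 / 3.0
threshold = math.sqrt(2 * sigma2 * T / n) + 2 * B * T / (3 * n)

exceed = 0
for _ in range(trials):
    sample_mean = sum(random.uniform(-1.0, 1.0) for _ in range(n)) / n
    exceed += sample_mean >= threshold

print(f"empirical tail ≈ {exceed / trials:.5f},  bound e^(-T) = {math.exp(-T):.5f}")
[/CODE]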

 
  • #100
julian said:
Problem #8:

Very good!

I knew the Chernoff bound as the Markov inequality ...

The Markov inequality says that for a monotone increasing function ##f: \mathbb{R}\longrightarrow [0,\infty )## and a constant ##a\in \mathbb{R}##
$$
f(a)\cdot P(X\geq a) \leq E(f(X))
$$
and in particular, for ##f(a)>0##,
$$
P(X\geq a) \leq \dfrac{E(f(X))}{f(a)}.
$$

... but I also thought nobody would solve this one. The inequality stated in problem #8 is known as the Bernstein inequality.
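
A quick numerical illustration of the Markov inequality above, as a minimal sketch; the exponential ##X##, the choice ##f(x)=e^{tx}##, and the values of ##t## and ##a## are arbitrary examples:

[CODE=python]
import math
import random

# Check f(a) * P(X >= a) <= E[f(X)] for a monotone increasing f >= 0.
# With f(x) = e^{t x} this is exactly the Chernoff-style step used in the solution above.
t, a, samples = 0.5, 3.0, 200_000

def f(x):
    return math.exp(t * x)

xs = [random.expovariate(1.0) for _ in range(samples)]
lhs = f(a) * sum(x >= a for x in xs) / samples   # f(a) * P(X >= a)
rhs = sum(f(x) for x in xs) / samples            # E[f(X)]
print(f"f(a)·P(X ≥ a) ≈ {lhs:.4f}  ≤  E[f(X)] ≈ {rhs:.4f}")
[/CODE]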
 
  • #101
It helped that I solved problem #7 first. I thought problem #7 was a bit easy, but now I realize it was setting you up to solve problem #8.
 
