Teoriya Veroyatnostei i ee Primeneniya, 1967, Volume 12, Issue 4, Pages 619–633
(Mi tvp750)
On estimation of an unknown mean of a multivariate normal distribution
N. N. Chentsov (Moscow)
Abstract:
The problem of asymptotically best point estimation is discussed for the simple example named in the title of the paper. For the family $(1)$ of normal distributions, the natural invariant loss functions are considered, and the corresponding functionals of the risk function, describing the quality (the uncertainty) of decision rules, are introduced.
Theorem 2. {\it Let $\Pi=\Pi_N$ be any decision rule that uses $N$ independent observations to construct an estimator $\alpha$ of the unknown parameter $\mathbf a$ of a distribution $\Phi_\mathbf a\in\mathfrak N$ $($see $(1))$. Let $R_\Pi(\,\cdot\,)$ be the corresponding risk with respect to the Gauss loss function $L(\alpha,\mathbf a)=\|\alpha-\mathbf a\|^2=(\alpha_1-\mathbf a_1)^2+\dots+(\alpha_s-\mathbf a_s)^2$. Let the uncertainty of decision rules be measured by a monotone functional $Q[R_\Pi(\,\cdot\,)]=Q(\Pi)$ of the risk function which is a$)$ convex, b$)$ invariant under Euclidean motions of the parameter space, and c$)$ calibrated by $(6)$. Then $Q(\Pi_N)\ge s/N$.}
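As a minimal worked check that this bound is attained (assuming, as the form $s/N$ and the Euclidean invariance suggest, that the family $(1)$ consists of the distributions $N(\mathbf a,I_s)$ on $\mathbf R^s$; $(1)$ itself is not reproduced in the abstract), consider the sample mean of the $N$ observations:
$$
\bar{\mathbf X}=\frac1N\sum_{n=1}^N\mathbf X_n\sim N\Bigl(\mathbf a,\tfrac1N I_s\Bigr),\qquad
R_{\bar{\mathbf X}}(\mathbf a)=\mathbf E\|\bar{\mathbf X}-\mathbf a\|^2=s\cdot\frac1N=\frac sN\quad\text{for every }\mathbf a,
$$
so the constant risk function $s/N$ shows that the lower bound of Theorem 2 cannot be improved.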
Theorem 3. Let $\Pi=\Pi_N$ be any decision rule for estimating the unknown parameter $\mathbf a$ of $\Phi_\mathbf a\in\mathfrak N$, and let $R_\Pi(\mathbf a)$ be the risk function with respect to $L(\alpha,\mathbf a)=\|\alpha-\mathbf a\|^2$. If, in addition, there is a Riemann-integrable statistical weight function $p(\mathbf a)$ of the a priori possible values of $\mathbf a$, then
$$
Q_p(\Pi)=\int\dots\int R_\Pi(\mathbf a)p(\mathbf a)\,da_1\dots da_s\ge(1-\rho_p(N))s/N
$$
where the correction term $\rho_p(N)=o(1)$ depends only on the density $p(\mathbf a)$.
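For illustration only (a hypothetical choice of weight, again under the assumption $\Phi_{\mathbf a}=N(\mathbf a,I_s)$), let $p(\mathbf a)$ be the density of $N(\mathbf 0,\tau^2 I_s)$. The Bayes rule is the posterior mean, and a direct computation gives
$$
\inf_\Pi Q_p(\Pi)=\frac{s\,\tau^2}{N\tau^2+1}=\Bigl(1-\frac1{N\tau^2+1}\Bigr)\frac sN,
$$
so in this example the correction term equals $\rho_p(N)=(N\tau^2+1)^{-1}=o(1)$, in agreement with the bound of Theorem 3.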
Analogous propositions for other formulations of the problem are also considered. A sufficiently general law is formulated: $\lim\limits_NN\cdot\inf\limits_\Pi Q(\Pi_N)=s$, where $s$ is the dimension of the a priori information; the limits of its validity are discussed. The proofs are based on the inequality of Theorem 1 for the mean cubic values of the risk function $R(\mathbf a)$, whose statement is an integral consequence of the information inequality. The paper adjoins [6].
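The role of the information inequality may be illustrated (without reproducing Theorem 1 itself) by the classical Cramér–Rao bound for the same family: the Fisher information matrix of $N$ observations from $N(\mathbf a,I_s)$ is $NI_s$, so every unbiased rule satisfies
$$
R_\Pi(\mathbf a)=\mathbf E\|\alpha-\mathbf a\|^2\ge\operatorname{tr}\bigl((NI_s)^{-1}\bigr)=\frac sN\quad\text{for all }\mathbf a,
$$
while Theorems 2 and 3 yield the asymptotic lower bound $s/N$ without the unbiasedness restriction.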
Received: 27.02.1967
Citation:
N. N. Chentsov, “On estimation of an unknown mean of a multivariate normal distribution”, Teor. Veroyatnost. i Primenen., 12:4 (1967), 619–633; Theory Probab. Appl., 12:4 (1967), 560–574
Linking options:
https://www.mathnet.ru/eng/tvp750
https://www.mathnet.ru/eng/tvp/v12/i4/p619