Abstract:
This talk presents a new method of clustering in high dimension which is stable against outliers and does not require knowledge of the number of clusters or their shape. The clustering structure of the data is described by an $n \times n$ matrix of weights whose elements are iteratively updated. The parameters of the method are calibrated by ‘propagation’ conditions, and the results describe a separation distance between clusters which ensures cluster identification with high probability. The numerical complexity of the method is of order $d \times n^2$, so it remains feasible even in high dimension. The performance of the method is illustrated by a simulation study and a practical example. [Joint work with Elmar Diederichs (MIPT, Moscow)]
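Since the abstract only describes the procedure at a high level, the Python sketch below is a minimal, hypothetical illustration of clustering via an iteratively updated $n \times n$ weight matrix over a growing sequence of radii. The radius schedule, the Jaccard-style neighbourhood-overlap score, and the threshold `lam` (standing in for the calibration by ‘propagation’ conditions) are illustrative assumptions, not the authors' actual update rule.

```python
# Minimal sketch of iterative weight-matrix clustering (illustrative, not the authors' method).
import numpy as np

def weight_matrix_clustering(X, n_steps=10, lam=0.25):
    """Iteratively update an n x n matrix W of cluster-membership weights.

    X : (n, d) array of data points.
    Returns a symmetric W where W[i, j] = 1 suggests i and j share a cluster.
    """
    n = X.shape[0]
    # Pairwise distances: this part costs O(d * n^2), as in the abstract's complexity claim.
    # (The naive matrix product below is O(n^3) and is used here only for readability.)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Growing sequence of neighbourhood radii (an assumed schedule).
    radii = np.quantile(D[D > 0], np.linspace(0.05, 0.5, n_steps))
    # Start from small local neighbourhoods.
    W = (D <= radii[0]).astype(float)

    for h in radii[1:]:
        candidate = D <= h
        # Overlap of the current weighted neighbourhoods of i and j, relative to
        # their union: a crude "no gap between the neighbourhoods" score.
        overlap = W @ W.T
        sizes = W.sum(axis=1)
        union = sizes[:, None] + sizes[None, :] - overlap
        score = overlap / np.maximum(union, 1.0)
        # Accept the link i~j at this scale only if the overlap score exceeds lam;
        # previously accepted links are kept.
        W = np.maximum(W, (candidate & (score >= lam)).astype(float))
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two well-separated Gaussian clusters in d = 5 dimensions.
    X = np.vstack([rng.normal(0.0, 0.3, (50, 5)), rng.normal(3.0, 0.3, (50, 5))])
    W = weight_matrix_clustering(X)
    print("within-cluster mean weight: ", W[:50, :50].mean())
    print("between-cluster mean weight:", W[:50, 50:].mean())
```

In this toy run the within-cluster weights grow towards one over the iterations while the between-cluster weights remain zero, because points from different clusters never share weighted neighbours; the actual calibration of the threshold and the theoretical separation guarantees are the subject of the talk.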