This is a wrapper around the Python class sklearn.cluster.KMeans.
Super classes
rgudhi::PythonClass -> rgudhi::SKLearnClass -> rgudhi::BaseClustering -> KMeans
Methods
Method new()
The KMeans class constructor.
Arguments
n_clusters: An integer value specifying the number of clusters to form as well as the number of centroids to generate. Defaults to 2L.

init: Either a string or a numeric matrix of shape \(\mathrm{n_{clusters}} \times \mathrm{n_{features}}\) specifying the method for initialization. If a string, choices are:

- "k-means++": selects initial cluster centroids using sampling based on an empirical probability distribution of the points' contribution to the overall inertia. This technique speeds up convergence and is theoretically proven to be \(\mathcal{O}(\log(k))\)-optimal. See the description of n_init for more details.
- "random": chooses n_clusters observations (rows) at random from the data for the initial centroids.

Defaults to "k-means++".

n_init: An integer value specifying the number of times the k-means algorithm will be run with different centroid seeds. The final result will be the best output of n_init consecutive runs in terms of inertia. Defaults to 10L.

max_iter: An integer value specifying the maximum number of iterations of the k-means algorithm for a single run. Defaults to 300L.

tol: A numeric value specifying the relative tolerance, with regard to the Frobenius norm of the difference in the cluster centers of two consecutive iterations, used to declare convergence. Defaults to 1e-4.

verbose: An integer value specifying the level of verbosity. Defaults to 0L, which disables verbose output.

random_state: An integer value specifying the initial seed of the random number generator. Defaults to NULL, which uses the current timestamp.

copy_x: A boolean value specifying whether to leave the original data unmodified. When pre-computing distances, it is more numerically accurate to center the data first. If copy_x is TRUE, the original data is not modified. If copy_x is FALSE, the original data is modified and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding back the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy_x is FALSE. If the original data is sparse but not in CSR format, a copy will be made even if copy_x is FALSE. Defaults to TRUE.

algorithm: A string specifying the k-means algorithm to use. The classical EM-style algorithm is "lloyd". The "elkan" variation can be more efficient on some datasets with well-defined clusters, by using the triangle inequality. However, it is more memory-intensive due to the allocation of an extra array of shape \(\mathrm{n_{samples}} \times \mathrm{n_{clusters}}\). Defaults to "lloyd".