Clustering is a core problem in a wide range of research disciplines, from machine learning and data mining to classical statistics. One group of clustering approaches, the so-called nonparametric methods, aims to cluster a set of entities into a number of clusters that is neither specified nor known beforehand, making a potentially expensive pre-analysis of the data unnecessary. In this paper, the infinite Restricted Boltzmann Machine (iRBM), recently introduced by Côté and Larochelle, which has the ability to self-regulate its number of hidden units, is adapted to the clustering problem by introducing two basic cluster membership assumptions. A descriptive study of the influence of several regularization and sparsity settings on the clustering behavior is presented and the results are discussed. The results show that sparsity is a key adaptation when using the iRBM for clustering, improving both the clustering performance and the number of identified clusters.