- K-Means Clustering
- Divisive Clustering
- Agglomerative Clustering
The hierarchy within the final cluster has the following properties:
- Clusters generated in early stages are nested in those generated in later stages.
- Clusters with different sizes in the tree can be valuable for discovery.
Agglomerative hierarchical clustering starts with each object in its own cluster. Then, in each successive iteration, it merges the closest pair of clusters according to some similarity criterion, until all of the data is in one cluster.
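The merge process above can be sketched with SciPy's hierarchical clustering routines. The toy dataset and parameters below are illustrative, not from the text:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six points forming two well-separated groups (toy data)
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])

# linkage() performs the successive merges; 'ward' merges the pair of
# clusters whose union has the smallest increase in variance. The result
# Z encodes the full merge history (the dendrogram as a matrix).
Z = linkage(X, method="ward")

# Cut the tree to obtain a flat assignment into 2 clusters
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Because the merge history is kept, the same `Z` can be cut at different levels to inspect clusterings of different granularity without re-running the algorithm.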
- It can produce an ordering of the objects, which may be informative for data display.
- Smaller clusters are generated, which may be helpful for discovery.
- No provision is made for relocating objects that may have been 'incorrectly' grouped at an early stage. The result should be examined closely to ensure it makes sense.
- Use of different distance metrics for measuring distances between clusters may generate different results. Performing multiple experiments and comparing the results is recommended to support the veracity of the original results.
HAC is widely used in finance, for example in stock market prediction, an application that is appealing not only for research but for commercial purposes as well. Stock market prediction is typically based on structured data such as prices, trading volumes, and accounting figures. Cluster analysis can also use quantitative and qualitative information from financial reports to predict stock price movements. First, the data is grouped into clusters using HAC; then representative feature vectors (the centroid of each cluster) are used to predict stock price movements.
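The cluster-then-summarize step of this workflow might be sketched as follows. The random feature matrix, the number of clusters, and the use of average linkage are all assumptions made for illustration; a real pipeline would feed the centroids into a downstream predictor:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-stock feature vectors extracted from financial reports
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 4))

# Group the stocks into at most 3 clusters with HAC (average linkage)
Z = linkage(features, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")

# Representative feature vector: the centroid of each cluster
centroids = np.array([features[labels == k].mean(axis=0)
                      for k in np.unique(labels)])
print(centroids.shape)  # one row per cluster, one column per feature
```

Each centroid then stands in for its whole cluster, reducing the input to the prediction model from one vector per stock to one vector per cluster.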
Author: Vrishti Garg
Posted by: Finance 2