Hierarchical clustering

In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters.

Machine learning · Clustering

Contributor(s)

Initial contribution: 2020-12-17

Classification(s)

Method-focused categories · Data-perspective · Intelligent computation analysis

Detailed Description


Quoted from: https://en.wikipedia.org/wiki/Hierarchical_clustering

Strategies for hierarchical clustering generally fall into two types:

  • Agglomerative: This is a "bottom-up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
  • Divisive: This is a "top-down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram.
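As a concrete illustration of the agglomerative approach and its dendrogram output, here is a minimal sketch assuming Python with NumPy, SciPy, and Matplotlib; the toy data and the choice of Ward linkage are illustrative assumptions, not part of the quoted text.

```python
# A minimal sketch of agglomerative ("bottom-up") clustering, assuming
# NumPy/SciPy/Matplotlib are installed; data and parameters are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Two loose blobs in the plane: each point starts as its own cluster.
X = np.vstack([rng.normal(0.0, 0.3, size=(10, 2)),
               rng.normal(2.0, 0.3, size=(10, 2))])

# 'ward' is one of several merge (linkage) criteria; see the section below.
Z = linkage(X, method="ward")

# Each row of Z records one greedy merge: the two clusters joined,
# the distance at which they were joined, and the new cluster's size.
dendrogram(Z)
plt.xlabel("observation index")
plt.ylabel("merge distance")
plt.show()
```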

The standard algorithm for hierarchical agglomerative clustering (HAC) has a time complexity of $\mathcal{O}(n^3)$ and requires $\Omega(n^2)$ memory, which makes it too slow for even medium data sets. However, for some special cases, optimal efficient agglomerative methods (of complexity $\mathcal{O}(n^2)$) are known: SLINK for single-linkage and CLINK for complete-linkage clustering. With a heap, the runtime of the general case can be reduced to $\mathcal{O}(n^2 \log n)$, an improvement on the aforementioned bound of $\mathcal{O}(n^3)$, at the cost of further increasing the memory requirements. In many cases, the memory overheads of this approach are too large to make it practically usable.
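To make the $\mathcal{O}(n^3)$ bound concrete, the following teaching sketch (pure NumPy, with illustrative data) implements naive single-linkage agglomeration directly: it performs $n-1$ merges and scans an $\mathcal{O}(n^2)$ distance matrix for each, which is exactly the cost the optimized SLINK algorithm avoids.

```python
# Naive single-linkage agglomerative clustering, written to expose the
# O(n^3) cost: n - 1 merges, each scanning an O(n^2) distance matrix.
# A teaching sketch only; SLINK does the same job in O(n^2).
import numpy as np

def naive_single_linkage(X, num_clusters):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    clusters = {i: [i] for i in range(n)}          # each point is a cluster
    active = list(range(n))
    while len(active) > num_clusters:
        # O(n^2) scan for the closest pair of active clusters.
        _, i, j = min((D[i, j], i, j) for i in active for j in active if i < j)
        clusters[i].extend(clusters.pop(j))        # greedily merge j into i
        active.remove(j)
        # Single linkage: distance to the merged cluster is the minimum.
        for k in active:
            if k != i:
                D[i, k] = D[k, i] = min(D[i, k], D[j, k])
    return [clusters[i] for i in active]

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
print(naive_single_linkage(X, 2))   # -> [[0, 1], [2, 3]]
```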

Except for the special case of single-linkage, none of the algorithms (except exhaustive search in $\mathcal{O}(2^n)$) can be guaranteed to find the optimum solution.

Divisive clustering with an exhaustive search is $\mathcal{O}(2^n)$, but it is common to use faster heuristics to choose splits, such as k-means.
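A hedged sketch of that heuristic, assuming SciPy's kmeans2 is available: instead of examining all $\mathcal{O}(2^n)$ possible splits, each step bisects one existing cluster with k-means (k = 2). The toy data and the "split the largest cluster next" rule are illustrative choices, not prescribed by the quoted text.

```python
# Divisive ("top-down") clustering via the bisecting k-means heuristic:
# repeatedly split one cluster with k = 2 rather than searching all
# 2^n possible splits. Illustrative sketch only.
import numpy as np
from scipy.cluster.vq import kmeans2

def bisecting_kmeans(X, num_clusters, seed=0):
    clusters = [np.arange(len(X))]                 # all points in one cluster
    while len(clusters) < num_clusters:
        # Heuristic choice: split the largest remaining cluster.
        idx = max(range(len(clusters)), key=lambda c: len(clusters[c]))
        members = clusters.pop(idx)
        _, labels = kmeans2(X[members], 2, seed=seed, minit="++")
        clusters.append(members[labels == 0])
        clusters.append(members[labels == 1])
    return clusters

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.2, size=(20, 2)) for m in (0.0, 2.0, 4.0)])
for c in bisecting_kmeans(X, 3):
    print(sorted(c.tolist()))
```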

Metric

The choice of an appropriate metric will influence the shape of the clusters, as some elements may be relatively closer to one another under one metric than another. For example, in two dimensions, under the Manhattan distance metric, the distance between the origin (0,0) and (0.5, 0.5) is the same as the distance between the origin and (0, 1), while under the Euclidean distance metric the latter is strictly greater.

Some commonly used metrics for hierarchical clustering are:

Names | Formula
Euclidean distance | $\|a-b\|_2 = \sqrt{\sum_i (a_i - b_i)^2}$
Squared Euclidean distance | $\|a-b\|_2^2 = \sum_i (a_i - b_i)^2$
Manhattan distance | $\|a-b\|_1 = \sum_i |a_i - b_i|$
Maximum distance | $\|a-b\|_\infty = \max_i |a_i - b_i|$
Mahalanobis distance | $\sqrt{(a-b)^\top S^{-1} (a-b)}$, where $S$ is the covariance matrix
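These metrics map directly onto helpers in scipy.spatial.distance; the sketch below, which reuses the example points quoted above, is an illustration under that assumption rather than a canonical recipe.

```python
# Evaluating the metrics from the table with scipy.spatial.distance.
import numpy as np
from scipy.spatial import distance

a, b = np.array([0.0, 0.0]), np.array([0.5, 0.5])
c = np.array([0.0, 1.0])

# Manhattan distance treats (0.5, 0.5) and (0, 1) as equidistant from the
# origin; Euclidean distance makes (0, 1) strictly farther, as noted above.
print(distance.cityblock(a, b), distance.cityblock(a, c))   # 1.0 1.0
print(distance.euclidean(a, b), distance.euclidean(a, c))   # ~0.707 1.0

print(distance.sqeuclidean(a, c))   # squared Euclidean: 1.0
print(distance.chebyshev(a, c))     # maximum (Chebyshev) distance: 1.0

# Mahalanobis distance needs the inverse covariance matrix S^{-1}.
X = np.random.default_rng(0).normal(size=(100, 2))
VI = np.linalg.inv(np.cov(X.T))
print(distance.mahalanobis(a, c, VI))
```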

For text or other non-numeric data, metrics such as the Hamming distance or Levenshtein distance are often used.

A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.

Linkage criteria

The linkage criterion determines the distance between sets of observations as a function of the pairwise distances between observations.

Some commonly used linkage criteria between two sets of observations A and B are:

Names | Formula
Maximum or complete-linkage clustering | $\max\{\, d(a,b) : a \in A,\ b \in B \,\}$
Minimum or single-linkage clustering | $\min\{\, d(a,b) : a \in A,\ b \in B \,\}$
Unweighted average linkage clustering (or UPGMA) | $\frac{1}{|A| \cdot |B|} \sum_{a \in A} \sum_{b \in B} d(a,b)$
Weighted average linkage clustering (or WPGMA) | $d(i \cup j, k) = \frac{d(i,k) + d(j,k)}{2}$
Centroid linkage clustering, or UPGMC | $\|c_s - c_t\|$, where $c_s$ and $c_t$ are the centroids of clusters $s$ and $t$, respectively
Minimum energy clustering | $\frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \|a_i - b_j\|_2 - \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \|a_i - a_j\|_2 - \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} \|b_i - b_j\|_2$

where d is the chosen metric. Other linkage criteria include:

  • The sum of all intra-cluster variance.
  • The increase in variance for the cluster being merged (Ward's criterion).
  • The probability that candidate clusters spawn from the same distribution function (V-linkage).
  • The product of in-degree and out-degree on a k-nearest-neighbour graph (graph degree linkage).
  • The increment of some cluster descriptor (i.e., a quantity defined for measuring the quality of a cluster) after merging two clusters.
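Because the linkage criterion alone can change which clusters are merged, one quick experiment is to run the same data through several criteria. The sketch below assumes SciPy, whose method strings ('single', 'complete', 'average', 'weighted', 'centroid', 'ward') correspond to the criteria tabulated above plus Ward's criterion from the list.

```python
# Comparing linkage criteria on identical data; the method strings are
# SciPy's names for the criteria tabulated above ('ward' is Ward's
# minimum-variance criterion from the bullet list).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(15, 2)),
               rng.normal(3.0, 0.5, size=(15, 2))])

for method in ("single", "complete", "average", "weighted",
               "centroid", "ward"):
    Z = linkage(X, method=method)            # pairwise Euclidean by default
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 clusters
    print(f"{method:>8}: cluster sizes = {np.bincount(labels)[1:]}")
```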

Model metadata

How to Cite

Zhen Qian (2020). Hierarchical clustering, Model Item, OpenGMS, https://geomodeling.njnu.edu.cn/modelItem/29b51067-bfc6-4283-b45d-7e04aa383ac7