Sklearn - Birch

Implements the Birch clustering algorithm. It is a memory-efficient, online-learning algorithm provided as an alternative to MiniBatchKMeans. It constructs a tree data structure with the cluster centroids being read off the leaf. These can be either the final cluster centroids or can be provided as input to another clustering algorithm such as AgglomerativeClustering.
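A minimal usage sketch (not part of the original item; the synthetic data, parameter values, and variable names are illustrative only):

```python
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

# Illustrative synthetic data; any (n_samples, n_features) array works.
X, _ = make_blobs(n_samples=1000, centers=3, random_state=42)

# threshold and branching_factor control the CF Tree; n_clusters sets the
# optional global clustering step applied to the leaf subclusters.
brc = Birch(threshold=0.5, branching_factor=50, n_clusters=3)
brc.fit(X)
labels = brc.predict(X)
```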



Classification(s)

Method-focused categories / Data-perspective / Geoinformation analysis

Model Description


Quoted from: https://scikit-learn.org/stable/modules/clustering.html#birch

Birch builds a tree called the Clustering Feature Tree (CFT) for the given data. The data is essentially lossy-compressed to a set of Clustering Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Clustering Feature subclusters (CF Subclusters), and these CF Subclusters located in non-terminal CF Nodes can have CF Nodes as children.

The CF Subclusters hold the necessary information for clustering, which removes the need to hold the entire input data in memory. This information includes (a small sketch follows the list):

  1. Number of samples in a subcluster.
  2. Linear Sum - an n-dimensional vector holding the sum of all samples.
  3. Squared Sum - the sum of the squared L2 norms of all samples.
  4. Centroid - Linear Sum / n_samples, stored to avoid recalculation.
  5. Squared norm of the centroid.
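A small NumPy sketch of how these statistics relate to each other, using a made-up subcluster of three samples:

```python
import numpy as np

# Hypothetical samples belonging to one subcluster.
X = np.array([[1.0, 2.0],
              [1.5, 1.8],
              [0.9, 2.2]])

n = X.shape[0]                  # 1. number of samples
linear_sum = X.sum(axis=0)      # 2. linear sum: n-dimensional vector of column sums
squared_sum = (X ** 2).sum()    # 3. sum of the squared L2 norms of all samples
centroid = linear_sum / n       # 4. centroid, derived without revisiting X
sq_norm = centroid @ centroid   # 5. squared norm of the centroid
```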

The Birch algorithm has two parameters, the threshold and the branching factor. The branching factor limits the number of subclusters in a node and the threshold limits the distance between the entering sample and the existing subclusters.

This algorithm can be viewed as an instance or data reduction method, since it reduces the input data to a set of subclusters which are obtained directly from the leaves of the CFT. This reduced data can be further processed by feeding it into a global clusterer. This global clusterer can be set by n_clusters. If n_clusters is set to None, the subclusters from the leaves are directly read off, otherwise a global clustering step labels these subclusters into global clusters (labels) and the samples are mapped to the global label of the nearest subcluster.
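A short sketch of the two modes just described, on illustrative synthetic data:

```python
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

# n_clusters=None: the leaf subclusters are read off directly.
brc = Birch(n_clusters=None)
brc.fit(X)
print(brc.subcluster_centers_.shape)  # one centroid per leaf subcluster

# n_clusters=3: a global clustering step groups the subclusters and each
# sample receives the global label of its nearest subcluster.
labels = Birch(n_clusters=3).fit_predict(X)
```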

Algorithm description:

  1. A new sample is inserted into the root of the CF Tree, which is a CF Node. It is then merged with the subcluster of the root that has the smallest radius after merging, subject to the threshold and branching factor conditions. If that subcluster has any child node, this is repeated until a leaf is reached. After finding the nearest subcluster in the leaf, the properties of this subcluster and its parent subclusters are updated recursively.
  2. If the radius of the subcluster obtained by merging the new sample and the nearest subcluster is greater than the threshold, and if the number of subclusters is greater than the branching factor, then space is temporarily allocated to the new sample. The two farthest subclusters are taken, and the subclusters are divided into two groups on the basis of their distance to these two subclusters (see the sketch after these steps).
  3. If this split node has a parent subcluster and there is room for a new subcluster, then the parent is split into two. If there is no room, then this node is again split into two and the process continues recursively until it reaches the root.
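The absorb-or-split decision in step 2 hinges on the radius of the merged subcluster, which follows directly from the CF statistics listed earlier. A hedged sketch (these helper functions are illustrative, not sklearn internals):

```python
import numpy as np

def radius(n, linear_sum, squared_sum):
    """Radius of a subcluster from its CF statistics: the root mean squared
    distance of its samples to the centroid, sqrt(SS/n - ||LS/n||^2)."""
    centroid = linear_sum / n
    return np.sqrt(max(squared_sum / n - centroid @ centroid, 0.0))

def can_absorb(n, linear_sum, squared_sum, x, threshold):
    """Would merging sample x into this subcluster keep the merged radius
    within the threshold? Merging CF statistics is a simple addition."""
    return radius(n + 1, linear_sum + x, squared_sum + x @ x) <= threshold
```

If `can_absorb` returns False, the sample starts a new subcluster, and the branching factor then decides whether the node must be split.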

Birch or MiniBatchKMeans?

  1. Birch does not scale very well to high-dimensional data. As a rule of thumb, if n_features is greater than twenty, it is generally better to use MiniBatchKMeans.
  2. If the number of instances of data needs to be reduced, or if one wants a large number of subclusters either as a preprocessing step or otherwise, Birch is more useful than MiniBatchKMeans (a sketch follows the list).
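A sketch of the data-reduction use in point 2: Birch first compresses the data into leaf subclusters, which a global clusterer then groups (scikit-learn also accepts an estimator instance for n_clusters, shown at the end; data and parameter values are illustrative):

```python
from sklearn.cluster import AgglomerativeClustering, Birch
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=2000, n_features=5, centers=4, random_state=0)

# Reduce the input to leaf-subcluster centroids.
brc = Birch(threshold=0.3, n_clusters=None).fit(X)
reduced = brc.subcluster_centers_

# Cluster the much smaller reduced set globally.
subcluster_labels = AgglomerativeClustering(n_clusters=4).fit_predict(reduced)

# Equivalently, let Birch run the global step itself.
labels = Birch(threshold=0.3,
               n_clusters=AgglomerativeClustering(n_clusters=4)).fit_predict(X)
```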

How to use partial_fit?

To avoid computing the global clustering on every call of partial_fit, the user is advised to (a code sketch follows the list):

  1. Set n_clusters=None initially.
  2. Train all data by multiple calls to partial_fit.
  3. Set n_clusters to the required value using brc.set_params(n_clusters=n_clusters).
  4. Finally call partial_fit with no arguments, i.e. brc.partial_fit(), which performs the global clustering.
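A sketch of this workflow on illustrative data (the variable name brc matches steps 3 and 4):

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

# Illustrative "stream": one blob dataset split into four chunks.
X, _ = make_blobs(n_samples=2000, centers=5, random_state=0)
chunks = np.array_split(X, 4)

# 1. No global clustering while streaming.
brc = Birch(n_clusters=None)

# 2. Train on the data in batches.
for chunk in chunks:
    brc.partial_fit(chunk)

# 3. Choose the final number of clusters.
brc.set_params(n_clusters=5)

# 4. A final argument-free call performs only the global clustering.
brc.partial_fit()

labels = brc.predict(chunks[-1])
```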

References:

  1. Tian Zhang, Raghu Ramakrishnan, Miron Livny. BIRCH: An efficient data clustering method for very large databases. https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf
  2. Roberto Perdisci. JBirch - Java implementation of BIRCH clustering algorithm. https://code.google.com/archive/p/jbirch


How to Cite

songjie (2021). Sklearn - Birch, Model Item, OpenGMS, https://geomodeling.njnu.edu.cn/modelItem/f359c3f4-6db8-4edb-bc1a-3f4e1f902f71

Contributor(s)

Initial contribute: 2021-01-07

Authorship

Authorship is unclear.

History

Last modifier: songjie
Last modify time: 2021-01-12
