Non-linear feature extraction techniques (in Machine Learning)

Feature extraction is used in dimensionality reduction to transform data from a high-dimensional space to one with fewer dimensions. Non-linear feature extraction techniques are used when the relationships between features do not follow a linear pattern.

Feature extraction methods are divided into linear and non-linear approaches. The four most common non-linear feature extraction techniques are:

  1. T-distributed stochastic neighbor embedding (t-SNE)
  2. Generalized discriminant analysis (GDA)
  3. Autoencoders
  4. Kernel principal component analysis (Kernel PCA)

1. T-distributed stochastic neighbor embedding (t-SNE)

T-distributed stochastic neighbor embedding, or t-SNE, is a non-linear dimensionality reduction technique that maps samples from a high-dimensional space to a low-dimensional one (typically two dimensions) while preserving local neighborhood structure: samples that are close together in the original space stay close together in the embedding.
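
As a minimal sketch, here is what a t-SNE embedding might look like with scikit-learn, using the digits dataset as the high-dimensional input; the perplexity value is just an illustrative default:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# 1,797 digit images, each a 64-dimensional feature vector
X, y = load_digits(return_X_y=True)

# Embed into 2 dimensions; perplexity (an illustrative value, not a
# recommendation) controls the neighborhood size t-SNE tries to preserve
tsne = TSNE(n_components=2, perplexity=30, random_state=42)
X_embedded = tsne.fit_transform(X)

print(X_embedded.shape)  # (1797, 2)
```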


2. Generalized discriminant analysis (GDA)

Generalized discriminant analysis (GDA) is a non-linear dimensionality reduction technique that extends linear discriminant analysis (LDA) with kernel methods. Like support-vector machines (SVMs), it uses a kernel function to map the data into a high-dimensional feature space, where it maximizes the ratio of between-class scatter to within-class scatter.
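
scikit-learn does not ship a GDA class, but one way to sketch the idea is to approximate the kernel feature map explicitly (here with Nystroem) and run ordinary LDA on top; the RBF kernel and its gamma value are illustrative assumptions:

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)

# Approximate an RBF kernel feature map, then run LDA in that space;
# this mimics GDA's "kernelize, then maximize class separation" recipe
gda_like = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.001, n_components=200, random_state=0),
    LinearDiscriminantAnalysis(n_components=2),
)
X_reduced = gda_like.fit_transform(X, y)

print(X_reduced.shape)  # (1797, 2)
```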

3. Autoencoders

An autoencoder is a neural network trained to reconstruct its own input through a narrow bottleneck layer. Because the bottleneck has fewer neurons than the input, the network is forced to learn a compressed representation and to discard insignificant information (noise); the bottleneck activations can then be used as the reduced features.
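
Below is a minimal Keras sketch of an autoencoder used for dimensionality reduction; the layer sizes, the 2-dimensional bottleneck, and the random toy data are all illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data standing in for a real dataset: 1,000 samples, 64 features
X = np.random.rand(1000, 64).astype("float32")

# Encoder compresses 64 features down to a 2-dimensional bottleneck
inputs = keras.Input(shape=(64,))
encoded = layers.Dense(32, activation="relu")(inputs)
bottleneck = layers.Dense(2, activation="linear")(encoded)

# Decoder reconstructs the original 64 features from the bottleneck
decoded = layers.Dense(32, activation="relu")(bottleneck)
outputs = layers.Dense(64, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, bottleneck)

# Train the network to reproduce its own input
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

# The encoder alone yields the reduced features
X_reduced = encoder.predict(X)
print(X_reduced.shape)  # (1000, 2)
```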

Interesting read: Autoencoders For Dimensionality Reduction

4. Kernel principal component analysis (Kernel PCA)

Kernel principal component analysis (Kernel PCA) is an extension of PCA that uses kernel methods (pattern analysis algorithms) to handle non-linear data. A kernel function implicitly maps the data into a higher-dimensional feature space, where ordinary PCA can then find the directions that maximize the variance of the data.
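
As a short sketch, here is Kernel PCA with scikit-learn on a toy two-circles dataset, a case where linear PCA cannot separate the structure; the RBF kernel and gamma value are assumptions for illustration:

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Concentric circles: non-linear structure that plain PCA cannot untangle
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

# The RBF kernel implicitly maps the data to a higher-dimensional space,
# where the principal components capture the non-linear structure
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10)
X_kpca = kpca.fit_transform(X)

print(X_kpca.shape)  # (500, 2)
```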

That's it! Next, follow this tutorial to learn about the common Linear Feature Extraction Techniques used in dimensionality reduction.
