openTSNE
A visualization of 44,808 single-cell transcriptomes from the mouse retina [5], embedded using the multiscale kernel trick to preserve global structure.
The goals of this project are:

Extensibility. We provide efficient defaults for the typical use case, i.e., visualizing high-dimensional data. We also make it very simple to use the various tricks that have been introduced to improve the quality of t-SNE embeddings. The library is designed so that it is easy to implement and use your own components, and it encourages experimentation.

Speed. We provide two fast, parallel implementations of t-SNE that are comparable in speed to their C++ counterparts. Python does incur some overhead, so if speed is your only requirement, consider using FIt-SNE. The differences are often minute and become even less apparent when utilizing multiple cores.

Interactivity. This library was built for Orange, an interactive machine learning toolkit. As such, we provide a powerful API that controls all aspects of the t-SNE algorithm, making it suitable for interactive environments.

Ease of distribution. FIt-SNE, the reference C++ implementation of the interpolation-based variant of t-SNE, is not easy to install or distribute: it requires preinstalled C libraries and manual compilation. This package can be installed through either `pip` or `conda` with a single command, making it very easy to include in other packages.
Detailed documentation on t-SNE is available on Read the Docs.
Installation
Conda
openTSNE can be easily installed from conda-forge with

```shell
conda install --channel conda-forge opentsne
```
PyPI
openTSNE is also available through pip and can be installed with

```shell
pip install opentsne
```
Note that openTSNE requires a C/C++ compiler, and numpy must also be installed.
In order for openTSNE to utilize multiple threads, the C/C++ compiler must also support OpenMP. In practice, almost all compilers support it, with the exception of older versions of clang on OSX systems.
To squeeze the most out of openTSNE, you may also consider installing FFTW3 beforehand. FFTW3 implements the Fast Fourier Transform, which is heavily used in openTSNE. If FFTW3 is not available, openTSNE falls back to numpy's FFT implementation, which is slightly slower than FFTW. The difference is only noticeable on large data sets containing millions of points.
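If you use conda, FFTW3 can be obtained from conda-forge before installing openTSNE (the `fftw` package name on the conda-forge channel is an assumption):

```shell
# Optional: provides the FFTW3 library that openTSNE's FFT-based
# gradient computation can use instead of numpy's FFT.
conda install --channel conda-forge fftw
```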
Usage
We provide two modes of usage. One is a basic interface similar to scikit-learn's `TSNE.fit`. We also provide an advanced interface for finer control of the optimization, allowing us to interactively tune the embedding and make use of various tricks to improve its quality.
Basic usage
We provide a basic interface somewhat similar to the one provided by scikit-learn.
```python
from openTSNE import TSNE
from sklearn import datasets

iris = datasets.load_iris()
x, y = iris["data"], iris["target"]

tsne = TSNE(
    n_components=2, perplexity=30, learning_rate=200,
    n_jobs=4, angle=0.5, initialization="pca", metric="euclidean",
    early_exaggeration_iter=250, early_exaggeration=12, n_iter=750,
    neighbors="exact", negative_gradient_method="bh",
)

embedding = tsne.fit(x)
```
There are two parameters which you will want to watch out for:

- `neighbors` controls nearest neighbor search. If the data set is small, `exact` is the better choice; it uses scikit-learn's KD trees. For larger data sets, approximate search can be orders of magnitude faster; this is selected with `approx`. Nearest neighbor search is performed only once, at the beginning of the optimization, but it can dominate runtime on large data sets, so it must be chosen properly.
- `negative_gradient_method` controls which technique is used to approximate the pairwise interactions, which are computed at each step of the optimization. Van Der Maaten [2] proposed the Barnes-Hut tree approximation, which has become the de facto standard in most t-SNE implementations; it is selected by passing `bh`. Asymptotically, it scales as O(n log n) in the number of points and works well for up to about 10,000 samples. More recently, Linderman et al. [3] developed another approximation based on interpolation that scales linearly, O(n); it is selected by passing `fft`. This method carries a bit of overhead, making it slightly slower than Barnes-Hut for small numbers of points, but it is very fast for larger data sets, where Barnes-Hut becomes completely unusable. For smaller data sets the difference is typically on the order of seconds, at most minutes, so the FFT approximation is a safe default.
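As a rough illustration of the guidance above, here is a small hypothetical helper (not part of openTSNE) that picks both parameters from the sample count, using the ~10,000-point rule of thumb mentioned above:

```python
def choose_tsne_methods(n_samples, threshold=10_000):
    """Pick `neighbors` and `negative_gradient_method` from data set size.

    A rule-of-thumb sketch: exact KD-tree search and the Barnes-Hut
    approximation for small data sets; approximate nearest neighbor
    search and the FFT-based interpolation scheme for large ones.
    """
    if n_samples < threshold:
        return "exact", "bh"
    return "approx", "fft"
```

The returned strings can then be passed straight to `TSNE(neighbors=..., negative_gradient_method=...)`.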
Our `tsne` object acts as a fitter instance and returns a `TSNEEmbedding` instance. This behaves like a regular numpy array and can be used as such, but it can also be further optimized if we see fit, or used to add new points to the embedding.
We don't log any progress by default, but provide callbacks that can be run at any interval of the optimization process. A simple logger is provided as an example.
```python
from openTSNE.callbacks import ErrorLogger

tsne = TSNE(callbacks=ErrorLogger(), callbacks_every_iters=50)
```
A callback can be any callable object that accepts the following arguments:

```python
def callback(iteration, error, embedding):
    ...
```
Callbacks can be used to control the optimization, i.e., every callback must return a boolean value indicating whether or not to stop the optimization. To stop the optimization from a callback, simply return `True`.
Additionally, a list of callbacks can also be passed, in which case all the callbacks must agree to continue the optimization; otherwise the process is terminated and the current embedding is returned.
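For example, a hypothetical early-stopping callback (not part of openTSNE) could halt the optimization once the error stops improving between invocations:

```python
class EarlyStopper:
    """Illustrative callback: stop once the error change drops below `tol`."""

    def __init__(self, tol=1e-4):
        self.tol = tol
        self.last_error = None

    def __call__(self, iteration, error, embedding):
        # Returning True tells the optimizer to stop.
        if self.last_error is not None and abs(self.last_error - error) < self.tol:
            return True
        self.last_error = error
        return False
```

It can be passed like any other callback, e.g. `TSNE(callbacks=EarlyStopper(), callbacks_every_iters=50)`.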
Advanced usage
Recently, Kobak and Berens [4] demonstrated several tricks for obtaining better t-SNE embeddings. The main critique of t-SNE is that global structure is mostly thrown away; this is typically the main selling point for UMAP over t-SNE. The preprint presents several techniques that enable t-SNE to capture more global structure. All of these tricks can easily be implemented using openTSNE and are shown in the notebook examples.
To introduce the API, we will implement the standard t-SNE algorithm, i.e., the one implemented by `TSNE.fit`.
```python
from openTSNE import initialization, affinity
from openTSNE.callbacks import ErrorLogger
from openTSNE.tsne import TSNEEmbedding

init = initialization.pca(x)
affinities = affinity.PerplexityBasedNN(x, perplexity=30, method="approx", n_jobs=8)

embedding = TSNEEmbedding(
    init, affinities, negative_gradient_method="fft",
    learning_rate=200, n_jobs=8, callbacks=ErrorLogger(),
)

embedding.optimize(n_iter=250, exaggeration=12, momentum=0.5, inplace=True)
embedding.optimize(n_iter=750, momentum=0.8, inplace=True)
```
References

1. Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of Machine Learning Research 9.Nov (2008): 2579-2605.
2. Van Der Maaten, Laurens. "Accelerating t-SNE using tree-based algorithms." The Journal of Machine Learning Research 15.1 (2014): 3221-3245.
3. Linderman, George C., et al. "Efficient Algorithms for t-distributed Stochastic Neighborhood Embedding." arXiv preprint arXiv:1712.09005 (2017).
4. Kobak, Dmitry, and Philipp Berens. "The art of using t-SNE for single-cell transcriptomics." bioRxiv (2018): 453449.
5. Macosko, Evan Z., et al. "Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets." Cell 161.5 (2015): 1202-1214.