
What is TiSpark?

TiSpark is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer complex OLAP queries. It takes advantage of both the Spark platform and the distributed TiKV cluster, while seamlessly gluing to TiDB, the distributed OLTP database, to provide Hybrid Transactional/Analytical Processing (HTAP) as a one-stop solution for online transactions and analysis.

TiSpark Architecture


  • TiSpark integrates deeply with the Spark Catalyst Engine. It provides precise control over computation, which allows Spark to read data from TiKV efficiently. It also supports index seek, which significantly improves the performance of point query execution.

  • It utilizes several strategies to push computation down, reducing the size of the dataset that Spark SQL has to handle and thereby accelerating query execution. It also uses TiDB's built-in statistical information for query plan optimization.

  • From the data integration point of view, TiSpark + TiDB provides a solution that runs both transactions and analysis directly on the same platform, without building and maintaining any ETL pipelines. This simplifies the system architecture and reduces the cost of maintenance.

  • In addition, you can deploy and utilize tools from the Spark ecosystem for further data processing and manipulation on TiDB: for example, using TiSpark for data analysis and ETL, retrieving data from TiKV as a machine learning data source, generating reports from the scheduling system, and so on.
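As a sketch of the ecosystem usage above, the following spark-shell session pulls a TiDB table into a Spark DataFrame and hands it off as a machine learning data source. It assumes a running TiKV/PD cluster with the TiSpark jar on the classpath; the database name, column names, and output path are illustrative only:

```scala
import org.apache.spark.sql.TiContext

// `spark` is the SparkSession provided by spark-shell
val ti = new TiContext(spark)
ti.tidbMapDatabase("tpch")  // hypothetical database name

// Read a TiDB table through TiKV as a Spark DataFrame
val features = spark.sql(
  "select l_quantity, l_extendedprice, l_discount from lineitem")

// Hand the data off to downstream ML / ETL, e.g. as Parquet
features.write.mode("overwrite").parquet("/tmp/lineitem_features")
```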

TiSpark depends on the existence of TiKV clusters and PDs (Placement Drivers). It also requires a Spark cluster to be set up and available.

TiSpark itself is a thin layer; most of the logic resides in the tikv-java-client library.

Usage is as follows:

./bin/spark-shell --jars /wherever-it-is/tispark-0.1.0-SNAPSHOT-jar-with-dependencies.jar

import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)

// Map all TiDB tables from the database tpch as Spark SQL tables
ti.tidbMapDatabase("tpch")

spark.sql("select count(*) from lineitem").show

Current Version

To check the current version of TiSpark:

spark.sql("select ti_version()").show


The configurations below can be put in spark-defaults.conf or passed in the same way as other Spark configuration properties.

| Key | Default Value | Description |
| --- | --- | --- |
| spark.tispark.pd.addresses | | PD cluster addresses, comma-separated |
| spark.tispark.grpc.framesize | 268435456 | Max frame size of gRPC response |
| spark.tispark.grpc.timeout_in_sec | 10 | gRPC timeout in seconds |
| spark.tispark.meta.reload_period_in_sec | 60 | Metastore reload period in seconds |
| spark.tispark.plan.allow_agg_pushdown | true | Whether to allow aggregation pushdown (disable in case of busy TiKV nodes) |
| spark.tispark.plan.allow_index_double_read | false | Whether to allow index double read (which may put heavy pressure on TiKV) |
| spark.tispark.index.scan_batch_size | 2000000 | Number of row keys per batch for concurrent index scan |
| spark.tispark.index.scan_concurrency | 5 | Maximal number of threads for index scan retrieving row keys (shared among tasks inside each JVM) |
| spark.tispark.table.scan_concurrency | 512 | Maximal number of threads for table scan (shared among tasks inside each JVM) |
| spark.tispark.request.command.priority | "Low" | One of "Low", "Normal", "High"; affects resource priority in TiKV. "Low" is recommended so OLTP workloads are not disturbed |
| spark.tispark.coprocess.streaming | false | Whether to use streaming for response fetching |
| spark.tispark.plan.unsupported_pushdown_exprs | "" | A comma-separated list of expressions. If you run a very old version of TiKV, you may need to disable push-down of expressions it does not support |
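For example, a minimal spark-defaults.conf fragment might look like the following; the PD address is a placeholder for your own cluster's endpoints:

```
# Placeholder PD address; replace with your PD endpoints
spark.tispark.pd.addresses              127.0.0.1:2379
spark.tispark.request.command.priority  Low
spark.tispark.grpc.timeout_in_sec       10
```

The same properties can also be passed on the command line when launching spark-shell, e.g. `--conf spark.tispark.pd.addresses=127.0.0.1:2379`.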

Quick start

Read the Quick Start.

How to build

TiSpark depends on the TiKV Java client project, which is included as a submodule. To build the TiKV client:


To build TiSpark itself:

mvn clean package
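Putting the steps together, a full build from a fresh checkout might look like the following sketch; the submodule directory name and the client's install command are assumptions based on the standard git-submodule and Maven workflow:

```
git submodule update --init --recursive   # fetch the TiKV Java client submodule
cd tikv-client-lib-java                   # assumed submodule directory name
mvn clean install -Dmaven.test.skip=true  # install the client into the local Maven repo
cd ..
mvn clean package                         # build TiSpark itself
```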

Follow us

Mailing list: [email protected] (Google Group)
License

TiSpark is under the Apache 2.0 license. See the LICENSE file for details.

Latest Releases
 Dec. 1 2017
 Oct. 16 2017