
Dataflow kit



Dataflow kit is a scraping framework for Gophers. DFK extracts structured data from web pages, following the specified extractors.

It can be used for data mining, data processing or archiving.

Dataflow kit benefits:

  • Scraping of JavaScript-generated pages;
  • Data extraction from paginated websites;
  • Scraping of websites behind a login form;
  • Cookie and session handling;
  • Following links and processing detail pages;
  • Managing delays between requests per domain;
  • Following robots.txt directives;
  • Caching support (the following storage types are available: Diskv, Redis, Amazon AWS S3, DigitalOcean Spaces);
  • Saving results as CSV, JSON or XML.

DFK consists of two general services for fetching and parsing web page content.

Fetch service

The fetch.d server downloads HTML web page content. Depending on the fetcher type, a page is downloaded using either Base fetcher or Splash fetcher.

Base fetcher uses the standard Go HTTP client to fetch pages as is. It works faster than Splash fetcher, but it cannot render dynamic JavaScript-driven web pages.
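As a rough illustration of what a Base-fetcher-style download amounts to, here is a standalone sketch using only the standard library (this is not DFK source code):

    package main

    // Standalone sketch: download a page "as is" with the standard Go HTTP
    // client, the way Base fetcher does. No JavaScript is executed here.
    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://example.com")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Printf("fetched %d bytes of raw HTML\n", len(body))
    }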

Splash fetcher renders dynamic JavaScript-based content. It sends requests to the Splash JavaScript rendering service.

Splash passes the retrieved data to the parse.d service.
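For comparison with Base fetcher, a Splash-style fetch goes through Splash's HTTP API rather than hitting the target site directly. The sketch below (again illustrative, not DFK source) asks a local Splash instance on its default port 8050 to render a page via its render.html endpoint:

    package main

    // Standalone sketch: fetch a JavaScript-driven page through a local
    // Splash instance. Splash renders the page and returns the final HTML.
    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        target := "https://example.com"
        splash := "http://localhost:8050/render.html?wait=0.5&url=" + url.QueryEscape(target)

        resp, err := http.Get(splash)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        html, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Printf("fetched %d bytes of rendered HTML\n", len(html))
    }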

Parse service

parse.d is the service that extracts data from the downloaded web page, following the rules described in a JSON configuration file. Extracted data is returned in CSV, JSON or XML format.
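Conceptually, each extraction rule pairs a CSS selector with the values to pull out. The standalone sketch below shows that idea with the goquery library (purely illustrative; parse.d's own implementation may differ):

    package main

    // Standalone sketch: apply a CSS selector to an HTML document and pull
    // out text and href values - the kind of rule a DFK configuration describes.
    import (
        "fmt"
        "net/http"

        "github.com/PuerkitoBio/goquery"
    )

    func main() {
        resp, err := http.Get("https://example.com")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        doc, err := goquery.NewDocumentFromReader(resp.Body)
        if err != nil {
            panic(err)
        }

        doc.Find(".product-container a").Each(func(_ int, s *goquery.Selection) {
            href, _ := s.Attr("href")
            fmt.Println(s.Text(), href)
        })
    }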

Note: Sometimes the Parse service cannot extract data from pages retrieved by the default Base fetcher; empty results may be returned when parsing JavaScript-generated pages. In that case the Parse service automatically falls back to the Splash fetcher to render the same dynamic JavaScript-driven content. Have a look at the sample JavaScript-driven web page to see this in action.


Installation

Using dep

dep ensure -add <import path>@<version>

or go get

go get -u



Docker way

  1. Install Docker and Docker Compose

  2. Start services.

cd $GOPATH/src/ && docker-compose up

This command fetches docker images automatically and starts services.

  3. Launch parsing in a second terminal window by sending a POST request to the parse daemon. Some JSON configuration files for testing are available in the /examples folder.
curl -XPOST --data-binary "@$GOPATH/src/"

Here is an excerpt from a sample JSON configuration file; each field pairs a CSS selector with the extractor types to apply (complete files are in the /examples folder):

    "selector": ".product-container a",
    "types": ["text", "href"],
    ...
    "selector": "#product-container img",
    ...

Read more about scraper configuration JSON files in our GoDoc reference.

Extractors and filters are described there as well.

  4. To stop the services, press Ctrl+C and run
cd $GOPATH/src/ && docker-compose down --remove-orphans --volumes

Image: CLI for the Dataflow kit web scraping framework

Click on the image to see the CLI in action.

Manual way

  1. Start the Splash Docker container

docker run -d -it --rm -p 5023:5023 -p 8050:8050 -p 8051:8051 scrapinghub/splash

Splash is used for fetching web pages to feed a Dataflow kit parser.

  2. Build and run the fetch.d service
cd $GOPATH/src/ && go build && ./fetch.d
  3. In a new terminal window, build and run the parse.d service
cd $GOPATH/src/ && go build && ./parse.d
  4. Launch parsing. See step 3 from the previous section.


Try the front-end with a point-and-click interface to Dataflow kit services. It generates a JSON configuration file and sends a POST request to the DFK parser.
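Programmatically, the front-end's job boils down to posting a configuration file to the parse daemon. The sketch below does the same from Go; the daemon address (127.0.0.1:8001) and the file name examples/books.json are placeholder assumptions, not documented defaults - substitute the address your parse.d instance actually listens on and any config file from the /examples folder:

    package main

    // Standalone sketch: POST a scraper configuration file to a running
    // parse.d instance. The address and file name below are assumed
    // placeholders, not documented defaults.
    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        cfg, err := os.ReadFile("examples/books.json") // hypothetical example file
        if err != nil {
            panic(err)
        }

        resp, err := http.Post("http://127.0.0.1:8001", "application/json", bytes.NewReader(cfg))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out)) // extracted data in the requested format
    }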

Image: Dataflow kit web scraping framework

Click on the image to see Dataflow kit in action.


This is Free Software, released under the BSD 3-Clause License.


You are welcome to contribute to our project.
