# Snowplow Implementation Example
Here we demonstrate how to use Snowplow just to track events, using Kafka as the 'collector', sparing you the pain of implementing the full stack.
To run it locally we assume that you have:

- The latest version of Python 3
- Docker and Docker Compose
## How to run locally
Build the image from the `stream-collector` directory with:

```shell
docker build -t kafkasp .
```
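For context, the Dockerfile inside `stream-collector` might look something like this. This is a sketch only: the base image, config file name, and paths are assumptions about this repo, not its actual contents (Snowplow does publish a Kafka flavour of its Scala Stream Collector):

```dockerfile
# Sketch: base image and paths are assumptions, not this repo's real Dockerfile.
FROM snowplow/scala-stream-collector-kafka:latest
# A HOCON config pointing the collector at the Kafka broker.
COPY config.hocon /snowplow/config.hocon
CMD [ "--config", "/snowplow/config.hocon" ]
```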
Duplicate `docker-compose.example.yaml` to `docker-compose.yaml`, replacing `DOCKER_IP_HERE` with the proper IP, then bring the stack up with:

```shell
docker-compose up -d
```
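If you are curious what the compose file wires together, it is roughly the following. This is a sketch only: the service names, images, and ports are assumptions, except that Kafka on 9092 and a Kafka Manager are implied by the rest of this README:

```yaml
# Sketch only -- not the contents of the real docker-compose.example.yaml.
version: "3"
services:
  zookeeper:
    image: zookeeper
    ports: ["2181:2181"]
  kafka:
    image: wurstmeister/kafka
    ports: ["9092:9092"]
    environment:
      KAFKA_ADVERTISED_HOST_NAME: DOCKER_IP_HERE
      KAFKA_ZOOKEEPER_CONNECT: DOCKER_IP_HERE:2181
  collector:
    image: kafkasp          # the image built in the previous step
    ports: ["8080:8080"]
  kafka-manager:
    image: hlebalbau/kafka-manager
    ports: ["9000:9000"]
```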
Create a virtual environment called `.venv` with:

```shell
virtualenv -p python3 .venv
```
Then install the dependencies with:

```shell
./.venv/bin/python3 -m pip install -r requirements.txt
```

Remember that here we are assuming the latest version of Python 3.
Send some events to Kafka. This is where the 'Tracker' and the 'Collector' (the first two of Snowplow's components) show their faces.
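If you don't have a tracker wired up yet, a hand-rolled request is enough to see events flow. Here is a minimal sketch, assuming the standard Snowplow tracker protocol and the collector's `/i` pixel endpoint; the collector address and port 8080 are assumptions, so check your compose file:

```python
import time
import uuid
from urllib.parse import urlencode

# Hypothetical collector address: the port is an assumption, check your setup.
COLLECTOR = "http://DOCKER_IP_HERE:8080"

def page_view_url(page_url, page_title):
    """Build a Snowplow tracker-protocol GET request for a page view.

    The field names (e, url, page, p, dtm, eid) come from the standard
    Snowplow tracker protocol; `/i` is the collector's pixel endpoint.
    """
    params = {
        "e": "pv",                            # event type: page view
        "url": page_url,                      # the page URL
        "page": page_title,                   # the page title
        "p": "web",                           # platform
        "dtm": str(int(time.time() * 1000)),  # device timestamp, in ms
        "eid": str(uuid.uuid4()),             # unique event id
    }
    return COLLECTOR + "/i?" + urlencode(params)

# Firing this URL (e.g. with urllib.request.urlopen) makes the collector
# write the event to Kafka once the stack is up.
print(page_view_url("http://example.com", "Example page"))
```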
Consume the messages produced by the collector, and do whatever you want with them:

```shell
./kafka_consumer_example.py consume DOCKER_IP_HERE:9092 topic_1 group_1
```
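For reference, a minimal version of such a consumer might look like the sketch below. It assumes the `kafka-python` package; the real `kafka_consumer_example.py` may well differ:

```python
"""A minimal sketch of what kafka_consumer_example.py might contain,
assuming it is built on the kafka-python package."""

def parse_cli(argv):
    """Parse `consume <host:port> <topic> <group>` style arguments."""
    if len(argv) != 4 or argv[0] != "consume":
        raise SystemExit(
            "usage: kafka_consumer_example.py consume <host:port> <topic> <group>"
        )
    _, bootstrap, topic, group = argv
    return bootstrap, topic, group

def consume(bootstrap, topic, group):
    # Imported here so the file can be read without kafka-python installed.
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(topic, bootstrap_servers=[bootstrap], group_id=group)
    for msg in consumer:  # blocks, polling the broker for new messages
        # msg.value holds the raw bytes the collector wrote -- do whatever you want!
        print(msg.topic, msg.offset, msg.value[:80])

# Entry point, e.g.: consume(*parse_cli(sys.argv[1:]))
```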
Tip: you can access the Kafka Manager application just to check that the messages are coming in. Kafka Manager was deployed together with the rest of the stack, and is (hopefully) listening in