Machine learning infrastructure for developers: build and deploy scalable TensorFlow applications on AWS without worrying about infrastructure setup, dependency management, or data pipeline orchestration.
Cortex is actively maintained by Cortex Labs. We're a venture-backed team of infrastructure engineers and we're hiring.
## How it works
**Data validation:** validate data to prevent data quality issues early

```yaml
- kind: raw_column
  name: col1
  type: INT_COLUMN
  min: 0
  max: 10
```
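As an illustrative sketch (not Cortex's internals), the raw_column definition above amounts to a type and range check on every value of the column. The function name and return convention here are assumptions for illustration:

```python
def validate_int_column(values, min_value=0, max_value=10):
    """Return the indices of values that violate the declared schema
    (INT_COLUMN with min: 0 and max: 10)."""
    bad = []
    for i, v in enumerate(values):
        # A value fails if it is not an int or falls outside [min_value, max_value].
        if not isinstance(v, int) or not (min_value <= v <= max_value):
            bad.append(i)
    return bad
```

Surfacing these indices before training starts is what lets quality issues be caught early rather than mid-pipeline.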
**Data ingestion:** connect to your data warehouse and ingest data at scale

```yaml
- kind: environment
  name: dev
  data:
    type: csv
    path: s3a://my-bucket/data.csv
    schema: [@col1, @col2, ...]
```
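Conceptually, ingestion maps each row of the CSV onto the declared column schema. A minimal stand-in using only the standard library (the function name is hypothetical; Cortex does this with Spark at scale):

```python
import csv
import io

def ingest_csv(text, schema):
    """Parse CSV text and map each row onto the declared schema columns."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(schema, row)) for row in reader]
```

For example, `ingest_csv("1,2\n3,4\n", ["col1", "col2"])` yields one dict per row keyed by column name.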
**Data transformation:** use custom Python and PySpark code to transform data at scale

```yaml
- kind: transformed_column
  name: col1_normalized
  transformer_path: normalize.py  # Python / PySpark code
  input: @col1
```
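A minimal sketch of the kind of transformation a file like normalize.py might perform (z-score scaling is an assumption here; Cortex's actual transformer interface is not shown):

```python
def normalize(values):
    """Scale values to zero mean and unit standard deviation (z-score)."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]
```

In a real pipeline the same logic would be expressed over a distributed PySpark DataFrame rather than a Python list.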
**Model training:** train models with custom TensorFlow code

```yaml
- kind: model
  name: my_model
  estimator_path: dnn.py  # TensorFlow code
  target_column: @label_col
  input: [@col1_normalized, @col2_indexed, ...]
  hparams:
    hidden_units: [16, 8]
  training:
    batch_size: 32
    num_steps: 10000
```
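The `training` settings above control how data is fed to the model: each of the `num_steps` training steps consumes one batch of `batch_size` examples. An illustrative sketch of that batching loop (the real work happens in the TensorFlow code in dnn.py; this function is hypothetical):

```python
def iter_batches(dataset, batch_size, num_steps):
    """Yield num_steps batches of batch_size examples, cycling over
    the dataset when it is exhausted."""
    i = 0
    for _ in range(num_steps):
        batch = [dataset[(i + j) % len(dataset)] for j in range(batch_size)]
        i = (i + batch_size) % len(dataset)
        yield batch
```

With `batch_size: 32` and `num_steps: 10000`, the model sees 320,000 examples in total, revisiting the dataset as many times as needed.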
**Prediction serving:** deploy models as prediction APIs that scale horizontally

```yaml
- kind: api
  name: my-api
  model: @my_model
  compute:
    replicas: 3
```
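"Scale horizontally" means incoming prediction requests are spread across the replicas declared in the api config (`replicas: 3`). A toy sketch of the simplest such policy, round-robin (the replica names are placeholders, not real endpoints):

```python
import itertools

def round_robin(replicas):
    """Return an iterator that assigns each incoming request to the
    next replica in turn, wrapping around indefinitely."""
    return itertools.cycle(replicas)
```

Each replica then handles roughly one third of the traffic, and adding replicas raises throughput without changing the model.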
**Deploying to AWS:** deploy your pipeline to AWS and make prediction requests

```
$ cortex deploy

Ingesting data ...
Transforming data ...
Training models ...
Deploying API ...

Ready! https://abc.amazonaws.com/my-api
```
**End-to-end machine learning workflow:** Cortex spans the machine learning workflow from feature management to model training to prediction serving.

**Machine learning pipelines as code:** Cortex applications are defined using a simple declarative syntax that enables flexibility and reusability.

**Built for the cloud:** Cortex can handle production workloads and can be deployed in any AWS account in minutes.