Michelangelo-AI is an open-source platform designed to streamline the development, deployment, and monitoring of machine learning models at scale. It offers a comprehensive suite of tools and services that facilitate the entire machine learning lifecycle, from data management to model serving.
Open Source Initiative
As part of our commitment to the ML community, we are open-sourcing an end-to-end lifecycle management system grounded in extensive operational expertise. Our goals are to:
- Drive standardization and interoperability across the ML ecosystem,
- Enable easy adoption of scalable ML solutions in new production use cases,
- Foster innovation and trust through collaboration with partner teams, and
- Cultivate a vibrant and responsible ML culture that empowers the community to build with confidence and speed.
We are incrementally open-sourcing Michelangelo's core capabilities, ensuring each release is production-proven and developer-ready. The documentation on this site reflects the current set of available features and will be continuously updated as new components are added to the open-source repository.
- Feature Management: Efficiently handle large datasets with built-in support for data ingestion, transformation, and storage.
- Model Training: Train models using various algorithms, including support for distributed training across multiple nodes.
- Model Evaluation: Assess model performance with a range of metrics and visualization tools.
- Model Deployment: Seamlessly deploy models to production environments with support for both batch and real-time inference.
- Monitoring and Logging: Continuously monitor model performance and log predictions to ensure reliability and accuracy.
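To illustrate the prediction-logging idea above, here is a minimal, self-contained sketch of a logging wrapper around a model's predict function. The names `log_predictions` and `predict` are hypothetical stand-ins for illustration, not part of the Michelangelo API:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-monitor")

def log_predictions(fn):
    """Log each call's inputs and output so predictions can be audited later."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        logger.info("inputs=%r output=%r", (args, kwargs), result)
        return result
    return wrapper

@log_predictions
def predict(features: list[float]) -> float:
    # Stand-in model: a fixed linear scoring function
    weights = [0.5, -0.25, 1.0]
    return sum(w * x for w, x in zip(weights, features))

score = predict([1.0, 2.0, 3.0])  # also emits a log line
```

In a production system the log sink would be a metrics or observability backend rather than stdout, but the wrapping pattern is the same.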
Follow the Sandbox Setup guide to get a fully functional local environment running.
Here's a quick example of how to define and run an ML pipeline:
```shell
# Clone the repo and install dependencies
git clone https://github.com/michelangelo-ai/michelangelo.git
cd michelangelo/python
poetry install
source .venv/bin/activate

# Spin up a local sandbox cluster
ma sandbox create

# Run the demo pipeline to verify everything works
ma sandbox demo pipeline
```

To define your own pipeline, use the `@task` and `@workflow` decorators:
```python
import michelangelo.uniflow.core as uniflow

@uniflow.task()
def train(learning_rate: float = 0.01) -> str:
    # your training logic here
    return "model_path"

@uniflow.workflow()
def my_pipeline(learning_rate: float = 0.01):
    model = train(learning_rate=learning_rate)
```

For a full walkthrough, see the Getting Started with ML Pipelines guide.
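The decorator pattern used above can be mimicked in plain Python. The following self-contained sketch is not Michelangelo's actual implementation (the `REGISTRY` dict and decorator internals are assumptions for illustration); it only shows how `@task`/`@workflow`-style decorators typically register steps and wire them into an entry point:

```python
import functools

REGISTRY = {}  # hypothetical task registry, keyed by function name

def task():
    """Register the decorated function as a pipeline step."""
    def decorator(fn):
        REGISTRY[fn.__name__] = fn
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def workflow():
    """Mark the decorated function as a pipeline entry point."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@task()
def train(learning_rate: float = 0.01) -> str:
    # Stand-in for real training logic
    return f"model_path(lr={learning_rate})"

@workflow()
def my_pipeline(learning_rate: float = 0.01) -> str:
    return train(learning_rate=learning_rate)

result = my_pipeline(learning_rate=0.05)  # "model_path(lr=0.05)"
```

A real orchestrator would also capture the dependency graph between tasks and schedule them remotely; the registry-plus-wrapper structure is the common core of the pattern.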
See the User Guides in the documentation for instructions on running tests and working with the development environment.
See the Sandbox Setup guide for instructions on running and importing container images into your local cluster.
We welcome contributions to Michelangelo-AI!
If you're interested in contributing, please read our Contributing Guidelines to get started.
This project is licensed under the Apache 2.0 License.
Thank you to the Michelangelo Open Source team for getting this project off the ground, and thank you in advance to our contributors.