Hi, we're MLOps Consulting 👋
We are Luke, Phil and Carlos: MLOps platform, product and engineering specialists.
What We Do
We run bespoke projects for end-users and vendors in MLOps, the emerging field of DevOps for Machine Learning:
- Helping end-user companies evaluate the best MLOps stack for them.
- Integrating MLOps tools together to develop end-to-end MLOps stacks.
- Spinning out open source projects to become MLOps stack components.
Intellisense is a pioneer in the AI-for-mining space, based in Cambridge, UK.
We helped them define user stories and evaluation criteria, select tools, define deployable stacks in Terraform, run a series of evaluations, and pick the platform that best met their MLOps requirements: heterogeneity, reproducibility & provenance.
Kubeflow + Pachyderm
Pachyderm is a data versioning, provenance and pipelining startup based in San Francisco.
MLflow + Kubeflow on Juju
Canonical is the company behind Ubuntu Linux.
We developed the MLflow Juju charm, and worked with their Kubeflow team to integrate it into Kubeflow notebooks so that users can seamlessly log training experiments and models for better reproducibility.
Azure Machine Learning + Pachyderm
AzureML is an end-to-end platform for MLOps on the Microsoft Azure cloud.
We worked with the AzureML product team to integrate Pachyderm for immutable data versioning, architecting a Terraform stack and contributing to an internal Microsoft project written in Rust. The solution is in private preview.
Boxkite from BasisAI
BasisAI is an end-to-end MLOps platform company based in Singapore.
We helped them spin out a key component of the open-source MLOps observability stack: Boxkite, a lightweight library for data & model drift monitoring built on Prometheus and Grafana.
We developed a Kubeflow, MLflow, Prometheus & Grafana stack which showcases Boxkite and can be run from the browser, and helped them promote it to the press, analysts and the MLOps user community.
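The core idea behind drift monitoring can be illustrated in a few lines: capture the distribution of a feature (or model output) at training time, then continuously compare the live distribution against it. The sketch below is a hand-rolled, stdlib-only illustration of that idea using a histogram KL divergence — it is not Boxkite's actual API, which additionally exports the captured distributions as Prometheus metrics for Grafana to alert on:

```python
import math
from collections import Counter

def histogram(values, bins, lo, hi):
    """Bucket values into a normalized histogram over [lo, hi)."""
    width = (hi - lo) / bins
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    total = len(values)
    return [counts.get(i, 0) / total for i in range(bins)]

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q): how far the live distribution q has drifted from the
    training distribution p. Zero means identical; larger means more drift."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Training-time distribution of a feature, captured when the model was fit.
train = histogram([0.1 * i for i in range(100)], bins=10, lo=0.0, hi=10.0)
# Production distribution, skewed toward higher values (simulated drift).
live = histogram([0.05 * i + 5.0 for i in range(100)], bins=10, lo=0.0, hi=10.0)

drift = kl_divergence(train, live)
print(f"drift score: {drift:.3f}")  # noticeably > 0 because the data shifted
```

In a real deployment this comparison runs continuously, and the drift score becomes a time series you can graph and alert on — which is exactly the role Prometheus and Grafana play in the Boxkite stack.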
Chassis + OMI from Modzy
Modzy is a model inference platform based out of Washington, DC.
We developed PoCs for model operations, model drift, model evaluation, and automatic model containerization.
Microsoft's SAME Project
The SAME Project is David Aronchick's latest venture; at Google he helped launch Kubernetes and co-found Kubeflow.
We added declarative infrastructure support to SAME via terrachain, enabling easy deployment of MLflow alongside Kubeflow Pipelines; added metadata auto-logging to the SAME notebook-to-pipeline compiler for better ML governance; and added metadata import/export for better collaboration between ML research teams. Demo.
We helped them develop their DevOps strategy, in particular around massively scalable Kubernetes clusters, and helped prototype tooling for integrating MLOps stacks.
Combinator and Testfaster
Based on our experience end-to-end testing MLOps stacks, we built foundational technology on Firecracker, the microVM engine that powers AWS Lambda, to declaratively define pools of prewarmed Kubernetes clusters running on lightweight VMs on bare metal. We call this technology Testfaster.
We are applying it to power the test-drive capability in Combinator, a project that makes it easier to test-drive, combine & deploy end-to-end MLOps stacks.
Who We Are
Luke Marsden, founder and owner of MLOps Consulting
Luke is a technical leader and entrepreneur who founded the end-to-end MLOps platform company Dotscience. He was Kubernetes SIG lead for cluster-lifecycle, creating kubeadm with Joe Beda, and worked on Docker plugins with Solomon Hykes.
Phil Winder, associate and founder of Winder Research
Phil is one of those rare people who deeply understands both the mathematics of ML and the software engineering best practices of DevOps. He wrote the book on Reinforcement Learning.
Carlos Millán, associate software engineer
Carlos is an extremely sharp software engineer who can develop high quality MLOps software and integrations at speed. He is proficient in Python, Golang and more.
We see a future where AI/ML is pervasive throughout every industry, and sophisticated technology teams assemble their MLOps platform from a set of best-of-breed components, just as software & DevOps teams do today.
The projects we take on generally further this goal: making it easier to integrate best-of-breed components into full end-to-end MLOps stacks that deliver better productivity & governance for ML & MLOps teams.
We believe in applying the best practices of software engineering & DevOps to the MLOps space, where they are sorely lacking in common practice today.
Reproducibility, provenance, CI/CD, observability and version control are all things that software & DevOps teams take for granted.
For AI/ML to emerge from research and deliver true business value, the same problems must be solved for data & ML.
Get in Touch
Interested in scoping out a project?
We'll jump on a video call to explore how we can help, then work together to develop a proposal.