
Hi, we're MLOps Consulting 👋

Luke, Kai, Adam and Phil

We are Luke, Kai, Adam and Phil: LLM and MLOps platform, product and engineering specialists.


What We Do

We run bespoke projects in LLMs (Large Language Models) and MLOps (DevOps for Machine Learning):

  • Developing LLM-powered apps and stacks, such as Legal AI assistants.
  • Helping end-user companies evaluate the best MLOps stack for them.
  • Integrating MLOps tools together to develop end-to-end MLOps stacks.
  • Spinning out open source projects to become MLOps stack components.

Case Studies

Legal AI

We are helping a team of lawyers develop novel AI-powered tools to help enterprises and SaaS companies generate more revenue and reduce risk.

In the process we are developing tooling around the OpenAI API and vector databases for contract analysis.
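
As an illustration of the general pattern (not the client's actual tooling), here is a minimal sketch of contract-clause retrieval with the openai Python SDK; the model names and clause data are assumptions, and a production system would use a dedicated vector database rather than the in-memory index shown here.

    from openai import OpenAI
    import numpy as np

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical clauses; a real system would load these from parsed contracts.
    clauses = [
        "Either party may terminate this agreement with 30 days written notice.",
        "The supplier's total liability is capped at the fees paid in the last 12 months.",
    ]

    def embed(texts):
        # Embedding model name is an assumption; use whichever model fits.
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    clause_vectors = embed(clauses)  # a vector database would store these

    question = "What is the liability cap?"
    q = embed([question])[0]

    # Cosine similarity against every stored clause; a vector database does this at scale.
    scores = clause_vectors @ q / (np.linalg.norm(clause_vectors, axis=1) * np.linalg.norm(q))
    best_clause = clauses[int(np.argmax(scores))]

    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat-capable model works
        messages=[
            {"role": "system", "content": "Answer using only the provided contract clause."},
            {"role": "user", "content": f"Clause: {best_clause}\n\nQuestion: {question}"},
        ],
    )
    print(answer.choices[0].message.content)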


Franchise Cloud

We are helping Franchise Cloud, which started with a tech stack for franchisee/franchisor management, to add AI capabilities to their stack.

This is enabling AI use cases for marketing, onboarding and training franchisees, which is revolutionizing their business.


Intellisense

Intellisense is a pioneer in AI for mining, based in Cambridge, UK.

We helped define user stories and evaluation criteria, select candidate tools, build deployable stacks in Terraform, run a series of evaluations, and pick the platform that best met their MLOps requirements of heterogeneity, reproducibility & provenance.


Kubeflow + Pachyderm

Pachyderm is a data versioning, provenance and pipelining startup based in San Francisco.

We helped them integrate Pachyderm with Kubeflow Pipelines, creating the KFData project. We presented a demo in the Kubeflow Pipelines community meeting.


MLflow + Kubeflow on Juju

Canonical is the company behind Ubuntu Linux.

We developed the MLflow Juju charm, and worked with their Kubeflow team to integrate it into Kubeflow notebooks so that users can seamlessly log training experiments and models for better reproducibility.
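
For a flavour of what that integration enables, here is a minimal sketch of logging from a notebook with the standard MLflow Python API; the tracking URI, experiment name and values are placeholders for whatever the charm-deployed MLflow server and the user's own training code provide.

    import mlflow

    # Placeholder: the address where the charm-deployed tracking server is exposed.
    mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
    mlflow.set_experiment("notebook-training-demo")

    with mlflow.start_run():
        # Example hyperparameters and metrics; a real notebook logs its own.
        mlflow.log_param("learning_rate", 0.01)
        mlflow.log_param("epochs", 10)
        mlflow.log_metric("accuracy", 0.93)
        # A trained model can be logged as an artifact too, e.g.:
        # mlflow.sklearn.log_model(model, "model")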


Azure Machine Learning + Pachyderm

AzureML is an end-to-end platform for MLOps on the Microsoft Azure cloud.

We worked with the AzureML product team to integrate Pachyderm for immutable data versioning, architecting a Terraform stack and contributing to an internal Microsoft project written in Rust. The solution is in private preview.


Boxkite from BasisAI

BasisAI is an end-to-end MLOps platform company based in Singapore.

We helped them spin out a key component of the open-source MLOps observability stack: Boxkite, a lightweight tool for monitoring data & model drift using Prometheus and Grafana.

We developed a Kubeflow, MLflow, Prometheus & Grafana stack which showcases Boxkite and can be run from the browser. We helped them promote it to press, analysts and the MLOps user community.
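
To make the drift-monitoring idea concrete, here is an illustrative sketch (this is not Boxkite's own API) of the underlying approach: export the distribution of live predictions as a Prometheus histogram so that Grafana can compare serving-time traffic against the training-time distribution.

    from prometheus_client import Histogram, start_http_server

    # Histogram of model outputs at serving time; the bucket layout is an assumption.
    prediction_hist = Histogram(
        "model_prediction_probability",
        "Distribution of model output probabilities at serving time",
        buckets=[i / 10 for i in range(11)],
    )

    start_http_server(8000)  # expose /metrics for Prometheus to scrape

    def record_prediction(probability: float) -> None:
        # Call this for every prediction the model serves.
        prediction_hist.observe(probability)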


Chassis + OMI from Modzy

Modzy is a model inference platform company based in Washington, DC.

We developed PoCs for model operations, model drift, model evaluation, and automatic model containerization.

We developed the Open Model Interface as a spec for model serving, and open-sourced Chassis, the missing link between ML teams and DevOps. Chassis integrates MLflow with KFServing and Modzy.


Microsoft's SAME Project

The SAME Project is David Aronchick's latest venture, after he helped launch Kubernetes and co-founded Kubeflow at Google.

We added declarative infrastructure support to SAME via terrachain, enabling easy deployment of MLflow alongside Kubeflow Pipelines; added metadata auto-logging to the SAME notebook-to-pipeline compiler for better ML governance; and added metadata import/export for better collaboration between ML research teams. Demo.


Grid.ai

Grid.ai, based in New York, is a company from the creators of the hugely successful PyTorch Lightning project.

We helped them develop their DevOps strategy, in particular around massively scalable Kubernetes clusters, and helped prototype tooling for integrating MLOps stacks.


Combinator and Testfaster

Based on our experience end-to-end testing MLOps stacks, we developed foundational technology around Firecracker (the microVM engine that powers AWS Lambda) to declaratively define pools of prewarmed K8s clusters running on lightweight VMs on bare metal. We called this technology Testfaster.

We are applying this technology to power the test drive capability in Combinator, a project to make it easier to test drive, combine & deploy end-to-end MLOps stacks.


Who We Are

Luke Marsden


Luke Marsden, founder and owner of MLOps Consulting

Luke is a technical leader and entrepreneur who founded the end-to-end MLOps platform company Dotscience. He was the Kubernetes SIG Cluster Lifecycle lead, creating kubeadm with Joe Beda, and worked on Docker plugins with Solomon Hykes.

Phil Winder


Phil Winder, associate, and founder of Winder.ai

Phil is one of those rare people who deeply understand both the mathematics of ML and the software engineering best practices of DevOps and MLOps. He wrote the book on Reinforcement Learning.

Kai Davenport


Kai Davenport, associate senior engineer

Kai is an extremely sharp software engineer who can develop high quality MLOps software and integrations at speed. He is proficient in Python, Golang and more.

Adam Knight


Adam Knight, LLM prototype builder

Adam is skilled at identifying and prototyping innovative solutions for complex problems. Co-founder of Astonish Email, Nocode, and Franchise Cloud, Adam has experience across various industries and excels at understanding use cases and bulldozing problems until a solution is found.


Blog

Check out our Substack, where we talk about machine learning, language models and applying this awesome tech to real-world problems: MLOps Consulting Blog


Principles

We see a future where AI/ML is pervasive throughout every industry, and sophisticated technology teams assemble their MLOps platform from a set of best-of-breed components, just as software & DevOps teams do today.

The projects we take on are generally aligned around furthering these best-of-breed components and making them easier to integrate into full end-to-end MLOps stacks that enable better productivity & governance for ML & MLOps teams.

We believe in applying the best practices of software engineering & DevOps to the MLOps space, where they are sorely lacking in common practice today.

Reproducibility, provenance, CI/CD, GitOps, observability and version control are all things that software & DevOps teams take for granted.

For AI/ML to emerge from research and deliver true business value, the same problems must be solved for data & ML.


Get in Touch

Interested in scoping out a project?

Drop Luke an email at luke@mlops.consulting or come chat to Luke Marsden on the MLOps.community Slack.

We'll jump on a video call to explore how we can help, then work together to develop a proposal.