Announcing Charmed Kubeflow 1.10

We are thrilled to announce the release of Charmed Kubeflow 1.10, Canonical’s latest update to the widely-adopted open source MLOps platform. This release integrates significant improvements from the upstream Kubeflow 1.10 project, while also bringing a suite of additional capabilities targeted towards enterprise deployments. Charmed Kubeflow 1.10 empowers machine learning practitioners and teams to operationalize machine learning workflows more efficiently, securely, and seamlessly than ever.

Highlights from upstream Kubeflow 1.10

Advanced hyperparameter tuning with Trainer 2.0 and Katib

Kubeflow Trainer 2.0 introduces enhanced capabilities designed to simplify hyperparameter optimization. In combination with Katib, a new high-level API specifically supports hyperparameter tuning for large language models (LLMs), reducing manual intervention and accelerating fine-tuning workflows. Additionally, Katib now supports:

  • Multiple parameter distribution types, including log-uniform, normal, and log-normal distributions.
  • Push-based metrics collection mechanism, enhancing performance and simplifying administration.
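As a sketch of the new distribution support, a Katib experiment's search space can declare a distribution per parameter. The manifest below is illustrative (experiment name and parameter are invented); field names follow the upstream v1beta1 Experiment API, where `distribution` complements the existing `min`/`max` bounds:

```yaml
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: lr-search            # illustrative name
spec:
  parameters:
    - name: learning_rate
      parameterType: double
      feasibleSpace:
        min: "1e-5"
        max: "1e-1"
        distribution: logUniform   # also: uniform, normal, logNormal
```

Sampling learning rates log-uniformly, rather than uniformly, spreads trials evenly across orders of magnitude, which is usually what you want for this kind of parameter.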

Improved scalability and flexibility in Kubeflow Pipelines

Kubeflow Pipelines 2.4.1 includes key enhancements such as:

  • Support for placeholders in resource limits, allowing dynamic and adaptable pipeline configurations.
  • Loop parallelism with configurable parallelism limits, facilitating massively parallel execution while maintaining system stability.
  • Reliable resolution of outputs from nested DAG components, simplifying pipeline management and reuse.
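The loop-parallelism semantics (fan out over a list of items while capping how many run at once) can be sketched in plain Python. The `train_shard` step below is a hypothetical stand-in for a pipeline component; in Kubeflow Pipelines itself this pattern maps to `dsl.ParallelFor(items, parallelism=N)`:

```python
from concurrent.futures import ThreadPoolExecutor

def train_shard(learning_rate):
    # Hypothetical per-item step, standing in for a pipeline component.
    return learning_rate * 2

def run_parallel_loop(items, parallelism):
    # Fan out over `items` while capping concurrent executions at
    # `parallelism`, mirroring ParallelFor's parallelism limit.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(train_shard, items))

results = run_parallel_loop([0.1, 0.01, 0.001], parallelism=2)
print(results)
```

The cap is what keeps massively parallel fan-outs from overwhelming the cluster: only `parallelism` tasks execute concurrently, and the rest queue.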

Next-level model serving with KServe

KServe 0.14.1 introduces powerful features to further streamline model deployment:

  • New Python SDK with asynchronous inference capabilities.
  • Stable OCI storage integration for robust model management.
  • Model caching leveraging local node storage for rapid deployment of large models.
  • Direct integration with Hugging Face, allowing seamless deployment using the Hugging Face Hub.
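As an illustration of the Hugging Face integration, an InferenceService can point KServe's `huggingface` model format at a Hub model ID, and the runtime pulls the model directly from the Hub. The service name and model ID below are illustrative:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: qwen-chat            # illustrative name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface
      args:
        - --model_name=qwen-chat
        - --model_id=Qwen/Qwen2.5-0.5B-Instruct   # any Hub model ID
```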

I am very excited to see continued collaborations and new features from KServe being integrated in the Kubeflow 1.10 release, particularly the model cache feature and the integration with Hugging Face, which enable more streamlined deployment and efficient autoscaling for both predictive and generative models. We are actively working with ecosystem projects and communities like vLLM, Kubernetes WG Serving, and Envoy to tackle the growing challenges of serving LLMs.

— Yuan Tang
Kubeflow Steering Committee member

The Kubeflow ecosystem is also growing – it recently welcomed Spark, and the Feast community is actively working on a donation plan as well.

Feast is the reference open-source feature store for AI/ML, and when combined with Kubeflow, it provides a seamless end-to-end MLOps experience. I am excited to see the two projects working more closely together to unlock powerful use cases, especially for Generative AI and Retrieval-Augmented Generation (RAG). Kubeflow and Feast will enable data scientists to efficiently manage features, accelerate model development, and get models to production faster.

— Francisco Javier Arceo
Kubeflow Steering Committee member & Feast Maintainer

Added value of Charmed Kubeflow 1.10

We don’t just package upstream components: we take the care needed to ensure a seamless production deployment experience for our customers, we develop open source solutions for improved orchestration and integration with ancillary services, and we always take our customers’ feedback into consideration. Here is how we have improved Charmed Kubeflow 1.10 even further:

  • Added an automated and simplified way to manage your Kubeflow profiles via GitOps, with our new GitHub Profile Automator charm. This mechanism allows you to declaratively define your Kubeflow profiles in a single place. It also lays the foundation for a seamless authentication experience with external identity providers, which is particularly useful when deploying Kubeflow in the public cloud.
  • We’ve enabled a high availability option for the Istio ingress, to improve the resilience of your deployments and make sure you can handle a high traffic volume with confidence.
  • You can now leverage more application health-check endpoints and alerting rules for KServe, Istio, and other components. With every release, we strive to provide more ways to monitor the health status of your deployment.
  • Charmed Kubeflow is more secure than ever. Most of our images are now based on Ubuntu and our Rocks technology, leveraging Canonical’s security patching pipelines to keep the number of CVEs as low as possible.
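With the GitOps approach described above, a Kubeflow Profile lives as a plain manifest in the Git repository that the automator charm watches, and changes land in the cluster when they are merged. The profile and owner below are illustrative:

```yaml
apiVersion: kubeflow.org/v1
kind: Profile
metadata:
  name: ml-team-alpha        # illustrative team namespace
spec:
  owner:
    kind: User
    name: alice@example.com  # illustrative identity
```

Keeping profiles in version control gives you review, audit history, and rollback for namespace and access management, instead of ad-hoc changes made directly against the cluster.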

Canonical’s AI/ML Ecosystem

Canonical works closely with a broad range of partners to enable open source technology at every scale and in any environment. Charmed Kubeflow runs seamlessly on any CNCF-certified Kubernetes distribution, providing a lot of flexibility to choose the best environment that fits your needs. Additionally, we’re working towards bringing Kubeflow as a managed offering in the public cloud, significantly cutting deployment time and operational costs. For data scientists looking to quickly start experimenting right on their Ubuntu laptops or workstations, our Data Science Stack provides a straightforward, ready-to-use solution. Lastly, we’re developing a robust, standalone model-serving solution built on Kubernetes, ideal for secure, mission-critical deployments and extending reliable inference capabilities even to the edge.

Get started with Charmed Kubeflow 1.10

Whether you’re a seasoned MLOps practitioner or new to Kubeflow, now is the perfect time to experience these enhancements firsthand. Install Charmed Kubeflow 1.10 today and elevate your machine learning workflows.

Explore the full details and installation instructions in our release notes.

Contact Canonical for enterprise support or managed services. 

To learn more about Canonical’s AI solutions, visit canonical.com/solutions/ai.

Kubeflow

Run Kubeflow anywhere, easily

With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.

Charmed Kubeflow is a collection of Python operators that define the integration of the apps inside Kubeflow, like katib or pipelines-ui.

Use Kubeflow on-prem, desktop, edge, public cloud and multi-cloud.

Learn more about Charmed Kubeflow ›


What is Kubeflow?

Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.

Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes’ ability to run independent and configurable steps with machine-learning-specific frameworks and libraries.

Learn more about Kubeflow ›


Install Kubeflow

The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.

You can install Kubeflow on your workstation, local server or public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high availability.

Install Kubeflow ›

