AI Update

  • KServe

    KServe is an open-source, cloud-agnostic platform that simplifies the deployment and serving of machine learning (ML) and generative AI models on Kubernetes. It provides a standardized API and framework for running models from various ML toolkits at scale. 

    How KServe works

    KServe provides a Kubernetes Custom Resource Definition (CRD) called InferenceService to make deploying and managing models easier. A developer specifies their model’s requirements in a YAML configuration file, and KServe automates the rest of the process. 
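
    For example, a minimal InferenceService manifest looks roughly like the sketch below (the model name and storage location are illustrative, in the style of the KServe documentation examples):

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: sklearn-iris                 # illustrative name
    spec:
      predictor:
        model:
          modelFormat:
            name: sklearn
          storageUri: gs://kfserving-examples/models/sklearn/1.0/model  # illustrative model location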

    The platform has two key architectural components: 

    • Control Plane: Manages the lifecycle of the ML models, including versioning, deployment strategies, and automatic scaling.
    • Data Plane: Executes the inference requests with high performance and low latency. It supports both predictive and generative AI models and adheres to standardized API protocols. 

    Key features

    • Standardized API: Provides a consistent interface for different model types and frameworks, promoting interoperability.
    • Multi-framework support: KServe supports a wide range of ML frameworks, including:
      • TensorFlow
      • PyTorch
      • Scikit-learn
      • XGBoost
      • Hugging Face (for large language models)
      • NVIDIA Triton (for high-performance serving)
    • Flexible deployment options: It supports different operational modes to fit specific needs:
      • Serverless: Leverages Knative for request-based autoscaling and can scale down to zero when idle to reduce costs.
      • Raw Deployment: A more lightweight option without Knative, relying on standard Kubernetes for scaling.
      • ModelMesh: An advanced option for high-density, multi-model serving scenarios.
    • Advanced deployment strategies: KServe enables sophisticated rollouts for production models (see the sketch after this list), including:
      • Canary rollouts: Gradually shifting traffic from an old model version to a new one.
      • A/B testing: Routing traffic between different model versions to compare their performance.
      • Inference graphs: Building complex pipelines that can combine multiple models or perform pre/post-processing steps.
    • Scalability and cost efficiency: By automatically scaling model instances up or down based on traffic, KServe optimizes resource usage and costs, especially with its scale-to-zero capability. 
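
    To make the rollout strategies above concrete, here is a rough sketch of a canary rollout (the model, storage location, and 10% split are assumptions for illustration, not taken from the original text):

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: sklearn-iris
      # The default Serverless (Knative) mode provides the revision-based traffic split below;
      # the serving.kserve.io/deploymentMode annotation can select "RawDeployment" or "ModelMesh" instead.
    spec:
      predictor:
        canaryTrafficPercent: 10        # 10% of traffic goes to the new revision, 90% stays on the previous one
        model:
          modelFormat:
            name: sklearn
          storageUri: gs://kfserving-examples/models/sklearn/2.0/model   # illustrative new model version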

    Core components

    KServe is often used in combination with other cloud-native technologies to provide a complete solution: 

    • Kubernetes: The foundation on which KServe operates, managing containerized model instances.
    • Knative: An optional but commonly used component that provides the serverless functionality for request-based autoscaling.
    • Istio: A service mesh that provides advanced networking, security, and traffic management capabilities, such as canary deployments.
    • ModelMesh: An intelligent component used for high-density, multi-model serving by managing the loading and unloading of models from memory. 
  • Safeguarding Generative AI: Deploying Retrieval Augmented Generation (RAG) Applications on Kubernetes with Confidential Computing and Ephemeral Containers

    Deploying AI applications, particularly generative AI models like those used in Retrieval Augmented Generation (RAG) systems, on Kubernetes presents unique challenges around security, performance, and resilience. Traditional deployment strategies often fall short when handling sensitive data or demanding low-latency inference.

    This blog post explores a modern approach: leveraging confidential computing and ephemeral containers to enhance the security posture and performance of RAG applications deployed on Kubernetes. We’ll dive into the practical aspects of implementation, focusing on specific tools and technologies, and referencing real-world scenarios. 🚀


    The core of a RAG application involves retrieving relevant context from a knowledge base to inform the generation of responses by a large language model (LLM). This often means handling sensitive documents, proprietary data, or personally identifiable information (PII). Simply securing the Kubernetes cluster itself isn’t always enough; data breaches can occur from compromised containers or unauthorized access to memory. Confidential computing offers a solution by encrypting data in use, leveraging hardware-based security enclaves to isolate sensitive workloads. Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV) are prominent technologies enabling this.

    To integrate confidential computing into a RAG deployment on Kubernetes, we can utilize the Enclave Manager for Kubernetes (EMK). EMK orchestrates the deployment and management of enclave-based containers, ensuring that only authorized code can access the decrypted data within the enclave. Let’s consider an example using Intel SGX.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rag-app-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: rag-app
      template:
        metadata:
          labels:
            app: rag-app
          annotations:
            # Enables SGX attestation
            attestation.kubernetes.io/policy: "sgx-attestation-policy"
        spec:
          containers:
          - name: rag-app-container
            image: my-repo/rag-app:latest
            resources:
              limits:
                sgx.intel.com/enclave: "1"
            env:
            - name: VECTOR_DB_ENDPOINT
              value: "internal-vector-db:6379"

    In this example, the `sgx.intel.com/enclave: "1"` resource limit tells Kubernetes to schedule the container on a node with available SGX enclaves. The `attestation.kubernetes.io/policy: "sgx-attestation-policy"` annotation triggers the EMK to verify the integrity of the enclave code before allowing the container to run, using a defined attestation policy. This policy confirms that the code executing within the enclave is the intended, verified code, protecting your LLM and retrieval components from unauthorized access even if an attacker were to gain access to the Kubernetes node they run on.


    Performance

    Beyond security, performance is critical for RAG applications. Users expect low-latency responses, which necessitates optimized resource utilization and efficient data handling. Ephemeral containers, which graduated to beta in Kubernetes 1.23 and became stable in 1.25, offer a powerful mechanism for debugging and troubleshooting running pods *without* modifying the container image itself. They can be invaluable for performance optimization, especially when dealing with complex AI workloads. Their real value for performance work, however, lies in a more strategic use: running specialized profiling and diagnostic tooling *alongside* the main application container, only when needed.

    Imagine a scenario where your RAG application experiences intermittent performance bottlenecks during peak usage. Instead of permanently bloating the application container with performance monitoring tools, you can dynamically inject an ephemeral container equipped with profiling tools like `perf` or `bcc`. These tools can then be used to gather performance data in real-time, identifying the source of the bottleneck. The best part? The profiling container is removed once the performance issue is resolved, minimizing resource overhead and maintaining the application’s lean profile.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rag-app-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: rag-app
      template:
        metadata:
          labels:
            app: rag-app
        spec:
          containers:
          - name: rag-app-container
            image: my-repo/rag-app:latest
            ports:
            - containerPort: 8080

    To inject an ephemeral container into one of the deployment's pods:

    kubectl debug -it <rag-app-pod-name> --image=my-repo/profiling-tools:latest --target=rag-app-container



    This command adds a new ephemeral container to the targeted pod (one of the pods created by `rag-app-deployment`). kubectl generates a name for the ephemeral container automatically, but you can choose one explicitly with `--container`. The `--target` flag shares the process namespace of the running container you want to profile, so the profiling tools can see its processes.
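
    For reference, after the command runs, the pod's spec gains an entry roughly like the following (the generated name is illustrative):

    # Illustrative excerpt of the pod spec after injecting the ephemeral container
    spec:
      ephemeralContainers:
      - name: debugger-abc12                      # auto-generated unless --container is set
        image: my-repo/profiling-tools:latest
        targetContainerName: rag-app-container    # set by the --target flag
        stdin: true
        tty: true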

    In a real-world implementation, a financial institution uses a RAG application to generate personalized investment advice. It deploys its LLM on a Kubernetes cluster enhanced with Intel SGX, and the sensitive financial data used for context retrieval is processed within the secure enclave, protecting it from unauthorized access. Furthermore, it uses ephemeral containers to monitor and optimize the performance of its vector database, ensuring low-latency retrieval of relevant information. ✅


    Conclusion

    Deploying RAG applications on Kubernetes requires a holistic approach that prioritizes security, performance, and resilience. By leveraging confidential computing with tools like EMK, you can protect sensitive data in use and maintain compliance with regulatory requirements. Ephemeral containers offer a flexible and efficient way to diagnose and optimize performance bottlenecks, ensuring a smooth and responsive user experience. Combining these technologies allows you to create a robust and secure foundation for your generative AI applications, enabling them to deliver valuable insights while safeguarding sensitive information. This strategy is essential for organizations looking to harness the power of AI in a responsible and secure manner. 🛡️

  • AI Model Serving with Kubeflow on Kubernetes using Multi-Tenancy and GPU Sharing

    👋 Welcome, fellow DevOps engineers! In today’s fast-paced world of AI, deploying and managing AI models efficiently and securely is crucial. Many organizations are adopting Kubernetes to orchestrate their AI workloads. This post dives into a specific scenario: deploying a model serving application using Kubeflow on Kubernetes, focusing on multi-tenancy and GPU sharing to enhance security, performance, and resource utilization. We will explore practical deployment strategies, specific tools, and real-world implementations.


    Serving AI models at scale often requires significant compute resources, especially GPUs. In a multi-tenant environment, different teams or projects share the same Kubernetes cluster. This presents challenges related to security, resource isolation, and fair resource allocation. Kubeflow, a machine learning toolkit for Kubernetes, provides robust solutions for addressing these challenges. Using Kubeflow’s model serving component, combined with Kubernetes namespace isolation and GPU sharing technologies, allows for secure and efficient model deployment.

    Let’s consider a scenario where two teams, Team Alpha and Team Beta, need to deploy their respective AI models on the same Kubernetes cluster. Team Alpha’s model requires high GPU resources for real-time inference, while Team Beta’s model is less resource-intensive and can tolerate lower GPU availability. To address this, we will leverage Kubernetes namespaces for isolation and NVIDIA’s Multi-Instance GPU (MIG) for GPU sharing.

    First, we create separate namespaces for each team:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-alpha
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-beta

    Next, we configure ResourceQuotas and LimitRanges within each namespace to enforce resource constraints. This prevents one team from consuming all available resources, ensuring fair allocation. For example, we might allocate a higher GPU quota to Team Alpha due to their higher resource requirements:

    # ResourceQuota for Team Alpha
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gpu-quota
      namespace: team-alpha
    spec:
      hard:
        requests.nvidia.com/gpu: "2" # Allow up to 2 GPUs (extended resources are quota'd via the requests. prefix)
    ---
    # ResourceQuota for Team Beta
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gpu-quota
      namespace: team-beta
    spec:
      hard:
        requests.nvidia.com/gpu: "1" # Allow up to 1 GPU
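
    The paragraph above also mentions LimitRanges; a minimal sketch of per-container defaults for Team Beta might look like this (the CPU and memory values are illustrative assumptions, not from the original post):

    # Illustrative LimitRange: default CPU/memory requests and limits for Team Beta containers
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: default-limits
      namespace: team-beta
    spec:
      limits:
      - type: Container
        default:            # applied as limits when a container specifies none
          cpu: "1"
          memory: 2Gi
        defaultRequest:     # applied as requests when a container specifies none
          cpu: 250m
          memory: 512Mi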

    To enable GPU sharing, we’ll leverage NVIDIA’s MIG feature (available on A100 and newer GPUs). MIG allows a single physical GPU to be partitioned into multiple independent instances, each with its own dedicated memory and compute resources. We can configure the Kubernetes node to expose MIG devices as resources. This usually requires installing the NVIDIA device plugin and configuring the node’s MIG configuration.
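
    As a rough illustration of that node configuration (a sketch in the style of an NVIDIA mig-parted profile file; the profile name and layout are assumptions, not from the original post):

    # Illustrative mig-parted-style profile: split each GPU into seven 1g.5gb instances
    version: v1
    mig-configs:
      all-1g.5gb:
        - devices: all
          mig-enabled: true
          mig-devices:
            "1g.5gb": 7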

    For example, if we have an A100 GPU, we can partition it into seven 1g.5gb MIG instances. We then expose these as schedulable resources in Kubernetes. This allows different pods, even within the same namespace, to request specific MIG instances. The `nvidia.com/mig-1g.5gb` resource name is then used in pod specifications.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: alpha-model-serving
      namespace: team-alpha
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: alpha-model
      template:
        metadata:
          labels:
            app: alpha-model
        spec:
          containers:
          - name: model-server
            image: your-alpha-model-image:latest # Replace with your model image
            resources:
              limits:
                nvidia.com/mig-1g.5gb: 1 # Request one 1g.5gb MIG instance
            ports:
            - containerPort: 8080

    Kubeflow provides various model serving options, including KFServing (since renamed KServe) and Triton Inference Server. KServe integrates seamlessly with Kubernetes and provides features like auto-scaling, canary deployments, and request logging. Triton Inference Server is also a popular choice for maximizing inference throughput. Using KServe, the model deployment becomes more streamlined:

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: alpha-model
      namespace: team-alpha
    spec:
      predictor:
        containers:
        - image: your-alpha-model-image:latest # Replace with your model image
          name: predictor
          resources:
            limits:
              nvidia.com/mig-1g.5gb: 1

    For enhanced security, consider using network policies to restrict traffic between namespaces. This prevents unauthorized access to models and data. Implement role-based access control (RBAC) to control who can create, modify, and delete resources within each namespace. Regularly audit logs and monitor resource utilization to identify potential security breaches or performance bottlenecks. Implement data encryption at rest and in transit to protect sensitive model data. Tools like HashiCorp Vault can be integrated to securely manage secrets and credentials required by the model serving application.
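
    As a minimal sketch of the network-policy idea above (namespace name reused from the earlier examples; the "allow only same-namespace traffic" scope is an illustrative choice):

    # Illustrative NetworkPolicy: only allow ingress to team-alpha pods from within the namespace
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace-only
      namespace: team-alpha
    spec:
      podSelector: {}            # applies to all pods in team-alpha
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}        # traffic from pods in the same namespace only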


    Conclusion

    Real-world implementations of this approach are seen across various industries. Financial institutions use it to securely deploy fraud detection models, while healthcare providers leverage it for medical image analysis. E-commerce companies use multi-tenancy and GPU sharing to serve personalized recommendation models to different customer segments efficiently. Companies such as NVIDIA itself, as well as cloud providers like AWS, Google, and Azure, actively promote and provide services around Kubeflow and GPU sharing.

    By adopting a multi-tenant architecture with Kubernetes namespaces, resource quotas, and GPU sharing technologies like NVIDIA MIG, organizations can achieve a secure, high-performance, and resilient AI model serving platform. This approach optimizes resource utilization, reduces costs, and accelerates the deployment of AI-powered applications. Remember to continuously monitor, adapt, and improve your deployment strategy to stay ahead of the curve in the ever-evolving world of AI and Kubernetes! 🚀