Alternatively, you can configure a Service Connector through the ZenML dashboard:
Note: Please remember to grant the entity associated with your cloud credentials permissions to access the Kubernetes cluster and to list accessible Kubernetes clusters. For a full list of permissions required to use an AWS Service Connector to access one or more Kubernetes clusters, please refer to the documentation for your Service Connector of choice or read the documentation available in the interactive CLI commands and dashboard. Service Connectors support many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the Kubernetes cluster that you want to use for your Seldon Core Model Deployer by running e.g.:
zenml service-connector list-resources --resource-type kubernetes-cluster
Example Command Output
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
CONNECTOR ID                         | CONNECTOR NAME | CONNECTOR TYPE | RESOURCE TYPE      | RESOURCE NAMES
bdf1dc76-e36b-4ab4-b5a6-5a9afea4822f | eks-zenhacks   | aws            | kubernetes-cluster | zenhacks-cluster
Source: https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon
failure_hook_source and success_hook_source
The source of the failure and success hooks can be specified.
Step-specific configuration
A lot of pipeline-level configuration can also be applied at a step level (as we have already seen with the enable_cache flag). However, there is some configuration that is step-specific, meaning it cannot be applied at a pipeline level, but only at a step level.
experiment_tracker: Name of the experiment_tracker to enable for this step. This experiment_tracker should be defined in the active stack with the same name.
step_operator: Name of the step_operator to enable for this step. This step_operator should be defined in the active stack with the same name.
outputs: This is configuration of the output artifacts of this step. This is further keyed by output name (by default, step outputs are named output). The most interesting configuration here is the materializer_source, which is the UDF path of the materializer in code to use for this output (e.g. materializers.some_data.materializer.materializer_class). Read more about this source path here.
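For illustration, here is a hedged sketch of how these step-level keys might appear in a pipeline YAML configuration file (the step, component, and materializer names are placeholders, not verbatim examples from the docs):
steps:
  my_trainer_step:
    experiment_tracker: "mlflow_tracker"  # must exist in the active stack
    step_operator: "vertex_ai"            # must exist in the active stack
    outputs:
      output:                             # default output name
        materializer_source: "materializers.some_data.materializer.materializer_class"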
Source: https://docs.zenml.io/how-to/use-configuration-files/what-can-be-configured
Some additional important configuration parameters:
namespace is the namespace under which the driver and executor pods will run.
service_account is the service account that will be used by various Spark components (to create and watch the pods).
Additionally, the _backend_configuration method is adjusted to handle the Kubernetes-specific configuration.
When to use it
You should use the Spark step operator:
when you are dealing with large amounts of data.
when you are designing a step that can benefit from distributed computing paradigms in terms of time and resources.
How to deploy it
To use the KubernetesSparkStepOperator you will need to set up a few things first:
Remote ZenML server: See the deployment guide for more information.
Kubernetes cluster: There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure. For AWS, you can follow the Spark EKS Setup Guide below.
Spark EKS Setup Guide
The following guide will walk you through how to spin up and configure an Amazon Elastic Kubernetes Service cluster with Spark on it:
EKS Kubernetes Cluster
Follow this guide to create an Amazon EKS cluster role.
Follow this guide to create an Amazon EC2 node role.
Go to the IAM website, and select Roles to edit both roles.
Attach the AmazonRDSFullAccess and AmazonS3FullAccess policies to both roles.
Go to the EKS website.
Make sure the correct region is selected on the top right.
Click on Add cluster and select Create.
Enter a name and select the cluster role for Cluster service role.
Keep the default values for the networking and logging steps and create the cluster.
Note down the cluster name and the API server endpoint:
EKS_CLUSTER_NAME=<EKS_CLUSTER_NAME>
EKS_API_SERVER_ENDPOINT=<API_SERVER_ENDPOINT>
After the cluster is created, select it and click on Add node group in the Compute tab.
Enter a name and select the node role.
For the instance type, we recommend t3a.xlarge, as it provides up to 4 vCPUs and 16 GB of memory.
Source: https://docs.zenml.io/stack-components/step-operators/spark-kubernetes
Connect to a server
Various means of connecting to ZenML.
Once ZenML is deployed, there are various ways to connect to it.
Source: https://docs.zenml.io/v/docs/how-to/connecting-to-zenml
Get arbitrary artifacts in a step
Not all artifacts need to come through the step interface from direct upstream steps.
As described in the metadata guide, artifacts and their metadata can be fetched with the client, and this is how you would do so within a step. This allows you to fetch artifacts from other upstream steps or even completely different pipelines.
from zenml.client import Client
from zenml import step
@step
def my_step():
    client = Client()
    # Directly fetch an artifact version and read one of its metadata values
    output = client.get_artifact_version("my_dataset", "my_version")
    accuracy = output.run_metadata["accuracy"].value
This is one of the ways you can access artifacts that have already been created and stored in the artifact store. This can be useful when you want to use artifacts from other pipelines or steps that are not directly upstream.
See Also
Managing artifacts - learn about the ExternalArtifact type and how to pass artifacts between steps.
Source: https://docs.zenml.io/how-to/handle-data-artifacts/get-arbitrary-artifacts-in-a-step
RESOURCE TYPE      | RESOURCE NAMES
blob-container     | az://demo-zenmlartifactstore
kubernetes-cluster | demo-zenml-demos/demo-zenml-terraform-cluster
docker-registry    | demozenmlcontainerregistry.azurecr.io
```
register and connect an Azure Blob Storage Artifact Store Stack Component to an Azure blob container:
```sh
zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore
```
Example Command Output
```
Successfully registered artifact_store `azure-demo`.
```
```sh
zenml artifact-store connect azure-demo --connector azure-service-principal
```
Example Command Output
```
Successfully connected artifact store `azure-demo` to the following resources:
CONNECTOR ID                         | CONNECTOR NAME          | CONNECTOR TYPE | RESOURCE TYPE  | RESOURCE NAMES
f2316191-d20b-4348-a68b-f5e347862196 | azure-service-principal | azure          | blob-container | az://demo-zenmlartifactstore
```
register and connect a Kubernetes Orchestrator Stack Component to an AKS cluster:
```sh
zenml orchestrator register aks-demo-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads
```
Example Command Output
```
Successfully registered orchestrator `aks-demo-cluster`.
```
```sh
zenml orchestrator connect aks-demo-cluster --connector azure-service-principal
```
Example Command Output
```
Source: https://docs.zenml.io/how-to/auth-management/azure-service-connector
Orchestrators
Orchestrating the execution of ML pipelines.
The orchestrator is an essential component in any MLOps stack as it is responsible for running your machine learning pipelines. To do so, the orchestrator provides an environment that is set up to execute the steps of your pipeline. It also makes sure that the steps of your pipeline only get executed once all their inputs (which are outputs of previous steps of your pipeline) are available.
Many of ZenML's remote orchestrators build Docker images in order to transport and execute your pipeline code. If you want to learn more about how Docker images are built by ZenML, check out this guide.
When to use it
The orchestrator is a mandatory component in the ZenML stack. It is responsible for running your machine learning pipelines, and you are required to configure it in all of your stacks.
Orchestrator Flavors
Out of the box, ZenML comes with a local orchestrator already part of the default stack that runs pipelines locally. Additional orchestrators are provided by integrations:
Source: https://docs.zenml.io/v/docs/stack-components/orchestrators
Registering a Model
Registering models can be done in a number of ways depending on your specific needs. You can explicitly register models using the CLI or the Python SDK, or you can just allow ZenML to implicitly register your models as part of a pipeline run.
If you are using ZenML Pro, you already have access to a dashboard interface that allows you to register models.
Explicit CLI registration
Registering models using the CLI is as straightforward as the following command:
zenml model register iris_logistic_regression --license=... --description=...
You can view the options that can be passed into this command by running zenml model register --help, but since you are using the CLI outside a pipeline run, the arguments you can pass in are limited to non-runtime items. You can also associate tags with models at this point, for example, using the --tag option.
Explicit dashboard registration
ZenML Pro users can register their models directly from the cloud dashboard interface.
Explicit Python SDK registration
You can register a model using the Python SDK as follows:
from zenml import Model
from zenml.client import Client
Client().create_model(
    name="iris_logistic_regression",
    license="Copyright (c) ZenML GmbH 2023",
    description="Logistic regression model trained on the Iris dataset.",
    tags=["regression", "sklearn", "iris"],
)
Implicit registration by ZenML
The most common use case for registering models is to do so implicitly as part of a pipeline run. This is done by specifying a Model object as part of the model argument of the @pipeline decorator.
As an example, here we have a training pipeline that orchestrates the training of a model object, storing datasets and the model object itself as links within a newly created model version. This is achieved by configuring the pipeline with a Model object: the name must be specified, while other fields remain optional for this task.
from zenml import pipeline
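from zenml import Model

# NOTE: a hedged sketch completing the truncated snippet above; the model
# name and the pipeline body are illustrative placeholders, not the
# original example's exact code.
@pipeline(model=Model(name="iris_logistic_regression"))
def training_pipeline():
    # steps that produce datasets and the trained model object go here
    ...
Source: https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/register-a-model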
locally without including any cloud infrastructure. Thanks to the separation between the pipeline code and the stack in ZenML, you can easily switch your stack independently from your code. For instance, all it would take to switch from an experimental local stack running on your machine to a remote stack that employs full-fledged cloud infrastructure is a single CLI command.
3. Management
In order to benefit from the aforementioned core concepts to their fullest extent, it is essential to deploy and manage a production-grade environment that interacts with your ZenML installation.
ZenML Server
To use stack components that are running remotely on a cloud infrastructure, you need to deploy a ZenML Server so it can communicate with these stack components and run your pipelines. The server is also responsible for managing ZenML business entities like pipelines, steps, models, etc.
Server Deployment
In order to benefit from the advantages of using a deployed ZenML server, you can either choose to use the ZenML Pro SaaS offering which provides a control plane for you to create managed instances of ZenML servers, or deploy it in your self-hosted environment.
Metadata Tracking
On top of the communication with the stack components, the ZenML Server also keeps track of all the bits of metadata around a pipeline run. With a ZenML server, you are able to access all of your previous experiments with the associated details. This is extremely helpful in troubleshooting.
Secrets
The ZenML Server also acts as a centralized secrets store that safely and securely stores sensitive data such as credentials used to access the services that are part of your stack. It can be configured to use a variety of different backends for this purpose, such as the AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, and Hashicorp Vault.
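For example, a credential can be stored once via the CLI and then read back in your code through the ZenML client; the secret and key names below are illustrative placeholders:
zenml secret create llm_credentials --api_key=<YOUR_API_KEY>
Reading it back in Python:
from zenml.client import Client

# Fetch the secret from the centralized store and read one of its values
api_key = Client().get_secret("llm_credentials").secret_values["api_key"]
Source: https://docs.zenml.io/v/docs/getting-started/core-concepts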
concepts covered in this guide to your own projects. By the end of this guide, you'll have a solid understanding of how to leverage LLMs in your MLOps workflows using ZenML, enabling you to build powerful, scalable, and maintainable LLM-powered applications. First up, let's take a look at a super simple implementation of the RAG paradigm to get started.
Source: https://docs.zenml.io/user-guide/llmops-guide
render the images:

from zenml.client import Client
from IPython.display import display, Image

annotator = Client().active_stack.annotator

annotations = annotator.launch(
    data=[
        '/path/to/image1.png',
        '/path/to/image2.png'
    ],
    options=[
        'cat',
        'dog'
    ],
    display_fn=lambda filename: display(Image(filename))
)
The launch method returns the annotations as a list of tuples, where each tuple contains the data item and its corresponding label.
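For instance, you could iterate over the returned annotations like this (a minimal sketch based on the tuple structure described above):

for item, label in annotations:
    print(f"{item}: {label}")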
You can also use the zenml annotator dataset commands to manage your datasets:
zenml annotator dataset list - List all available datasets
zenml annotator dataset delete <dataset_name> - Delete a specific dataset
zenml annotator dataset stats <dataset_name> - Get statistics for a specific dataset
Annotation files are saved as JSON files in the specified output directory. Each annotation file represents a dataset, with the filename serving as the dataset name.
Acknowledgements
Pigeon was created by Anastasis Germanidis and released as a Python package and GitHub repository. It is licensed under the Apache License. It has been updated to work with more recent ipywidgets versions, and some small UI improvements were added. We are grateful to Anastasis for creating this tool and making it available to the community.
Source: https://docs.zenml.io/v/docs/stack-components/annotators/pigeon
# A sci-fi themed corpus about "ZenML World"
corpus = [
    "The luminescent forests of ZenML World are inhabited by glowing Zenbots that emit a soft, pulsating light as they roam the enchanted landscape.",
    "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully, their iridescent wings leaving trails of stardust in their wake.",
    "Telepathic Treants, ancient sentient trees, communicate through the quantum neural network that spans the entire surface of ZenML World, sharing wisdom and knowledge.",
    "Deep within the melodic caverns of ZenML World, Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds.",
    "Near the ethereal waterfalls of ZenML World, Holographic Hummingbirds hover effortlessly, their translucent wings refracting the prismatic light into mesmerizing patterns.",
    "Gravitational Geckos, masters of anti-gravity, traverse the inverted cliffs of ZenML World, defying the laws of physics with their extraordinary abilities.",
    "Plasma Phoenixes, majestic creatures of pure energy, soar above the chromatic canyons of ZenML World, their fiery trails painting the sky in a dazzling display of colors.",
    "Along the prismatic shores of ZenML World, Crystalline Crabs scuttle and burrow, their transparent exoskeletons refracting the light into a kaleidoscope of hues.",
]

corpus = [preprocess_text(sentence) for sentence in corpus]
question1 = "What are Plasma Phoenixes?"
answer1 = answer_question(question1, corpus)
print(f"Question: {question1}")
print(f"Answer: {answer1}")
question2 = (
    "What kinds of creatures live on the prismatic shores of ZenML World?"
)
answer2 = answer_question(question2, corpus)
print(f"Question: {question2}")
print(f"Answer: {answer2}")
irrelevant_question_3 = "What is the capital of Panglossia?"
answer3 = answer_question(irrelevant_question_3, corpus)
print(f"Question: {irrelevant_question_3}")
print(f"Answer: {answer3}")
This outputs the following:
Question: What are Plasma Phoenixes?
Source: https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/rag-85-loc
Artifact Store and the local filesystem or memory. When calling the Artifact Store API, you should always use URIs that are relative to the Artifact Store root path, otherwise you risk using an unsupported protocol or storing objects outside the store. You can use the Client singleton to retrieve the root path of the active Artifact Store and then use it as a base path for artifact URIs, e.g.:
import os
from zenml.client import Client
from zenml.io import fileio
root_path = Client().active_stack.artifact_store.path
artifact_contents = "example artifact"
artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.txt")
fileio.makedirs(artifact_path)
with fileio.open(artifact_uri, "w") as f:
    f.write(artifact_contents)
When using the Artifact Store API to write custom Materializers, the base artifact URI path is already provided. See the documentation on Materializers for an example.
The following are some code examples showing how to use the Artifact Store API for various operations:
creating folders, writing and reading data directly to/from an artifact store object
import os
from zenml.utils import io_utils
from zenml.io import fileio
from zenml.client import Client
root_path = Client().active_stack.artifact_store.path
artifact_contents = "example artifact"
artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.txt")
fileio.makedirs(artifact_path)
io_utils.write_file_contents_as_string(artifact_uri, artifact_contents)
import os
from zenml.utils import io_utils
from zenml.client import Client
root_path = Client().active_stack.artifact_store.path
artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.txt")
artifact_contents = io_utils.read_file_contents_as_string(artifact_uri)
Source: https://docs.zenml.io/v/docs/stack-components/artifact-stores
Lambda Labs is a cloud provider that offers GPU instances for machine learning workloads. Unlike the major cloud providers, with Lambda Labs we don't need to configure a service connector to authenticate with the cloud provider. Instead, we can directly use API keys to authenticate with the Lambda Labs API.
zenml integration install skypilot_lambda
Once the integration is installed, we can register the orchestrator with the following command:
# As a more secure and recommended approach, register the API key as a secret
zenml secret create lambda_api_key --scope user --api_key=<VALUE_1>
# Register the orchestrator
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{lambda_api_key.api_key}}
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
step-specific settings.
While testing the orchestrator, we noticed that the Lambda Labs orchestrator does not support the down flag. This means the orchestrator will not automatically tear down the cluster after all jobs finish. We recommend manually tearing down the cluster after all jobs finish to avoid unnecessary costs.
Additional Configuration
For additional configuration of the Skypilot orchestrator, you can pass Settings depending on which cloud you are using, which allow you to configure (among others) the following attributes:
instance_type: The instance type to use.
cpus: The number of CPUs required for the task. If a string, must be a string of the form '2' or '2+', where the + indicates that the task requires at least 2 CPUs.
memory: The amount of memory in GiB required. If a string, must be a string of the form '16' or '16+', where the + indicates that the task requires at least 16 GiB of memory.
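As an illustration, these attributes could be passed as a settings dictionary at the pipeline level. This is a hedged sketch: the settings key follows ZenML's "orchestrator.<flavor>" convention with the vm_lambda flavor registered above, and the attribute values are placeholders, not tested defaults:

from zenml import pipeline

# Hypothetical resource requirements for a Lambda Labs VM
skypilot_settings = {
    "instance_type": "gpu_1x_a10",  # placeholder Lambda Labs instance type
    "cpus": "4+",     # at least 4 vCPUs
    "memory": "16+",  # at least 16 GiB of memory
}

@pipeline(settings={"orchestrator.vm_lambda": skypilot_settings})
def my_pipeline():
    ...
Source: https://docs.zenml.io/v/docs/stack-components/orchestrators/skypilot-vm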
An end-to-end project
Put your new knowledge in action with an end-to-end project
That was awesome! We learned so many advanced MLOps production concepts:
The value of deploying ZenML
Abstracting infrastructure configuration into stacks
Connecting remote storage
Orchestrating on the cloud
Configuring the pipeline to scale compute
Connecting a git repository
We will now combine all of these concepts into an end-to-end MLOps project powered by ZenML.
Get started
Start with a fresh virtual environment with no dependencies. Then let's install our dependencies:
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
We will then use ZenML templates to help us get the code we need for the project:
mkdir zenml_batch_e2e
cd zenml_batch_e2e
zenml init --template e2e_batch --template-with-defaults
# Just in case, we install the requirements again
pip install -r requirements.txt
The e2e template is also available as a ZenML example. You can clone it:
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/e2e
pip install -r requirements.txt
zenml init
What you'll learn
The e2e project is a comprehensive project template to cover major use cases of ZenML: a collection of steps and pipelines and, to top it all off, a simple but useful CLI. It showcases the core ZenML concepts for supervised ML with batch predictions. It builds on top of the starter project with more advanced concepts.
As you progress through the e2e batch template, try running the pipelines on a remote cloud stack on a tracked git repository to practice some of the concepts we have learned in this guide.
At the end, don't forget to share the ZenML e2e template with your colleagues and see how they react!
Conclusion and next steps
Source: https://docs.zenml.io/user-guide/production-guide/end-to-end
sessions to clients. Additionally, the connector can handle specialized authentication for S3, Docker and Kubernetes Python clients. It also allows for the configuration of local Docker and Kubernetes CLIs.
The AWS Service Connector is part of the AWS ZenML integration. You can either
install the entire integration or use a pypi extra to install it independently
of the integration:
pip install "zenml[connectors-aws]" installs only prerequisites for the AWS
Service Connector Type
zenml integration install aws installs the entire AWS ZenML integration
It is not required to install and set up the AWS CLI on your local machine to
use the AWS Service Connector to link Stack Components to AWS resources and
services. However, it is recommended to do so if you are looking for a quick
setup that includes using the auto-configuration Service Connector features.
Dashboard equivalent:
Fetching details about the S3 bucket resource type:
zenml service-connector describe-type aws --resource-type s3-bucket
Example Command Output
AWS S3 bucket (resource type: s3-bucket)
Authentication methods: implicit, secret-key, sts-token, iam-role,
session-token, federation-token
Supports resource instances: True
Authentication methods:
implicit
secret-key
sts-token
iam-role
session-token
federation-token
Allows users to connect to S3 buckets. When used by Stack Components, they are
provided a pre-configured boto3 S3 client instance.
The configured credentials must have at least the following AWS IAM permissions
associated with the ARNs of S3 buckets that the connector will be allowed to
access (e.g. arn:aws:s3:::* and arn:aws:s3:::*/* represent all the available S3
buckets).
s3:ListBucket
s3:GetObject
s3:PutObject
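As an illustration, a minimal IAM policy granting just the permissions listed above might look like the following. Note that this list is an excerpt from the connector's documentation, so the full required permission set may include more actions than shown here:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::*", "arn:aws:s3:::*/*"]
    }
  ]
}
```
Source: https://docs.zenml.io/how-to/auth-management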
PROPERTY              | VALUE
server                | https://35.175.95.223
insecure              | False
cluster_name          | 35.175.95.223
token                 | [HIDDEN]
certificate_authority | [HIDDEN]
Credentials auto-discovered and lifted through the Kubernetes Service Connector might have a limited lifetime, especially if the target Kubernetes cluster is managed through a 3rd party authentication provider such as GCP or AWS. Using short-lived credentials with your Service Connectors could lead to loss of connectivity and other unexpected errors in your pipeline.
Local client provisioning
This Service Connector allows configuring the local Kubernetes client (i.e. kubectl) with credentials:
zenml service-connector login kube-auto
Example Command Output
Attempting to configure local client using service connector 'kube-auto'...
Cluster "35.185.95.223" set.
Updated local kubeconfig with the cluster details. The current kubectl context was set to '35.185.95.223'.
The 'kube-auto' Kubernetes Service Connector was used to successfully configure the local Kubernetes cluster client/SDK.
Stack Components use
The Kubernetes Service Connector can be used in Orchestrator and Model Deployer stack component flavors that rely on Kubernetes clusters to manage their workloads. This allows Kubernetes container workloads to be managed without the need to configure and maintain explicit Kubernetes kubectl configuration contexts and credentials in the target environment and in the Stack Component.
Source: https://docs.zenml.io/how-to/auth-management/kubernetes-service-connector
RESOURCE TYPE      | RESOURCE NAMES
gcp-generic        | zenml-core
gcs-bucket         | gs://zenml-bucket-sl
                   | gs://zenml-core.appspot.com
                   | gs://zenml-core_cloudbuild
                   | gs://zenml-datasets
                   | gs://zenml-internal-artifact-store
                   | gs://zenml-kubeflow-artifact-store
                   | gs://zenml-project-time-series-bucket
kubernetes-cluster | zenml-test-cluster
docker-registry    | gcr.io/zenml-core
No sensitive credentials are stored with the Service Connector, just meta-information about the external provider and the external account:
zenml service-connector describe gcp-workload-identity -x
Example Command Output
Service connector 'gcp-workload-identity' of type 'gcp' with id '37b6000e-3f7f-483e-b2c5-7a5db44fe66b' is
owned by user 'default'.
'gcp-workload-identity' gcp Service Connector Details
PROPERTY | VALUE (output truncated)
Source: https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
quite a few extra methods specific to Label Studio. The core Label Studio functionality that's currently enabled includes a way to register your datasets, export any annotations for use in separate steps, and start the annotator daemon process. (Label Studio requires a server to be running in order to use the web interface; ZenML handles provisioning this server locally using the details you passed in when registering the component, unless you've specified that you want to use a deployed instance.)
Standard Steps
ZenML offers some standard steps (and their associated config objects) which will get you up and running with the Label Studio integration quickly. These include:
LabelStudioDatasetRegistrationConfig - a step config object to be used when registering a dataset with Label Studio using the get_or_create_dataset step
LabelStudioDatasetSyncConfig - a step config object to be used when registering a dataset with Label Studio using the sync_new_data_to_label_studio step. Note that this requires a ZenML secret to have been pre-registered with your artifact store as being the one that holds authentication secrets specific to your particular cloud provider. (Label Studio provides some documentation on what permissions these secrets require here.)
get_or_create_dataset step - This takes a LabelStudioDatasetRegistrationConfig config object which includes the name of the dataset. If it exists, this step will return the name, but if it doesn't exist then ZenML will register the dataset along with the appropriate label config with Label Studio.
get_labeled_data step - This step will get all labeled data available for a particular dataset. Note that these are output in a Label Studio annotation format, which will subsequently be converted into a format appropriate for your specific use case.
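A rough sketch of how these standard steps might be wired together in an annotation pipeline follows. The import path and step signatures are assumptions based on the names above, so consult the integration's API docs for the exact interface in your ZenML version:

from zenml import pipeline

# Hypothetical import path for the standard steps described above.
from zenml.integrations.label_studio.steps import (
    get_or_create_dataset,
    get_labeled_data,
)

@pipeline
def annotation_pipeline():
    dataset_name = get_or_create_dataset()
    labeled_data = get_labeled_data(dataset_name)
Source: https://docs.zenml.io/v/docs/stack-components/annotators/label-studio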
There are two main options to access a deployed ZenML server:
SaaS: With the Cloud offering you can utilize a control plane to create ZenML servers, also known as tenants. These tenants are managed and maintained by ZenML's dedicated team, alleviating the burden of server management from your end. Importantly, your data remains securely within your stack, and ZenML's role is primarily to handle tracking of metadata and server maintenance.
Self-hosted Deployment: Alternatively, you have the ability to deploy ZenML on your own self-hosted environment. This can be achieved through various methods, including using our CLI, Docker, Helm, or HuggingFace Spaces. We also offer our Pro version for self-hosted deployments so you can use our full paid feature-set while staying fully in control with an airgapped solution on your infrastructure.
Both options offer distinct advantages, allowing you to choose the deployment approach that best aligns with your organization's needs and infrastructure preferences. Whichever path you select, ZenML facilitates a seamless and efficient way to take advantage of the ZenML Server and enhance your machine learning workflows for production-level success.
Choose the most appropriate deployment strategy for you out of the following options to get started with the deployment:
Deploy with ZenML CLI: Deploying ZenML on cloud using the ZenML CLI.
Deploy with Docker: Deploying ZenML in a Docker container.
Deploy with Helm: Deploying ZenML in a Kubernetes cluster with Helm.
Deploy using HuggingFace Spaces: Deploying ZenML to HuggingFace Spaces.
Source: https://docs.zenml.io/getting-started/deploying-zenml
Controlling Model versions
Each model can have many versions. Model versions are a way for you to track different iterations of your training process, complete with some extra dashboard and API functionality to support the full ML lifecycle.
For example, based on your business rules during training, you can associate model versions with stages and promote them to production. You have an interface that allows you to link these versions with non-technical artifacts and data, e.g. business data, datasets, or even stages in your process and workflow.
Model versions are created implicitly as you are running your machine learning training, so you don't have to immediately think about this. If you want more control over versions, our API has you covered, with an option to explicitly name your versions.
Explicitly name your model version
If you want to explicitly name your model version, you can do so by passing in the version argument to the Model object. If you don't do this, ZenML will automatically generate a version number for you.
from zenml import Model, step, pipeline
model = Model(
    name="my_model",
    version="1.0.5"
)

# The step configuration will take precedence over the pipeline
@step(model=model)
def svc_trainer(...) -> ...:
    ...

# This configures it for all steps within the pipeline
@pipeline(model=model)
def training_pipeline(...):
    # training happens here
    ...
Here we are specifically setting the model configuration for a particular step or for the pipeline as a whole.
Please note that in the above example, if the model version exists, it is automatically associated with the pipeline and becomes active in the pipeline context. Therefore, you should be careful and intentional about whether you want to create a new model version or fetch an existing one. See below for an example of fetching a model from an existing version/stage.
Fetching model versions by stage
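As a sketch, passing a stage name as the version fetches whichever model version currently holds that stage (the model name here is an illustrative placeholder):

from zenml import Model

# "production" is a stage name, not a literal version string; ZenML resolves
# it to the version currently promoted to that stage.
model = Model(name="my_model", version="production")
Source: https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/model-versions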
use for the database connection.
database_ssl_ca:
# The path to the client SSL certificate to use for the database connection.
database_ssl_cert:
# The path to the client SSL key to use for the database connection.
database_ssl_key:
# Whether to verify the database server SSL certificate.
database_ssl_verify_server_cert:
Run the deploy command and pass the config file above to it:
zenml deploy --config=/PATH/TO/FILE
Note: To be able to run the deploy command, you should have your cloud provider's CLI configured locally with permissions to create resources like MySQL databases and networks.
Configuration file templates
Base configuration file
Below is the general structure of a config file. Use this as a base and then add any cloud-specific parameters from the sections below.
# Name of the server deployment.
name:
# The server provider type, one of aws, gcp or azure.
provider:
# The path to the kubectl config file to use for deployment.
kubectl_config_path:
# The Kubernetes namespace to deploy the ZenML server to.
namespace: zenmlserver
# The path to the ZenML server helm chart to use for deployment.
helm_chart:
# The repository and tag to use for the ZenML server Docker image.
zenmlserver_image_repo: zenmldocker/zenml
zenmlserver_image_tag: latest
# Whether to deploy an nginx ingress controller as part of the deployment.
create_ingress_controller: true
# Whether to use TLS for the ingress.
ingress_tls: true
# Whether to generate self-signed TLS certificates for the ingress.
ingress_tls_generate_certs: true
# The name of the Kubernetes secret to use for the ingress.
ingress_tls_secret_name: zenml-tls-certs
# The ingress controller's IP address. The ZenML server will be exposed on a subdomain of this IP. For AWS, if you have a hostname instead, use the following command to get the IP address: `dig +short <hostname>`.
ingress_controller_ip:
# Whether to create a SQL database service as part of the recipe.
deploy_db: true
# The username and password for the database.
Source: https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-zenml-cli
them as type SecretField in the configuration class. With the configuration defined, we can move on to the implementation class, which will use the S3 file system to implement the abstract methods of the BaseArtifactStore:
from typing import Optional

import s3fs

from zenml.artifact_stores import BaseArtifactStore


class MyS3ArtifactStore(BaseArtifactStore):
    """Custom artifact store implementation."""

    _filesystem: Optional[s3fs.S3FileSystem] = None

    @property
    def filesystem(self) -> s3fs.S3FileSystem:
        """Get the underlying S3 file system."""
        if self._filesystem:
            return self._filesystem

        self._filesystem = s3fs.S3FileSystem(
            key=self.config.key,
            secret=self.config.secret,
            token=self.config.token,
            client_kwargs=self.config.client_kwargs,
            config_kwargs=self.config.config_kwargs,
            s3_additional_kwargs=self.config.s3_additional_kwargs,
        )
        return self._filesystem

    def open(self, path, mode: str = "r"):
        """Custom logic goes here."""
        return self.filesystem.open(path=path, mode=mode)

    def exists(self, path):
        """Custom logic goes here."""
        return self.filesystem.exists(path=path)
The configuration values defined in the corresponding configuration class are always available in the implementation class under self.config.
Finally, let's define a custom flavor that brings these two classes together. Make sure that you give your flavor a globally unique name here.
from zenml.artifact_stores import BaseArtifactStoreFlavor


class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor):
    """Custom artifact store flavor."""

    @property
    def name(self):
        """The name of the flavor."""
        return 'my_s3_artifact_store'

    @property
    def implementation_class(self):
        """Implementation class for this flavor."""
        from ... import MyS3ArtifactStore
        return MyS3ArtifactStore

    @property
    def config_class(self):
        """Configuration class for this flavor."""
        from ... import MyS3ArtifactStoreConfig
        return MyS3ArtifactStoreConfig
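With the flavor defined, it can then be registered through the CLI by pointing at its source path; the module path below is a placeholder for wherever you keep the flavor class:

zenml artifact-store flavor register flavors.my_flavor.MyS3ArtifactStoreFlavor
Source: https://docs.zenml.io/how-to/stack-deployment/implement-a-custom-stack-component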
Finally, to delete a registered model or a specific model version, you can use the zenml model-registry models delete REGISTERED_MODEL_NAME and zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION commands respectively.
Check out the SDK docs to see more about the interface and implementation.
Source: https://docs.zenml.io/stack-components/model-registries/mlflow
use the AWS Service Connector authentication method.
ZENML_SECRETS_STORE_REGION_NAME: The AWS region to use. This must be set to the region where the AWS Secrets Manager service that you want to use is located.
ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID: The AWS access key ID to use for authentication. This must be set to a valid AWS access key ID that has access to the AWS Secrets Manager service that you want to use. If you are using an IAM role attached to an EKS cluster to authenticate, you can omit this variable.
ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY: The AWS secret access key to use for authentication. This must be set to a valid AWS secret access key that has access to the AWS Secrets Manager service that you want to use. If you are using an IAM role attached to an EKS cluster to authenticate, you can omit this variable.
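For example, when running the server container directly with Docker, these variables might be passed like so. This is a sketch: the image name, region, and ZENML_SECRETS_STORE_TYPE=aws value are assumptions made by analogy with the gcp value documented below:

docker run -d -p 8080:8080 --name zenml \
    -e ZENML_SECRETS_STORE_TYPE=aws \
    -e ZENML_SECRETS_STORE_REGION_NAME=us-east-1 \
    -e ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID> \
    -e ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY> \
    zenmldocker/zenml-server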
These configuration options are only relevant if you're using the GCP Secrets Manager as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to gcp in order to set this type of secret store.
The GCP Secrets Store uses the ZenML GCP Service Connector under the hood to authenticate with the GCP Secrets Manager API. This means that you can use any of the authentication methods supported by the GCP Service Connector to authenticate with the GCP Secrets Manager API.
The minimum set of permissions that must be attached to the implicit or configured GCP credentials are as follows:
secretmanager.secrets.create for the target GCP project (i.e. no condition on the name prefix)
secretmanager.secrets.get, secretmanager.secrets.update, secretmanager.versions.access, secretmanager.versions.add and secretmanager.secrets.delete for the target GCP project and for secrets that have a name starting with zenml-
Source: https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker
    namespace: spark-namespace
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
---
And then execute the following command to create the resources:
aws eks --region=$REGION update-kubeconfig --name=$EKS_CLUSTER_NAME
kubectl create -f rbac.yaml
Lastly, note down the namespace and the name of the service account since you will need them when registering the stack component in the next step.
How to use it
To use the KubernetesSparkStepOperator, you need:
the ZenML Spark integration. If you haven't installed it already, run:
zenml integration install spark
Docker installed and running.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
A Kubernetes cluster deployed.
We can then register the step operator and use it in our active stack:
zenml step-operator register spark_step_operator \
--flavor=spark-kubernetes \
--master=k8s://$EKS_API_SERVER_ENDPOINT \
--namespace=<SPARK_KUBERNETES_NAMESPACE> \
--service_account=<SPARK_KUBERNETES_SERVICE_ACCOUNT>
# Register the stack
zenml stack register spark_stack \
    -o default \
    -s spark_step_operator \
    -a spark_artifact_store \
    -c spark_container_registry \
    -i local_builder \
    --set
Once you added the step operator to your active stack, you can use it to execute individual steps of your pipeline by specifying it in the @step decorator as follows:
from zenml import step
@step(step_operator=<STEP_OPERATOR_NAME>)
def step_on_spark(...) -> ...:
"""Some step that should run with Spark on Kubernetes."""
...
After successfully running any step with a KubernetesSparkStepOperator, you should be able to see that a Spark driver pod was created in your cluster for each pipeline step when running kubectl get pods -n $KUBERNETES_NAMESPACE.
Instead of hardcoding a step operator name, you can also use the Client to dynamically use the step operator of your active stack:
from zenml.client import Client
step_operator = Client().active_stack.step_operator
@step(step_operator=step_operator.name)
Source: https://docs.zenml.io/v/docs/stack-components/step-operators/spark-kubernetes
(truncated table: the connector also supports the gcs-bucket, kubernetes-cluster and docker-registry resource types, with user-account, service-account, oauth2-token and impersonation authentication methods)
Register an individual single-instance GCP Service Connector using auto-configuration for each of the resources that will be needed for the Stack Components: a GCS bucket, a GCR registry, and generic GCP access for the VertexAI orchestrator and another one for the GCP Cloud Builder:
```sh
zenml service-connector register gcs-zenml-bucket-sl --type gcp --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl --auto-configure
```
Example Command Output
```text
Successfully registered service connector `gcs-zenml-bucket-sl` with access to the following resources:
RESOURCE TYPE | RESOURCE NAMES
gcs-bucket    | gs://zenml-bucket-sl
```
```sh
zenml service-connector register gcr-zenml-core --type gcp --resource-type docker-registry --auto-configure
```
Example Command Output
```text
Successfully registered service connector `gcr-zenml-core` with access to the following resources:
RESOURCE TYPE   | RESOURCE NAMES
docker-registry | gcr.io/zenml-core
```
```sh
zenml service-connector register vertex-ai-zenml-core --type gcp --resource-type gcp-generic --auto-configure
```
Example Command Output
```text
Successfully registered service connector `vertex-ai-zenml-core` with access to the following resources:
RESOURCE TYPE | RESOURCE NAMES (output truncated)
Source: https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
not scoped to a single ECR repository. Instead, a connector configured with this resource type will grant access to all the ECR repositories that the credentials are allowed to access under the configured AWS region (i.e. all repositories under the Docker registry URL https://{account-id}.dkr.ecr.{region}.amazonaws.com).
The resource name associated with this resource type uniquely identifies an ECR
registry using one of the following formats (the repository name is ignored,
only the registry URL/ARN is used):
ECR repository URI (canonical resource name):
[https://]{account}.dkr.ecr.{region}.amazonaws.com[/{repository-name}]
ECR repository ARN:
arn:aws:ecr:{region}:{account-id}:repository[/{repository-name}]
ECR repository names are region scoped. The connector can only be used to access
ECR repositories in the AWS region that it is configured to use.
The Service Connector is how you configure ZenML to authenticate and connect to one or more external resources. It stores the required configuration and security credentials and can optionally be scoped with a Resource Type and a Resource Name.
Depending on the Service Connector Type implementation, a Service Connector instance can be configured in one of the following modes with regards to the types and number of resources that it has access to:
a multi-type Service Connector instance that can be configured once and used to gain access to multiple types of resources. This is only possible with Service Connector Types that support multiple Resource Types to begin with, such as those that target multi-service cloud providers like AWS, GCP and Azure. In contrast, a single-type Service Connector can only be used with a single Resource Type. To configure a multi-type Service Connector, you can simply skip scoping its Resource Type during registration.
Source: https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
Data Validators
How to enhance and maintain the quality of your data and the performance of your models with data profiling and validation
Without good data, even the best machine learning models will yield questionable results. A lot of effort goes into ensuring and maintaining data quality not only in the initial stages of model development, but throughout the entire machine learning project lifecycle. Data Validators are a category of ML libraries, tools and frameworks that grant a wide range of features and best practices that should be employed in the ML pipelines to keep data quality in check and to monitor model performance to keep it from degrading over time.
Data profiling, data integrity testing, data and model drift detection are all ways of employing data validation techniques at different points in your ML pipelines where data is concerned: data ingestion, model training and evaluation and online or batch inference. Data profiles and model performance evaluation results can be visualized and analyzed to detect problems and take preventive or correcting actions.
Related concepts:
the Data Validator is an optional type of Stack Component that needs to be registered as part of your ZenML Stack.
Data Validators used in ZenML pipelines usually generate data profiles and data quality check reports that are versioned and stored in the Artifact Store and can be retrieved and visualized later.
When to use it
Data-centric AI practices are quickly becoming mainstream and using Data Validators are an easy way to incorporate them into your workflow. These are some common cases where you may consider employing the use of Data Validators in your pipelines:
early on, even if it's just to keep a log of the quality state of your data and the performance of your models at different stages of development.
Source: https://docs.zenml.io/v/docs/stack-components/data-validators
        data_type: The type of the data to read.

    Raises:
        ImportError: If pyarrow or fastparquet is not installed.

    Returns:
        The pandas dataframe or series.
    """
    if self.artifact_store.exists(self.parquet_path):
        if self.pyarrow_exists:
            with self.artifact_store.open(
                self.parquet_path, mode="rb"
            ) as f:
                df = pd.read_parquet(f)
        else:
            raise ImportError(
                "You have an old version of a `PandasMaterializer` "
                "data artifact stored in the artifact store "
                "as a `.parquet` file, which requires `pyarrow` "
                "for reading. You can install `pyarrow` by running "
                "`pip install pyarrow fastparquet`."
            )
    else:
        with self.artifact_store.open(self.csv_path, mode="rb") as f:
            df = pd.read_csv(f, index_col=0, parse_dates=True)

    # Validate the type of the data.
    def is_dataframe_or_series(
        df: Union[pd.DataFrame, pd.Series],
    ) -> Union[pd.DataFrame, pd.Series]:
        """Checks if the data is a `pd.DataFrame` or `pd.Series`.

        Args:
            df: The data to check.

        Returns:
            The data if it is a `pd.DataFrame` or `pd.Series`.
        """
        if issubclass(data_type, pd.Series):
            # Taking the first column if it is a series as the assumption
            # is that there will only be one
            assert len(df.columns) == 1
            df = df[df.columns[0]]
            return df
        else:
            return df

    return is_dataframe_or_series(df)

def save(self, df: Union[pd.DataFrame, pd.Series]) -> None:
    """Writes a pandas dataframe or series to the specified filename.

    Args:
        df: The pandas dataframe or series to write.
    """
    if isinstance(df, pd.Series):
        df = df.to_frame(name="series")

    if self.pyarrow_exists:
        with self.artifact_store.open(self.parquet_path, mode="wb") as f:
            df.to_parquet(f, compression=COMPRESSION_TYPE)
    else:
        with self.artifact_store.open(self.csv_path, mode="wb") as f:
            df.to_csv(f, index=True)
Code example
Let's see how materialization works with a basic example. Let's say you have a custom class called MyObject that flows between two steps in a pipeline:
import logging

from zenml import step, pipeline


class MyObj:
    def __init__(self, name: str):
        self.name = name
@step
Source: https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types
Troubleshoot the deployed server
Troubleshooting tips for your ZenML deployment
In this document, we will go over some common issues that you might face when deploying ZenML and how to solve them.
Viewing logs
Analyzing logs is a great way to debug issues. Depending on whether you have a Kubernetes (using Helm or zenml deploy) or a Docker deployment, you can view the logs in different ways.
If you are using Kubernetes, you can view the logs of the ZenML server using the following method:
Check all pods that are running your ZenML deployment.
kubectl -n <KUBERNETES_NAMESPACE> get pods
If you see that the pods aren't running, you can use the command below to get the logs for all pods at once.
kubectl -n <KUBERNETES_NAMESPACE> logs -l app.kubernetes.io/name=zenml
Note that the error can either be from the zenml-db-init container that connects to the MySQL database or from the zenml container that runs the server code. If the get pods command shows that the pod is failing in the Init state then use zenml-db-init as the container name, otherwise use zenml.
kubectl -n <KUBERNETES_NAMESPACE> logs -l app.kubernetes.io/name=zenml -c <CONTAINER_NAME>
You can also use the --tail flag to limit the number of lines to show or the --follow flag to follow the logs in real-time.
If you are using Docker, you can view the logs of the ZenML server using the following method:
If you used the zenml up --docker CLI command to deploy the Docker ZenML server, you can check the logs with the command:
zenml logs -f
If you used the docker run command to manually deploy the Docker ZenML server, you can check the logs with the command:
docker logs zenml -f
If you used the docker compose command to manually deploy the Docker ZenML server, you can check the logs with the command:
docker compose -p zenml logs -f
Fixing database connection problems
Source: https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/troubleshoot-your-deployed-server
Evaluation in practice
Learn how to evaluate the performance of your RAG system in practice.
Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.
Our example project includes the evaluation as a separate pipeline that optionally runs after the main pipeline that generates and populates the embeddings. This is a good practice to follow, as it allows you to separate the concerns of generating the embeddings and evaluating them. Depending on the specific use case, the evaluations could be included as part of the main pipeline and used as a gating mechanism to determine whether the embeddings are good enough to be used in production.
Given some of the performance constraints of the LLM judge, it might be worth experimenting with using a local LLM judge for evaluation during the course of the development process and then running the full evaluation using a cloud LLM like Anthropic's Claude or OpenAI's GPT-3.5 or 4. This can help you iterate faster and get a sense of how well your embeddings are performing before committing to the cost of running the full evaluation.
Automated evaluation isn't a silver bullet
While automating the evaluation process can save you time and effort, it's important to remember that it doesn't replace the need for a human to review the results. The LLM judge is expensive to run, and it takes time to get the results back. Automating the evaluation process can help you focus on the details and the data, but it doesn't replace the need for a human to review the results and make sure that the embeddings (and the RAG system as a whole) are performing as expected.
When and how much to evaluate
Source: https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/evaluation-in-practice
the --authentication_secret. For example, you'd run:
zenml secret create argilla_secrets --api_key="<your_argilla_api_key>"
(Visit the Argilla documentation and interface to obtain your API key.)
Then register your annotator with ZenML:
zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets
When using a deployed instance of Argilla, the instance URL must be specified without any trailing / at the end. If you are using a Hugging Face Spaces instance and its visibility is set to private, you must also set the extra_headers parameter which would include a Hugging Face token. For example:
zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --extra_headers='{"Authorization": "Bearer <your_hugging_face_token>"}'
Finally, add all these components to a stack and set it as your active stack. For example:
zenml stack copy default annotation
# this must be done separately so that the other required stack components are first registered
zenml stack update annotation -an <YOUR_ARGILLA_ANNOTATOR>
zenml stack set annotation
# optionally also
zenml stack describe
Now, if you run a simple CLI command like zenml annotator dataset list, it should work without any errors. You're ready to use your annotator in your ML workflow!
How do you use it?
ZenML supports access to your data and annotations via the zenml annotator ... CLI command. We have also implemented an interface to some of the common Argilla functionality via the ZenML SDK.
You can access information about the datasets you're using with the zenml annotator dataset list command. To work on annotation for a particular dataset, you can run zenml annotator dataset annotate <dataset_name>. What follows is an overview of some key components of the Argilla integration and how it can be used.
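For example, here is a minimal sketch of reaching the annotator from Python via the active stack (assuming the annotation stack registered above is active; the exact helper methods available are described in the section below):

from zenml.client import Client

# Grab the Argilla annotator registered in the currently active stack
annotator = Client().active_stack.annotator

# List the datasets known to the annotator
print(annotator.get_datasets())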
Argilla Annotator Stack Component | stack-components | https://docs.zenml.io/v/docs/stack-components/annotators/argilla | 418 |
ter an S3 Artifact Store and add it to your stack:
zenml integration install s3 -y
The only configuration parameter mandatory for registering an S3 Artifact Store is the root path URI, which needs to point to an S3 bucket and take the form s3://bucket-name. Please read the documentation relevant to the S3 service that you are using on how to create an S3 bucket. For example, the AWS S3 documentation is available here.
With the URI to your S3 bucket known, registering an S3 Artifact Store and using it in a stack can be done as follows:
# Register the S3 artifact-store
zenml artifact-store register s3_store -f s3 --path=s3://bucket-name
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a s3_store ... --set
Depending on your use case, however, you may also need to provide additional configuration parameters pertaining to authentication or pass advanced configuration parameters to match your S3-compatible service or deployment scenario.
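For example, here is a sketch of pointing the S3 Artifact Store at an S3-compatible MinIO deployment; the endpoint URL, bucket, and secret name below are illustrative assumptions:

# Store the MinIO credentials in a ZenML secret (key names follow the AWS convention)
zenml secret create minio_secret \
    --aws_access_key_id='<YOUR_MINIO_ACCESS_KEY>' \
    --aws_secret_access_key='<YOUR_MINIO_SECRET_KEY>'

# Register the artifact store against the custom endpoint
zenml artifact-store register minio_store -f s3 \
    --path='s3://minio-bucket' \
    --authentication_secret=minio_secret \
    --client_kwargs='{"endpoint_url": "http://minio.example.com:9000"}'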
Infrastructure Deployment
An S3 Artifact Store can be deployed directly from the ZenML CLI:
zenml artifact-store deploy s3-artifact-store --flavor=s3 --provider=aws ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
Authentication Methods
Integrating and using an S3-compatible Artifact Store in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Implicit Authentication method. However, the recommended way to authenticate to the AWS cloud platform is through an AWS Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the S3 Artifact Store with other remote stack components also running in AWS. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/s3 | 398 |
d9fe8eb2f833b788e9d8a6ebde194765b42fe46 size: 3256
It is also possible to update the local gcloud CLI configuration with credentials extracted from the GCP Service Connector:
zenml service-connector login gcp-user-account --resource-type gcp-generic
Example Command Output
Updated the local gcloud default application credentials file at '/home/user/.config/gcloud/application_default_credentials.json'
The 'gcp-user-account' GCP Service Connector connector was used to successfully configure the local Generic GCP resource client/SDK.
Stack Components use
The GCS Artifact Store Stack Component can be connected to a remote GCS bucket through a GCP Service Connector.
The Google Cloud Image Builder Stack Component, VertexAI Orchestrator, and VertexAI Step Operator can be connected and use the resources of a target GCP project through a GCP Service Connector.
The GCP Service Connector can also be used with any Orchestrator or Model Deployer stack component flavor that relies on Kubernetes clusters to manage workloads. This allows GKE Kubernetes container workloads to be managed without the need to configure and maintain explicit GCP or Kubernetes kubectl configuration contexts and credentials in the target environment or in the Stack Component itself.
Similarly, Container Registry Stack Components can be connected to a GCR Container Registry through a GCP Service Connector. This allows container images to be built and published to GCR container registries without the need to configure explicit GCP credentials in the target environment or the Stack Component.
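For instance, connecting a GCS Artifact Store to a bucket through such a connector follows the usual register-then-connect pattern, sketched here with placeholder names:

zenml artifact-store register <GCS_STORE_NAME> --flavor gcs --path=gs://<BUCKET_NAME>
zenml artifact-store connect <GCS_STORE_NAME> --connector <CONNECTOR_NAME>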
End-to-end examples
This is an example of an end-to-end workflow involving Service Connectors that use a single multi-type GCP Service Connector to give access to multiple resources for multiple Stack Components. A complete ZenML Stack is registered and composed of the following Stack Components, all connected through the same Service Connector:
a Kubernetes Orchestrator connected to a GKE Kubernetes cluster
a GCS Artifact Store connected to a GCS bucket | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 400 |
ββββββββββββββββββββββββββββββββββββββββββββββββββ¨β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-06-20 19:16:26.802374 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-06-20 19:16:26.802378 β
ββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Configuration
βββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β ββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β tenant_id β a79ff333-8f45-4a74-a42e-68871c17b7fb β
β ββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β client_id β 8926254a-8c3f-430a-a2fd-bdab234d491e β
β ββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β client_secret β [HIDDEN] β
βββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββ
Azure Access Token
Uses temporary Azure access tokens explicitly configured by the user or auto-configured from a local environment. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector | 461 |
Advanced: Configuring a Custom Seldon Core SecretThe Seldon Core model deployer stack component allows configuring an additional secret attribute that can be used to specify custom credentials that Seldon Core should use to authenticate to the persistent storage service where models are located. This is useful if you want to connect Seldon Core to a persistent storage service that is not supported as a ZenML Artifact Store, or if you don't want to configure or use the same credentials configured for your Artifact Store. The secret attribute must be set to the name of a ZenML secret containing credentials configured in the format supported by Seldon Core.
This method is not recommended, because it limits the Seldon Core model deployer to a single persistent storage service, whereas using the Artifact Store credentials gives you more flexibility in combining the Seldon Core model deployer with any Artifact Store in the same ZenML stack.
Seldon Core model servers use rclone to connect to persistent storage services and the credentials that can be configured in the ZenML secret must also be in the configuration format supported by rclone. This section covers a few common use cases and provides examples of how to configure the ZenML secret to support them, but for more information on supported configuration options, you can always refer to the rclone documentation for various providers.
Example of configuring a Seldon Core secret for AWS S3:
# rclone_config_s3_type: set to 's3' for S3 storage.
# rclone_config_s3_provider: the S3 provider (e.g. aws, Ceph, Minio).
# rclone_config_s3_env_auth: set to true to use implicit AWS authentication from EC2/ECS metadata
#   (i.e. with IAM roles configuration). Only applies if access_key_id and secret_access_key are blank.
zenml secret create s3-seldon-secret \
    --rclone_config_s3_type="s3" \
    --rclone_config_s3_provider="aws" \
    --rclone_config_s3_env_auth=False \
    --rclone_config_s3_access_key_id="<AWS-ACCESS-KEY-ID>" \
    --rclone_config_s3_secret_access_key="<AWS-SECRET-ACCESS-KEY>" \
π§ͺData Validators
How to enhance and maintain the quality of your data and the performance of your models with data profiling and validation
Without good data, even the best machine learning models will yield questionable results. A lot of effort goes into ensuring and maintaining data quality not only in the initial stages of model development, but throughout the entire machine learning project lifecycle. Data Validators are a category of ML libraries, tools and frameworks that provide a wide range of features and best practices that should be employed in ML pipelines to keep data quality in check and to monitor model performance to keep it from degrading over time.
Data profiling, data integrity testing, and data and model drift detection are all ways of employing data validation techniques at different points in your ML pipelines where data is concerned: data ingestion, model training and evaluation, and online or batch inference. Data profiles and model performance evaluation results can be visualized and analyzed to detect problems and take preventive or corrective actions.
Related concepts:
the Data Validator is an optional type of Stack Component that needs to be registered as part of your ZenML Stack.
Data Validators used in ZenML pipelines usually generate data profiles and data quality check reports that are versioned and stored in the Artifact Store and can be retrieved and visualized later.
When to use it
Data-centric AI practices are quickly becoming mainstream and using Data Validators are an easy way to incorporate them into your workflow. These are some common cases where you may consider employing the use of Data Validators in your pipelines:
early on, even if it's just to keep a log of the quality state of your data and the performance of your models at different stages of development. | stack-components | https://docs.zenml.io/stack-components/data-validators | 330 |
ate this documentation as we develop this feature.

Getting features from a registered and active feature store is possible by creating your own step that interfaces into the feature store:

from datetime import datetime
from typing import Any, Dict, List, Union

import pandas as pd

from zenml import pipeline, step
from zenml.client import Client
from zenml.exceptions import DoesNotExistException


@step
def get_historical_features(
    entity_dict: Union[Dict[str, Any], str],
    features: List[str],
    full_feature_names: bool = False,
) -> pd.DataFrame:
    """Feast Feature Store historical data step

    Returns:
        The historical features as a DataFrame.
    """
    feature_store = Client().active_stack.feature_store
    if not feature_store:
        raise DoesNotExistException(
            "The Feast feature store component is not available. "
            "Please make sure that the Feast stack component is registered as part of your current active stack."
        )

    entity_dict["event_timestamp"] = [
        datetime.fromisoformat(val)
        for val in entity_dict["event_timestamp"]
    ]
    entity_df = pd.DataFrame.from_dict(entity_dict)

    return feature_store.get_historical_features(
        entity_df=entity_df,
        features=features,
        full_feature_names=full_feature_names,
    )


entity_dict = {
    "driver_id": [1001, 1002, 1003],
    "label_driver_reported_satisfaction": [1, 5, 3],
    "event_timestamp": [
        datetime(2021, 4, 12, 10, 59, 42).isoformat(),
        datetime(2021, 4, 12, 8, 12, 10).isoformat(),
        datetime(2021, 4, 12, 16, 40, 26).isoformat(),
    ],
    "val_to_add": [1, 2, 3],
    "val_to_add_2": [10, 20, 30],
}

features = [
    "driver_hourly_stats:conv_rate",
    "driver_hourly_stats:acc_rate",
    "driver_hourly_stats:avg_daily_trips",
    "transformed_conv_rate:conv_rate_plus_val1",
    "transformed_conv_rate:conv_rate_plus_val2",
]


@pipeline
def my_pipeline():
    my_features = get_historical_features(entity_dict, features)
    ...
SDK Docs .
Enabling CUDA for GPU-backed hardware
Note that if you wish to use this step operator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential for enabling CUDA so that the GPU can deliver its full acceleration.
PreviousStep Operators
NextGoogle Cloud VertexAI
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/sagemaker | 81 |
bernetes.github.io/ingress-nginx
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace
Next, you need to create a ClusterIssuer resource that will be used by cert-manager to generate TLS certificates with Let's Encrypt:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your email address here>
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
Finally, you can deploy the ZenML server with the following Helm values:
zenml:
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-staging"
    tls:
      enabled: true
      generateCerts: false
Note: This use-case exposes ZenML at the root URL path of the IP address or hostname of the Ingress service. You cannot share the same Ingress hostname and URL path for multiple applications. See the next section for a solution to this problem.
Shared Ingress controller
If the root URL path of your Ingress controller is already in use by another application, you cannot use it for ZenML. This section presents three possible solutions to this problem.
Use a dedicated Ingress hostname for ZenML
If you know the IP address of the load balancer in use by your Ingress controller, you can use a service like https://nip.io/ to create a new DNS name associated with it and expose ZenML at this new root URL path. For example, if your Ingress controller has the IP address 192.168.10.20, you can use a DNS name like zenml.192.168.10.20.nip.io to expose ZenML at the root URL path https://zenml.192.168.10.20.nip.io.
To find the IP address of your Ingress controller, you can use a command like the following: | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm | 471 |
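Here is a minimal sketch, assuming the NGINX Ingress controller was installed in the nginx-ingress namespace as shown above (the exact service name can vary with your Helm release name):

kubectl -n nginx-ingress get svc nginx-ingress-ingress-nginx-controller \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'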
s is achieved using the log_model_metadata method:

from typing_extensions import Annotated

import pandas as pd
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC

from zenml import step, log_model_metadata


@step
def svc_trainer(
    X_train: pd.DataFrame,
    y_train: pd.Series,
    gamma: float = 0.001,
) -> Annotated[ClassifierMixin, "sklearn_classifier"]:
    # Train and score model
    model = SVC(gamma=gamma)
    model.fit(X_train, y_train)
    accuracy = model.score(X_train, y_train)

    log_model_metadata(
        # Model name can be omitted if specified in the step or pipeline context
        model_name="iris_classifier",
        # Passing None or omitting this will use the `latest` version
        version=None,
        # Metadata should be a dictionary of JSON-serializable values
        metadata={"accuracy": float(accuracy)},
        # A dictionary of dictionaries can also be passed to group metadata
        # in the dashboard
        # metadata = {"metrics": {"accuracy": accuracy}}
    )
    return model

from zenml.client import Client

# Get a model version (in this case the latest `iris_classifier`)
model_version = Client().get_model_version('iris_classifier')

# Fetch its metadata
model_version.run_metadata["accuracy"].value
The ZenML Pro dashboard offers advanced visualization features for artifact exploration, including a dedicated artifacts tab with metadata visualization:
Choosing log metadata with artifacts or model versions depends on the scope and purpose of the information you wish to capture. Artifact metadata is best for details specific to individual outputs, while model version metadata is suitable for broader information relevant to the overall model. By utilizing ZenML's metadata logging capabilities and special types, you can enhance the traceability, reproducibility, and analysis of your ML workflows.
Once metadata has been logged to a model, we can retrieve it easily with the client:
from zenml.client import Client
client = Client()
model = client.get_model_version("my_model", "my_version")
print(model.run_metadata["metadata_key"].value) | user-guide | https://docs.zenml.io/user-guide/starter-guide/track-ml-models | 400 |
_NAME
Install Tekton Pipelines onto your cluster.
If one or more of the deployments are not in the Running state, try increasing the number of nodes in your cluster.
ZenML has only been tested with Tekton Pipelines >=0.38.3 and may not work with previous versions.
Infrastructure Deployment
A Tekton orchestrator can be deployed directly from the ZenML CLI:
zenml orchestrator deploy tekton_orchestrator --flavor=tekton --provider=<YOUR_PROVIDER> ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the Tekton orchestrator, we need:
The ZenML tekton integration installed. If you haven't done so, run:
zenml integration install tekton -y
Docker installed and running.
Tekton pipelines deployed on a remote cluster. See the deployment section for more information.
The name of your Kubernetes context which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
kubectl installed and the name of the Kubernetes configuration context which points to the target cluster (i.e. run kubectl config get-contexts to see a list of available contexts). This is optional (see below).
It is recommended that you set up a Service Connector and use it to connect ZenML Stack Components to the remote Kubernetes cluster, especially if you are using a Kubernetes cluster managed by a cloud provider like AWS, GCP or Azure. This guarantees that your Stack is fully portable to other environments and your pipelines are fully reproducible.
We can then register the orchestrator and use it in our active stack. This can be done in two ways: | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/tekton | 395 |
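As a sketch of the Service Connector approach (names in angle brackets are placeholders):

zenml orchestrator register <ORCHESTRATOR_NAME> --flavor tekton
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set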
ervice-principal
```
Example Command Output
```
Successfully connected orchestrator `aks-demo-cluster` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββ¨
β f2316191-d20b-4348-a68b-f5e347862196 β azure-service-principal β π¦ azure β π kubernetes-cluster β demo-zenml-demos/demo-zenml-terraform-cluster β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββ
```
Register and connect an Azure Container Registry Stack Component to an ACR container registry:
zenml container-registry register acr-demo-registry --flavor azure --uri=demozenmlcontainerregistry.azurecr.io
Example Command Output
```
Successfully registered container_registry `acr-demo-registry`.
```
```sh
zenml container-registry connect acr-demo-registry --connector azure-service-principal
```
Example Command Output
```
Successfully connected container registry `acr-demo-registry` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββ¨
β f2316191-d20b-4348-a68b-f5e347862196 β azure-service-principal β π¦ azure β π³ docker-registry β demozenmlcontainerregistry.azurecr.io β | how-to | https://docs.zenml.io/how-to/auth-management/azure-service-connector | 626 |
ntation section.
Seldon Core Installation ExampleThe following example briefly shows how you can install Seldon in an EKS Kubernetes cluster. It assumes that the EKS cluster itself is already set up and configured with IAM access. For more information or tutorials for other clouds, check out the official Seldon Core installation instructions.
Configure EKS cluster access locally, e.g:
aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks
Install Istio 1.5.0 (required for the latest Seldon Core version):
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
cd istio-1.5.0/
bin/istioctl manifest apply --set profile=demo
Set up an Istio gateway for Seldon Core:
curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f -
Install Seldon Core:
helm install seldon-core seldon-core-operator \
--repo https://storage.googleapis.com/seldon-charts \
--set usageMetrics.enabled=true \
--set istio.enabled=true \
--namespace seldon-system
Test that the installation is functional
kubectl apply -f iris.yaml
with iris.yaml defined as follows:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: default
spec:
  name: iris
  predictors:
    - graph:
        implementation: SKLEARN_SERVER
        modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris
        name: classifier
      name: default
      replicas: 1
Then extract the URL where the model server exposes its prediction API:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
And use curl to send a test prediction API request to the server:
curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \
    -H 'Content-Type: application/json' \
    -d '{ "data": { "ndarray": [[1,2,3,4]] } }'
Using a Service Connector | stack-components | https://docs.zenml.io/stack-components/model-deployers/seldon | 487 |
πModel Registries
Tracking and managing ML models.
Model registries are centralized storage solutions for managing and tracking machine learning models across various stages of development and deployment. They help track the different versions and configurations of each model and enable reproducibility. By storing metadata such as version, configuration, and metrics, model registries help streamline the management of trained models. In ZenML, model registries are Stack Components that allow for the easy retrieval, loading, and deployment of trained models. They also provide information on the pipeline in which the model was trained and how to reproduce it.
Model Registry Concepts and Terminology
ZenML provides a unified abstraction for model registries through which it is possible to handle and manage the concepts of model groups, versions, and stages in a consistent manner regardless of the underlying registry tool or platform being used. The following concepts are useful to be aware of for this abstraction:
RegisteredModel: A logical grouping of models that can be used to track different versions of a model. It holds information about the model, such as its name, description, and tags, and can be created by the user or automatically created by the model registry when a new model is logged.
RegistryModelVersion: A specific version of a model identified by a unique version number or string. It holds information about the model, such as its name, description, tags, and metrics, and a reference to the model artifact logged to the model registry. In ZenML, it also holds a reference to the pipeline name, pipeline run ID, and step name. Each model version is associated with a model registration. | stack-components | https://docs.zenml.io/stack-components/model-registries | 325 |
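To get a feel for how these concepts surface in practice, here is a sketch of inspecting a model registry through the ZenML CLI (assuming a model registry is part of your active stack):

# List all registered models in the registry
zenml model-registry models list

# List all versions of a specific registered model
zenml model-registry models list-versions <MODEL_NAME>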
Docker settings on a step
You have the option to customize the Docker settings at a step level.
By default every step of a pipeline uses the same Docker image that is defined at the pipeline level. Sometimes your steps will have special requirements that make it necessary to define a different Docker image for one or many steps. This can easily be accomplished by adding the DockerSettings to the step decorator directly.
from zenml import step
from zenml.config import DockerSettings


@step(
    settings={
        "docker": DockerSettings(
            parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime"
        )
    }
)
def training(...):
    ...
Alternatively, this can also be done within the configuration file.
steps:
  training:
    settings:
      docker:
        parent_image: pytorch/pytorch:2.2.0-cuda11.8-cudnn8-runtime
        required_integrations:
          - gcp
          - github
        requirements:
          - zenml  # Make sure to include ZenML for other parent images
          - numpy
PreviousDocker settings on a pipeline
NextSpecify pip dependencies and apt packages
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/customize-docker-builds/docker-settings-on-a-step | 230 |
registry β demozenmlcontainerregistry.azurecr.io βββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββ
```
Combine all Stack Components together into a Stack and set it as active (also throw in a local Image Builder for completion):
zenml image-builder register local --flavor local
Example Command Output
```
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered image_builder `local`.
```
```sh
zenml stack register gcp-demo -a azure-demo -o aks-demo-cluster -c acr-demo-registry -i local --set
```
Example Command Output
```
Stack 'gcp-demo' successfully registered!
Active repository stack set to:'gcp-demo'
```
Finally, run a simple pipeline to prove that everything works as expected. We'll use the simplest pipeline possible for this example:
from zenml import pipeline, step
@step
def step_1() -> str:
    """Returns the `world` string."""
    return "world"


@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> None:
    """Combines the two strings at its input and prints them."""
    combined_str = f"{input_one} {input_two}"
    print(combined_str)


@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)


if __name__ == "__main__":
    my_pipeline()

Saving that to a run.py file and running it gives us:
Example Command Output
```
$ python run.py
Registered pipeline simple_pipeline (version 1).
Building Docker image(s) for pipeline simple_pipeline.
Building Docker image demozenmlcontainerregistry.azurecr.io/zenml:simple_pipeline-orchestrator.
Including integration requirements: adlfs==2021.10.0, azure-identity==1.10.0, azure-keyvault-keys, azure-keyvault-secrets, azure-mgmt-containerservice>=20.0.0, azureml-core==1.48.0, kubernetes, kubernetes==18.20.0
No .dockerignore found, including all files inside build context. | how-to | https://docs.zenml.io/how-to/auth-management/azure-service-connector | 539 |
n the respective artifact in the pipeline run DAG.
Alternatively, if you are running inside a Jupyter notebook, you can load and render the reports using the artifact.visualize() method, e.g.:
from zenml.client import Client


def visualize_results(pipeline_name: str, step_name: str) -> None:
    pipeline = Client().get_pipeline(pipeline=pipeline_name)
    evidently_step = pipeline.last_run.steps[step_name]
    evidently_step.visualize()


if __name__ == "__main__":
    visualize_results("text_data_report_pipeline", "text_report")
    visualize_results("text_data_test_pipeline", "text_test")
PreviousDeepchecks
NextWhylogs
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/evidently | 147 |
orConfig class adds your configuration parameters.
Bring both the implementation and the configuration together by inheriting from the BaseStepOperatorFlavor class. Make sure that you give a name to the flavor through its abstract property.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml step-operator flavor register <path.to.MyStepOperatorFlavor>
For example, if your flavor class MyStepOperatorFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually, it's better to not have to rely on this mechanism and initialize zenml at the root.
Afterward, you should see the new flavor in the list of available flavors:
zenml step-operator flavor list
It is important to understand when and how these base abstractions come into play in a ZenML workflow.
The CustomStepOperatorFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomStepOperatorConfig class is imported when someone tries to register/update a stack component with this custom flavor. Especially, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The CustomStepOperator only comes into play when the component is ultimately in use. | stack-components | https://docs.zenml.io/stack-components/step-operators/custom | 377 |
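For orientation, here is a minimal skeleton of how these three classes fit together. It is a sketch with illustrative names, not a complete implementation:

from typing import List, Type

from zenml.step_operators import (
    BaseStepOperator,
    BaseStepOperatorConfig,
    BaseStepOperatorFlavor,
)


class MyStepOperatorConfig(BaseStepOperatorConfig):
    """Configuration values, validated when the component is registered."""

    some_option: str = "default"


class MyStepOperator(BaseStepOperator):
    """Implementation, used only when the component actually runs."""

    def launch(self, info, entrypoint_command: List[str]) -> None:
        # Hand the step's entrypoint command over to your compute backend here
        ...


class MyStepOperatorFlavor(BaseStepOperatorFlavor):
    @property
    def name(self) -> str:
        return "my_flavor"

    @property
    def config_class(self) -> Type[BaseStepOperatorConfig]:
        return MyStepOperatorConfig

    @property
    def implementation_class(self) -> Type[BaseStepOperator]:
        return MyStepOperator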
Local Image Builder
Building container images locally.
The local image builder is an image builder flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
ZenML uses the official Docker Python library to build and push your images. This library loads its authentication credentials to push images from the default config location: $HOME/.docker/config.json. If your Docker configuration is stored in a different directory, you can use the environment variable DOCKER_CONFIG to override this behavior:
export DOCKER_CONFIG=/path/to/config_dir
The directory that you specify here must contain your Docker configuration in a file called config.json.
When to use it
You should use the local image builder if:
you're able to install and use Docker on your client machine.
you want to use remote components that require containerization without the additional hassle of configuring infrastructure for an additional component.
How to deploy it
The local image builder comes with ZenML and works without any additional setup.
How to use it
To use the Local image builder, we need:
Docker installed and running.
The Docker client authenticated to push to the container registry that you intend to use in the same stack.
We can then register the image builder and use it to create a new stack:
zenml image-builder register <NAME> --flavor=local
# Register and activate a stack with the new image builder
zenml stack register <STACK_NAME> -i <NAME> ... --set
For more information and a full list of configurable attributes of the local image builder, check out the SDK Docs .
PreviousImage Builders
NextKaniko Image Builder
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/image-builders/local | 338 |
Skypilot
Use Skypilot with ZenML.
The ZenML SkyPilot VM Orchestrator allows you to provision and manage VMs on any supported cloud provider (AWS, GCP, Azure, Lambda Labs) for running your ML pipelines. It simplifies the process and offers cost savings and high GPU availability.
Prerequisites
To use the SkyPilot VM Orchestrator, you'll need:
ZenML SkyPilot integration for your cloud provider installed (zenml integration install <PROVIDER> skypilot_<PROVIDER>)
Docker installed and running
A remote artifact store and container registry in your ZenML stack
A remote ZenML deployment
Appropriate permissions to provision VMs on your cloud provider
A service connector configured to authenticate with your cloud provider (not needed for Lambda Labs)
Configuring the Orchestrator
Configuration steps vary by cloud provider:
AWS, GCP, Azure:
Install the SkyPilot integration and connectors extra for your provider
Register a service connector with credentials that have SkyPilot's required permissions
Register the orchestrator and connect it to the service connector
Register and activate a stack with the new orchestrator
zenml service-connector register <PROVIDER>-skypilot-vm -t <PROVIDER> --auto-configure
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_<PROVIDER>
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <PROVIDER>-skypilot-vm
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
Lambda Labs:
Install the SkyPilot Lambda integration
Register a secret with your Lambda Labs API key
Register the orchestrator with the API key secret
Register and activate a stack with the new orchestrator
zenml secret create lambda_api_key --scope user --api_key=<KEY>
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{lambda_api_key.api_key}}
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
Running a Pipeline | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations/skypilot | 444 |
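Once the stack is active, running a pipeline on the provisioned VMs is no different from any other orchestrator. As a sketch:

python file_that_runs_a_zenml_pipeline.py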
ice account key (long-lived credentials) directly:
zenml service-connector register gcp-empty-sa --type gcp --auth-method service-account --service_account_json=@[email protected] --project_id=zenml-core
Example Command Output
Expanding argument value service_account_json to contents of file /home/stefan/aspyre/src/zenml/[email protected].
Successfully registered service connector `gcp-empty-sa` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π΅ gcp-generic β zenml-core β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ gcs-bucket β π₯ error: connector authorization failure: failed to list GCS buckets: 403 GET β
β β https://storage.googleapis.com/storage/v1/b?project=zenml-core&projection=noAcl&prettyPrint=false: β
β β [email protected] does not have storage.buckets.list access to the Google Cloud β
β β project. Permission 'storage.buckets.list' denied on resource (or it may not exist). β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 433 |
CTOR TYPE β RESOURCE TYPE β RESOURCE NAMES ββ βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββΌβββββββββββββββββββββββ¨
β 405034fe-5e6e-4d29-ba62-8ae025381d98 β gcs-zenml-bucket-sl β π΅ gcp β π¦ gcs-bucket β gs://zenml-bucket-sl β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββ·βββββββββββββββββββββββ
```
register and connect a Google Cloud Image Builder Stack Component to the target GCP project:
zenml image-builder register gcp-zenml-core --flavor gcp
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered image_builder `gcp-zenml-core`.
```
```sh
zenml image-builder connect gcp-zenml-core --connector gcp-cloud-builder-zenml-core
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected image builder `gcp-zenml-core` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββ¨
β 648c1016-76e4-4498-8de7-808fd20f057b β gcp-cloud-builder-zenml-core β π΅ gcp β π΅ gcp-generic β zenml-core β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ
``` | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 579 |
πTrain with GPUs
Ensuring your pipelines or steps run on GPU-backed hardware.
There are several reasons why you may want to scale your machine learning pipelines to the cloud, such as utilizing more powerful hardware or distributing tasks across multiple nodes. In order to achieve this with ZenML you'll need to run your steps on GPU-backed hardware using ResourceSettings to allocate greater resources on an orchestrator node and/or make some adjustments to the container environment.
Specify resource requirements for steps
Some steps of your machine learning pipeline might be more resource-intensive and require special hardware to execute. In such cases, you can specify the required resources for steps as follows:
from zenml import step
from zenml.config import ResourceSettings


@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=2, memory="8GB")})
def training_step(...) -> ...:
    # train a model
    ...
If the underlying orchestrator in your stack then supports specifying resources, this setting will attempt to secure these resources. Some orchestrators (like the Skypilot orchestrator) do not support ResourceSettings directly, but rather use their orchestrator-specific settings to achieve the same effect:
from zenml import step
from zenml.integrations.skypilot.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings

skypilot_settings = SkypilotAWSOrchestratorSettings(
    cpus="2",
    memory="16",
    accelerators="V100:2",
)


@step(settings={"orchestrator.vm_aws": skypilot_settings})
def training_step(...) -> ...:
    # train a model
    ...
Please refer to the source code and documentation of each orchestrator to find out which orchestrator supports specifying resources in what way. | how-to | https://docs.zenml.io/how-to/training-with-gpus | 363 |
contents of file [email protected] registered service connector `gcp-service-account` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://zenml-bucket-sl β
β β gs://zenml-core.appspot.com β
β β gs://zenml-core_cloudbuild β
β β gs://zenml-datasets β
β β gs://zenml-internal-artifact-store β
β β gs://zenml-kubeflow-artifact-store β
β β gs://zenml-project-time-series-bucket β
βββββββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββββ
The GCP service connector configuration and service account credentials:
zenml service-connector describe gcp-service-account
Example Command Output
Service connector 'gcp-service-account' of type 'gcp' with id '4b3d41c9-6a6f-46da-b7ba-8f374c3f49c5' is owned by user 'default' and is 'private'.
'gcp-service-account' gcp Service Connector Details
ββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β ID β 4b3d41c9-6a6f-46da-b7ba-8f374c3f49c5 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β NAME β gcp-service-account β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β TYPE β π΅ gcp β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β service-account β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 565 |
register the Azure Container Registry as follows:
# Register the Azure container registry and reference the target ACR registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f azure \
--uri=<REGISTRY_URL>
# Connect the Azure container registry to the target ACR registry via an Azure Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the Azure Container Registry to a target ACR registry through an Azure Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml container-registry connect azure-demo --connector azure-demo
Successfully connected container registry `azure-demo` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββ¨
β db5821d0-a658-4504-ae96-04c3302d8f85 β azure-demo β π¦ azure β π³ docker-registry β demozenmlcontainerregistry.azurecr.io β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββ
As a final step, you can use the Azure Container Registry in a ZenML Stack:
# Register and set a stack with the new container registry
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
Linking the Azure Container Registry to a Service Connector means that your local Docker client is no longer authenticated to access the remote registry. If you need to manually interact with the remote registry via the Docker CLI, you can use the local login Service Connector feature to temporarily authenticate your local Docker client to the remote registry: | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/azure | 532 |
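As a sketch of that temporary login, reusing the Service Connector from the example above:

zenml service-connector login azure-demo --resource-type docker-registry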
of configuration parameters. These parameters are:
kubernetes_context: the Kubernetes context to use to contact the remote Seldon Core installation. If not specified, the active Kubernetes context is used or the in-cluster configuration is used if the model deployer is running in a Kubernetes cluster. The recommended approach is to use a Service Connector to link the Seldon Deployer Stack Component to a Kubernetes cluster and to skip this parameter.
kubernetes_namespace: the Kubernetes namespace where the Seldon Core deployment servers are provisioned and managed by ZenML. If not specified, the namespace set in the current configuration is used.
base_url: the base URL of the Kubernetes ingress used to expose the Seldon Core deployment servers.
In addition to these parameters, the Seldon Core Model Deployer may also require additional configuration to be set up to allow it to authenticate to the remote artifact store or persistent storage service where model artifacts are located. This is covered in the Managing Seldon Core Authentication section.
Configuring Seldon Core in a Kubernetes cluster can be a complex and error-prone process, so we have provided a set of Terraform-based recipes to quickly provision popular combinations of MLOps tools. More information about these recipes can be found in the MLOps Stack Recipes.
Infrastructure Deployment
The Seldon Model Deployer can be deployed directly from the ZenML CLI:
zenml model-deployer deploy seldon_deployer --flavor=seldon --provider=<YOUR_PROVIDER> ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
Seldon Core Installation Example | stack-components | https://docs.zenml.io/stack-components/model-deployers/seldon | 349 |
dentified by the kubernetes-cluster Resource Type.
The resource name is a user-friendly cluster name configured during registration.
Authentication Methods
Two authentication methods are supported:
username and password. This is not recommended for production purposes.
authentication token with or without client certificates.
For Kubernetes clusters that use neither username and password nor authentication tokens, such as local K3D clusters, the authentication token method can be used with an empty token.
This Service Connector does not support generating short-lived credentials from the credentials configured in the Service Connector. In effect, this means that the configured credentials will be distributed directly to clients and used to authenticate to the target Kubernetes API. It is recommended therefore to use API tokens accompanied by client certificates if possible.
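As a sketch, registering a connector with an explicit API token might look as follows (the server URL and attribute names shown here are assumptions; check the interactive CLI for the exact configuration attributes):

zenml service-connector register k8s-token --type kubernetes --auth-method token \
    --server=https://35.185.95.223 \
    --token=<KUBERNETES_API_TOKEN>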
Auto-configuration
The Kubernetes Service Connector allows fetching credentials from the local Kubernetes CLI (i.e. kubectl) during registration. The current Kubernetes kubectl configuration context is used for this purpose. The following is an example of lifting Kubernetes credentials granting access to a GKE cluster:
zenml service-connector register kube-auto --type kubernetes --auto-configure
Example Command Output
Successfully registered service connector `kube-auto` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββ¨
β π kubernetes-cluster β 35.185.95.223 β
βββββββββββββββββββββββββ·βββββββββββββββββ
zenml service-connector describe kube-auto
Example Command Output
Service connector 'kube-auto' of type 'kubernetes' with id '4315e8eb-fcbd-4938-a4d7-a9218ab372a1' is owned by user 'default' and is 'private'.
'kube-auto' kubernetes Service Connector Details
ββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/kubernetes-service-connector | 451 |
ββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββ
$ zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-iam-multi-us
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected orchestrator `<ORCHESTRATOR_NAME>` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββ¨
β ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 β aws-iam-multi-us β πΆ aws β π kubernetes-cluster β zenhacks-cluster β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββ
# Register and activate a stack with the new orchestrator
$ zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
if you don't have a Service Connector on hand and you don't want to register one, the local Kubernetes kubectl client needs to be configured with a configuration context pointing to the remote cluster. The kubernetes_context stack component must also be configured with the value of that context:
zenml orchestrator register <ORCHESTRATOR_NAME> \
--flavor=kubernetes \
--kubernetes_context=<KUBERNETES_CONTEXT>
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Kubernetes. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
You can now run any ZenML pipeline using the Kubernetes orchestrator:
python file_that_runs_a_zenml_pipeline.py | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/kubernetes | 581 |
Get past pipeline/step runs
In order to get past pipeline/step runs, you can use the get_pipeline method in combination with the last_run property or just index into the runs:
from zenml.client import Client
client = Client()
# Retrieve a pipeline by its name
p = client.get_pipeline("mlflow_train_deploy_pipeline")
# Get the latest run of this pipeline
latest_run = p.last_run
# Alternatively you can also access runs by index or name
first_run = p[0]
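From a run you can then drill down into individual steps and their outputs. The following is a sketch, assuming the pipeline has a step invocation named trainer with a single output:

# Access a step of the retrieved run by its invocation name
trainer_step = latest_run.steps["trainer"]

# Load the materialized output artifact of that step
trained_model = trainer_step.output.load()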
PreviousFetching pipelines
NextUse configuration files
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/get-past-pipeline-step-runs | 119 |
Stack Component.
PreviousDocker Service ConnectorNextAWS Service Connector
Last updated 7 months ago | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/kubernetes-service-connector | 19 |
ββ β β β β HTTP response headers: HTTPHeaderDict({'Audit-Id': '72558f83-e050-4fe3-93e5-9f7e66988a4c', 'Cache-Control': β
β β β β β 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 09 Jun 2023 18:59:02 GMT', β
β β β β β 'Content-Length': '129'}) β
β β β β β HTTP response body: β
β β β β β {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauth β
β β β β β orized","code":401} β
β β β β β β
β β β β β β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 356 |
when the server is first deployed. Defaults to 0.
ZENML_DEFAULT_USER_NAME: The name of the initial admin user account created by the server on the first deployment, during database initialization. Defaults to default.
ZENML_DEFAULT_USER_PASSWORD: The password to use for the initial admin user account. Defaults to an empty password value, if not set.
Run the ZenML server with Docker
As previously mentioned, the ZenML server container image uses sensible defaults for most configuration options. This means that you can simply run the container with Docker without any additional configuration and it will work out of the box for most use cases:
docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server
Note: It is recommended to use a ZenML container image version that matches the version of your client, to avoid any potential API incompatibilities (e.g. zenmldocker/zenml-server:0.21.1 instead of zenmldocker/zenml-server).
The above command will start a containerized ZenML server running on your machine that uses a temporary SQLite database file stored in the container. Temporary means that the database and all its contents (stacks, pipelines, pipeline runs, etc.) will be lost when the container is removed with docker rm.
You need to visit the ZenML dashboard at http://localhost:8080 and activate the server by creating an initial admin user account. You can then connect your client to the server with the web login flow:
$ zenml connect --url http://localhost:8080
Connecting to: 'http://localhost:8080'...
If your browser did not open automatically, please open the following URL into your browser to proceed with the authentication:
http://localhost:8080/devices/verify?device_id=f7a7333a-3ef0-4f39-85a9-f190279456d3&user_code=9375f5cdfdaf36772ce981fe3ee6172c
Successfully logged in.
Creating default stack for user 'default' in workspace default...
Updated the global store configuration. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 438 |
β impersonation β β
ββββββββββββββββββββββββββ·βββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββ·ββββββββ·βββββββββ
For this example we will configure a service connector using the user-account auth method. But before we can do that, we need to log in to GCP using the following command:
gcloud auth application-default login
This will open a browser window and ask you to login to your GCP account. Once you have logged in, you can register a new service connector using the following command:
# We want to use --auto-configure to automatically configure the service connector with the appropriate credentials and permissions to provision VMs on GCP.
zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure
# using generic resource type requires disabling the generation of temporary tokens
zenml service-connector update gcp-skypilot-vm --generate_temporary_tokens=False
This will automatically configure the service connector with the appropriate credentials and permissions to provision VMs on GCP. You can then use the service connector to configure your registered VM Orchestrator stack component using the following commands:
# Register the orchestrator
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_gcp
# Connect the orchestrator to the service connector
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector gcp-skypilot-vm
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
We first need to install the SkyPilot integration for Azure and the Azure extra for ZenML, using the following two commands:
pip install "zenml[connectors-azure]"
zenml integration install azure skypilot_azure
To provision VMs on Azure, your VM Orchestrator stack component needs to be configured to authenticate with an Azure Service Connector
Promote a Model
Stages and Promotion
Model stages are a way to model the progress that different versions take through the various stages in their lifecycle. A ZenML Model version can be promoted to a different stage through the Dashboard, the ZenML CLI, or code.
This is a way to signify the progression of your model version through the ML lifecycle and adds an extra layer of metadata to identify the state of a particular model version. Possible options for stages are:
staging: This version is staged for production.
production: This version is running in a production setting.
latest: The latest version of the model. This is a virtual stage to retrieve the latest version only - versions cannot be promoted to latest.
archived: This is archived and no longer relevant. This stage occurs when a model moves out of any other stage.
Your own particular business or use case logic will determine which model version you choose to promote, and you can do this in the following ways:
Promotion via CLI
This is probably the least common way that you'll use, but it's still possible and perhaps might be useful for some use cases or within a CI system, for example. You simply use the following CLI subcommand:
zenml model version update iris_logistic_regression --stage=...
Promotion via Cloud Dashboard
This feature is not yet available, but soon you will be able to promote your model versions directly from the ZenML Pro dashboard.
Promotion via Python SDK
This is the most common way that you'll use to promote your models. You can see how you would do this here:
from zenml import Model
from zenml.enums import ModelStages

MODEL_NAME = "iris_logistic_regression"

model = Model(name=MODEL_NAME, version="1.2.3")
model.set_stage(stage=ModelStages.PRODUCTION)

# get latest model and set it as Staging
# (if there is a current Staging version it will get Archived)
latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST)
latest_model.set_stage(stage=ModelStages.STAGING)
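Once promoted, a version can also be fetched by stage instead of by version number, as in this sketch:

from zenml import Model
from zenml.enums import ModelStages

# Retrieve whichever version currently holds the production stage
prod_model = Model(name="iris_logistic_regression", version=ModelStages.PRODUCTION)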
>' \
--client_secret='<YOUR_AZURE_CLIENT_SECRET>'
# Alternatively for providing key-value pairs, you can utilize the '--values' option by specifying a file path containing
# key-value pairs in either JSON or YAML format.
# File content example: {"account_name":"<YOUR_AZURE_ACCOUNT_NAME>",...}
zenml secret create az_secret \
--values=@path/to/file.txt
# Register the Azure artifact store and reference the ZenML secret
zenml artifact-store register az_store -f azure \
--path='az://your-container' \
--authentication_secret=az_secret
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a az_store ... --set
For more, up-to-date information on the Azure Artifact Store implementation and its configuration, you can have a look at the SDK docs .
How do you use it?
Aside from the fact that the artifacts are stored in Azure Blob Storage, using the Azure Artifact Store is no different from using any other flavor of Artifact Store.
PreviousGoogle Cloud Storage (GCS)
NextDevelop a custom artifact store
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 232 |
# Same as pipeline level configuration, if specified overrides for this step
enable_artifact_metadata: True
enable_artifact_visualization: True
enable_cache: False
enable_step_logs: True
# Same as pipeline level configuration, if specified overrides for this step
extra: {}
# Same as pipeline level configuration, if specified overrides for this step
model: {}
# Same as pipeline level configuration, if specified overrides for this step
settings:
docker: {}
resources: {}
# Stack component specific settings
step_operator.sagemaker:
estimator_args:
instance_type: m7g.medium
Deep-dive
enable_XXX parameters
These are boolean flags for various configurations:
enable_artifact_metadata: Whether to associate metadata with artifacts or not.
enable_artifact_visualization: Whether to attach visualizations of artifacts.
enable_cache: Utilize caching or not.
enable_step_logs: Enable tracking step logs.
enable_artifact_metadata: True
enable_artifact_visualization: True
enable_cache: True
enable_step_logs: True
build ID
The UUID of the build to use for this pipeline. If specified, Docker image building is skipped for remote orchestrators, and the Docker image specified in this build is used.
build: <INSERT-BUILD-ID-HERE>
Configuring the model
Specifies the ZenML Model to use for this pipeline.
model:
name: "ModelName"
version: "production"
description: An example model
tags: ["classifier"]
Pipeline and step parameters
A dictionary of JSON-serializable parameters specified at the pipeline or step level. For example:
parameters:
gamma: 0.01
steps:
trainer:
parameters:
gamma: 0.001
Corresponds to:
from zenml import step, pipeline
@step
def trainer(gamma: float):
# Use gamma as normal
print(gamma)
@pipeline
def my_pipeline(gamma: float):
# use gamma or pass it into the step
print(0.01)
trainer(gamma=gamma) | how-to | https://docs.zenml.io/v/docs/how-to/use-configuration-files/what-can-be-configured | 405 |
s/kube-system/services/https:metrics-server:/proxy
A similar process is possible with GCR container registries:
zenml service-connector verify gcp-user-account --resource-type docker-registry
Example Command Output
Service connector 'gcp-user-account' is correctly configured with valid credentials and has access to the following resources:
ββββββββββββββββββββββ―ββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββΌββββββββββββββββββββ¨
β π³ docker-registry β gcr.io/zenml-core β
ββββββββββββββββββββββ·ββββββββββββββββββββ
zenml service-connector login gcp-user-account --resource-type docker-registry
Example Command Output
Attempting to configure local client using service connector 'gcp-user-account'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
The 'gcp-user-account' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.
To verify that the local Docker container registry client is correctly configured, the following command can be used:
docker push gcr.io/zenml-core/zenml-server:connectors
Example Command Output
The push refers to repository [gcr.io/zenml-core/zenml-server]
d4aef4f5ed86: Pushed
2d69a4ce1784: Pushed
204066eca765: Pushed
2da74ab7b0c1: Pushed
75c35abda1d1: Layer already exists
415ff8f0f676: Layer already exists
c14cb5b1ec91: Layer already exists
a1d005f5264e: Layer already exists
3a3fd880aca3: Layer already exists
149a9c50e18e: Layer already exists
1f6d3424b922: Layer already exists
8402c959ae6f: Layer already exists
419599cb5288: Layer already exists
8553b91047da: Layer already exists
connectors: digest: sha256:a4cfb18a5cef5b2201759a42dd9fe8eb2f833b788e9d8a6ebde194765b42fe46 size: 3256 | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 557 |
bucket name: {bucket-name}
EKS Kubernetes cluster
Allows users to access an EKS cluster as a standard Kubernetes cluster resource. When used by Stack Components, they are provided a pre-authenticated Python Kubernetes client instance.
The configured credentials must have at least the following AWS IAM permissions associated with the ARNs of EKS clusters that the connector will be allowed to access (e.g. arn:aws:eks:{region_id}:{project_id}:cluster/* represents all the EKS clusters available in the target AWS region).
eks:ListClusters
eks:DescribeCluster
If you are using the AWS IAM role, Session Token or Federation Token authentication methods, you don't have to worry too much about restricting the permissions of the AWS credentials that you use to access the AWS cloud resources. These authentication methods already support automatically generating temporary tokens with permissions down-scoped to the minimum required to access the target resource.
In addition to the above permissions, if the credentials are not associated with the same IAM user or role that created the EKS cluster, the IAM principal must be manually added to the EKS cluster's aws-auth ConfigMap, otherwise the Kubernetes client will not be allowed to access the cluster's resources. This makes it more challenging to use the AWS Implicit and AWS Federation Token authentication methods for this resource. For more information, see this documentation.
If set, the resource name must identify an EKS cluster using one of the following formats:
EKS cluster name (canonical resource name): {cluster-name}
EKS cluster ARN: arn:aws:eks:{region}:{account-id}:cluster/{cluster-name}
EKS cluster names are region scoped. The connector can only be used to access EKS clusters in the AWS region that it is configured to use.
ECR container registry
Allows Stack Components to access one or more ECR repositories as a standard Docker registry resource. When used by Stack Components, they are provided a pre-authenticated python-docker client instance. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 397 |
zenml stack register custom_stack -dv ge_data_validator ... --set
You can continue to edit your local Great Expectations configuration (e.g. add new Data Sources, update the Metadata Stores etc.) and these changes will be visible in your ZenML pipelines. You can also use the Great Expectations CLI as usual to manage your configuration and your Expectations.
This deployment method migrates your existing Great Expectations configuration to ZenML and allows you to use it with local as well as remote orchestrators. You have to load the Great Expectations configuration contents in one of the Data Validator configuration parameters using the @ operator, e.g.:
# Register the Great Expectations data validator
zenml data-validator register ge_data_validator --flavor=great_expectations \
--context_config=@/path/to/my/great_expectations/great_expectations.yaml
# Register and set a stack with the new data validator
zenml stack register custom_stack -dv ge_data_validator ... --set
When you are migrating your existing Great Expectations configuration to ZenML, keep in mind that the Metadata Stores that you configured there will also need to be accessible from the location where pipelines are running. For example, you cannot use a non-local orchestrator with a Great Expectations Metadata Store that is located on your filesystem.
Advanced Configuration
The Great Expectations Data Validator has a few advanced configuration attributes that might be useful for your particular use-case: | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/great-expectations | 284 |
D Kubernetes clusters and local Docker containers.
When used with a remote ZenML server, the implicit authentication method only works if your ZenML server is deployed in the same cloud as the one supported by the Service Connector Type that you are using. For instance, if you're using the AWS Service Connector Type, then the ZenML server must also be deployed in AWS (e.g. in an EKS Kubernetes cluster). You may also need to manually adjust the cloud configuration of the remote cloud workload where the ZenML server is running to allow access to resources (e.g. add permissions to the AWS IAM role attached to the EC2 or EKS node, add roles to the GCP service account attached to the GKE cluster nodes).
The following is an example of using the GCP Service Connector's implicit authentication method to gain immediate access to all the GCP resources that the ZenML server also has access to. Note that this is only possible because the ZenML server is also deployed in GCP, in a GKE cluster, and the cluster is attached to a GCP service account with permissions to access the project resources:
zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core
Example Command Output
Successfully registered service connector `gcp-implicit` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π΅ gcp-generic β zenml-core β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://annotation-gcp-store β
β β gs://zenml-bucket-sl β
β β gs://zenml-core.appspot.com β
β β gs://zenml-core_cloudbuild β | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices | 452 |
ββ βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β METADATA β {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', β
β β 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'lr': '0.001', 'epochs': '5', 'optimizer': 'Adam'} β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β MODEL_SOURCE_URI β file:///Users/safoine-zenml/Library/Application Support/zenml/local_stores/0902a511-117d-4152-a098-b2f1124c4493/mlruns/489728212459131640/293a0d2e71e046999f77a79639f6eac2/artifacts/model β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β STAGE β None β | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries/mlflow | 376 |
our stack:
zenml integration install label_studio
You will then need to obtain your Label Studio API key. This will give you access to the web annotation interface. (The following steps apply to a local instance of Label Studio, but feel free to obtain your API key directly from your deployed instance if that's what you are using.)
git clone https://github.com/HumanSignal/label-studio.git
cd label-studio
docker-compose up -d # starts label studio at http://localhost:8080
Then visit http://localhost:8080/ to log in, and then visit http://localhost:8080/user/account and get your Label Studio API key (from the upper right-hand corner). You will need it for the next step. Keep the Label Studio server running, because the ZenML Label Studio annotator will use it as the backend.
At this point you should register the API key under a custom secret name, making sure to replace the two parts in <> with whatever you choose:
zenml secret create label_studio_secrets --api_key="<your_label_studio_api_key>"
Then register your annotator with ZenML:
zenml annotator register label_studio --flavor label_studio --api_key="{{label_studio_secrets.api_key}}"
# for deployed instances of Label Studio, you can also pass in the URL as follows, for example:
# zenml annotator register label_studio --flavor label_studio --authentication_secret="<LABEL_STUDIO_SECRET_NAME>" --instance_url="<your_label_studio_url>" --port=80
When using a deployed instance of Label Studio, the instance URL must be specified without any trailing / at the end. You should specify the port, for example, port 80 for a standard HTTP connection. For a Hugging Face deployment (the easiest way to get going with Label Studio), please read the Hugging Face deployment documentation.
Finally, add all these components to a stack and set it as your active stack. For example:
zenml stack copy default annotation
zenml stack update annotation -a <YOUR_CLOUD_ARTIFACT_STORE> | stack-components | https://docs.zenml.io/v/docs/stack-components/annotators/label-studio | 431 |
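To finish the stack, the annotator registered above still needs to be added and the stack activated. A sketch, assuming -an is the flag for the annotator component:
zenml stack update annotation -an label_studio
zenml stack set annotation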
io β
ββββββββββββββββββββββ·βββββββββββββββββ
If you already have one or more Docker Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the container registry you want to use for your Default Container Registry by running e.g.:
zenml service-connector list-resources --connector-type docker --resource-id <REGISTRY_URI>
Example Command Output
$ zenml service-connector list-resources --connector-type docker --resource-id docker.io
The resource with name 'docker.io' can be accessed by 'docker' service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―βββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββ¨
β cf55339f-dbc8-4ee6-862e-c25aff411292 β dockerhub β π³ docker β π³ docker-registry β docker.io β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·βββββββββββββββββ
After having set up or decided on a Docker Service Connector to use to connect to the target container registry, you can register the Docker Container Registry as follows:
# Register the container registry and reference the target registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f default \
--uri=<REGISTRY_URL>
# Connect the container registry to the target registry via a Docker Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the Default Container Registry to a target registry through a Docker Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml container-registry connect dockerhub --connector dockerhub | stack-components | https://docs.zenml.io/stack-components/container-registries/default | 523 |
Develop a Custom Step Operator
Learning how to develop a custom step operator.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base Abstraction
The BaseStepOperator is the abstract base class that needs to be subclassed in order to run specific steps of your pipeline in a separate environment. As step operators can come in many shapes and forms, the base class exposes a deliberately basic and generic interface:
from abc import ABC, abstractmethod
from typing import List, Type
from zenml.enums import StackComponentType
from zenml.stack import StackComponent, StackComponentConfig, Flavor
from zenml.config.step_run_info import StepRunInfo
class BaseStepOperatorConfig(StackComponentConfig):
"""Base config for step operators."""
class BaseStepOperator(StackComponent, ABC):
"""Base class for all ZenML step operators."""
@abstractmethod
def launch(
self,
info: StepRunInfo,
entrypoint_command: List[str],
) -> None:
"""Abstract method to execute a step.
Subclasses must implement this method and launch a **synchronous**
job that executes the `entrypoint_command`.
Args:
info: Information about the step run.
entrypoint_command: Command that executes the step.
"""
class BaseStepOperatorFlavor(Flavor):
"""Base class for all ZenML step operator flavors."""
@property
@abstractmethod
def name(self) -> str:
"""Returns the name of the flavor."""
@property
def type(self) -> StackComponentType:
"""Returns the flavor type."""
return StackComponentType.STEP_OPERATOR
@property
def config_class(self) -> Type[BaseStepOperatorConfig]:
"""Returns the config class for this flavor."""
return BaseStepOperatorConfig
@property
@abstractmethod
def implementation_class(self) -> Type[BaseStepOperator]: | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/custom | 387 |
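Putting the abstraction into practice, a minimal custom flavor might look like the following sketch. The submit_job_and_wait helper and the queue_name config field are hypothetical; a real implementation would talk to your own execution backend and must block until the step finishes:
from typing import List, Type, cast

from zenml.config.step_run_info import StepRunInfo


class MyStepOperatorConfig(BaseStepOperatorConfig):
    """Config for the custom step operator (fields are examples)."""

    queue_name: str = "default"


class MyStepOperator(BaseStepOperator):
    """Runs steps as synchronous jobs on a hypothetical backend."""

    @property
    def config(self) -> MyStepOperatorConfig:
        return cast(MyStepOperatorConfig, self._config)

    def launch(
        self,
        info: StepRunInfo,
        entrypoint_command: List[str],
    ) -> None:
        # Submit `entrypoint_command` to the backend and wait for it to
        # complete, raising on failure (hypothetical helper).
        submit_job_and_wait(
            command=entrypoint_command,
            queue=self.config.queue_name,
        )


class MyStepOperatorFlavor(BaseStepOperatorFlavor):
    """Flavor for the custom step operator."""

    @property
    def name(self) -> str:
        return "my_step_operator"

    @property
    def config_class(self) -> Type[MyStepOperatorConfig]:
        return MyStepOperatorConfig

    @property
    def implementation_class(self) -> Type[MyStepOperator]:
        return MyStepOperator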
Introduction
Welcome to ZenML!
ZenML is an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines. By decoupling infrastructure from code, ZenML enables developers across your organization to collaborate more effectively as they develop to production.
ZenML enables MLOps infrastructure experts to define, deploy, and manage sophisticated production environments that are easy to share with colleagues.
ZenML Pro: ZenML Pro provides a control plane that allows you to deploy a managed ZenML instance and get access to exciting new features such as CI/CD, Model Control Plane, and RBAC.
Self-hosted deployment: ZenML can be deployed on any cloud provider and provides many Terraform-based utility functions to deploy other MLOps tools or even entire MLOps stacks:
# Deploy ZenML to any cloud
zenml deploy --provider aws
# Deploy MLOps tools and infrastructure to any cloud
zenml orchestrator deploy kfp --flavor kubeflow --provider gcp
# Deploy entire MLOps stacks at once
zenml stack deploy gcp-vertexai --provider gcp -o kubeflow ...
Standardization: With ZenML, you can standardize MLOps infrastructure and tooling across your organization. Simply register your staging and production environments as ZenML stacks and invite your colleagues to run ML workflows on them.
# Register MLOps tools and infrastructure
zenml orchestrator register kfp_orchestrator -f kubeflow
# Register your production environment
zenml stack register production --orchestrator kubeflow ...
# Make it available to your colleagues
zenml stack share production
Registering your environments as ZenML stacks also enables you to browse and explore them in a convenient user interface. Try it out at https://www.zenml.io/live-demo! | docs | https://docs.zenml.io/v/docs | 380 |
_operator
@step(step_operator=step_operator.name)
def step_on_spark(...) -> ...:
...
Additional configuration
For additional configuration of the Spark step operator, you can pass SparkStepOperatorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
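As a hedged sketch of passing such settings (the import path, attribute names, and settings key are assumptions based on the step_operator.<flavor> pattern shown elsewhere in these docs; check the SDK docs for the exact names):
from zenml import step
# Assumed import path for the settings class - see the SDK docs
from zenml.integrations.spark.flavors.spark_step_operator_flavor import (
    SparkStepOperatorSettings,
)

spark_settings = SparkStepOperatorSettings(
    deploy_mode="cluster",  # assumed attribute name
    submit_kwargs={"conf": {"spark.executor.memory": "4g"}},  # assumed attribute name
)

@step(
    step_operator=step_operator.name,
    settings={"step_operator.spark-kubernetes": spark_settings},  # assumed key
)
def step_on_spark(...) -> ...:
    ...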
PreviousAzureML
NextDevelop a Custom Step Operator
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/step-operators/spark-kubernetes | 89 |
<ORCHESTRATOR_NAME> ... --set
Running a Pipeline
Once configured, you can run any ZenML pipeline using the SkyPilot VM Orchestrator. Each step will run in a Docker container on a provisioned VM.
Additional Configuration
You can further configure the orchestrator using cloud-specific Settings objects:
from zenml.integrations.skypilot_<PROVIDER>.flavors.skypilot_orchestrator_<PROVIDER>_vm_flavor import Skypilot<PROVIDER>OrchestratorSettings
skypilot_settings = Skypilot<PROVIDER>OrchestratorSettings(
cpus="2",
memory="16",
accelerators="V100:2",
use_spot=True,
region=<REGION>,
...
)
@pipeline(
settings={
"orchestrator.vm_<PROVIDER>": skypilot_settings
}
)
This allows specifying VM size, spot usage, region, and more.
You can also configure resources per step:
high_resource_settings = Skypilot<PROVIDER>OrchestratorSettings(...)
@step(settings={"orchestrator.vm_<PROVIDER>": high_resource_settings})
def resource_intensive_step():
...
For more details and advanced options, see the full SkyPilot VM Orchestrator documentation.
PreviousMLflow
NextConnect services (AWS, GCP, Azure, K8s etc)
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations/skypilot | 290 |
rets:
Setting it to NONE disables any validation.
Setting it to SECRET_EXISTS only validates the existence of secrets. This might be useful if the machine you're running on only has permission to list secrets but not actually read their values.
Setting it to SECRET_AND_KEY_EXISTS (the default) validates both the existence of the secret and the existence of the exact key-value pair.
Fetch secret values in a step
If you are using centralized secrets management, you can access secrets directly from within your steps through the ZenML Client API. This allows you to use your secrets for querying APIs from within your step without hard-coding your access keys:
from zenml import step
from zenml.client import Client
@step
def secret_loader() -> None:
"""Load the example secret from the server."""
# Fetch the secret from ZenML.
secret = Client().get_secret(<SECRET_NAME>)
# `secret.secret_values` will contain a dictionary with all key-value
# pairs within your secret.
authenticate_to_some_api(
username=secret.secret_values["username"],
password=secret.secret_values["password"],
)
...
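Secrets can also be created programmatically through the same Client API instead of the CLI. A minimal sketch (the secret name and values are placeholders):
from zenml.client import Client

Client().create_secret(
    name="some_api_credentials",
    values={"username": "admin", "password": "<PASSWORD>"},
)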
See Also
Interact with secrets: Learn how to create, list, and delete secrets using the ZenML CLI and Python SDK.
PreviousDeploy a stack using mlstacks
NextImplement a custom stack component
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/stack-deployment/reference-secrets-in-stack-configuration | 273 |
urns it with the configuration of the cloud stack.
Based on the stack info and pipeline specification, the client builds and pushes an image to the container registry. The image contains the environment needed to execute the pipeline and the code of the steps.
The client creates a run in the orchestrator. For example, in the case of the Skypilot orchestrator, it creates a virtual machine in the cloud with some commands to pull and run a Docker image from the specified container registry.
The orchestrator pulls the appropriate image from the container registry as it's executing the pipeline (each step has an image).
As each pipeline runs, it stores artifacts physically in the artifact store. Of course, this artifact store needs to be some form of cloud storage.
As each pipeline runs, it reports status back to the ZenML server and optionally queries the server for metadata.
Provisioning and registering a Skypilot orchestrator alongside a container registry
While there are detailed docs on how to set up a Skypilot orchestrator and a container registry on each public cloud, we have put the most relevant details here for convenience:
In order to launch a pipeline on AWS with the SkyPilot orchestrator, the first thing that you need to do is to install the AWS and Skypilot integrations:
zenml integration install aws skypilot_aws -y
Before we start registering any components, there is another step that we have to execute. As we explained in the previous section, components such as orchestrators and container registries often require you to set up the right permissions. In ZenML, this process is simplified with the use of Service Connectors. For this example, we need to use the IAM role authentication method of our AWS service connector:
AWS_PROFILE=<AWS_PROFILE> zenml service-connector register cloud_connector --type aws --auto-configure
Once the service connector is set up, we can register a Skypilot orchestrator:
zenml orchestrator register skypilot_orchestrator -f vm_aws | user-guide | https://docs.zenml.io/user-guide/production-guide/cloud-orchestration | 409 |
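As with the GCP example earlier in these docs, the orchestrator then needs to be connected to the service connector and added to a stack:
# Connect the orchestrator to the service connector registered above
zenml orchestrator connect skypilot_orchestrator --connector cloud_connector
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o skypilot_orchestrator ... --set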
Connecting remote storage
Transitioning to remote artifact storage.
In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage!
Remote storage allows us to store our artifacts in the cloud, which means they're accessible from anywhere and by anyone with the right permissions. This is essential for team collaboration and for managing the larger datasets and models that come with production workloads.
When using a stack with remote storage, nothing changes except the fact that the artifacts get materialized in a central and remote storage location. This diagram explains the flow:
Provisioning and registering a remote artifact store
Out of the box, ZenML ships with many different supported artifact store flavors. For convenience, here are some brief instructions on how to quickly get up and running on the major cloud providers:
You will need to install and set up the AWS CLI on your machine as a prerequisite, as covered in the AWS CLI documentation, before you register the S3 Artifact Store.
The Amazon Web Services S3 Artifact Store flavor is provided by the S3 ZenML integration, you need to install it on your local machine to be able to register an S3 Artifact Store and add it to your stack:
zenml integration install s3 -y
Having trouble with this command? You can use poetry or pip to install the requirements of any ZenML integration directly. In order to obtain the exact requirements of the AWS S3 integration you can use zenml integration requirements s3.
The only configuration parameter mandatory for registering an S3 Artifact Store is the root path URI, which needs to point to an S3 bucket and take the form s3://bucket-name. In order to create a S3 bucket, refer to the AWS documentation. | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/remote-storage | 383 |
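Registering the store then follows the same pattern as the other artifact store flavors in these docs:
# Register the S3 artifact store (replace the bucket name with your own)
zenml artifact-store register s3_store -f s3 --path=s3://bucket-name
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a s3_store ... --set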
ggingFaceModelDeployer.get_active_model_deployer()
# fetch existing services with same pipeline name, step name and model name
existing_services = model_deployer.find_model_server(
pipeline_name=pipeline_name,
pipeline_step_name=pipeline_step_name,
model_name=model_name,
running=running,
)
if not existing_services:
raise RuntimeError(
f"No Hugging Face inference endpoint deployed by step "
f"'{pipeline_step_name}' in pipeline '{pipeline_name}' with name "
f"'{model_name}' is currently running."
)
return existing_services[0]
# Use the service for inference
@step
def predictor(
service: HuggingFaceDeploymentService,
data: str
) -> Annotated[str, "predictions"]:
"""Run a inference request against a prediction service"""
prediction = service.predict(data)
return prediction
@pipeline
def huggingface_deployment_inference_pipeline(
pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step",
):
inference_data = ...
model_deployment_service = prediction_service_loader(
pipeline_name=pipeline_name,
pipeline_step_name=pipeline_step_name,
)
predictions = predictor(model_deployment_service, inference_data)
For more information and a full list of configurable attributes of the Hugging Face Model Deployer, check out the SDK Docs.
PreviousBentoML
NextDevelop a Custom Model Deployer
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/model-deployers/huggingface | 282 |
ource-type docker-registry
Example Command Output
The following 'docker-registry' resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β 37c97fa0-fa47-4d55-9970-e2aa6e1b50cf β aws-secret-key β πΆ aws β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β d400e0c6-a8e7-4b95-ab34-0359229c5d36 β aws-us-east-1 β πΆ aws β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
After having set up or decided on an AWS Service Connector to use to connect to the target ECR registry, you can register the AWS Container Registry as follows:
# Register the AWS container registry and reference the target ECR registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f aws \
--uri=<REGISTRY_URL>
# Connect the AWS container registry to the target ECR registry via an AWS Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the AWS Container Registry to a target ECR registry through an AWS Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml container-registry connect aws-us-east-1 --connector aws-us-east-1 | stack-components | https://docs.zenml.io/stack-components/container-registries/aws | 592 |
r managed solutions like Vertex.
How to deploy it
The Kubernetes orchestrator requires a Kubernetes cluster in order to run. There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure. We can't possibly cover all of them, but you can check out our cloud guide.
If the above Kubernetes cluster is deployed remotely on the cloud, then another pre-requisite to use this orchestrator would be to deploy and connect to a remote ZenML server.
Infrastructure Deployment
A Kubernetes orchestrator can be deployed directly from the ZenML CLI:
zenml orchestrator deploy k8s_orchestrator --flavor=kubernetes --provider=<YOUR_PROVIDER> ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the Kubernetes orchestrator, we need:
The ZenML kubernetes integration installed. If you haven't done so, run:
zenml integration install kubernetes
Docker installed and running.
kubectl installed.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
A Kubernetes cluster deployed
kubectl installed and the name of the Kubernetes configuration context which points to the target cluster (i.e. run kubectl config get-contexts to see a list of available contexts). This is optional (see below).
It is recommended that you set up a Service Connector and use it to connect ZenML Stack Components to the remote Kubernetes cluster, especially if you are using a Kubernetes cluster managed by a cloud provider like AWS, GCP or Azure. This guarantees that your Stack is fully portable to other environments and your pipelines are fully reproducible.
We can then register the orchestrator and use it in our active stack. This can be done in two ways: | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/kubernetes | 392 |
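As a sketch of the two options (the kubernetes_context attribute is the configuration parameter referenced above; check the SDK docs for the exact name):
# Option 1: register the orchestrator and connect it to a Service Connector
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
# Option 2: rely on a local kubectl context instead
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes \
    --kubernetes_context=<KUBERNETES_CONTEXT>
# Either way, add the orchestrator to a stack and activate it
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set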
Amazon Simple Cloud Storage (S3)
Storing artifacts in an AWS S3 bucket.
The S3 Artifact Store is an Artifact Store flavor provided with the S3 ZenML integration that uses the AWS S3 managed object storage service or one of the self-hosted S3 alternatives, such as MinIO or Ceph RGW, to store artifacts in an S3 compatible object storage backend.
When would you want to use it?
Running ZenML pipelines with the local Artifact Store is usually sufficient if you just want to evaluate ZenML or get started quickly without incurring the trouble and the cost of employing cloud storage services in your stack. However, the local Artifact Store becomes insufficient or unsuitable if you have more elaborate needs for your project:
if you want to share your pipeline run results with other team members or stakeholders inside or outside your organization
if you have other components in your stack that are running remotely (e.g. a Kubeflow or Kubernetes Orchestrator running in a public cloud).
if you outgrow what your local machine can offer in terms of storage space and need to use some form of private or public storage service that is shared with others
if you are running pipelines at scale and need an Artifact Store that can handle the demands of production-grade MLOps
In all these cases, you need an Artifact Store that is backed by a form of public cloud or self-hosted shared object storage service.
You should use the S3 Artifact Store when you decide to keep your ZenML artifacts in a shared object storage and if you have access to the AWS S3 managed service or one of the S3 compatible alternatives (e.g. Minio, Ceph RGW). You should consider one of the other Artifact Store flavors if you don't have access to an S3-compatible service.
How do you deploy it?
The S3 Artifact Store flavor is provided by the S3 ZenML integration, you need to install it on your local machine to be able to register an S3 Artifact Store and add it to your stack: | stack-components | https://docs.zenml.io/stack-components/artifact-stores/s3 | 411 |
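zenml integration install s3 -y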
'default' ...
Creating default user 'default' ...
Creating default stack for user 'default' in workspace default...
Active workspace not set. Setting it to the default.
The active stack is not set. Setting the active stack to the default workspace stack.
Using the default store for the global config.
Unable to find ZenML repository in your current working directory (/tmp/folder) or any parent directories. If you want to use an existing repository which is in a different location, set the environment variable 'ZENML_REPOSITORY_PATH'. If you want to create a new repository, run zenml init.
Running without an active repository root.
Using the default local database.
Running with active workspace: 'default' (global)
ββββββββββ―βββββββββββββ―βββββββββ―ββββββββββ―βββββββββββββββββ―βββββββββββββββ
β ACTIVE β STACK NAME β SHARED β OWNER β ARTIFACT_STORE β ORCHESTRATOR β
β βββββββββΌβββββββββββββΌβββββββββΌββββββββββΌβββββββββββββββββΌβββββββββββββββ¨
β π β default β β β default β default β default β
ββββββββββ·βββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββββββ·βββββββββββββββ
The following is an example of the layout of the global config directory immediately after initialization:
/home/stefan/.config/zenml <- Global Config Directory
βββ config.yaml <- Global Configuration Settings
βββ local_stores <- Every Stack component that stores information
| locally will have its own subdirectory here.
βββ a1a0d3d0-d552-4a80-be09-67e5e29be8ee <- e.g. Local Store path for the
| `default` local Artifact Store
βββ default_zen_store
βββ zenml.db <- SQLite database where ZenML data (stacks,
components, etc) are stored by default.
As shown above, the global config directory stores the following information: | reference | https://docs.zenml.io/reference/global-settings | 484 |
dvantages over the implicit authentication method:
you don't need to install and configure the GCP CLI on your host
you don't need to care about enabling your other stack components (orchestrators, step operators and model deployers) to have access to the artifact store through GCP Service Accounts and Workload Identity
you can combine the GCS artifact store with other stack components that are not running in GCP
For this method, you need to create a user-managed GCP service account, grant it privileges to read and write to your GCS bucket (i.e. use the Storage Object Admin role) and then create a service account key.
With the service account key downloaded to a local file, you can register a ZenML secret and reference it in the GCS Artifact Store configuration as follows:
# Store the GCP credentials in a ZenML secret
zenml secret create gcp_secret \
--token=@path/to/service_account_key.json
# Register the GCS artifact store and reference the ZenML secret
zenml artifact-store register gcs_store -f gcp \
--path='gs://your-bucket' \
--authentication_secret=gcp_secret
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a gs_store ... --set
For more, up-to-date information on the GCS Artifact Store implementation and its configuration, you can have a look at the SDK docs .
How do you use it?
Aside from the fact that the artifacts are stored in GCP Cloud Storage, using the GCS Artifact Store is no different from using any other flavor of Artifact Store.
PreviousAmazon Simple Cloud Storage (S3)
NextAzure Blob Storage
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/gcp | 350 |
e steps
from zenml.steps import StepContext, step
from zenml.environment import Environment
@step
def my_step(context: StepContext) -> Any: # Old: `StepContext` class defined as arg
env = Environment().step_environment
output_uri = context.get_output_artifact_uri()
step_name = env.step_name # Old: Run info accessible via `StepEnvironment`
...
from zenml import get_step_context, step
@step
def my_step() -> Any: # New: StepContext is no longer an argument of the step
context = get_step_context()
output_uri = context.get_output_artifact_uri()
step_name = context.step_name # New: StepContext now has ALL run/step info
...
Check out this page for more information on how to fetch run information inside your steps using get_step_context().
PreviousMigration guide 0.23.0 β 0.30.0
NextCommunity & content
Last updated 15 days ago | reference | https://docs.zenml.io/reference/migration-guide/migration-zero-forty | 204 |
t of changes.
Changes in our integrations
Much like ZenML, pydantic is an important dependency in many other Python packages. That's why conducting this upgrade helped us unlock a new version for several ZenML integration dependencies. Additionally, in some instances, we had to adapt the functionality of the integration to keep it compatible with pydantic. So, if you are using any of these integrations, please go through the changes.
Airflow
right here.
AWS
Evidently
The old version of our evidently integration was not compatible with Pydantic v2. Evidently started supporting it from version 0.4.16, and, as their latest version is 0.4.22, the integration's dependency is now constrained between these two versions.
Feast
Our previous implementation of the feast integration was not compatible with Pydantic v2 due to the extra redis dependency we were using. This extra dependency is now removed and the feast integration is working as intended.
GCP
The previous version of the Kubeflow dependency (kfp==1.8.22) in our GCP integration required Pydantic V1 to be installed. While we were upgrading our Pydantic dependency, we saw this as an opportunity and wanted to use this chance to upgrade the kfp dependency to v2 (which has no dependencies on the Pydantic library). This is why you may see some functional changes in the vertex step operator and orchestrator. If you would like to go through the changes in the kfp library, you can find the migration guide here.
Great Expectations
Great Expectations started supporting Pydantic v2 from version 0.17.15, and they are closing in on their 1.0 release. Since this release might include a lot of big changes, we adjusted the dependency in our integration to great-expectations>=0.17.15,<1.0. We will try to keep it updated once they release the 1.0 version.
Kubeflow
the migration guide here. (We are also considering adding an alternative version of this integration so our users can keep using
MLflow
Label Studio | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-sixty | 446 |
.
@step
def svc_trainer(
X_train: pd.DataFrame,
y_train: pd.Series,
gamma: float = 0.001,
) -> Tuple[
Annotated[ClassifierMixin, "trained_model"],
Annotated[float, "training_acc"],
]:
"""Train a sklearn SVC classifier."""
model = SVC(gamma=gamma)
model.fit(X_train.to_numpy(), y_train.to_numpy())
train_acc = model.score(X_train.to_numpy(), y_train.to_numpy())
print(f"Train accuracy: {train_acc}")
return model, train_acc
If you want to run the step function outside the context of a ZenML pipeline, all you need to do is call it directly, just like a normal Python function. For example:
svc_trainer(X_train=..., y_train=...)
Next, we will combine our two steps into a pipeline and run it. As you can see, the parameter gamma is configurable as a pipeline input as well.
@pipeline
def training_pipeline(gamma: float = 0.002):
X_train, X_test, y_train, y_test = training_data_loader()
svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
if __name__ == "__main__":
training_pipeline(gamma=0.0015)
Best Practice: Always nest the actual execution of the pipeline inside an if __name__ == "__main__" condition. This ensures that loading the pipeline from elsewhere does not also run it.
if __name__ == "__main__":
training_pipeline()
Running python run.py should look somewhat like this in the terminal:
Registered new pipeline with name `training_pipeline`.
Pipeline run `training_pipeline-2023_04_29-09_19_54_273710` has finished in 0.236s.
In the dashboard, you should now be able to see this new run, along with its runtime configuration and a visualization of the training data.
Configure with a YAML file
Instead of configuring your pipeline runs in code, you can also do so from a YAML file. This is best when we do not want to make unnecessary changes to the code; in production this is usually the case.
To do this, simply reference the file like this:
# Configure the pipeline
training_pipeline = training_pipeline.with_options(
config_path='/local/path/to/config.yaml'
)
# Run the pipeline
training_pipeline() | user-guide | https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline | 479 |
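For reference, a minimal config.yaml for this pipeline could simply pin the parameter used above; the keys mirror the configuration reference earlier in these docs:
enable_cache: True
parameters:
  gamma: 0.01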
FAQ
Find answers to the most frequently asked questions about ZenML.
Why did you build ZenML?
We built it because we scratched our own itch while deploying multiple machine-learning models in production over the past three years. Our team struggled to find a simple yet production-ready solution whilst developing large-scale ML pipelines. We built a solution for it that we are now proud to share with all of you! Read more about this backstory on our blog here.
Is ZenML just another orchestrator like Airflow, Kubeflow, Flyte, etc?
Not really! An orchestrator in MLOps is the system component that is responsible for executing and managing the execution of an ML pipeline. ZenML is a framework that allows you to run your pipelines on whatever orchestrator you like, and we coordinate with all the other parts of an ML system in production. There are standard orchestrators that ZenML supports out-of-the-box, but you are encouraged to write your own orchestrator in order to gain more control over exactly how your pipelines are executed!
Can I use the tool X? How does the tool Y integrate with ZenML?
Take a look at our documentation (in particular the component guide), which contains instructions and sample code to support each integration that ZenML supports out-of-the-box. You can also check out our integration test code to see active examples of many of our integrations in action.
The ZenML team and community are constantly working to include more tools and integrations to the above list (check out the roadmap for more details). You can upvote features you'd like and add your ideas to the roadmap.
Most importantly, ZenML is extensible, and we encourage you to use it with whatever other tools you require as part of your ML process and system(s). Check out our documentation on how to get started with extending ZenML to learn more!
How can I make ZenML work with my custom tool? How can I extend or build on ZenML? | reference | https://docs.zenml.io/v/docs/reference/faq | 400 |
y service connectors configured in your workspace:ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β eeeabc13-9203-463b-aa52-216e629e903c β gcp-demo-multi β π΅ gcp β π¦ gcs-bucket β gs://zenml-bucket-sl β
β β β β β gs://zenml-core.appspot.com β
β β β β β gs://zenml-core_cloudbuild β
β β β β β gs://zenml-datasets β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββββ
```
```sh
zenml service-connector list-resources --resource-type kubernetes-cluster
```
Example Command Output
```text
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββ¨
β eeeabc13-9203-463b-aa52-216e629e903c β gcp-demo-multi β π΅ gcp β π kubernetes-cluster β zenml-test-cluster β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββββ
```
```sh | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 641 |
self,
name: str,
description: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
) -> RegisteredModel:
"""Registers a model in the model registry."""
@abstractmethod
def delete_model(
self,
name: str,
) -> None:
"""Deletes a registered model from the model registry."""
@abstractmethod
def update_model(
self,
name: str,
description: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
) -> RegisteredModel:
"""Updates a registered model in the model registry."""
@abstractmethod
def get_model(self, name: str) -> RegisteredModel:
"""Gets a registered model from the model registry."""
@abstractmethod
def list_models(
self,
name: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
) -> List[RegisteredModel]:
"""Lists all registered models in the model registry."""
# ---------
# Model Version Methods
# ---------
@abstractmethod
def register_model_version(
self,
name: str,
description: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
model_source_uri: Optional[str] = None,
version: Optional[str] = None,
metadata: Optional[Dict[str, str]] = None,
zenml_version: Optional[str] = None,
zenml_run_name: Optional[str] = None,
zenml_pipeline_name: Optional[str] = None,
zenml_step_name: Optional[str] = None,
**kwargs: Any,
) -> RegistryModelVersion:
"""Registers a model version in the model registry."""
@abstractmethod
def delete_model_version(
self,
name: str,
version: str,
) -> None:
"""Deletes a model version from the model registry."""
@abstractmethod
def update_model_version(
self,
name: str,
version: str,
description: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
stage: Optional[ModelVersionStage] = None,
) -> RegistryModelVersion:
"""Updates a model version in the model registry."""
@abstractmethod
def list_model_versions(
self,
name: Optional[str] = None,
model_source_uri: Optional[str] = None, | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries/custom | 471 |
h more complex frameworks.
Preprocessing the data
Once we have loaded the documents, we can preprocess them into a form that's useful for a RAG pipeline. There are a lot of options here, depending on how complex you want to get, but to start with, the 'chunk size' is one of the key parameters to consider.
Our text is currently in the form of various long strings, with each one representing a single web page. These are going to be too long to pass into our LLM, especially if we care about the speed at which we get our answers back. So the strategy here is to split our text into smaller chunks that can be processed more efficiently. There's a sweet spot between having tiny chunks, which will make it harder for our search / retrieval step to find relevant information to pass into the LLM, and having large chunks, which will make it harder for the LLM to process the text.
import logging
from typing import Annotated, List
from utils.llm_utils import split_documents
from zenml import ArtifactConfig, log_artifact_metadata, step
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@step(enable_cache=False)
def preprocess_documents(
documents: List[str],
) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]:
"""Preprocesses a list of documents by splitting them into chunks."""
try:
log_artifact_metadata(
artifact_name="split_chunks",
metadata={
"chunk_size": 500,
"chunk_overlap": 50
},
)
return split_documents(
documents, chunk_size=500, chunk_overlap=50
)
except Exception as e:
logger.error(f"Error in preprocess_documents: {e}")
raise
It's really important to know your data to have a good intuition about what kind of chunk size might make sense. If your data is structured in such a way where you need large paragraphs to capture a particular concept, then you might want a larger chunk size. If your data is more conversational or question-and-answer based, then you might want a smaller chunk size. | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/rag-with-zenml/data-ingestion | 423 |
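The split_documents helper used above lives in the example's utils.llm_utils module and isn't shown here. A minimal character-based version with overlap might look like the following sketch (the real implementation may differ):
from typing import List


def split_documents(
    documents: List[str], chunk_size: int = 500, chunk_overlap: int = 50
) -> List[str]:
    """Split each document into fixed-size character chunks with overlap."""
    chunks: List[str] = []
    step = chunk_size - chunk_overlap
    for doc in documents:
        for start in range(0, len(doc), step):
            chunk = doc[start : start + chunk_size]
            if chunk:
                chunks.append(chunk)
    return chunks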
failure_rate = (failures / total_tests) * 100
logging.info(f"Total tests: {total_tests}. Failures: {failures}. Failure rate: {failure_rate}%")
return round(failure_rate, 2)
Our end-to-end evaluation of the generation component is then a combination of these tests:
@step
def e2e_evaluation() -> (
Annotated[float, "failure_rate_bad_answers"],
Annotated[float, "failure_rate_bad_immediate_responses"],
Annotated[float, "failure_rate_good_responses"],
):
logging.info("Testing bad answers...")
failure_rate_bad_answers = run_tests(
bad_answers, test_content_for_bad_words
logging.info(f"Bad answers failure rate: {failure_rate_bad_answers}%")
logging.info("Testing bad immediate responses...")
failure_rate_bad_immediate_responses = run_tests(
bad_immediate_responses, test_response_starts_with_bad_words
logging.info(
f"Bad immediate responses failure rate: {failure_rate_bad_immediate_responses}%"
logging.info("Testing good responses...")
failure_rate_good_responses = run_tests(
good_responses, test_content_contains_good_words
logging.info(
f"Good responses failure rate: {failure_rate_good_responses}%"
return (
failure_rate_bad_answers,
failure_rate_bad_immediate_responses,
failure_rate_good_responses,
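The test helpers (test_content_for_bad_words and friends) and the shape of the test items aren't shown in this excerpt. Assuming each item pairs a question with a list of disallowed words, and that run_tests treats a False return as a failure, one plausible sketch is:
from dataclasses import dataclass
from typing import List


@dataclass
class TestItem:
    question: str
    bad_words: List[str]


def test_content_for_bad_words(item: TestItem) -> bool:
    """Return False (a failure) if any disallowed word appears in the answer."""
    # `query_llm_with_retrieval` stands in for whatever function runs the
    # full RAG inference for a question - replace with your own entry point.
    response = query_llm_with_retrieval(item.question)
    return not any(word in response for word in item.bad_words)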
Running the tests using different LLMs will give different results. Here our Ollama Mixtral did worse than GPT 3.5, for example, but there were still some failures with GPT 3.5. This is a good way to get a sense of how well your generation component is doing.
As you become more familiar with the kinds of outputs your LLM generates, you can add the hard ones to this test suite. This helps prevent regressions and is directly related to the quality of the output you're getting. This way you can optimize for your specific use case.
Automated evaluation using another LLM | user-guide | https://docs.zenml.io/user-guide/llmops-guide/evaluation/generation | 394 |
Feature Stores
Managing data in feature stores.
Feature stores allow data teams to serve data via an offline store and an online low-latency store where data is kept in sync between the two. It also offers a centralized registry where features (and feature schemas) are stored for use within a team or wider organization.
As a data scientist working on training your model, your requirements for how you access your batch / 'offline' data will almost certainly be different from how you access that data as part of a real-time or online inference setting. Feast solves the problem of developing train-serve skew where those two sources of data diverge from each other.
Feature stores are a relatively recent addition to commonly-used machine learning stacks.
When to use it
The feature store is an optional stack component in the ZenML Stack. The feature store as a technology should be used to store the features and inject them into the process on the server side. This includes
Productionalize new features
Reuse existing features across multiple pipelines and models
Achieve consistency between training and serving data (Training Serving Skew)
Provide a central registry of features and feature schemas
List of available feature stores
For production use cases, some more flavors can be found in specific integrations modules. In terms of features stores, ZenML features an integration of feast.
Feature Store | Flavor | Integration | Notes
FeastFeatureStore | feast | feast | Connect ZenML with an already existing Feast deployment
Custom Implementation | custom | | Extend the feature store abstraction and provide your own implementation
If you would like to see the available flavors for feature stores, you can use the command:
zenml feature-store flavor list
How to use it
The available implementation of the feature store is built on top of the feast integration, which means that using a feature store is no different from what's described on the feast page: How to use it?. | stack-components | https://docs.zenml.io/stack-components/feature-stores | 370 |
Azure Blob Storage
Storing artifacts using Azure Blob Storage
The Azure Artifact Store is an Artifact Store flavor provided with the Azure ZenML integration that uses the Azure Blob Storage managed object storage service to store ZenML artifacts in an Azure Blob Storage container.
When would you want to use it?
Running ZenML pipelines with the local Artifact Store is usually sufficient if you just want to evaluate ZenML or get started quickly without incurring the trouble and the cost of employing cloud storage services in your stack. However, the local Artifact Store becomes insufficient or unsuitable if you have more elaborate needs for your project:
if you want to share your pipeline run results with other team members or stakeholders inside or outside your organization
if you have other components in your stack that are running remotely (e.g. a Kubeflow or Kubernetes Orchestrator running in a public cloud).
if you outgrow what your local machine can offer in terms of storage space and need to use some form of private or public storage service that is shared with others
if you are running pipelines at scale and need an Artifact Store that can handle the demands of production-grade MLOps
In all these cases, you need an Artifact Store that is backed by a form of public cloud or self-hosted shared object storage service.
You should use the Azure Artifact Store when you decide to keep your ZenML artifacts in a shared object storage and if you have access to the Azure Blob Storage managed service. You should consider one of the other Artifact Store flavors if you don't have access to the Azure Blob Storage service.
How do you deploy it?
The Azure Artifact Store flavor is provided by the Azure ZenML integration, you need to install it on your local machine to be able to register an Azure Artifact Store and add it to your stack:
zenml integration install azure -y | stack-components | https://docs.zenml.io/stack-components/artifact-stores/azure | 366 |
Delete an artifact
Learn how to delete artifacts.
There is currently no way to delete an artifact directly, because it may lead to a broken state of the ZenML database (dangling references to pipeline runs that produce artifacts).
However, it is possible to delete artifacts that are no longer referenced by any pipeline runs:
zenml artifact prune
By default, this method deletes artifacts physically from the underlying artifact store AND also the entry in the database. You can control this behavior by using the --only-artifact and --only-metadata flags.
You might find that some artifacts throw errors when you try to prune them, likely because they were stored locally and no longer exist. If you wish to continue pruning and to ignore these errors, please add the --ignore-errors flag. Warning messages will still be output to the terminal during this process.
PreviousReturn multiple outputs from a step
NextOrganize data with tags
Last updated 7 days ago | how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/delete-an-artifact | 187 |
running for the Inference Endpoint. Defaults to 0.
max_replica: (Optional) The maximum number of replicas (instances) to scale to for the Inference Endpoint. Defaults to 1.
revision: (Optional) The specific model revision to deploy on the Inference Endpoint for the Hugging Face repository.
task: Select a supported Machine Learning Task. (e.g. "text-classification", "text-generation")
custom_image: (Optional) A custom Docker image to use for the Inference Endpoint.
namespace: The namespace where the Inference Endpoint will be created. The same namespace that was used while registering the Hugging Face model deployer can be passed here.
endpoint_type: (Optional) The type of the Inference Endpoint, which can be "protected", "public" (default) or "private".
For more information and a full list of configurable attributes of the Hugging Face Model Deployer, check out the SDK Docs and Hugging Face endpoint code.
Run inference on a provisioned inference endpoint
The following code example shows how to run inference against a provisioned inference endpoint:
from typing import Annotated
from zenml import step, pipeline
from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer
from zenml.integrations.huggingface.services import HuggingFaceDeploymentService
# Load a prediction service deployed in another pipeline
@step(enable_cache=False)
def prediction_service_loader(
pipeline_name: str,
pipeline_step_name: str,
running: bool = True,
model_name: str = "default",
) -> HuggingFaceDeploymentService:
"""Get the prediction service started by the deployment pipeline.
Args:
pipeline_name: name of the pipeline that deployed the Hugging Face
inference endpoint
pipeline_step_name: the name of the step that deployed the Hugging Face
inference endpoint
running: when this flag is set, the step only returns a running service
model_name: the name of the model that is deployed
"""
# get the Hugging Face model deployer stack component
model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() | stack-components | https://docs.zenml.io/stack-components/model-deployers/huggingface | 427 |