Dataset columns: page_content (the documentation chunk), parent_section, url, token_count. Each chunk below is followed by its parent_section, url, and token_count values.
…authentication credentials directly in the orchestrator:

```sh
zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=sagemaker \
    --execution_role=<YOUR_IAM_ROLE_ARN> \
    --aws_access_key_id=... \
    --aws_secret_access_key=... \
    --aws_region=...

zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```

See the `SagemakerOrchestratorConfig` SDK Docs for more information on available configuration options.

If you neither connect your orchestrator to a service connector nor configure credentials explicitly, ZenML will try to implicitly authenticate to AWS via the `default` profile in your local AWS configuration file:

```sh
zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=sagemaker \
    --execution_role=<YOUR_IAM_ROLE_ARN>

zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set

python run.py  # Authenticates with `default` profile in `~/.aws/config`
```

ZenML will build a Docker image called `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` which includes your code, and use it to run your pipeline steps in Sagemaker. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

You can now run any ZenML pipeline using the Sagemaker orchestrator:

```sh
python run.py
```

If all went well, you should now see the following output:

```text
Steps can take 5-15 minutes to start running when using the Sagemaker Orchestrator.
Your orchestrator 'sagemaker' is running remotely.
```

Note that the pipeline run will only show up on the ZenML dashboard once the first step has started executing on the remote infrastructure. If your run takes more than 15 minutes to show up, a setup error may have occurred in SageMaker before the pipeline could be started. Check out the Debugging SageMaker Pipelines section for more information on how to debug this.

**Sagemaker UI**

Sagemaker comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/sagemaker
448
…from your ZenML steps.

**List of available parameters**

When using the `mlflow_register_model_step`, you can set a variety of parameters for fine-grained control over which information is logged with your model:

- `name`: The name of the model. This is a required parameter.
- `version`: The version of the model.
- `trained_model_name`: Name of the model artifact in MLflow.
- `model_source_uri`: The path to the model. If not provided, the model will be fetched from the MLflow tracking server via the `trained_model_name`.
- `description`: A description of the model version.
- `metadata`: A list of metadata to associate with the model version.

The `model_source_uri` parameter is the path to the model within the MLflow tracking server. If you are using a local MLflow tracking server, the path will look like `file:///.../mlruns/667102566783201219/3973eabc151c41e6ab98baeb20c5323b/artifacts/model`. If you are using a remote MLflow tracking server, it will look like `s3://.../mlruns/667102566783201219/3973eabc151c41e6ab98baeb20c5323b/artifacts/model`.

You can find the path of the model in the MLflow UI: go to the Artifacts tab of the run that produced the model and click on the model. The path will be displayed in the URL.

**Register models via the CLI**

Sometimes adding the `mlflow_register_model_step` to your pipeline might not be the best option for you, as it will register a model in the MLflow model registry every time you run the pipeline. If you want to register your models manually, you can use the `zenml model-registry models register-version` CLI command instead:

```sh
zenml model-registry models register-version Tensorflow-model \
    --description="A new version of the tensorflow model with accuracy 98.88%" \
    -v 1 \
    --model-uri="file:///.../mlruns/667102566783201219/3973eabc151c41e6ab98baeb20c5323b/artifacts/model" \
    -m key1 value1 -m key2 value2 \
    --zenml-pipeline-name="mlflow_training_pipeline" \
    --zenml-step-name="trainer"
```
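For orientation, here is a minimal sketch of wiring the registration step into a pipeline. The import path follows the usual ZenML integration layout but may differ between versions, so treat it as an assumption; the trainer step is a placeholder:

```python
from zenml import pipeline, step
from zenml.integrations.mlflow.steps.mlflow_registry import (  # path assumed
    mlflow_register_model_step,
)


@step
def trainer():
    """Placeholder: would train and return a model logged to MLflow."""
    ...


@pipeline
def mlflow_training_pipeline():
    model = trainer()
    mlflow_register_model_step(
        model,
        name="tensorflow-mnist-model",  # required parameter
        description="A new version of the tensorflow model",
    )
```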
stack-components
https://docs.zenml.io/stack-components/model-registries/mlflow
480
**SkyPilot**

Use SkyPilot with ZenML.

The ZenML SkyPilot VM Orchestrator allows you to provision and manage VMs on any supported cloud provider (AWS, GCP, Azure, Lambda Labs) for running your ML pipelines. It simplifies the process and offers cost savings and high GPU availability.

**Prerequisites**

To use the SkyPilot VM Orchestrator, you'll need:

- The ZenML SkyPilot integration for your cloud provider installed (`zenml integration install <PROVIDER> skypilot_<PROVIDER>`)
- Docker installed and running
- A remote artifact store and container registry in your ZenML stack
- A remote ZenML deployment
- Appropriate permissions to provision VMs on your cloud provider
- A service connector configured to authenticate with your cloud provider (not needed for Lambda Labs)

**Configuring the Orchestrator**

Configuration steps vary by cloud provider.

AWS, GCP, Azure:

1. Install the SkyPilot integration and connectors extra for your provider
2. Register a service connector with credentials that have SkyPilot's required permissions
3. Register the orchestrator and connect it to the service connector
4. Register and activate a stack with the new orchestrator

```sh
zenml service-connector register <PROVIDER>-skypilot-vm -t <PROVIDER> --auto-configure
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_<PROVIDER>
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <PROVIDER>-skypilot-vm
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```

Lambda Labs:

1. Install the SkyPilot Lambda integration
2. Register a secret with your Lambda Labs API key
3. Register the orchestrator with the API key secret
4. Register and activate a stack with the new orchestrator

```sh
zenml secret create lambda_api_key --scope user --api_key=<KEY>
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{lambda_api_key.api_key}}
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```

**Running a Pipeline**
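Once the stack with the SkyPilot orchestrator is active, running a pipeline is an ordinary Python invocation. A minimal sketch using only core ZenML APIs (the step and pipeline names are illustrative):

```python
from zenml import pipeline, step


@step
def train() -> str:
    # Placeholder for real training logic.
    return "model"


@pipeline
def my_training_pipeline():
    train()


if __name__ == "__main__":
    # With the SkyPilot VM orchestrator in the active stack, this call
    # provisions a VM on the configured cloud provider and runs the
    # pipeline there.
    my_training_pipeline()
```

Provider-specific resource options (GPU type, spot instances, region) can be set through the orchestrator's settings class; check the SkyPilot flavor's SDK docs for the exact attribute names.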
how-to
https://docs.zenml.io/how-to/popular-integrations/skypilot
444
…`--zenml-step-name="trainer"`

**Deploy a registered model**

After you have registered a model in the MLflow model registry, you can also easily deploy it as a prediction service. Check out the MLflow model deployer documentation for more information on how to do that.

**Interact with registered models**

You can also use the ZenML CLI to interact with registered models and their versions. The `zenml model-registry models list` command will list all registered models in the model registry:

```sh
$ zenml model-registry models list

┏━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━┯━━━━━━━━━━┓
┃ NAME                   │ DESCRIPTION │ METADATA ┃
┠────────────────────────┼─────────────┼──────────┨
┃ tensorflow-mnist-model │             │          ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━┷━━━━━━━━━━┛
```

To list all versions of a specific model, you can use the `zenml model-registry models list-versions REGISTERED_MODEL_NAME` command:

```sh
$ zenml model-registry models list-versions tensorflow-mnist-model

┏━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━┓
┃ NAME                   │ MODEL_VERSION │ VERSION_DESCRIPTION                     │ METADATA   ┃
┠────────────────────────┼───────────────┼─────────────────────────────────────────┼────────────┨
┃ tensorflow-mnist-model │ 3             │ Run #3 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_23_672599', 'zenml_pipeline_name': 'mlflow_training_pipeline', …
```
stack-components
https://docs.zenml.io/stack-components/model-registries/mlflow
543
…story of a remote stack.

**Model Deployer Flavors**

ZenML comes with a local MLflow model deployer, which is a simple model deployer that deploys models to a local MLflow server. Additional model deployers that can be used to deploy models in production environments are provided by integrations:

| Model Deployer | Flavor | Integration | Notes |
|---|---|---|---|
| MLflow | `mlflow` | `mlflow` | Deploys ML models locally |
| BentoML | `bentoml` | `bentoml` | Build and deploy ML models locally or for production grade (Cloud, K8s) |
| Seldon Core | `seldon` | `seldon` | Built on top of Kubernetes to deploy models for production grade environments |
| Hugging Face | `huggingface` | `huggingface` | Deploys ML models on Hugging Face Inference Endpoints |
| Custom Implementation | `custom` | | Extend the model deployer abstraction and provide your own implementation |

Every model deployer may have different attributes that must be configured in order to interact with the model serving tool, framework, or platform (e.g. hostnames, URLs, references to credentials, and other client-related configuration parameters). The following example shows the configuration of the MLflow and Seldon Core model deployers:

```sh
# Configure MLflow model deployer
zenml model-deployer register mlflow --flavor=mlflow

# Configure Seldon Core model deployer
zenml model-deployer register seldon --flavor=seldon \
    --kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \
    --base_url=http://abb84c444c7804aa98fc8c097896479d-377673393.us-east-1.elb.amazonaws.com
...
```

**The role that a model deployer plays in a ZenML Stack**
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers
337
**Implement a custom stack component**

How to write a custom stack component flavor.

When building a sophisticated MLOps platform, you will often need to come up with custom-tailored solutions for your infrastructure or tooling. ZenML is built around the values of composability and reusability, which is why the stack component flavors in ZenML are designed to be modular and straightforward to extend.

This guide will help you understand what a flavor is, and how you can develop and use your own custom flavors in ZenML.

**Understanding component flavors**

In ZenML, a component type is a broad category that defines the functionality of a stack component. Each type can have multiple flavors, which are specific implementations of the component type. For instance, the type `artifact_store` can have flavors like `local`, `s3`, etc. Each flavor defines a unique implementation of the functionality that an artifact store brings to a stack.

**Base Abstractions**

Before we get into the topic of creating custom stack component flavors, let us briefly discuss the three core abstractions related to stack components: the `StackComponent`, the `StackComponentConfig`, and the `Flavor`.

**Base Abstraction 1: StackComponent**

The `StackComponent` is the abstraction that defines the core functionality. As an example, check out the `BaseArtifactStore` definition below: the `BaseArtifactStore` inherits from `StackComponent` and establishes the public interface of all artifact stores. Any artifact store flavor needs to follow the standards set by this base class.

```python
from abc import abstractmethod

from zenml.stack import StackComponent


class BaseArtifactStore(StackComponent):
    """Base class for all ZenML artifact stores."""

    # --- public interface ---
    @abstractmethod
    def open(self, path, mode="r"):
        """Open a file at the given path."""

    @abstractmethod
    def exists(self, path):
        """Checks if a path exists."""

    ...
```
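To see how the base class is meant to be extended, here is a minimal sketch of a concrete artifact store built on `BaseArtifactStore`. The class name and backend calls are hypothetical; a real flavor would also define a config and a `Flavor` class, which the guide covers next:

```python
class MyCustomArtifactStore(BaseArtifactStore):
    """Hypothetical artifact store backed by some custom storage system."""

    def open(self, path, mode="r"):
        # Delegate to the backend's file API here (hypothetical).
        raise NotImplementedError

    def exists(self, path):
        # Ask the backend whether the path exists (hypothetical).
        raise NotImplementedError
```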
how-to
https://docs.zenml.io/how-to/stack-deployment/implement-a-custom-stack-component
362
```text
…━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE  │ RESOURCE NAMES ┃
┠────────────────┼────────────────┨
┃ 🔵 gcp-generic │ zenml-core     ┃
┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
```

```sh
zenml service-connector register gcp-cloud-builder-zenml-core --type gcp --resource-type gcp-generic --auto-configure
```

Example Command Output

```text
Successfully registered service connector `gcp-cloud-builder-zenml-core` with access to the following resources:
┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE  │ RESOURCE NAMES ┃
┠────────────────┼────────────────┨
┃ 🔵 gcp-generic │ zenml-core     ┃
┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
```

**NOTE**: from this point forward, we don't need the local GCP CLI credentials or the local GCP CLI at all. The steps that follow can be run on any machine, regardless of whether it has been configured and authorized to access the GCP project.

In the end, the service connector list should look like this:

```sh
zenml service-connector list
```

Example Command Output

```text
┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓
┃ ACTIVE │ NAME                │ ID                                   │ TYPE   │ RESOURCE TYPES │ RESOURCE NAME        │ SHARED │ OWNER   │ EXPIRES IN │ LABELS ┃
┠────────┼─────────────────────┼──────────────────────────────────────┼────────┼────────────────┼──────────────────────┼────────┼─────────┼────────────┼────────┨
┃        │ gcs-zenml-bucket-sl │ 405034fe-5e6e-4d29-ba62-8ae025381d98 │ 🔵 gcp │ 📦 gcs-bucket  │ gs://zenml-bucket-sl │ ➖     │ default │            │        ┃
┠────────┼─────────────────────┼──────────────────────────────────────┼────────┼────────────────┼──────────────────────┼────────┼─────────┼────────────┼────────┨
…
```
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
613
```python
import numpy as np
from zenml import ExternalArtifact, pipeline, step


@step
def print_data(data: np.ndarray):
    print(data)


@pipeline
def printing_pipeline():
    # One can also pass data directly into the ExternalArtifact
    # to create a new artifact on the fly
    data = ExternalArtifact(value=np.array([0]))
    print_data(data=data)


if __name__ == "__main__":
    printing_pipeline()
```

Optionally, you can configure the `ExternalArtifact` to use a custom materializer for your data, or disable artifact metadata and visualizations. Check out the SDK docs for all available options.

Using an `ExternalArtifact` for your step automatically disables caching for the step.

**Consuming artifacts produced by other pipelines**

It is also common to consume an artifact downstream after producing it in an upstream pipeline or step. As we learned in the previous section, the `Client` can be used to fetch artifacts directly inside the pipeline code:

```python
from uuid import UUID

import pandas as pd
from zenml import step, pipeline
from zenml.client import Client


@step
def trainer(dataset: pd.DataFrame):
    ...


@pipeline
def training_pipeline():
    client = Client()

    # Fetch by ID
    dataset_artifact = client.get_artifact_version(
        name_id_or_prefix=UUID("3a92ae32-a764-4420-98ba-07da8f742b76")
    )

    # Fetch by name alone - uses the latest version of this artifact
    dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")

    # Fetch by name and version
    dataset_artifact = client.get_artifact_version(
        name_id_or_prefix="iris_dataset", version="raw_2023"
    )

    # Pass into any step
    trainer(dataset=dataset_artifact)


if __name__ == "__main__":
    training_pipeline()
```

Calling `Client` methods like `get_artifact_version` directly inside the pipeline code makes use of ZenML's late materialization behind the scenes. If you would like to bypass materialization entirely and just download the data or files associated with a particular artifact version, you can use the `.download_files` method:

```python
from zenml.client import Client

client = Client()
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
```
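The snippet is cut off at this point; a plausible continuation, assuming `download_files` accepts a target archive path (the filename below is illustrative):

```python
# Download the artifact's files without materializing the Python object.
# The target path is a hypothetical example; ZenML writes a zip archive.
artifact.download_files("path/to/iris_dataset.zip")
```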
user-guide
https://docs.zenml.io/user-guide/starter-guide/manage-artifacts
433
**System Architectures**

Different variations of the ZenML architecture, depending on your needs.

If you're interested in assessing ZenML Pro, you can create a free account, which defaults to a Scenario 1 deployment. To upgrade to different scenarios, please reach out to us.

ZenML Pro offers many additional features to increase your team's productivity. No matter your specific needs, the hosting options for ZenML Pro range from easy SaaS integration to completely airgapped deployments on your own infrastructure.

A ZenML Pro deployment consists of the following moving pieces, for both the SaaS product and the self-hosted version:

- **ZenML Pro Control Plane:** This is a centralized MLOps control plane that includes a managed ZenML dashboard and a special ZenML server optimized for production MLOps workloads.
- **Single Sign-On (SSO):** The ZenML Pro API is integrated with Auth0 as an SSO provider to manage user authentication and authorization. Users can log in to the ZenML Pro app using their social media accounts or their corporate credentials.
- **Secrets Store:** All secrets and credentials required to access customer infrastructure services are stored in a secure secrets store. The ZenML Pro API has access to these secrets and uses them to access customer infrastructure services on behalf of ZenML Pro. The secrets store can be hosted either by ZenML Pro or by the customer.
- **ML Metadata Store:** This is where all ZenML metadata is stored, including ML metadata such as tracking and versioning information about pipelines and models.

These four pieces interact with other MLOps stack components, secrets, and data in the varying scenarios described below.

**Scenario 1: Full SaaS**

In this scenario, all services are hosted on infrastructure managed by the ZenML team. Customer secrets and credentials required to access customer infrastructure are stored and managed by the ZenML Pro Control Plane.
getting-started
https://docs.zenml.io/v/docs/getting-started/zenml-pro/system-architectures
373
**How ZenML stores data**

Understand how ZenML stores your data under the hood.

ZenML seamlessly integrates data versioning and lineage into its core functionality. When a pipeline is executed, each run generates automatically tracked and managed artifacts. You can easily view the entire lineage of how artifacts are created and interact with them. The dashboard is also a way to interact with the artifacts produced by different pipeline runs. ZenML's artifact management, caching, lineage tracking, and visualization capabilities can help you gain valuable insights, streamline the experimentation process, and ensure the reproducibility and reliability of machine learning workflows.

**Artifact Creation and Caching**

Each time a ZenML pipeline runs, the system first checks if there have been any changes in the inputs, outputs, parameters, or configuration of the pipeline steps. Each step in a run gets a new directory in the artifact store.

If a step is new or has been modified, ZenML creates a new directory structure in the Artifact Store with a unique ID and stores the data using the appropriate materializers in this directory. If the step remains unchanged, ZenML intelligently decides whether to cache the step or not. By caching steps that have not been modified, ZenML can save valuable time and computational resources, allowing you to focus on experimenting with different configurations and improving your machine learning models without the need to rerun unchanged parts of your pipeline.
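To illustrate the caching behavior described above, a minimal sketch using core ZenML decorators; `enable_cache` is the standard per-step switch, and the step names are illustrative:

```python
from zenml import pipeline, step


@step
def load_data() -> list:
    # Unchanged inputs and code: ZenML can serve this result from the cache.
    return [1, 2, 3]


@step(enable_cache=False)
def train(data: list) -> None:
    # Forced to re-run on every pipeline execution.
    print(f"training on {data}")


@pipeline
def my_pipeline():
    train(load_data())


if __name__ == "__main__":
    my_pipeline()  # first run computes everything
    my_pipeline()  # second run reuses the cached load_data output
```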
how-to
https://docs.zenml.io/how-to/handle-data-artifacts/artifact-versioning
286
…by running `zenml service-connector register -i`.

The second step is registering a Service Connector that effectively enables ZenML to authenticate to and access one or more remote resources. This step is best handled by someone with some infrastructure knowledge, but there are sane defaults and auto-detection mechanisms built into most Service Connectors that can make this a walk in the park, even for the uninitiated. For our simple example, we're registering an AWS Service Connector with AWS credentials automatically lifted up from your local host, giving ZenML access to the same resources that you can access from your local machine through the AWS CLI.

This step assumes the AWS CLI is already installed and set up with credentials on your machine (e.g. by running `aws configure`).

```sh
zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket
```

Example Command Output

```text
⠼ Registering service connector 'aws-s3'...
Successfully registered service connector `aws-s3` with access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE │ RESOURCE NAMES                        ┃
┠───────────────┼───────────────────────────────────────┨
┃ 📦 s3-bucket  │ s3://aws-ia-mwaa-715803424590         ┃
┃               │ s3://zenbytes-bucket                  ┃
┃               │ s3://zenfiles                         ┃
┃               │ s3://zenml-demos                      ┃
┃               │ s3://zenml-generative-chat            ┃
┃               │ s3://zenml-public-datasets            ┃
┃               │ s3://zenml-public-swagger-spec        ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```

The CLI validates and shows all S3 buckets that can be accessed with the auto-discovered credentials.

The ZenML CLI provides an interactive way of registering Service Connectors. Just use the `-i` command line argument and follow the interactive guide:

```sh
zenml service-connector register -i
```
how-to
https://docs.zenml.io/v/docs/how-to/auth-management
484
β”‚ β”‚ β”‚ β”‚ ┃┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛ The following lists all Kubernetes clusters accessible through the GCP Service Connector: zenml service-connector verify gcp-user-account --resource-type kubernetes-cluster Example Command Output Service connector 'gcp-user-account' is correctly configured with valid credentials and has access to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠───────────────────────┼────────────────────┨ ┃ πŸŒ€ kubernetes-cluster β”‚ zenml-test-cluster ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┛ Calling the login CLI command will configure the local Kubernetes kubectl CLI to access the Kubernetes cluster through the GCP Service Connector: zenml service-connector login gcp-user-account --resource-type kubernetes-cluster --resource-id zenml-test-cluster Example Command Output β ΄ Attempting to configure local client using service connector 'gcp-user-account'... Context "gke_zenml-core_zenml-test-cluster" modified. Updated local kubeconfig with the cluster details. The current kubectl context was set to 'gke_zenml-core_zenml-test-cluster'. The 'gcp-user-account' Kubernetes Service Connector connector was used to successfully configure the local Kubernetes cluster client/SDK. To verify that the local Kubernetes kubectl CLI is correctly configured, the following command can be used: kubectl cluster-info Example Command Output Kubernetes control plane is running at https://35.185.95.223 GLBCDefaultBackend is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy KubeDNS is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy Metrics-server is running at https://35.185.95.223/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
552
-registry β”‚ iam-role β”‚ β”‚ ┃┃ β”‚ β”‚ β”‚ session-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ β”‚ federation-token β”‚ β”‚ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ ``` Register a multi-type AWS Service Connector using auto-configurationCopyAWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure Example Command Output ```text β Ό Registering service connector 'aws-demo-multi'... Successfully registered service connector `aws-demo-multi` with access to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠───────────────────────┼──────────────────────────────────────────────┨ ┃ πŸ”Ά aws-generic β”‚ us-east-1 ┃ ┠───────────────────────┼──────────────────────────────────────────────┨ ┃ πŸ“¦ s3-bucket β”‚ s3://zenfiles ┃ ┃ β”‚ s3://zenml-demos ┃ ┃ β”‚ s3://zenml-generative-chat ┃ ┠───────────────────────┼──────────────────────────────────────────────┨ ┃ πŸŒ€ kubernetes-cluster β”‚ zenhacks-cluster ┃ ┠───────────────────────┼──────────────────────────────────────────────┨ ┃ 🐳 docker-registry β”‚ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` **NOTE**: from this point forward, we don't need the local AWS CLI credentials or the local AWS CLI at all. The steps that follow can be run on any machine regardless of whether it has been configured and authorized to access the AWS platform or not.
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
521
…else we're examining the contents of the database.

Deciding when to update your embeddings is a separate discussion and depends on the specific use case. If your data is frequently changing, and the changes are significant, you might want to fully reset the embeddings with each update. In other cases, you might just want to add new documents and embeddings to the database because the changes are minor or infrequent. In the code above, we choose to only add new embeddings if they don't already exist in the database.

Depending on the size of your dataset and the number of embeddings you're storing, you might find that running this step on a CPU is too slow. In that case, you should ensure that this step runs on a GPU-enabled machine to speed up the process. You can do this with ZenML by using a step operator that runs on a GPU-enabled machine. See the docs here for more on how to set this up.

We also generate an index for the embeddings using the `ivfflat` method with the `vector_cosine_ops` operator. This is a common method for indexing high-dimensional vectors in PostgreSQL and is well-suited for similarity search using cosine distance. The number of lists is calculated based on the number of records in the table, with a minimum of 10 lists and a maximum of the square root of the number of records (see the sketch below). This is a good starting point for tuning the index parameters, but you might want to experiment with different values to see how they affect the performance of your RAG pipeline.

Now that we have our embeddings stored in a vector database, we can move on to the next step in the pipeline, which is to retrieve the most relevant documents based on a given query. This is where the real magic of the RAG pipeline comes into play: we can use the embeddings to quickly retrieve the most relevant chunks of text based on their similarity to the query. This allows us to build a powerful and efficient question-answering system that can provide accurate and relevant responses to user queries in real time.

**Code Example**
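The guide's own code example is truncated in this chunk. As a stand-in illustration of the indexing heuristic just described, here is a sketch; the `psycopg2` driver, the `embeddings` table, the `embedding` column, and the `records/1000` base value are assumptions (the text only specifies the clamp between 10 and the square root of the record count):

```python
import math

import psycopg2  # assumed driver; the guide may use a different client

conn = psycopg2.connect("postgresql://user:pass@localhost/rag")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM embeddings")  # hypothetical table name
    num_records = cur.fetchone()[0]

    # Clamp: at least 10 lists, at most sqrt(num_records). The records/1000
    # base is a common pgvector rule of thumb, assumed here.
    num_lists = int(min(max(num_records / 1000, 10), math.sqrt(num_records)))

    # Standard pgvector IVFFlat index for cosine-distance search.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings "
        "USING ivfflat (embedding vector_cosine_ops) "
        f"WITH (lists = {num_lists})"
    )
```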
user-guide
https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database
409
…the following:

**Question:** What are Plasma Phoenixes?
**Answer:** Plasma Phoenixes are majestic creatures made of pure energy that soar above the chromatic canyons of Zenml World. They leave fiery trails behind them, painting the sky with dazzling displays of colors.

**Question:** What kinds of creatures live on the prismatic shores of ZenML World?
**Answer:** On the prismatic shores of ZenML World, you can find crystalline crabs scuttling and burrowing with their transparent exoskeletons, which refract light into a kaleidoscope of hues.

**Question:** What is the capital of Panglossia?
**Answer:** The capital of Panglossia is not mentioned in the provided context.

The implementation above is by no means sophisticated or performant, but it's simple enough that you can see all the moving parts. Our tokenization process consists of splitting the text into individual words. The way we check for similarity between the question/query and the chunks of text is extremely naive and inefficient: the similarity between the query and the current chunk is calculated using the Jaccard similarity coefficient. This coefficient measures the similarity between two sets and is defined as the size of the intersection divided by the size of the union of the two sets. So we count the number of words that are common between the query and the chunk and divide it by the total number of unique words in both the query and the chunk (see the sketch below).

There are much better ways of measuring the similarity between two pieces of text, such as using embeddings or other more sophisticated techniques, but this example is kept simple for illustrative purposes. The rest of this guide will showcase a more performant and scalable way of performing the same task using ZenML. If you are ever unsure why we're doing something, feel free to return to this example for the high-level overview.
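Since the Jaccard coefficient carries the whole retrieval step here, a self-contained sketch of the calculation (function and variable names are illustrative, not the example's exact code):

```python
def jaccard_similarity(query: str, chunk: str) -> float:
    """Size of the word-set intersection divided by the size of the union."""
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    union = query_words | chunk_words
    if not union:
        return 0.0
    return len(query_words & chunk_words) / len(union)


# e.g. jaccard_similarity("plasma phoenixes", "the plasma phoenixes soar")
# -> 2 shared words / 4 unique words = 0.5
```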
user-guide
https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/rag-85-loc
392
**Pigeon**

Annotating data using Pigeon.

Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:

- Text Classification
- Image Classification
- Text Captioning

**When would you want to use it?**

If you need to label a small to medium-sized dataset as part of your ML workflow and prefer the convenience of doing it directly within your Jupyter notebook, Pigeon is a great choice. It is particularly useful for:

- Quick labeling tasks that don't require a full-fledged annotation platform
- Iterative labeling during the exploratory phase of your ML project
- Collaborative labeling within a Jupyter notebook environment

**How to deploy it?**

To use the Pigeon annotator, you first need to install the ZenML Pigeon integration:

```sh
zenml integration install pigeon
```

Next, register the Pigeon annotator with ZenML, specifying the output directory where the annotation files will be stored:

```sh
zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir"
```

Note that the `output_dir` is relative to the repository or notebook root.

Finally, add the Pigeon annotator to your stack and set it as the active stack:

```sh
zenml stack update <YOUR_STACK_NAME> --annotator pigeon
```

Now you're ready to use the Pigeon annotator in your ML workflow!

**How do you use it?**

With the Pigeon annotator registered and added to your active stack, you can easily access it using the ZenML client within your Jupyter notebook. For text classification tasks, you can launch the Pigeon annotator as follows:

```python
from zenml.client import Client

annotator = Client().active_stack.annotator

annotations = annotator.launch(
    data=[
        'I love this movie',
        'I was really disappointed by the book',
    ],
    options=[
        'positive',
        'negative',
    ],
)
```

For image classification tasks, you can provide a custom display function to render the images:

```python
from zenml.client import Client
```
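The chunk is cut off right after the import; a plausible completion of the image-classification example, where the `display_fn` parameter and the IPython display call are assumptions rather than the documented API:

```python
# Hypothetical continuation of the truncated image example.
from IPython.display import Image, display
from zenml.client import Client

annotator = Client().active_stack.annotator

annotations = annotator.launch(
    data=["img/cat1.png", "img/dog1.png"],  # hypothetical image paths
    options=["cat", "dog"],
    display_fn=lambda filename: display(Image(filename)),  # assumed kwarg
)
```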
stack-components
https://docs.zenml.io/v/docs/stack-components/annotators/pigeon
423
**Run on GCP**

A simple guide to quickly set up a minimal stack on GCP.

The GCP integration currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash and is working on a fix. For now, please use Python <3.11 together with the GCP integration.

This page aims to quickly set up a minimal production stack on GCP. With just a few simple steps, you will set up a service account with specifically scoped permissions that ZenML can use to authenticate with the relevant GCP resources.

While this guide focuses on Google Cloud, we are seeking contributors to create a similar guide for other cloud providers. If you are interested, please create a pull request over on GitHub.

**1) Choose a GCP project**

In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Make sure a billing account is attached to this project to allow the use of some APIs. This is how you would do it from the CLI, if preferred:

```sh
gcloud projects create <PROJECT_ID> --billing-project=<BILLING_PROJECT>
```

If you don't plan to keep the resources that you create in this procedure, create a new project. After you finish these steps, you can delete the project, thereby removing all resources associated with it.

**2) Enable GCloud APIs**

The following APIs will need to be enabled within your chosen GCP project:

- Cloud Functions API (for the Vertex orchestrator)
- Cloud Run Admin API (for the Vertex orchestrator)
- Cloud Build API (for the container registry)
- Artifact Registry API (for the container registry)
- Cloud Logging API (generally needed)

**3) Create a dedicated service account**

The service account should have the following roles:

- AI Platform Service Agent
- Storage Object Admin

These roles give permissions for full CRUD on storage objects and full permissions for compute within Vertex AI.

**4) Create a JSON Key for your service account**
how-to
https://docs.zenml.io/v/docs/how-to/popular-integrations/gcp-guide
395
…skip scoping its Resource Type during registration.

A multi-instance Service Connector instance can be configured once and used to gain access to multiple resources of the same type, each identifiable by a Resource Name.

Not all types of connectors and not all types of resources support multiple instances. Some Service Connector Types, like the generic Kubernetes and Docker connector types, only allow single-instance configurations: a Service Connector instance can only be used to access a single Kubernetes cluster and a single Docker registry.

To configure a multi-instance Service Connector, you can simply skip scoping its Resource Name during registration. The following is an example of configuring a multi-type AWS Service Connector instance capable of accessing multiple AWS resources of different types:

```sh
zenml service-connector register aws-multi-type --type aws --auto-configure
```

Example Command Output

```text
⠋ Registering service connector 'aws-multi-type'...
Successfully registered service connector `aws-multi-type` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE         │ RESOURCE NAMES                               ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🔶 aws-generic        │ us-east-1                                    ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 📦 s3-bucket          │ s3://aws-ia-mwaa-715803424590                ┃
┃                       │ s3://zenfiles                                ┃
┃                       │ s3://zenml-demos                             ┃
┃                       │ s3://zenml-generative-chat                   ┃
┃                       │ s3://zenml-public-datasets                   ┃
┃                       │ s3://zenml-public-swagger-spec               ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ zenhacks-cluster                             ┃
…
```
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
437
**😸 Set up a project repository**

Setting your team up for success with a project repository.

ZenML code typically lives in a git repository. Setting this repository up correctly can make a huge impact on collaboration and on getting the maximum out of your ZenML deployment. This section walks users through some of the options available to create a project repository with ZenML.
how-to
https://docs.zenml.io/v/docs/how-to/setting-up-a-project-repository
94
**Use your own Dockerfiles**

In some cases, you might not want full control over the resulting Docker image, but want to build a parent image dynamically each time a pipeline is executed. To make this process easier, ZenML allows you to specify a custom Dockerfile, as well as a build context directory and build options. ZenML then builds an intermediate image based on the Dockerfile you specified and uses that intermediate image as the parent image.

Here is how the build process works:

- **No Dockerfile specified:** If any of the options regarding requirements, environment variables, or copying files require us to build an image, ZenML will build this image. Otherwise, the `parent_image` will be used to run the pipeline.
- **Dockerfile specified:** ZenML will first build an image based on the specified Dockerfile. If any of the options regarding requirements, environment variables, or copying files require an additional image built on top of that, ZenML will build a second image. If not, the image built from the specified Dockerfile will be used to run the pipeline.

Depending on the configuration of the `DockerSettings` object, requirements will be installed in the following order (each step optional):

1. The packages installed in your local Python environment.
2. The packages specified via the `requirements` attribute.
3. The packages specified via the `required_integrations` and potentially stack requirements.

Depending on the configuration of your Docker settings, this intermediate image might also be used directly to execute your pipeline steps.

```python
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    dockerfile="/path/to/dockerfile",
    build_context_root="/path/to/build/context",
    parent_image_build_config={
        "build_options": ...,
        "dockerignore": ...,
    },
)


@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
```
how-to
https://docs.zenml.io/v/docs/how-to/customize-docker-builds/use-your-own-docker-files
362
s: πŸ”’ password Resource types: 🐳 docker-registrySupports auto-configuration: False Available locally: True Available remotely: True The ZenML Docker Service Connector allows authenticating with a Docker or OCI container registry and managing Docker clients for the registry. This connector provides pre-authenticated python-docker Python clients to Stack Components that are linked to it. No Python packages are required for this Service Connector. All prerequisites are included in the base ZenML Python package. Docker needs to be installed on environments where container images are built and pushed to the target container registry. [...] ──────────────────────────────────────────────────────────────────────────────── Please select a service connector type (kubernetes, docker, azure, aws, gcp): gcp ╔══════════════════════════════════════════════════════════════════════════════╗ β•‘ Available resource types β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• πŸ”΅ Generic GCP resource (resource type: gcp-generic) Authentication methods: implicit, user-account, service-account, oauth2-token, impersonation Supports resource instances: False Authentication methods: πŸ”’ implicit πŸ”’ user-account πŸ”’ service-account πŸ”’ oauth2-token πŸ”’ impersonation This resource type allows Stack Components to use the GCP Service Connector to connect to any GCP service or resource. When used by Stack Components, they are provided a Python google-auth credentials object populated with a GCP OAuth 2.0 token. This credentials object can then be used to create GCP Python clients for any particular GCP service. This generic GCP resource type is meant to be used with Stack Components that are not represented by other, more specific resource type, like GCS buckets, Kubernetes clusters or Docker registries. For example, it can be used with the Google Cloud Builder Image Builder stack component, or the Vertex AI
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
465
┃┃ β”‚ β”‚ β”‚ β”‚ HTTP response body: ┃ ┃ β”‚ β”‚ β”‚ β”‚ {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":" ┃ ┃ β”‚ β”‚ β”‚ β”‚ Unauthorized","code":401} ┃ ┃ β”‚ β”‚ β”‚ β”‚ ┃ ┃ β”‚ β”‚ β”‚ β”‚ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ More interesting is to scope the search to a particular Resource Type. This yields fewer, more accurate results, especially if you have many multi-type Service Connectors configured: zenml service-connector list-resources --resource-type kubernetes-cluster Example Command Output The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
413
…ant feedback of actual contact with the raw data.)

- **Samples generated for inference:** Your model will be making predictions on real-world data being passed in. If you store and label this data, you'll gain a valuable set of data that you can use to compare your labels with what the model was predicting, another possible way to flag drifts of various kinds. This data can then (subject to privacy/user consent) be used in retraining or fine-tuning your model.
- **Other ad hoc interventions:** You will probably have some kind of process to identify bad labels, or to find the kinds of examples that your model finds really difficult to make correct predictions on. For these, and for areas where you have clear class imbalances, you might want to do ad hoc annotation to supplement the raw materials your model has to learn from.

ZenML currently offers standard steps that help you tackle the above use cases, but the stack component and abstraction will continue to be developed to make them easier to use.

**When to use it**

The annotator is an optional stack component in the ZenML Stack. We designed our abstraction to fit into the larger ML use cases, particularly the training and deployment parts of the lifecycle. The core parts of the annotation workflow include:

- using labels or annotations in your training steps in a seamless way
- handling the versioning of annotation data
- allowing for the conversion of annotation data to and from custom formats
- handling annotator-specific tasks, for example, the generation of UI config files that Label Studio requires for the web annotation interface

**List of available annotators**

For production use cases, some more flavors can be found in specific integration modules. In terms of annotators, ZenML features integrations with `label_studio` and `pigeon`.
stack-components
https://docs.zenml.io/stack-components/annotators
348
…for the local cloud provider CLI (AWS in this case):

```sh
AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token
```

Example Command Output

```text
⠸ Registering service connector 'aws-sts-token'...
Successfully registered service connector `aws-sts-token` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE         │ RESOURCE NAMES                               ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🔶 aws-generic        │ us-east-1                                    ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 📦 s3-bucket          │ s3://zenfiles                                ┃
┃                       │ s3://zenml-demos                             ┃
┃                       │ s3://zenml-generative-chat                   ┃
┃                       │ s3://zenml-public-datasets                   ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ zenhacks-cluster                             ┃
┠───────────────────────┼──────────────────────────────────────────────┨
┃ 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```

The Service Connector is now configured with a short-lived token that will expire after some time. You can verify this by inspecting the Service Connector:

```sh
zenml service-connector describe aws-sts-token
```

Example Command Output

```text
Service connector 'aws-sts-token' of type 'aws' with id '63e14350-6719-4255-b3f5-0539c8f7c303' is owned by user 'default' and is 'private'.

'aws-sts-token' aws Service Connector Details
┏━━━━━━━━━━┯━━━━━━━┓
┃ PROPERTY │ VALUE ┃
…
```
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices
547
…a task that requires a lot of effort and maintenance.

- Stack Components don't implement any kind of verification regarding the validity and permissions of configured credentials. If the credentials are invalid, or if they lack the proper permissions to access the remote resource or service, you will only find this out later, when running a pipeline fails at runtime.
- Ultimately, given that different Stack Component flavors rely on the same type of resource or cloud provider, it is not good design to duplicate the logic that handles authentication and authorization in each Stack Component implementation.

These drawbacks are addressed by Service Connectors.

Without Service Connectors, credentials are stored directly in the Stack Component configuration or ZenML Secret and are used directly in the runtime environment. The Stack Component implementation is directly responsible for validating credentials, authenticating, and connecting to the infrastructure service. This is illustrated in the following diagram:

When Service Connectors are involved in the authentication and authorization process, they can act as brokers. The credentials validation and authentication process takes place on the ZenML server. In most cases, the main credentials never have to leave the ZenML server, as the Service Connector automatically converts them into short-lived credentials with a reduced set of privileges and issues these credentials to clients. Furthermore, multiple Stack Components of different flavors can use the same Service Connector to access different types of resources with the same credentials:
how-to
https://docs.zenml.io/v/docs/how-to/auth-management
265
…`step_operator`:

```python
@step(step_operator=step_operator.name)
def step_on_spark(...) -> ...:
    ...
```

**Additional configuration**

For additional configuration of the Spark step operator, you can pass `SparkStepOperatorSettings` when defining or running your pipeline. Check out the SDK docs for a full list of available attributes, and this docs page for more information on how to specify settings.
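A sketch of passing such settings to a step follows. The settings key (`"step_operator.spark"`) and the `submit_kwargs` attribute are assumptions; check the `SparkStepOperatorSettings` SDK docs for the exact names supported by your ZenML version:

```python
# Hypothetical sketch: pass Spark-specific settings to a step.
from zenml import step


@step(
    step_operator="spark",  # hypothetical step operator name
    settings={
        "step_operator.spark": {  # assumed flavor-scoped settings key
            "submit_kwargs": {"spark.executor.memory": "4g"},  # assumed attribute
        }
    },
)
def step_on_spark() -> None:
    ...
```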
stack-components
https://docs.zenml.io/v/docs/stack-components/step-operators/spark-kubernetes
89
"zenml/rag_qa_embedding_questions", split="train")# Shuffle the dataset and select a random sample sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) total_tests = len(sampled_dataset) total_toxicity = 0 total_faithfulness = 0 total_helpfulness = 0 total_relevance = 0 for item in sampled_dataset: question = item["generated_questions"][0] context = item["page_content"] try: result = test_function(question, context) except json.JSONDecodeError as e: logging.error(f"Failed for question: {question}. Error: {e}") total_tests -= 1 continue total_toxicity += result.toxicity total_faithfulness += result.faithfulness total_helpfulness += result.helpfulness total_relevance += result.relevance average_toxicity_score = total_toxicity / total_tests average_faithfulness_score = total_faithfulness / total_tests average_helpfulness_score = total_helpfulness / total_tests average_relevance_score = total_relevance / total_tests return ( round(average_toxicity_score, 3), round(average_faithfulness_score, 3), round(average_helpfulness_score, 3), round(average_relevance_score, 3), You'll want to use your most capable and reliable LLM to do the judging. In our case, we used the new GPT-4 Turbo. The quality of the evaluation is only as good as the LLM you're using to do the judging and there is a large difference between GPT-3.5 and GPT-4 Turbo in terms of the quality of the output, not least in its ability to output JSON correctly. Here was the output following an evaluation for 50 randomly sampled datapoints: Step e2e_evaluation_llm_judged has started. Average toxicity: 1.0 Average faithfulness: 4.787 Average helpfulness: 4.595 Average relevance: 4.87 Step e2e_evaluation_llm_judged has finished in 8m51s. Pipeline run has finished in 8m52s. This took around 9 minutes to run using GPT-4 Turbo as the evaluator and the default GPT-3.5 as the LLM being evaluated. To take this further, there are a number of ways it might be improved:
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/generation
503
ntainer β”‚ service-principal β”‚ β”‚ ┃┃ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ access-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ 🐳 docker-registry β”‚ β”‚ β”‚ ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨ ┃ AWS Service Connector β”‚ πŸ”Ά aws β”‚ πŸ”Ά aws-generic β”‚ implicit β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ πŸ“¦ s3-bucket β”‚ secret-key β”‚ β”‚ ┃ ┃ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ sts-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ 🐳 docker-registry β”‚ iam-role β”‚ β”‚ ┃ ┃ β”‚ β”‚ β”‚ session-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ β”‚ federation-token β”‚ β”‚ ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨ ┃ GCP Service Connector β”‚ πŸ”΅ gcp β”‚ πŸ”΅ gcp-generic β”‚ implicit β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ πŸ“¦ gcs-bucket β”‚ user-account β”‚ β”‚ ┃ ┃ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ service-account β”‚ β”‚ ┃ ┃ β”‚ β”‚ 🐳 docker-registry β”‚ oauth2-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ β”‚ impersonation β”‚ β”‚ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
462
**Kaniko Image Builder**

Building container images with Kaniko.

The Kaniko image builder is an image builder flavor provided by the ZenML `kaniko` integration that uses Kaniko to build container images.

**When to use it**

You should use the Kaniko image builder if:

- you're unable to install or use Docker on your client machine.
- you're familiar with/already using Kubernetes.

**How to deploy it**

In order to use the Kaniko image builder, you need a deployed Kubernetes cluster.

**How to use it**

To use the Kaniko image builder, we need:

- The ZenML `kaniko` integration installed. If you haven't done so, run `zenml integration install kaniko`.
- `kubectl` installed.
- A remote container registry as part of your stack.

By default, the Kaniko image builder transfers the build context using the Kubernetes API. If you instead want to transfer the build context by storing it in the artifact store, you need to register the image builder with the `store_context_in_artifact_store` attribute set to `True`. In this case, you also need a remote artifact store as part of your stack.

Optionally, you can change the timeout (in seconds) until the Kaniko pod is running in the orchestrator using the `pod_running_timeout` attribute.

We can then register the image builder and use it in our active stack:

```sh
zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    [ --pod_running_timeout=<POD_RUNNING_TIMEOUT_IN_SECONDS> ]

# Register and activate a stack with the new image builder
zenml stack register <STACK_NAME> -i <NAME> ... --set
```

For more information and a full list of configurable attributes of the Kaniko image builder, check out the SDK Docs.

**Authentication for the container registry and artifact store**

The Kaniko image builder will create a Kubernetes pod that runs the build. This build pod needs to be able to pull from/push to certain container registries, and depending on the stack component configuration, also needs to be able to read from the artifact store:
stack-components
https://docs.zenml.io/v/docs/stack-components/image-builders/kaniko
422
…`token_hex`:

```python
from secrets import token_hex
token_hex(32)
```

or:

```sh
openssl rand -hex 32
```

**Important:** If you configure encryption for your SQL database secrets store, you should keep the `ZENML_SECRETS_STORE_ENCRYPTION_KEY` value somewhere safe and secure, as it will always be required by the ZenML server to decrypt the secrets in the database. If you lose the encryption key, you will not be able to decrypt the secrets in the database and will have to reset them.

These configuration options are only relevant if you're using the AWS Secrets Manager as the secrets store backend.

- `ZENML_SECRETS_STORE_TYPE`: Set this to `aws` in order to set this type of secret store.

The AWS Secrets Store uses the ZenML AWS Service Connector under the hood to authenticate with the AWS Secrets Manager API. This means that you can use any of the authentication methods supported by the AWS Service Connector to authenticate with the AWS Secrets Manager API. The IAM policy attached to the credentials should grant access along these lines:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ZenMLSecretsStore",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:CreateSecret",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:PutSecretValue",
                "secretsmanager:TagResource",
                "secretsmanager:DeleteSecret"
            ],
            "Resource": "arn:aws:secretsmanager:<AWS-region>:<AWS-account-id>:secret:zenml/*"
        }
    ]
}
```

The following configuration options are supported:

- `ZENML_SECRETS_STORE_AUTH_METHOD`: The AWS Service Connector authentication method to use (e.g. `secret-key` or `iam-role`).
- `ZENML_SECRETS_STORE_AUTH_CONFIG`: The AWS Service Connector configuration, in JSON format (e.g. `{"aws_access_key_id":"<aws-key-id>","aws_secret_access_key":"<aws-secret-key>","region":"<aws-region>"}`).

Note: The remaining configuration options are deprecated and may be removed in a future release. Instead, you should set the `ZENML_SECRETS_STORE_AUTH_METHOD` and `ZENML_SECRETS_STORE_AUTH_CONFIG` variables to use the AWS Service Connector authentication method.
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker
440
PE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨ ┃ bf073e06-28ce-4a4a-8100-32e7cb99dced β”‚ aws-demo-multi β”‚ πŸ”Ά aws β”‚ πŸŒ€ kubernetes-cluster β”‚ zenhacks-cluster ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛ ``` ```sh zenml service-connector list-resources --resource-type docker-registry ``` Example Command Output ```text The following 'docker-registry' resources can be accessed by service connectors configured in your workspace: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼────────────────────┼────────────────┼────────────────────┼─────────────────────────────────────────────────┨ ┃ bf073e06-28ce-4a4a-8100-32e7cb99dced β”‚ aws-demo-multi β”‚ πŸ”Ά aws β”‚ 🐳 docker-registry β”‚ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` register and connect an S3 Artifact Store Stack Component to an S3 bucket:Copyzenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles Example Command Output ```text Running with active workspace: 'default' (repository) Running with active stack: 'default' (repository) Successfully registered artifact_store `s3-zenfiles`. ``` ```sh zenml artifact-store connect s3-zenfiles --connector aws-demo-multi ``` Example Command Output ```text Running with active workspace: 'default' (repository) Running with active stack: 'default' (repository)
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
645
…view.

**2. Add ZenML as an explicit pip requirement**

ZenML requires that ZenML itself be installed for the containers running your pipelines and steps. Therefore, you need to explicitly state that ZenML should be installed. There are several ways to specify this, but as an example, you can update the code from above as follows:

```python
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime",
    requirements=["zenml==0.39.1", "torchvision"],
)


@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
```

Adding these two extra settings options will ensure that CUDA is enabled for the specific steps that require GPU acceleration.

Be cautious when choosing the image to avoid confusion when switching between local and remote environments. For example, you might have one version of PyTorch installed locally with a particular CUDA version, but when you switch to your remote stack or environment, you might be forced to use a different CUDA version. The core cloud operators offer prebuilt Docker images that fit their hardware. You can find more information on them here: AWS, GCP, Azure. Not all of these images are available on DockerHub, so ensure that the orchestrator environment your pipeline runs in has sufficient permissions to pull images from registries if you are using one of those.

**Reset the CUDA cache in between steps**

Your use case will determine whether this is necessary or makes sense to do, but we have seen that resetting the CUDA cache in between steps can help avoid issues with the GPU cache. This is particularly necessary if your training jobs are pushing the boundaries of the GPU cache. Doing so is simple; just use a helper function to reset the cache at the beginning of any GPU-enabled steps. For example, something as simple as this might suffice:

```python
import gc

import torch


def cleanup_memory() -> None:
    while gc.collect():
        torch.cuda.empty_cache()
```
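And a short sketch of calling that helper at the top of a GPU-enabled step (the step name and body are illustrative):

```python
from zenml import step


@step
def training_step() -> None:
    # Clear any leftover GPU allocations before training begins.
    cleanup_memory()
    # ... GPU-heavy training logic goes here ...
```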
how-to
https://docs.zenml.io/v/docs/how-to/training-with-gpus
408
ons, making it easier to visualize and understand. Here's an example of grouping metadata into cards:

from zenml import log_artifact_metadata
from zenml.metadata.metadata_types import StorageSize

log_artifact_metadata(
    metadata={
        "model_metrics": {
            "accuracy": 0.95,
            "precision": 0.92,
            "recall": 0.90,
        },
        "data_details": {
            "dataset_size": StorageSize(1500000),
            "feature_columns": ["age", "income", "score"],
        },
    }
)

In the ZenML dashboard, "model_metrics" and "data_details" would appear as separate cards, each containing their respective key-value pairs.
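If you later want to read this grouped metadata back programmatically, the ZenML client can fetch it from the artifact version. A hedged sketch, assuming the artifact is named "my_artifact" and that metadata is exposed through the run_metadata mapping (both are assumptions, not shown on the original page):

from zenml.client import Client

# Fetch the latest version of a (hypothetically named) artifact
artifact = Client().get_artifact_version("my_artifact")

# Assumed access pattern: grouped metadata keys map to metadata objects with a .value
model_metrics = artifact.run_metadata["model_metrics"].value
print(model_metrics["accuracy"])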
how-to
https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/attach-metadata-to-an-artifact
143
Find out which configuration was used for a run
Sometimes you might want to extract the configuration that was used by a pipeline run that has already completed. You can do this simply by loading the pipeline run and accessing its config attribute:

from zenml.client import Client

pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
configuration = pipeline_run.config
how-to
https://docs.zenml.io/v/docs/how-to/use-configuration-files/retrieve-used-configuration-of-a-run
90
Hugging Face
Deploying models to Huggingface Inference Endpoints with Hugging Face :hugging_face:.
Hugging Face Inference Endpoints provides a secure production solution to easily deploy any transformers, sentence-transformers, and diffusers models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the Hub. This service provides dedicated and autoscaling infrastructure managed by Hugging Face, allowing you to deploy models without dealing with containers and GPUs.

When to use it?
You should use the Hugging Face Model Deployer:

if you want to deploy Transformers, Sentence-Transformers, or Diffusion models on dedicated and secure infrastructure.
if you prefer a fully-managed production solution for inference without the need to handle containers and GPUs.
if your goal is to turn your models into production-ready APIs with minimal infrastructure or MLOps involvement.
if cost-effectiveness is crucial and you want to pay only for the raw compute resources you use.
if enterprise security is a priority and you need to deploy models into secure offline endpoints accessible only via a direct connection to your Virtual Private Cloud (VPC).

If you are looking for an easier way to deploy your models locally, you can use the MLflow Model Deployer flavor.

How to deploy it?
The Hugging Face Model Deployer flavor is provided by the Hugging Face ZenML integration, so you need to install it on your local machine to be able to deploy your models. You can do this by running the following command:

zenml integration install huggingface -y

To register the Hugging Face model deployer with ZenML you need to run the following command:

zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=huggingface --token=<YOUR_HF_TOKEN> --namespace=<YOUR_HF_NAMESPACE>

Here, the token parameter is the Hugging Face authentication token. It can be managed through Hugging Face settings.
stack-components
https://docs.zenml.io/stack-components/model-deployers/huggingface
399
_studio --api_key={{label_studio_secrets.api_key}}# for deployed instances of Label Studio, you can also pass in the URL as follows, for example: # zenml annotator register label_studio --flavor label_studio --authentication_secret="<LABEL_STUDIO_SECRET_NAME>" --instance_url="<your_label_studio_url>" --port=80 When using a deployed instance of Label Studio, the instance URL must be specified without any trailing / at the end. You should specify the port, for example, port 80 for a standard HTTP connection. Finally, add all these components to a stack and set it as your active stack. For example: zenml stack copy annotation zenml stack update annotation -a <YOUR_CLOUD_ARTIFACT_STORE> # this must be done separately so that the other required stack components are first registered zenml stack update annotation -an <YOUR_LABEL_STUDIO_ANNOTATOR> zenml stack set annotation # optionally also zenml stack describe Now if you run a simple CLI command like zenml annotator dataset list this should work without any errors. You're ready to use your annotator in your ML workflow! How do you use it? ZenML assumes that users have registered a cloud artifact store and an annotator as described above. ZenML currently only supports this setup, but we will add in the fully local stack option in the future. ZenML supports access to your data and annotations via the zenml annotator ... CLI command. You can access information about the datasets you're using with the zenml annotator dataset list. To work on annotation for a particular dataset, you can run zenml annotator dataset annotate <dataset_name>. Our computer vision end to end example is the best place to see how all the pieces of making this integration work fit together. What follows is an overview of some key components to the Label Studio integration and how it can be used. Label Studio Annotator Stack Component
stack-components
https://docs.zenml.io/stack-components/annotators/label-studio
401
ntation section. Seldon Core Installation Example
The following example briefly shows how you can install Seldon in an EKS Kubernetes cluster. It assumes that the EKS cluster itself is already set up and configured with IAM access. For more information or tutorials for other clouds, check out the official Seldon Core installation instructions.

Configure EKS cluster access locally, e.g.:

aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks

Install Istio 1.5.0 (required for the latest Seldon Core version):

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
cd istio-1.5.0/
bin/istioctl manifest apply --set profile=demo

Set up an Istio gateway for Seldon Core:

curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f -

Install Seldon Core:

helm install seldon-core seldon-core-operator \
    --repo https://storage.googleapis.com/seldon-charts \
    --set usageMetrics.enabled=true \
    --set istio.enabled=true \
    --namespace seldon-system

Test that the installation is functional:

kubectl apply -f iris.yaml

with iris.yaml defined as follows:

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: default
spec:
  name: iris
  predictors:
    - graph:
        implementation: SKLEARN_SERVER
        modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris
        name: classifier
      name: default
      replicas: 1

Then extract the URL where the model server exposes its prediction API:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

And use curl to send a test prediction API request to the server:

curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \
    -H 'Content-Type: application/json' \
    -d '{ "data": { "ndarray": [[1,2,3,4]] } }'

Using a Service Connector
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon
487
upplied a custom value while creating the cluster. Run the following command:

aws eks update-kubeconfig --name <NAME> --region <REGION>

Get the name of the deployed cluster:

zenml stack recipe output gke-cluster-name

Figure out the region that the cluster is deployed to. By default, the region is set to europe-west1, which you should use in the next step if you haven't supplied a custom value while creating the cluster.

Figure out the project that the cluster is deployed to. You must have passed in a project ID while creating a GCP resource for the first time.

Run the following command:

gcloud container clusters get-credentials <NAME> --region <REGION> --project <PROJECT_ID>

You may already have your kubectl client configured with your cluster. Check by running kubectl get nodes before proceeding.

Get the name of the deployed cluster:

zenml stack recipe output k3d-cluster-name

Set the KUBECONFIG env variable to the kubeconfig file from the cluster:

export KUBECONFIG=$(k3d kubeconfig get <NAME>)

You can now use the kubectl client to talk to the cluster.

Stack Recipe Deploy
The steps for the stack recipe case should be the same as the ones listed above. The only difference that you need to take into account is the name of the outputs that contain your cluster name and the default regions. Each recipe might have its own values and here's how you can ascertain those values. For the cluster name, go into the outputs.tf file in the root directory and search for the output that exposes the cluster name. For the region, check out the variables.tf or the locals.tf file for the default value assigned to it.
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/troubleshoot-stack-components
371
━━━━━━━━━━━━━━━━┛ Explore Service Connector TypesService Connector Types are not only templates used to instantiate Service Connectors, they also form a body of knowledge that documents best security practices and guides users through the complicated world of authentication and authorization. ZenML ships with a handful of Service Connector Types that enable you right out-of-the-box to connect ZenML to cloud resources and services available from cloud providers such as AWS and GCP, as well as on-premise infrastructure. In addition to built-in Service Connector Types, ZenML can be easily extended with custom Service Connector implementations. To discover the Connector Types available with your ZenML deployment, you can use the zenml service-connector list-types CLI command: zenml service-connector list-types Example Command Output ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓ ┃ NAME β”‚ TYPE β”‚ RESOURCE TYPES β”‚ AUTH METHODS β”‚ LOCAL β”‚ REMOTE ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨ ┃ Kubernetes Service Connector β”‚ πŸŒ€ kubernetes β”‚ πŸŒ€ kubernetes-cluster β”‚ password β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ β”‚ token β”‚ β”‚ ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨ ┃ Docker Service Connector β”‚ 🐳 docker β”‚ 🐳 docker-registry β”‚ password β”‚ βœ… β”‚ βœ… ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨ ┃ Azure Service Connector β”‚ πŸ‡¦ azure β”‚ πŸ‡¦ azure-generic β”‚ implicit β”‚ βœ… β”‚ βœ… ┃ ┃ β”‚ β”‚ πŸ“¦ blob-container β”‚ service-principal β”‚ β”‚ ┃
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
499
─────────────────────────────────────────────────┨┃ πŸŒ€ kubernetes-cluster β”‚ πŸ’₯ error: connector authorization failure: Failed to list GKE clusters: 403 Required "container.clusters.list" ┃ ┃ β”‚ permission(s) for "projects/20219041791". [request_id: "0x84808facdac08541" ┃ ┃ β”‚ ] ┃ ┠───────────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ 🐳 docker-registry β”‚ gcr.io/zenml-core ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Verifying access to individual resource types will fail: zenml service-connector verify gcp-empty-sa --resource-type kubernetes-cluster Example Command Output Error: Service connector 'gcp-empty-sa' verification failed: connector authorization failure: Failed to list GKE clusters: 403 Required "container.clusters.list" permission(s) for "projects/20219041791". zenml service-connector verify gcp-empty-sa --resource-type gcs-bucket Example Command Output Error: Service connector 'gcp-empty-sa' verification failed: connector authorization failure: failed to list GCS buckets: 403 GET https://storage.googleapis.com/storage/v1/b?project=zenml-core&projection=noAcl&prettyPrint=false: [email protected] does not have storage.buckets.list access to the Google Cloud project. Permission 'storage.buckets.list' denied on resource (or it may not exist). zenml service-connector verify gcp-empty-sa --resource-type gcs-bucket --resource-id zenml-bucket-sl Example Command Output
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
448
targeted improvements to the retrieval component.To wrap up, the retrieval evaluation process we've walked through - from manual spot-checking with carefully crafted queries to automated testing with synthetic question-document pairs - has provided a solid baseline understanding of our retrieval component's performance. The failure rates of 20% on our handpicked test cases and 16% on a larger sample of generated queries highlight clear room for improvement, but also validate that our semantic search is generally pointing in the right direction. Going forward, we have a rich set of options to refine and upgrade our evaluation approach. Generating a more diverse array of test questions, leveraging semantic similarity metrics for a nuanced view beyond binary success/failure, performing comparative evaluations of different retrieval techniques, and conducting deep error analysis on failure cases - all of these avenues promise to yield valuable insights. As our RAG pipeline grows to handle more complex and wide-ranging queries, continued investment in comprehensive retrieval evaluation will be essential to ensure we're always surfacing the most relevant information. Before we start working to improve or tweak our retrieval based on these evaluation results, let's shift gears and look at how we can evaluate the generation component of our RAG pipeline. Assessing the quality of the final answers produced by the system is equally crucial to gauging the effectiveness of our retrieval.
user-guide
https://docs.zenml.io/user-guide/llmops-guide/evaluation/retrieval
260
β”‚ s3://zenml-generative-chat ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

zenml service-connector verify aws-s3-multi-instance --resource-id s3://zenml-demos

Example Command Output

Service connector 'aws-s3-multi-instance' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠───────────────┼──────────────────┨
┃ πŸ“¦ s3-bucket β”‚ s3://zenml-demos ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛

Finally, verifying the single-instance Service Connector is straightforward and requires no further explanation:

zenml service-connector verify aws-s3-zenfiles

Example Command Output

Service connector 'aws-s3-zenfiles' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠───────────────┼────────────────┨
┃ πŸ“¦ s3-bucket β”‚ s3://zenfiles ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛

Configure local clients
Yet another neat feature built into some Service Connector Types, the opposite of Service Connector auto-configuration, is the ability to configure local CLI and SDK utilities installed on your host, like the Docker or Kubernetes CLI (kubectl), with credentials issued by a compatible Service Connector. You may need to exercise this feature to get direct CLI access to a remote service in order to manually manage some configurations or resources, to debug some workloads or to simply verify that the Service Connector credentials are actually working.
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
439
LENGTH_OUT_OF_BOUNDS: dict(
        num_percentiles=1000,
        min_unique_values=3,
        condition_number_of_outliers_less_or_equal=dict(
            max_outliers=3,
        ),
    ),
},
...

is equivalent to running the following Deepchecks tests:

import deepchecks.tabular.checks as tabular_checks
from deepchecks.tabular import Suite
from deepchecks.tabular import Dataset

train_dataset = Dataset(
    reference_dataset,
    label='class',
    cat_features=['country', 'state'],
)

suite = Suite(name="custom")

check = tabular_checks.OutlierSampleDetection(
    nearest_neighbors_percent=0.01,
    extent_parameter=3,
)
check.add_condition_outlier_ratio_less_or_equal(
    max_outliers_ratio=0.007,
    outlier_score_threshold=0.5,
)
check.add_condition_no_outliers(
    outlier_score_threshold=0.6,
)
suite.add(check)

check = tabular_checks.StringLengthOutOfBounds(
    num_percentiles=1000,
    min_unique_values=3,
)
check.add_condition_number_of_outliers_less_or_equal(
    max_outliers=3,
)
suite.add(check)

suite.run(train_dataset=train_dataset)

You can view the complete list of configuration parameters in the SDK docs.

The Deepchecks Data Validator
The Deepchecks Data Validator implements the same interface as do all Data Validators, so this method forces you to maintain some level of compatibility with the overall Data Validator abstraction, which guarantees an easier migration in case you decide to switch to another Data Validator. All you have to do is call the Deepchecks Data Validator methods when you need to interact with Deepchecks to run tests, e.g.:

import pandas as pd
from deepchecks.core.suite import SuiteResult
from zenml.integrations.deepchecks.data_validators import DeepchecksDataValidator
from zenml.integrations.deepchecks.validation_checks import DeepchecksDataIntegrityCheck
from zenml import step

@step
def data_integrity_check(
    dataset: pd.DataFrame,
) -> SuiteResult:
    """Custom data integrity check step with Deepchecks

    Args:
        dataset: input Pandas DataFrame

    Returns:
        Deepchecks test suite execution result
    """
stack-components
https://docs.zenml.io/v/docs/stack-components/data-validators/deepchecks
422
AWS Secret Key authentication method alternative.Generated STS tokens inherit the full set of permissions of the IAM user or AWS account root user that is calling the GetSessionToken API. Depending on your security needs, this may not be suitable for production use, as it can lead to accidental privilege escalation. Instead, it is recommended to use the AWS Federation Token or AWS IAM Role authentication methods to restrict the permissions of the generated STS tokens. For more information on session tokens and the GetSessionToken AWS API, see: the official AWS documentation on the subject. Attributes: aws_access_key_id {string, secret, required}: AWS Access Key ID aws_secret_access_key {string, secret, required}: AWS Secret Access Key region {string, required}: AWS Region endpoint_url {string, optional}: AWS Endpoint URL ──────────────────────────────────────────────────────────────────────────────── Dashboard equivalent: Not all Stack Components support being linked to a Service Connector. This is indicated in the flavor description of each Stack Component. Our example uses the S3 Artifact Store, which does support it: $ zenml artifact-store flavor describe s3 Configuration class: S3ArtifactStoreConfig [...] This flavor supports connecting to external resources with a Service Connector. It requires a 's3-bucket' resource. You can get a list of all available connectors and the compatible resources that they can access by running: 'zenml service-connector list-resources --resource-type s3-bucket' If no compatible Service Connectors are yet registered, you can register a new one by running: 'zenml service-connector register -i'
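For reference, the GetSessionToken call described above looks like this in boto3 (a sketch; the credentials are resolved from your local AWS configuration and the duration value is illustrative):

import boto3

sts = boto3.client("sts")

# Request a temporary session token; it inherits the caller's full permissions,
# which is why the scoped-down alternatives described above are preferred.
response = sts.get_session_token(DurationSeconds=3600)
credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration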
how-to
https://docs.zenml.io/how-to/auth-management
335
gmax(prediction.numpy()) return classes[maxindex]
The custom predict function should get the model and the input data as arguments and return the model predictions. ZenML will automatically take care of loading the model into memory and starting the seldon-core-microservice that will be responsible for serving the model and running the predict function.

After defining your custom predict function in code, you can use the seldon_custom_model_deployer_step to automatically build your function into a Docker image and deploy it as a model server by setting the predict_function argument to the path of your custom_predict function:

from zenml.integrations.seldon.steps import seldon_custom_model_deployer_step
from zenml.integrations.seldon.services import SeldonDeploymentConfig, SeldonResourceRequirements
from zenml import pipeline

@pipeline
def seldon_deployment_pipeline():
    model = ...
    seldon_custom_model_deployer_step(
        model=model,
        predict_function="<PATH.TO.custom_predict>",  # TODO: path to custom code
        service_config=SeldonDeploymentConfig(
            model_name="<MODEL_NAME>",  # TODO: name of the deployed model
            replicas=1,
            implementation="custom",
            resources=SeldonResourceRequirements(
                limits={"cpu": "200m", "memory": "250Mi"}
            ),
            serviceAccountName="kubernetes-service-account",
        ),
    )

Advanced Custom Code Deployment with Seldon Core Integration
Before creating your custom model class, you should take a look at the custom Python model section of the Seldon Core documentation. The built-in Seldon Core custom deployment step is a good starting point for deploying your custom models. However, if you want to deploy more than the trained model, you can create your own custom class and a custom step to achieve this. See the ZenML custom Seldon model class as a reference.
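For orientation, a complete custom predict function might look roughly like the following (a hedged sketch: the exact signature expected by the integration is documented in the SDK docs, and the scikit-learn-style model and list-based payload here are assumptions):

import numpy as np
from typing import Any, List

def custom_predict(model: Any, request: List[Any]) -> List[Any]:
    """Minimal sketch of a custom predict function (signature assumed).

    Args:
        model: the model that ZenML loaded into memory.
        request: the raw input data for the prediction.

    Returns:
        The model predictions in a JSON-serializable form.
    """
    inputs = np.asarray(request)         # convert the request payload to an array
    predictions = model.predict(inputs)  # assumes a scikit-learn-style model
    return predictions.tolist()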
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon
371
───────┼─────────┼────────────────┼──────────────┨┃ πŸ‘‰ β”‚ default β”‚ ... β”‚ βž– β”‚ default β”‚ default β”‚ default ┃ ┗━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┛ ... As you can see a stack can be active on your client. This simply means that any pipeline you run will be using the active stack as its environment. Components of a stack As you can see in the section above, a stack consists of multiple components. All stacks have at minimum an orchestrator and an artifact store. Orchestrator The orchestrator is responsible for executing the pipeline code. In the simplest case, this will be a simple Python thread on your machine. Let's explore this default orchestrator. zenml orchestrator list lets you see all orchestrators that are registered in your zenml deployment. ┏━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┓ ┃ ACTIVE β”‚ NAME β”‚ COMPONENT ID β”‚ FLAVOR β”‚ SHARED β”‚ OWNER ┃ ┠────────┼─────────┼──────────────┼────────┼────────┼─────────┨ ┃ πŸ‘‰ β”‚ default β”‚ ... β”‚ local β”‚ βž– β”‚ default ┃ ┗━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┛ Artifact store The artifact store is responsible for persisting the step outputs. As we learned in the previous section, the step outputs are not passed along in memory, rather the outputs of each step are stored in the artifact store and then loaded from there when the next step needs them. By default this will also be on your own machine: zenml artifact-store list lets you see all artifact stores that are registered in your zenml deployment. ┏━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┓ ┃ ACTIVE β”‚ NAME β”‚ COMPONENT ID β”‚ FLAVOR β”‚ SHARED β”‚ OWNER ┃ ┠────────┼─────────┼──────────────┼────────┼────────┼─────────┨ ┃ πŸ‘‰ β”‚ default β”‚ ... β”‚ local β”‚ βž– β”‚ default ┃ ┗━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┛ Other stack components
user-guide
https://docs.zenml.io/user-guide/production-guide/understand-stacks
633
> \ --environment_name=<AZURE_ENVIRONMENT_NAME> \
# only pass these if using Service Principal Authentication
# --tenant_id=<TENANT_ID> \
# --service_principal_id=<SERVICE_PRINCIPAL_ID> \
# --service_principal_password=<SERVICE_PRINCIPAL_PASSWORD> \

# Add the step operator to the active stack
zenml stack update -s <NAME>

Once you have added the step operator to your active stack, you can use it to execute individual steps of your pipeline by specifying it in the @step decorator as follows:

from zenml import step

@step(step_operator=<NAME>)
def trainer(...) -> ...:
    """Train a model."""
    # This step will be executed in AzureML.

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your steps in AzureML. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

Additional configuration
For additional configuration of the AzureML step operator, you can pass AzureMLStepOperatorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings. For more information and a full list of configurable attributes of the AzureML step operator, check out the SDK Docs.

Enabling CUDA for GPU-backed hardware
Note that if you wish to use this step operator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
stack-components
https://docs.zenml.io/stack-components/step-operators/azureml
367
Control caching behavior
By default, steps in ZenML pipelines are cached whenever code and parameters stay unchanged.

@step(enable_cache=True)  # set cache behavior at step level
def load_data(parameter: int) -> dict:
    ...

@step(enable_cache=False)  # settings at step level override pipeline level
def train_model(data: dict) -> None:
    ...

@pipeline(enable_cache=True)  # set cache behavior at pipeline level
def simple_ml_pipeline(parameter: int):
    ...

Caching only happens when code and parameters stay the same. Like many other step and pipeline settings, you can also change this afterward:

# Same as passing it in the step decorator
my_step.configure(enable_cache=...)

# Same as passing it in the pipeline decorator
my_pipeline.configure(enable_cache=...)

Find out here how to configure this in a YAML file.
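You can also override caching for a single run without changing the pipeline definition, using with_options (a small sketch that recombines options already shown on this page):

# Disable caching for just this run; the pipeline definition stays unchanged
my_pipeline.with_options(enable_cache=False)()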
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/control-caching-behavior
185
me will always be admin. Additional configuration
For additional configuration of the Airflow orchestrator, you can pass AirflowOrchestratorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.

Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.

Using different Airflow operators
Airflow operators specify how a step in your pipeline gets executed. As ZenML relies on Docker images to run pipeline steps, only operators that support executing a Docker image work in combination with ZenML. Airflow comes with two operators that support this:

the DockerOperator runs the Docker images for executing your pipeline steps on the same machine that your Airflow server is running on. For this to work, the server environment needs to have the apache-airflow-providers-docker package installed.
the KubernetesPodOperator runs the Docker image on a pod in the Kubernetes cluster that the Airflow server is deployed to. For this to work, the server environment needs to have the apache-airflow-providers-cncf-kubernetes package installed.

You can specify which operator to use and additional arguments to it as follows:

from zenml import pipeline, step
from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings

airflow_settings = AirflowOrchestratorSettings(
    operator="docker",  # or "kubernetes_pod"
    # Dictionary of arguments to pass to the operator __init__ method
    operator_args={},
)

# Using the operator for a single step
@step(settings={"orchestrator.airflow": airflow_settings})
def my_step(...):
    ...

# Using the operator for all steps in your pipeline
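The truncated comment above would continue with the pipeline-level variant, which follows ZenML's usual settings pattern (a short sketch, not copied from the original page):

@pipeline(settings={"orchestrator.airflow": airflow_settings})
def my_pipeline(...):
    ...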
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/airflow
404
Service Connectors guide The complete guide to managing Service Connectors and connecting ZenML to external resources. This documentation section contains everything that you need to use Service Connectors to connect ZenML to external resources. A lot of information is covered, so it might be useful to use the following guide to navigate it: if you're only getting started with Service Connectors, we suggest starting by familiarizing yourself with the terminology. check out the section on Service Connector Types to understand the different Service Connector implementations that are available and when to use them. jumping straight to the sections on Registering Service Connectors can get you set up quickly if you are only looking for a quick way to evaluate Service Connectors and their features. if all you need to do is connect a ZenML Stack Component to an external resource or service like a Kubernetes cluster, a Docker container registry, or an object storage bucket, and you already have some Service Connectors available, the section on connecting Stack Components to resources is all you need. In addition to this guide, there is an entire section dedicated to best security practices concerning the various authentication methods implemented by Service Connectors, such as which types of credentials to use in development or production and how to keep your security information safe. That section is particularly targeted at engineers with some knowledge of infrastructure, but it should be accessible to larger audiences. Terminology
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
276
━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┛ ``` ```shzenml service-connector list-resources --resource-type docker-registry ``` Example Command Output ```text The following 'docker-registry' resources can be accessed by service connectors configured in your workspace: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────────┼───────────────────┨ ┃ eeeabc13-9203-463b-aa52-216e629e903c β”‚ gcp-demo-multi β”‚ πŸ”΅ gcp β”‚ 🐳 docker-registry β”‚ gcr.io/zenml-core ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┛ ``` register and connect a GCS Artifact Store Stack Component to a GCS bucket:Copyzenml artifact-store register gcs-zenml-bucket-sl --flavor gcp --path=gs://zenml-bucket-sl Example Command Output ```text Running with active workspace: 'default' (global) Running with active stack: 'default' (global) Successfully registered artifact_store `gcs-zenml-bucket-sl`. ``` ```sh zenml artifact-store connect gcs-zenml-bucket-sl --connector gcp-demo-multi ``` Example Command Output ```text Running with active workspace: 'default' (global) Running with active stack: 'default' (global) Successfully connected artifact store `gcs-zenml-bucket-sl` to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────┼──────────────────────┨ ┃ eeeabc13-9203-463b-aa52-216e629e903c β”‚ gcp-demo-multi β”‚ πŸ”΅ gcp β”‚ πŸ“¦ gcs-bucket β”‚ gs://zenml-bucket-sl ┃
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
652
Which files are built into the image
ZenML determines the root directory of your source files in the following order:

If you've initialized zenml (zenml init), the repository root directory will be used.
Otherwise, the parent directory of the Python file you're executing will be the source root. For example, when running python /path/to/file.py, the source root would be /path/to.

You can specify how the files inside this root directory are handled using the source_files attribute on the DockerSettings:

The default behavior download_or_include: The files will be downloaded if they're inside a registered code repository and the repository has no local changes, otherwise, they will be included in the image.
If you want your files to be included in the image in any case, set the source_files attribute to include.
If you want your files to be downloaded in any case, set the source_files attribute to download. If this is specified, the files must be inside a registered code repository and the repository must have no local changes, otherwise the Docker build will fail.
If you want to prevent ZenML from copying or downloading any of your source files, you can do so by setting the source_files attribute on the Docker settings to ignore. This is an advanced feature and will most likely cause unintended and unanticipated behavior when running your pipelines. If you use this, make sure to copy all the necessary files to the correct paths yourself.

Which files get included
When including files in the image, ZenML copies all contents of the root directory into the Docker image. To exclude files and keep the image smaller, use a .dockerignore file in either of the following ways:

Have a file called .dockerignore in your source root directory.
Explicitly specify a .dockerignore file to use:

docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"})

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
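As an illustration of the source_files attribute described above, forcing files to always be downloaded from the code repository might look like this (a small sketch that only recombines options already named on this page):

from zenml import pipeline
from zenml.config import DockerSettings

# Always download source files from the registered code repository;
# the build fails if the repository has local changes.
docker_settings = DockerSettings(source_files="download")

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...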
how-to
https://docs.zenml.io/v/docs/how-to/customize-docker-builds/which-files-are-built-into-the-image
394
a ZenML secret:

zenml integration install openai
zenml secret create openai --api_key=<YOUR_API_KEY>

Then, you can use the hook in your pipeline:

from zenml.integrations.openai.hooks import openai_chatgpt_alerter_failure_hook
from zenml import step

@step(on_failure=openai_chatgpt_alerter_failure_hook)
def my_step(...):
    ...

If you had set up a Slack alerter as your alerter, for example, then you would see a message like this:

You can use the suggestions as input that can help you fix whatever is going wrong in your code. If you have GPT-4 enabled for your account, you can use the openai_gpt4_alerter_failure_hook hook instead (imported from the same module).
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/use-failure-success-hooks
184
use case. Automated evaluation using another LLM
Another way to evaluate the generation component is to use another LLM to grade the output of the LLM you're evaluating. This is a more sophisticated approach and requires a bit more setup. We can use the pre-generated questions and the associated context as input to the LLM and then use another LLM to assess the quality of the output on a scale of 1 to 5. This is a more quantitative approach and since it's automated it can run across a larger set of data.

LLMs don't always do well on this kind of evaluation where numbers are involved. There are some studies showing that LLMs can be biased towards certain numbers or ranges of numbers. This is something to keep in mind when using this approach. Qualitative evaluations are often more reliable but then that means a human has to do the evaluation.

We can start by setting up a Pydantic model to hold the data we need. We set constraints to ensure that the data we're getting back are only integers between 1 and 5, inclusive:

from pydantic import BaseModel, conint

class LLMJudgedTestResult(BaseModel):
    toxicity: conint(ge=1, le=5)
    faithfulness: conint(ge=1, le=5)
    helpfulness: conint(ge=1, le=5)
    relevance: conint(ge=1, le=5)

We can use this in a test function that:

takes a question and a context as inputs
generates an answer using the LLM we're evaluating
makes a call to an (optionally different) LLM we're using to judge the quality of the answer
gets back a score for each of the four categories in JSON format
parses the JSON and returns the result of the evaluation as our Pydantic model instance

Pydantic handles the validation of the JSON input for us, so we can be sure that we're getting the data we expect and in a form that we can use.

def llm_judged_test_e2e(
    question: str,
    context: str,
    n_items_retrieved: int = 5,
) -> LLMJudgedTestResult:
    """E2E tests judged by an LLM.

    Args:
        question (str): The question to ask.
        context (str): The context to use when answering.
        n_items_retrieved (int): The number of items to retrieve.

    Returns:
        LLMJudgedTestResult: The result of the test.
    """
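A minimal sketch of how the body of such a function might look is shown below. Everything here is illustrative: the OpenAI client usage, the JUDGE_PROMPT wording, the judge model name, and the generate_answer helper are assumptions, not copied from the original guide:

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = (
    "Score the following answer for toxicity, faithfulness, helpfulness and "
    "relevance, each on a scale of 1 to 5. Respond only with a JSON object "
    "containing those four keys.\n\n"
    "Question: {question}\nContext: {context}\nAnswer: {answer}"
)

def llm_judged_test_e2e(
    question: str,
    context: str,
    n_items_retrieved: int = 5,
) -> LLMJudgedTestResult:
    # generate_answer is a hypothetical helper wrapping the RAG pipeline under test
    answer = generate_answer(question, context, n_items_retrieved)
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder judge model
        messages=[
            {
                "role": "user",
                "content": JUDGE_PROMPT.format(
                    question=question, context=context, answer=answer
                ),
            }
        ],
    )
    # Pydantic validates that every score is an integer between 1 and 5
    return LLMJudgedTestResult(**json.loads(response.choices[0].message.content))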
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/generation
499
ication method with a configured IAM role instead.The connector needs to be configured with the IAM role to be assumed accompanied by an AWS secret key associated with an IAM user or an STS token associated with another IAM role. The IAM user or IAM role must have permission to assume the target IAM role. The connector will generate temporary STS tokens upon request by calling the AssumeRole STS API. The best practice implemented with this authentication scheme is to keep the set of permissions associated with the primary IAM user or IAM role down to the bare minimum and grant permissions to the privilege-bearing IAM role instead. An AWS region is required and the connector may only be used to access AWS resources in the specified region. One or more optional IAM session policies may also be configured to further restrict the permissions of the generated STS tokens. If not specified, IAM session policies are automatically configured for the generated STS tokens to restrict them to the minimum set of permissions required to access the target resource. Refer to the documentation for each supported Resource Type for the complete list of AWS permissions automatically granted to the generated STS tokens. The default expiration period for generated STS tokens is 1 hour with a minimum of 15 minutes up to the maximum session duration setting configured for the IAM role (default is 1 hour). If you need longer-lived tokens, you can configure the IAM role to use a higher maximum expiration value (up to 12 hours) or use the AWS Federation Token or AWS Session Token authentication methods. For more information on IAM roles and the AssumeRole AWS API, see the official AWS documentation on the subject. For more information about the difference between this method and the AWS Federation Token authentication method, consult this AWS documentation page. The following assumes the local AWS CLI has a zenml AWS CLI profile already configured with an AWS Secret Key and an IAM role to be assumed:
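For illustration, the underlying STS call that this authentication method performs looks roughly like the following boto3 sketch (the role ARN and session name are placeholders):

import boto3

sts = boto3.client("sts", region_name="us-east-1")  # region is a placeholder

# Exchange long-lived credentials for a temporary, scoped-down STS token
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/zenml-role",  # placeholder ARN
    RoleSessionName="zenml-session",
    DurationSeconds=3600,  # matches the default 1 hour expiration described above
)
temporary_credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken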
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
372
mmodate the following MLflow deployment scenarios:Scenario 1: This scenario requires that you use a local Artifact Store alongside the MLflow Experiment Tracker in your ZenML stack. The local Artifact Store comes with limitations regarding what other types of components you can use in the same stack. This scenario should only be used to run ZenML locally and is not suitable for collaborative and production settings. No parameters need to be supplied when configuring the MLflow Experiment Tracker, e.g: # Register the MLflow experiment tracker zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow # Register and set a stack with the new experiment tracker zenml stack register custom_stack -e mlflow_experiment_tracker ... --set Scenario 5: This scenario assumes that you have already deployed an MLflow Tracking Server enabled with proxied artifact storage access. There is no restriction regarding what other types of components it can be combined with. This option requires authentication-related parameters to be configured for the MLflow Experiment Tracker. Due to a critical severity vulnerability found in older versions of MLflow, we recommend using MLflow version 2.2.1 or higher. Databricks scenario: This scenario assumes that you have a Databricks workspace, and you want to use the managed MLflow Tracking server it provides. This option requires authentication-related parameters to be configured for the MLflow Experiment Tracker. Infrastructure Deployment The MLflow Experiment Tracker can be deployed directly from the ZenML CLI: # optionally assigning an existing bucket to the MLflow Experiment Tracker zenml experiment-tracker deploy mlflow_tracker --flavor=mlflow -x mlflow_bucket=gs://my_bucket --provider=<YOUR_PROVIDER>
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/mlflow
343
Build the pipeline without running
Pipeline builds are usually done implicitly when you run a pipeline on a Docker-based orchestrator, but you can also build a pipeline without running it:

from zenml import pipeline

@pipeline
def my_pipeline(...):
    ...

my_pipeline.build()

This will register the build output in the ZenML database and allow you to use the built images when running a pipeline later. You can see all pipeline builds with the command:

zenml pipeline builds list

To reuse a registered build when running a pipeline, pass it as an argument in Python:

my_pipeline = my_pipeline.with_options(build=<BUILD_ID>)

or when running a pipeline from the CLI:

zenml pipeline run <PIPELINE_NAME> --build=<BUILD_ID>
how-to
https://docs.zenml.io/how-to/customize-docker-builds/build-the-pipeline-without-running
185
un a CPU/GPU intensive task like training a model. The challenge comes from setting up and implementing proper authentication and authorization with the best security practices in mind, while at the same time keeping this complexity away from the day-to-day routines of coding and running pipelines.

The hard-to-swallow truth is there is no single standard that unifies all authentication and authorization-related matters or a single, well-defined set of security best practices that you can follow. However, with ZenML you get the next best thing, an abstraction that keeps the complexity of authentication and authorization away from your code and makes it easier to tackle them: the ZenML Service Connectors.

A representative use-case
The range of features covered by Service Connectors is extensive and going through the entire Service Connector Guide can be overwhelming. If all you want is to get a quick overview of how Service Connectors work and what they can do for you, this section is for you.

This is a representative example of how you would use a Service Connector to connect ZenML to a cloud service. This example uses the AWS Service Connector to connect ZenML to an AWS S3 bucket and then link an S3 Artifact Store Stack Component to it.

Some details about the current alternatives to using Service Connectors and their drawbacks are provided below. Feel free to skip them if you are already familiar with them or just want to get to the good part.

There are quicker alternatives to using a Service Connector to link an S3 Artifact Store to a private AWS S3 bucket. Let's lay them out first and then explain why using a Service Connector is the better option:

the authentication information can be embedded directly into the Stack Component, although this is not recommended for security reasons:

zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --key=AWS_ACCESS_KEY --secret=AWS_SECRET_KEY
how-to
https://docs.zenml.io/v/docs/how-to/auth-management
377
A remote container registry as part of your stack.In the remote case, the Airflow orchestrator works differently than other ZenML orchestrators. Executing a python file which runs a pipeline by calling pipeline.run() will not actually run the pipeline, but instead will create a .zip file containing an Airflow representation of your ZenML pipeline. In one additional step, you need to make sure this zip file ends up in the DAGs directory of your Airflow deployment. ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Airflow. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them. Scheduling You can schedule pipeline runs on Airflow similarly to other orchestrators. However, note that Airflow schedules always need to be set in the past, e.g.,: from datetime import datetime, timedelta from zenml.pipelines import Schedule scheduled_pipeline = fashion_mnist_pipeline.with_options( schedule=Schedule( start_time=datetime.now() - timedelta(hours=1), # start in the past end_time=datetime.now() + timedelta(hours=1), interval_second=timedelta(minutes=15), # run every 15 minutes catchup=False, scheduled_pipeline() Airflow UI Airflow comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps. For local Airflow, you can find the Airflow UI at http://localhost:8080 by default. If you cannot see the Airflow UI credentials in the console, you can find the password in <AIRFLOW_HOME>/standalone_admin_password.txt. AIRFLOW_HOME will usually be ~/airflow unless you've manually configured it with the AIRFLOW_HOME environment variable. You can always run airflow info to figure out the directory for the active environment. The username will always be admin. Additional configuration
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/airflow
406
ZenML SaaS Your one-stop MLOps control plane. One of the most straightforward paths to start with a deployed ZenML server is to use ZenML Pro. The ZenML Pro offering eliminates the need for you to dedicate time and resources to deploy and manage a ZenML server, allowing you to focus primarily on your MLOps workflows. If you're interested in assessing ZenML Pro, you can simply create a free account. Learn more about ZenML Pro on the ZenML Website. Key features ZenML Pro comes as a Software-as-a-Service (SaaS) platform that enhances the functionalities of the open-source ZenML product. It equips you with a centralized interface to seamlessly launch and manage ZenML server instances. While it remains rooted in the robust open-source offering, ZenML Pro offers extra features designed to optimize your machine learning workflow. Managed ZenML Server (Multi-tenancy) ZenML Pro simplifies your machine learning workflows, enabling you to deploy a managed instance of ZenML servers with just one click. This eradicates the need to handle infrastructure complexities, making the set-up and management of your machine learning pipelines a breeze. We handle all pertinent system updates and backups, thus ensuring your system stays current and robust, allowing you to zero in on your essential MLOps tasks. As a ZenML Pro user, you'll also have priority support, giving you the necessary aid to fully utilize the platform. Maximum data security
getting-started
https://docs.zenml.io/v/docs/getting-started/zenml-pro/zenml-cloud
294
Pigeon
Annotating data using Pigeon.
Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:

Text Classification
Image Classification
Text Captioning

When would you want to use it?
If you need to label a small to medium-sized dataset as part of your ML workflow and prefer the convenience of doing it directly within your Jupyter notebook, Pigeon is a great choice. It is particularly useful for:

Quick labeling tasks that don't require a full-fledged annotation platform
Iterative labeling during the exploratory phase of your ML project
Collaborative labeling within a Jupyter notebook environment

How to deploy it?
To use the Pigeon annotator, you first need to install the ZenML Pigeon integration:

zenml integration install pigeon

Next, register the Pigeon annotator with ZenML, specifying the output directory where the annotation files will be stored:

zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir"

Note that the output_dir is relative to the repository or notebook root.

Finally, add the Pigeon annotator to your stack and set it as the active stack:

zenml stack update <YOUR_STACK_NAME> --annotator pigeon

Now you're ready to use the Pigeon annotator in your ML workflow!

How do you use it?
With the Pigeon annotator registered and added to your active stack, you can easily access it using the ZenML client within your Jupyter notebook. For text classification tasks, you can launch the Pigeon annotator as follows:

from zenml.client import Client

annotator = Client().active_stack.annotator
annotations = annotator.launch(
    data=[
        'I love this movie',
        'I was really disappointed by the book',
    ],
    options=[
        'positive',
        'negative',
    ],
)

For image classification tasks, you can provide a custom display function to render the images:

from zenml.client import Client
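The truncated image example above would continue along these lines (a hedged sketch: the display_fn parameter name and the IPython-based rendering are assumptions modeled on Pigeon's own API, not copied from the original page):

from IPython.display import Image, display
from zenml.client import Client

annotator = Client().active_stack.annotator
annotations = annotator.launch(
    data=['/path/to/image1.png', '/path/to/image2.png'],  # placeholder paths
    options=['cat', 'dog'],
    display_fn=lambda filename: display(Image(filename)),  # render each image inline
)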
stack-components
https://docs.zenml.io/stack-components/annotators/pigeon
423
ld be accessible to larger audiences. TerminologyAs with any high-level abstraction, some terminology is needed to express the concepts and operations involved. In spite of the fact that Service Connectors cover such a large area of application as authentication and authorization for a variety of resources from a range of different vendors, we managed to keep this abstraction clean and simple. In the following expandable sections, you'll learn more about Service Connector Types, Resource Types, Resource Names, and Service Connectors. This term is used to represent and identify a particular Service Connector implementation and answer questions about its capabilities such as "what types of resources does this Service Connector give me access to", "what authentication methods does it support" and "what credentials and other information do I need to configure for it". This is analogous to the role Flavors play for Stack Components in that the Service Connector Type acts as the template from which one or more Service Connectors are created. For example, the built-in AWS Service Connector Type shipped with ZenML supports a rich variety of authentication methods and provides access to AWS resources such as S3 buckets, EKS clusters and ECR registries. The zenml service-connector list-types and zenml service-connector describe-type CLI commands can be used to explore the Service Connector Types available with your ZenML deployment. Extensive documentation is included covering supported authentication methods and Resource Types. The following are just some examples: zenml service-connector list-types Example Command Output ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓ ┃ NAME β”‚ TYPE β”‚ RESOURCE TYPES β”‚ AUTH METHODS β”‚ LOCAL β”‚ REMOTE ┃ ┠──────────────────────────────┼───────────────┼───────────────────────┼───────────────────┼───────┼────────┨
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
425
e found, including all files inside build context.Step 1/10 : FROM zenmldocker/zenml:0.40.0-py3.8 Step 2/10 : WORKDIR /app Step 3/10 : COPY .zenml_user_requirements . Step 4/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_user_requirements Step 5/10 : COPY .zenml_integration_requirements . Step 6/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_integration_requirements Step 7/10 : ENV ZENML_ENABLE_REPO_INIT_WARNINGS=False Step 8/10 : ENV ZENML_CONFIG_PATH=/app/.zenconfig Step 9/10 : COPY . . Step 10/10 : RUN chmod -R a+rw . Pushing Docker image demozenmlcontainerregistry.azurecr.io/zenml:simple_pipeline-orchestrator. Finished pushing Docker image. Finished building Docker image(s). Running pipeline simple_pipeline on stack gcp-demo (caching disabled) Waiting for Kubernetes orchestrator pod... Kubernetes orchestrator pod started. Waiting for pod of step simple_step_one to start... Step simple_step_one has started. INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded INFO:azure.identity.aio._internal.get_token_mixin:ClientSecretCredential.get_token succeeded Step simple_step_one has finished in 0.396s. Pod of step simple_step_one completed. Waiting for pod of step simple_step_two to start... Step simple_step_two has started. INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded INFO:azure.identity.aio._internal.get_token_mixin:ClientSecretCredential.get_token succeeded Hello World! Step simple_step_two has finished in 3.203s. Pod of step simple_step_two completed. Orchestration pod completed.
how-to
https://docs.zenml.io/how-to/auth-management/azure-service-connector
464
Troubleshoot stack components Learn how to troubleshoot Stack Components deployed with ZenML. There are two ways in which you can understand if something has gone wrong while deploying your stack or stack components. Error logs from the CLI The CLI will show any errors that the deployment runs into. Most of these would be coming from the underlying terraform library and could range from issues like resources with the same name existing in your cloud to a wrong naming scheme for some resource. Most of these are easy to fix and self-explanatory but feel free to ask any questions or doubts you may have to us on the ZenML Slack! πŸ™‹β€ Debugging errors with already deployed components Sometimes, an application might fail after an initial successful deployment. This section will cover steps on how to debug failures in such a case, for Kubernetes apps, since they form a majority of all tools deployed with the CLI. Other components include cloud-specific apps like Vertex AI, Sagemaker, S3 buckets, and more. Information on what has gone wrong with them would be best found on the web console for the respective clouds. Getting access to the Kubernetes Cluster The first step to figuring out the problem with a deployed Kubernetes app is to get access to the underlying cluster hosting it. When you deploy apps that require a cluster, ZenML creates a cluster for you and this is reused for all subsequent apps that need it. If you've used the zenml stack deploy flow to deploy your components, your local kubectl might already have access to the cluster. Check by running the following command: kubectl get nodes Stack Component Deploy Get the name of the deployed cluster. zenml stack recipe output eks-cluster-name Figure out the region that the cluster is deployed to. By default, the region is set to eu-west-1 , which you should use in the next step if you haven't supplied a custom value while creating the cluster.
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/troubleshoot-stack-components
389
⭐ Introduction Welcome to ZenML! ZenML is an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines. By decoupling infrastructure from code, ZenML enables developers across your organization to collaborate more effectively as they develop to production. ZenML enables MLOps infrastructure experts to define, deploy, and manage sophisticated production environments that are easy to share with colleagues. ZenML Pro: ZenML Pro provides a control plane that allows you to deploy a managed ZenML instance and get access to exciting new features such as CI/CD, Model Control Plane, and RBAC. Self-hosted deployment: ZenML can be deployed on any cloud provider and provides many Terraform-based utility functions to deploy other MLOps tools or even entire MLOps stacks: # Deploy ZenML to any cloud zenml deploy --provider aws # Deploy MLOps tools and infrastructure to any cloud zenml orchestrator deploy kfp --flavor kubeflow --provider gcp # Deploy entire MLOps stacks at once zenml stack deploy gcp-vertexai --provider gcp -o kubeflow ... Standardization: With ZenML, you can standardize MLOps infrastructure and tooling across your organization. Simply register your staging and production environments as ZenML stacks and invite your colleagues to run ML workflows on them. # Register MLOps tools and infrastructure zenml orchestrator register kfp_orchestrator -f kubeflow # Register your production environment zenml stack register production --orchestrator kubeflow ... # Make it available to your colleagues zenml stack share production Registering your environments as ZenML stacks also enables you to browse and explore them in a convenient user interface. Try it out at https://www.zenml.io/live-demo!
null
https://docs.zenml.io/
380
e the new flavor in the list of available flavors: zenml container-registry flavor list It is important to draw attention to when and how these base abstractions come into play in a ZenML workflow. The CustomContainerRegistryFlavor class is imported and utilized upon the creation of the custom flavor through the CLI. The CustomContainerRegistryConfig class is imported when someone tries to register/update a stack component with this custom flavor. In particular, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here. The CustomContainerRegistry only comes into play when the component is ultimately in use. The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomContainerRegistryFlavor and the CustomContainerRegistryConfig are implemented in a different module/path than the actual CustomContainerRegistry).
stack-components
https://docs.zenml.io/stack-components/container-registries/custom
231
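To make the interplay between these three classes concrete, here is a minimal sketch of a custom container registry flavor. This is illustrative only: the flavor name, the uri validator, and the my_module import path are assumptions, and the exact base-class import paths should be verified against the ZenML SDK docs for your version.

```python
from typing import Type

from pydantic import validator

from zenml.container_registries import (
    BaseContainerRegistry,
    BaseContainerRegistryConfig,
    BaseContainerRegistryFlavor,
)


class CustomContainerRegistryConfig(BaseContainerRegistryConfig):
    """Config validated when the stack component is registered."""

    @validator("uri")
    def _strip_trailing_slash(cls, value: str) -> str:
        # Custom pydantic validator: normalize the registry URI on registration.
        return value.rstrip("/")


class CustomContainerRegistryFlavor(BaseContainerRegistryFlavor):
    """Ties together the flavor name, config, and implementation."""

    @property
    def name(self) -> str:
        return "my_custom_registry"  # hypothetical flavor name

    @property
    def config_class(self) -> Type[CustomContainerRegistryConfig]:
        return CustomContainerRegistryConfig

    @property
    def implementation_class(self) -> Type[BaseContainerRegistry]:
        # Imported lazily so the flavor can be registered even when the
        # implementation's heavy dependencies are not installed locally.
        from my_module.registry import CustomContainerRegistry  # hypothetical

        return CustomContainerRegistry
```

Keeping the implementation import inside implementation_class is what makes the registration-without-dependencies behavior described above possible.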
ication method, expiration time, and credentials):zenml service-connector describe aws-iam-role --resource-type s3-bucket --resource-id zenfiles --client Example Command Output Service connector 'aws-iam-role (s3-bucket | s3://zenfiles client)' of type 'aws' with id '8e499202-57fd-478e-9d2f-323d76d8d211' is owned by user 'default' and is 'private'. 'aws-iam-role (s3-bucket | s3://zenfiles client)' aws Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ ID β”‚ 2b99de14-6241-4194-9608-b9d478e1bcfc ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ NAME β”‚ aws-iam-role (s3-bucket | s3://zenfiles client) ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ TYPE β”‚ πŸ”Ά aws ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ AUTH METHOD β”‚ sts-token ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ RESOURCE TYPES β”‚ πŸ“¦ s3-bucket ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ RESOURCE NAME β”‚ s3://zenfiles ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ SECRET ID β”‚ ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ SESSION DURATION β”‚ N/A ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ EXPIRES IN β”‚ 59m56s ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ OWNER β”‚ default ┃
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
508
sting cloud resources Existing Kubernetes cluster If you already have an existing cluster without an ingress controller, you can jump straight to the deploy command above to get going with the defaults. Please make sure that you have your local kubectl configured to talk to your cluster. Having an existing NGINX Ingress Controller The deploy command, by default, tries to create an NGINX ingress controller on your cluster. If you already have an existing controller, you can tell ZenML not to re-deploy it through the use of a config file. This file can be found in the Configuration File Templates towards the end of this guide. It offers a host of configuration options that you can leverage for advanced use cases. Check if an ingress controller is running on your cluster by running the following command. You should see an entry in the output with the hostname populated. # change the namespace to any other where # you might have the controller installed kubectl get svc -n ingress-nginx Set create_ingress_controller to false. Supply your controller's hostname to the ingress_controller_hostname variable. Note: The address should not have a trailing /. You can now run the deploy command and pass the config file above to it: zenml deploy --config=/PATH/TO/FILE Note: To be able to run the deploy command, you should have your cloud provider's CLI configured locally with permissions to create resources like MySQL databases and networks. Existing hosted SQL database If you also already have a database that you would want to use with the deployment, you can choose to configure it with the use of the config file. Here, we will demonstrate setting the database. Fill the fields below from the config file with values from your database. # The username and password for the database. database_username: database_password: # The URL of the database to use for the ZenML server. database_url: # The path to the SSL CA certificate to use for the database connection. database_ssl_ca:
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-zenml-cli
399
. Authentication Methods Implicit authentication Implicit authentication to AWS services using environment variables, local configuration files or IAM roles. This method may constitute a security risk, because it can give users access to the same cloud resources and services that the ZenML Server itself is configured to access. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment. This authentication method doesn't require any credentials to be explicitly configured. It automatically discovers and uses credentials from one of the following sources: environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_DEFAULT_REGION) local configuration files set up through the AWS CLI (~/.aws/credentials, ~/.aws/config) IAM roles for Amazon EC2, ECS, EKS, Lambda, etc. Only works when running the ZenML server on an AWS resource with an IAM role attached to it. This is the quickest and easiest way to authenticate to AWS services. However, the results depend on how ZenML is deployed and the environment where it is used and are thus not fully reproducible: when used with the default local ZenML deployment or a local ZenML server, the credentials are the same as those used by the AWS CLI or extracted from local environment variables when connected to a ZenML server, this method only works if the ZenML server is deployed in AWS and will use the IAM role attached to the AWS resource where the ZenML server is running (e.g. an EKS cluster). The IAM role permissions may need to be adjusted to allow listing and accessing/describing the AWS resources that the connector is configured to access.
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
356
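If you are unsure which credentials the implicit method would pick up in a given environment, you can ask AWS directly via boto3, which follows the same default credential resolution chain described above. This is a generic diagnostic sketch, not ZenML-specific code:

```python
import boto3

# boto3 resolves credentials the same way implicit authentication does:
# environment variables, then local AWS config files, then attached IAM roles.
session = boto3.Session()
credentials = session.get_credentials()
print(f"Credential source: {credentials.method}")
print(f"Resolved region: {session.region_name}")

# Confirm which identity these credentials actually belong to.
identity = boto3.client("sts").get_caller_identity()
print(f"Account: {identity['Account']}, ARN: {identity['Arn']}")
```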
object that can be used to access any AWS service. They support multiple authentication methods. Some of these allow clients direct access to long-lived, broad-access credentials and are only recommended for local development use. Others support distributing temporary API tokens automatically generated from long-lived credentials, which are safer for production use-cases, but may be more difficult to set up. A few authentication methods even support down-scoping the permissions of temporary API tokens so that they only allow access to the target resource and restrict access to everything else. This is covered at length in the section on best practices for authentication methods. There is flexibility regarding the range of resources that a single cloud provider Service Connector instance configured with a single set of credentials can be scoped to access: a multi-type Service Connector instance can access any type of resources from the range of supported Resource Types; a multi-instance Service Connector instance can access multiple resources of the same type; a single-instance Service Connector instance is scoped to access a single resource. The following output shows three different Service Connectors configured from the same GCP Service Connector Type using three different scopes but with the same credentials: a multi-type GCP Service Connector that allows access to every possible resource accessible with the configured credentials a multi-instance GCS Service Connector that allows access to multiple GCS buckets a single-instance GCS Service Connector that only permits access to one GCS bucket $ zenml service-connector list ┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓ ┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
424
rflow-providers-cncf-kubernetes package installed. You can specify which operator to use and additional arguments to it as follows: from zenml import pipeline, step from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings airflow_settings = AirflowOrchestratorSettings( operator="docker", # or "kubernetes_pod" # Dictionary of arguments to pass to the operator __init__ method operator_args={} ) # Using the operator for a single step @step(settings={"orchestrator.airflow": airflow_settings}) def my_step(...): ... # Using the operator for all steps in your pipeline @pipeline(settings={"orchestrator.airflow": airflow_settings}) def my_pipeline(...): ... Custom operators If you want to use any other operator to run your steps, you can specify the operator in your AirflowSettings as a path to the python operator class: from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings airflow_settings = AirflowOrchestratorSettings( # This could also be a reference to one of your custom classes. # e.g. `my_module.MyCustomOperatorClass` as long as the class # is importable in your Airflow server environment operator="airflow.providers.docker.operators.docker.DockerOperator", # Dictionary of arguments to pass to the operator __init__ method operator_args={} ) Custom DAG generator file To run a pipeline in Airflow, ZenML creates a Zip archive that contains two files: A JSON configuration file that the orchestrator creates. This file contains all the information required to create the Airflow DAG to run the pipeline. A Python file that reads this configuration file and actually creates the Airflow DAG. We call this file the DAG generator and you can find the implementation here. If you need to customize how the DAG is generated, we suggest starting by copying the original module and modifying it according to your needs. Check out our docs on how to apply settings to your pipelines here.
stack-components
https://docs.zenml.io/stack-components/orchestrators/airflow
426
Creating custom visualizations Creating your own visualizations. There are three ways you can add custom visualizations to the dashboard: If you are already handling HTML, Markdown, or CSV data in one of your steps, you can have them visualized in just a few lines of code by casting them to a special class inside your step. If you want to automatically extract visualizations for all artifacts of a certain data type, you can define type-specific visualization logic by building a custom materializer. If you want to create any other custom visualizations, you can create a custom return type class with a corresponding materializer and build and return this custom return type from one of your steps. Visualization via Special Return Types If you already have HTML, Markdown, or CSV data available as a string inside your step, you can simply cast them to one of the following types and return them from your step: zenml.types.HTMLString for strings in HTML format, e.g., "<h1>Header</h1>Some text", zenml.types.MarkdownString for strings in Markdown format, e.g., "# Header\nSome text", zenml.types.CSVString for strings in CSV format, e.g., "a,b,c\n1,2,3". Example: from zenml import step from zenml.types import CSVString @step def my_step() -> CSVString: some_csv = "a,b,c\n1,2,3" return CSVString(some_csv) This would create the following visualization in the dashboard: Visualization via Materializers If you want to automatically extract visualizations for all artifacts of a certain data type, you can do so by overriding the save_visualizations() method of the corresponding materializer. See the materializer docs page for more information on how to create custom materializers that do this. Visualization via Custom Return Type and Materializer By combining the ideas behind the above two visualization approaches, you can visualize virtually anything you want inside your ZenML dashboard in three simple steps: Create a custom class that will hold the data that you want to visualize.
how-to
https://docs.zenml.io/v/docs/how-to/visualize-artifacts/creating-custom-visualizations
416
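As a rough sketch of the materializer-based approach, overriding save_visualizations() might look like the following. The method signature and the VisualizationType import location should be double-checked against the materializer SDK docs for your ZenML version:

```python
import os
from typing import Any, Dict

from zenml.enums import VisualizationType
from zenml.materializers.base_materializer import BaseMaterializer


class MyVisualizingMaterializer(BaseMaterializer):
    # Hypothetical: visualize all artifacts of type dict.
    ASSOCIATED_TYPES = (dict,)

    def save_visualizations(self, data: Any) -> Dict[str, VisualizationType]:
        # Write an HTML file next to the artifact and register it as a
        # visualization; the dashboard will then render it automatically.
        viz_uri = os.path.join(self.uri, "visualization.html")
        with self.artifact_store.open(viz_uri, "w") as f:
            f.write(f"<h1>Artifact summary</h1><p>{len(data)} entries</p>")
        return {viz_uri: VisualizationType.HTML}
```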
added as your pipeline evolves in MLOps maturity. Writing custom component flavors You can take control of how ZenML behaves by creating your own components. This is done by writing custom component flavors. To learn more, head over to the general guide on writing component flavors, or read more specialized guides for specific component types (e.g. the custom orchestrator guide). Integrations Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by integrating with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas. For example, you can orchestrate your pipelines on Airflow or Kubeflow, track experiments using MLflow Tracking or Weights & Biases, and transition seamlessly from a local MLflow deployment to a deployed model on Kubernetes using Seldon Core. There are lots of moving parts for all the MLOps tooling and infrastructure you require for ML in production, and ZenML brings them all together and enables you to manage them in one place. This also allows you to delay the decision of which MLOps tool to use in your stack, as you have no vendor lock-in with ZenML and can easily switch out tools as soon as your requirements change. Available integrations We have a dedicated webpage that indexes all supported ZenML integrations and their categories. Another easy way of seeing a list of integrations is to see the list of directories in the integrations directory on our GitHub. Installing ZenML integrations Before you can use integrations, you first need to install them using zenml integration install, e.g., you can install Kubeflow, MLflow Tracking, and Seldon Core, using: zenml integration install kubeflow mlflow seldon -y Under the hood, this simply installs the preferred versions of all integrations using pip, i.e., it executes in a sub-process call:
stack-components
https://docs.zenml.io/stack-components/component-guide
420
gs://zenml-core.appspot.com ┃┃ β”‚ gs://zenml-core_cloudbuild ┃ ┃ β”‚ gs://zenml-datasets ┃ ┃ β”‚ gs://zenml-internal-artifact-store ┃ ┃ β”‚ gs://zenml-kubeflow-artifact-store ┃ ┃ β”‚ gs://zenml-project-time-series-bucket ┃ ┠───────────────────────┼─────────────────────────────────────────────────┨ ┃ πŸŒ€ kubernetes-cluster β”‚ zenml-test-cluster ┃ ┠───────────────────────┼─────────────────────────────────────────────────┨ ┃ 🐳 docker-registry β”‚ gcr.io/zenml-core ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ The GCP user account credentials were lifted up from the local host: zenml service-connector describe gcp-user-account Example Command Output Service connector 'gcp-user-account' of type 'gcp' with id 'ddbce93f-df14-4861-a8a4-99a80972f3bc' is owned by user 'default' and is 'private'. 'gcp-user-account' gcp Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ ID β”‚ ddbce93f-df14-4861-a8a4-99a80972f3bc ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ NAME β”‚ gcp-user-account ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ TYPE β”‚ πŸ”΅ gcp ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
507
ke actions) can happen here return profile.view() Visualizing whylogs Profiles You can view visualizations of the whylogs profiles generated by your pipeline steps directly in the ZenML dashboard by clicking on the respective artifact in the pipeline run DAG. Alternatively, if you are running inside a Jupyter notebook, you can load and render the whylogs profiles using the artifact.visualize() method, e.g.: from typing import Optional from zenml.client import Client def visualize_statistics( step_name: str, reference_step_name: Optional[str] = None ) -> None: """Helper function to visualize whylogs statistics from step artifacts. Args: step_name: step that generated and returned a whylogs profile reference_step_name: an optional second step that generated a whylogs profile to use for data drift visualization where two whylogs profiles are required. """ pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") whylogs_step = pipe.last_run.steps[step_name] whylogs_step.visualize() if __name__ == "__main__": visualize_statistics("data_loader") visualize_statistics("train_data_profiler", "test_data_profiler")
stack-components
https://docs.zenml.io/v/docs/stack-components/data-validators/whylogs
252
ace. Try it out at https://www.zenml.io/live-demo! No Vendor Lock-In: Since infrastructure is decoupled from code, ZenML gives you the freedom to switch to a different tooling stack whenever it suits you. By avoiding vendor lock-in, you have the flexibility to transition between cloud providers or services, ensuring that you receive the best performance and pricing available in the market at any time. zenml stack set gcp python run.py # Run your ML workflows in GCP zenml stack set aws python run.py # Now your ML workflow runs in AWS 🚀 Learn More Ready to deploy and manage your MLOps infrastructure with ZenML? Here is a collection of pages you can take a look at next: Set up and manage production-ready infrastructure with ZenML. Explore the existing infrastructure and tooling integrations of ZenML. Find answers to the most frequently asked questions. ZenML gives data scientists the freedom to fully focus on modeling and experimentation while writing code that is production-ready from the get-go. Develop Locally: ZenML allows you to develop ML models in any environment using your favorite tools. This means you can start developing locally, and simply switch to a production environment once you are satisfied with your results. python run.py # develop your code locally with all your favorite tools zenml stack set production python run.py # run on production infrastructure without any code changes Pythonic SDK: ZenML is designed to be as unintrusive as possible. Adding a ZenML @step or @pipeline decorator to your Python functions is enough to turn your existing code into ZenML pipelines: from zenml import pipeline, step @step def step_1() -> str: return "world" @step def step_2(input_one: str, input_two: str) -> None: combined_str = input_one + ' ' + input_two print(combined_str) @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) my_pipeline()
docs
https://docs.zenml.io/v/docs
437
β”‚ πŸ”Ά aws β”‚ πŸ“¦ s3-bucket β”‚ s3://zenfiles ┃┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ The ZenML CLI provides an even easier and more interactive way of connecting a stack component to an external resource. Just pass the -i command line argument and follow the interactive guide: zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles -i The S3 Artifact Store Stack Component we just connected to the infrastructure is now ready to be used in a stack to run a pipeline: zenml stack register s3-zenfiles -o default -a s3-zenfiles --set A simple pipeline could look like this: from zenml import step, pipeline @step def simple_step_one() -> str: """Simple step one.""" return "Hello World!" @step def simple_step_two(msg: str) -> None: """Simple step two.""" print(msg) @pipeline def simple_pipeline() -> None: """Define single step pipeline.""" message = simple_step_one() simple_step_two(msg=message) if __name__ == "__main__": simple_pipeline() Save this as run.py and run it with the following command: python run.py Example Command Output Registered pipeline simple_pipeline (version 1). Running pipeline simple_pipeline on stack s3-zenfiles (caching enabled) Step simple_step_one has started. Step simple_step_one has finished in 1.065s. Step simple_step_two has started. Hello World! Step simple_step_two has finished in 5.681s. Pipeline run simple_pipeline-2023_06_15-19_29_42_159831 has finished in 12.522s. Dashboard URL: http://127.0.0.1:8237/workspaces/default/pipelines/8267b0bc-9cbd-42ac-9b56-4d18275bdbb4/runs
how-to
https://docs.zenml.io/how-to/auth-management
471
crets Store back-end does not change. For example: updating the credentials used to authenticate with the Secrets Store back-end before or after they expire; switching to a different authentication method to authenticate with the same Secrets Store back-end (e.g. switching from an IAM account secret key to an IAM role in the AWS Secrets Manager). If you are a ZenML Cloud user, you can configure your cloud backend based on your deployment scenario.
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/secret-management
103
lete the resources when they are no longer needed? If we generate questions for all of our chunks, we can then use these question-chunk pairs to evaluate the retrieval component. We pass the generated query to the retrieval component and then we check if the URL for the original document is in the top-n results. To generate the synthetic queries we can use the following code: from typing import List from litellm import completion from structures import Document from zenml import step LOCAL_MODEL = "ollama/mixtral" def generate_question(chunk: str, local: bool = False) -> str: model = LOCAL_MODEL if local else "gpt-3.5-turbo" response = completion( model=model, messages=[ {"content": f"This is some text from ZenML's documentation. Please generate a question that can be asked about this text: `{chunk}`", "role": "user"}, ], api_base="http://localhost:11434" if local else None, ) return response.choices[0].message.content @step def generate_questions_from_chunks( docs_with_embeddings: List[Document], local: bool = False, ) -> List[Document]: for doc in docs_with_embeddings: doc.generated_questions = [generate_question(doc.page_content, local)] assert all(doc.generated_questions for doc in docs_with_embeddings) return docs_with_embeddings As you can see, we're using litellm again as the wrapper for the API calls. This allows us to switch between using a cloud LLM API (like OpenAI's GPT-3.5 or GPT-4) and a local LLM (like a quantized version of Mistral AI's Mixtral made available with Ollama). This has a number of advantages: you keep your costs down by using a local model; you can iterate faster by not having to wait for API calls; you can use the same code for both local and cloud models. For some tasks you'll want to use the best model your budget can afford, but for this task of question generation we're fine using a local and slightly less capable model. Even better, it'll be much faster to generate the questions, especially using the basic setup we have here.
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/retrieval
453
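With the generated questions in hand, the top-n check described above might be implemented roughly as follows. The retrieve callable is a hypothetical stand-in for your retrieval component, and we assume each Document carries the url of its source page:

```python
from typing import Callable, List

from structures import Document


def retrieval_hit_rate(
    docs: List[Document],
    retrieve: Callable[[str, int], List[str]],  # hypothetical: (query, n) -> URLs
    n: int = 5,
) -> float:
    """Fraction of generated questions whose source URL is in the top-n results."""
    hits, total = 0, 0
    for doc in docs:
        for question in doc.generated_questions:
            total += 1
            if doc.url in retrieve(question, n):
                hits += 1
    return hits / total if total else 0.0
```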
What can be configured Here is an example of a sample YAML file, with the most important configuration highlighted. For brevity, we have removed all possible keys. To view a sample file with all possible keys, refer to this page. # Build ID (i.e. which Docker image to use) build: dcd6fafb-c200-4e85-8328-428bef98d804 # Enable flags (boolean flags that control behavior) enable_artifact_metadata: True enable_artifact_visualization: False enable_cache: False enable_step_logs: True # Extra dictionary to pass in arbitrary values extra: any_param: 1 another_random_key: "some_string" # Specify the "ZenML Model" model: name: "classification_model" version: production audience: "Data scientists" description: "This classifies hotdogs and not hotdogs" ethics: "No ethical implications" license: "Apache 2.0" limitations: "Only works for hotdogs" tags: ["sklearn", "hotdog", "classification"] # Parameters of the pipeline parameters: dataset_name: "another_dataset" # Name of the run run_name: "my_great_run" # Schedule, if supported on the orchestrator schedule: catchup: true cron_expression: "* * * * *" # Real-time settings for Docker and resources settings: # Controls Docker building docker: apt_packages: ["curl"] copy_files: True dockerfile: "Dockerfile" dockerignore: ".dockerignore" environment: ZENML_LOGGING_VERBOSITY: DEBUG parent_image: "zenml-io/zenml-cuda" requirements: ["torch"] skip_build: False # Control resources for the entire pipeline resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" # Per step configuration steps: # Top-level key should be the name of the step invocation ID train_model: # Parameters of the step parameters: data_source: "best_dataset" # Step-only configuration experiment_tracker: "mlflow_production" step_operator: "vertex_gpu" outputs: {} failure_hook_source: {} success_hook_source: {} # Same as pipeline level configuration, if specified overrides for this step
how-to
https://docs.zenml.io/how-to/use-configuration-files/what-can-be-configured
475
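Assuming the YAML above is saved as config.yaml, it can be applied when the pipeline is run via with_options; a minimal sketch:

```python
from zenml import pipeline, step


@step
def train_model(data_source: str) -> None:
    print(f"Training on {data_source}")


@pipeline
def my_pipeline(dataset_name: str = "default_dataset"):
    train_model(data_source=dataset_name)


if __name__ == "__main__":
    # All keys from the YAML (cache flags, model, resources, per-step
    # overrides, ...) are merged into the run configuration before it starts.
    my_pipeline.with_options(config_path="config.yaml")()
```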
dentials for authentication to the Comet platform:api_key: Mandatory API key token of your Comet account. project_name: The name of the project where you're sending the new experiment. If the project is not specified, the experiment is put in the default project associated with your API key. workspace: Optional. The name of the workspace where your project is located. If not specified, the default workspace associated with your API key will be used. This option configures the credentials for the Comet platform directly as stack component attributes. This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration. # Register the Comet experiment tracker zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ --workspace=<workspace> --project_name=<project_name> --api_key=<key> # Register and set a stack with the new experiment tracker zenml stack register custom_stack -e comet_experiment_tracker ... --set This method requires you to configure a ZenML secret to store the Comet tracking service credentials securely. You can create the secret using the zenml secret create command: zenml secret create comet_secret \ --workspace=<WORKSPACE> \ --project_name=<PROJECT_NAME> \ --api_key=<API_KEY> Once the secret is created, you can use it to configure the Comet Experiment Tracker: # Reference the workspace, project, and api-key in our experiment tracker component zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} ... Read more about ZenML Secrets in the ZenML documentation. For more up-to-date information on the Comet Experiment Tracker implementation and its configuration, you can have a look at the SDK docs. How do you use it?
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/comet
381
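Once registered and added to the active stack, a step opts into the tracker by name. A rough sketch, assuming the component is named comet_experiment_tracker and that the Comet flavor exposes the underlying comet_ml experiment object via an experiment attribute (verify against the SDK docs):

```python
from zenml import step
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker


@step(experiment_tracker="comet_experiment_tracker")
def train_model() -> None:
    # Anything logged here shows up in the Comet UI for this pipeline run.
    experiment_tracker.experiment.log_parameter("lr", 0.001)
    experiment_tracker.experiment.log_metric("accuracy", 0.98)
```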
Linking model binaries/data to a Model Artifacts generated during pipeline runs can be linked to models in ZenML. This connecting of artifacts provides lineage tracking and transparency into what data and models are used during training, evaluation, and inference. There are a few ways to link artifacts: Configuring the Model at a pipeline level The easiest way is to configure the model parameter on the @pipeline decorator or @step decorator: from zenml import Model, pipeline model = Model( name="my_model", version="1.0.0", ) @pipeline(model=model) def my_pipeline(): ... This will automatically link all artifacts from this pipeline run to the specified model configuration. Controlling artifact types and linkage A ZenML model supports linking three types of artifacts: Data artifacts: These are the default artifacts. If nothing is specified, all artifacts are grouped under this category. Model artifacts: If there is a physical model artifact like a .pkl file or a model neural network weights file, it should be grouped in this category. Deployment artifacts: These artifacts relate to the endpoints and deployments of the models. You can also explicitly specify the linkage on a per-artifact basis by passing a special configuration to the Annotated output: from zenml import step, ArtifactConfig from typing import Tuple from typing_extensions import Annotated import pandas as pd from sklearn.base import ClassifierMixin @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[ # The ArtifactConfig here marks this as a Model Artifact Annotated[ClassifierMixin, ArtifactConfig("trained_model", is_model_artifact=True)], # The ArtifactConfig here marks this as a Deployment Artifact Annotated[str, ArtifactConfig("deployment_uri", is_deployment_artifact=True)], ]: ...
how-to
https://docs.zenml.io/how-to/use-the-model-control-plane/linking-model-binaries-data-to-models
373
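Once linked, the artifacts can be loaded back from the model by the names given in ArtifactConfig. A short sketch, assuming the Model helper methods get_model_artifact and get_deployment_artifact (check the Model SDK docs for your version):

```python
from zenml import Model

model = Model(name="my_model", version="1.0.0")

# Load linked artifacts back by the names used in the ArtifactConfig above.
trained_model = model.get_model_artifact("trained_model").load()
deployment_uri = model.get_deployment_artifact("deployment_uri").load()
```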
Use failure/success hooks Running failure and success hooks after step execution. Hooks are a way to perform an action after a step has completed execution. They can be useful in a variety of scenarios, such as sending notifications, logging, or cleaning up resources after a step has completed. A hook executes right after step execution, within the same environment as the step, and therefore has access to all the dependencies that a step has. Currently, there are two types of hooks that can be defined: on_failure and on_success. on_failure: This hook triggers in the event of a step failing. on_success: This hook triggers in the event of a step succeeding. Defining hooks A hook can be defined as a callback function, and must be accessible within the repository where the pipeline and steps are located. In the case of failure hooks, you can optionally add a BaseException argument to the hook, allowing you to access the concrete Exception that caused your step to fail: from zenml import step def on_failure(exception: BaseException): print(f"Step failed: {str(exception)}") def on_success(): print("Step succeeded!") @step(on_failure=on_failure) def my_failing_step() -> int: """Returns an integer.""" raise ValueError("Error") @step(on_success=on_success) def my_successful_step() -> int: """Returns an integer.""" return 1 A hook can also be specified as a local user-defined function path (of the form mymodule.myfile.my_function). This is particularly useful when defining the hooks via a YAML config. Defining hooks on a pipeline level In some cases, there is a need to define a hook on all steps of a given pipeline. Rather than having to define it on all steps individually, you can also specify any hook on the pipeline level. @pipeline(on_failure=on_failure, on_success=on_success) def my_pipeline(...): ... Note that step-level defined hooks take precedence over pipeline-level defined hooks.
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/use-failure-success-hooks
414
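Because hooks run in the same environment as the step, they can do anything your step code can, for example calling out to a notification service. A minimal sketch using only the standard library; the webhook URL is a hypothetical placeholder:

```python
import json
import urllib.request

from zenml import step

WEBHOOK_URL = "https://hooks.example.com/notify"  # hypothetical endpoint


def notify_on_failure(exception: BaseException) -> None:
    """Failure hook that posts the error message to a webhook."""
    payload = json.dumps({"text": f"Step failed: {exception}"}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)


@step(on_failure=notify_on_failure)
def risky_step() -> None:
    raise ValueError("Something went wrong")
```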
━━━━━━━━━━━━━━━━━━━━━━┛ Local client provisioning The local AWS CLI, Kubernetes kubectl CLI and the Docker CLI can be configured with credentials extracted from or generated by a compatible AWS Service Connector. Please note that, unlike the configuration made possible through the AWS CLI, the Kubernetes and Docker credentials issued by the AWS Service Connector have a short lifetime and will need to be regularly refreshed. This is a byproduct of implementing a high-security profile. Configuring the local AWS CLI with credentials issued by the AWS Service Connector results in a local AWS CLI configuration profile being created with the name inferred from the first digits of the Service Connector UUID, in the form zenml-<uuid[:8]>. For example, a Service Connector with UUID 9f3139fd-4726-421a-bc07-312d83f0c89e will result in a local AWS CLI configuration profile named zenml-9f3139fd. The following shows an example of configuring the local Kubernetes CLI to access an EKS cluster reachable through an AWS Service Connector: zenml service-connector list --name aws-session-token Example Command Output ┏━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━┓ ┃ ACTIVE │ NAME │ ID │ TYPE │ RESOURCE TYPES │ RESOURCE NAME │ SHARED │ OWNER │ EXPIRES IN │ LABELS ┃ ┠────────┼───────────────────┼──────────────────────────────────────┼────────┼───────────────────────┼───────────────┼────────┼─────────┼────────────┼────────┨ ┃ │ aws-session-token │ c0f8e857-47f9-418b-a60f-c3b03023da54 │ 🔶 aws │ 🔶 aws-generic │ <multiple> │ ➖ │ default │ │ ┃ ┃ │ │ │ │ 📦 s3-bucket │ │ │ │ │ ┃
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
506
Return multiple outputs from a step Use Annotated to return multiple outputs from a step and name them for easy retrieval and dashboard display. You can use the Annotated type to return multiple outputs from a step and give each output a name. Naming your step outputs will help you retrieve the specific artifact later and also improves the readability of your pipeline's dashboard. from typing import Annotated, Tuple import pandas as pd from zenml import step @step def clean_data( data: pd.DataFrame, ) -> Tuple[ Annotated[pd.DataFrame, "x_train"], Annotated[pd.DataFrame, "x_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: from sklearn.model_selection import train_test_split x = data.drop("target", axis=1) y = data["target"] x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42) return x_train, x_test, y_train, y_test Inside the step, we split the input data into features (x) and target (y), and then use train_test_split from scikit-learn to split the data into training and testing sets. The resulting DataFrames and Series are returned as a tuple, with each element annotated with its respective name. By using Annotated, we can easily identify and retrieve specific artifacts later in the pipeline. Additionally, the names will be displayed on the pipeline's dashboard, making it more readable and understandable.
how-to
https://docs.zenml.io/how-to/handle-data-artifacts/return-multiple-outputs-from-a-step
339
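The names given via Annotated can then be used to fetch a specific artifact after the run. A short sketch, assuming the pipeline is registered as my_pipeline (the exact client API may differ slightly between ZenML versions):

```python
from zenml.client import Client

# Fetch the latest run of the pipeline and load one named output.
run = Client().get_pipeline("my_pipeline").last_run
x_train = run.steps["clean_data"].outputs["x_train"].load()
print(x_train.shape)
```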
Fetch metadata within steps Accessing meta information in real-time within your pipeline. Using the StepContext To find information about the pipeline or step that is currently running, you can use the zenml.get_step_context() function to access the StepContext of your step: from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() pipeline_name = step_context.pipeline.name run_name = step_context.pipeline_run.name step_name = step_context.step_run.name Furthermore, you can also use the StepContext to find out where the outputs of your current step will be stored and which Materializer class will be used to save them: @step def my_step(): step_context = get_step_context() # Get the URI where the output will be saved. uri = step_context.get_output_artifact_uri() # Get the materializer that will be used to save the output. materializer = step_context.get_output_materializer() See the SDK Docs for more information on which attributes and methods the StepContext provides.
how-to
https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/fetch-metadata-within-steps
232
s[0].hostname}') Now register the model deployer: Note: If you chose to configure your own custom credentials to authenticate to the persistent storage service where models are stored, as covered in the Advanced: Configuring a Custom Seldon Core Secret section, you will need to specify a ZenML secret reference when you configure the Seldon Core model deployer below: zenml model-deployer register seldon_deployer --flavor=seldon \ --kubernetes_context=<KUBERNETES-CONTEXT> \ --kubernetes_namespace=<KUBERNETES-NAMESPACE> \ --base_url=http://$INGRESS_HOST \ --secret=<zenml-secret-name> # Register the Seldon Core Model Deployer zenml model-deployer register seldon_deployer --flavor=seldon \ --kubernetes_context=<KUBERNETES-CONTEXT> \ --kubernetes_namespace=<KUBERNETES-NAMESPACE> \ --base_url=http://$INGRESS_HOST \ We can now use the model deployer in our stack: zenml stack update seldon_stack --model-deployer=seldon_deployer See the seldon_model_deployer_step for an example of using the Seldon Core Model Deployer to deploy a model inside a ZenML pipeline step. Configuration Within the SeldonDeploymentConfig you can configure: model_name: the name of the model in the Seldon cluster and in ZenML. replicas: the number of replicas with which to deploy the model. implementation: the type of Seldon inference server to use for the model. The implementation type can be one of the following: TENSORFLOW_SERVER, SKLEARN_SERVER, XGBOOST_SERVER, custom. parameters: an optional list of parameters (SeldonDeploymentPredictorParameter) to pass to the deployment predictor, each given as a name, type, and value. resources: the resources to be allocated to the model. This can be configured by passing a SeldonResourceRequirements object with the requests and limits properties. The values for these properties can be a dictionary with the cpu and memory keys. The values for these keys can be a string with the amount of CPU and memory to be allocated to the model.
stack-components
https://docs.zenml.io/stack-components/model-deployers/seldon
447
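Putting these options together, a deployment pipeline might look like the sketch below. The import paths and the builtin seldon_model_deployer_step come from the Seldon integration, but you should verify the exact names and step signature against the SDK docs for your version:

```python
from sklearn.base import ClassifierMixin
from sklearn.dummy import DummyClassifier

from zenml import pipeline, step
from zenml.integrations.seldon.services import SeldonDeploymentConfig
from zenml.integrations.seldon.steps import seldon_model_deployer_step

seldon_config = SeldonDeploymentConfig(
    model_name="my-sklearn-model",
    replicas=1,
    implementation="SKLEARN_SERVER",
)


@step
def trainer() -> ClassifierMixin:
    # Placeholder model so the sketch is self-contained.
    return DummyClassifier(strategy="most_frequent").fit([[0], [1]], [0, 1])


@pipeline
def deployment_pipeline():
    model = trainer()
    seldon_model_deployer_step(model=model, service_config=seldon_config)
```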
llowing credentials for authentication to Neptune: api_token: API key token of your Neptune account. You can create a free Neptune account here. If left blank, Neptune will attempt to retrieve the token from your environment variables. project: The name of the project where you're sending the new run, in the form "workspace-name/project-name". If the project is not specified, Neptune will attempt to retrieve it from your environment variables. This option configures the credentials for neptune.ai directly as stack component attributes. This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration. # Register the Neptune experiment tracker zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \ --project=<project_name> --api_token=<token> # Register and set a stack with the new experiment tracker zenml stack register custom_stack -e neptune_experiment_tracker ... --set This method requires you to configure a ZenML secret to store the Neptune tracking service credentials securely. You can create the secret using the zenml secret create command: zenml secret create neptune_secret \ --project=<PROJECT> --api_token=<API_TOKEN> Once the secret is created, you can use it to configure the Neptune Experiment Tracker: # Reference the project and api-token in our experiment tracker component zenml experiment-tracker register neptune_experiment_tracker \ --flavor=neptune \ --project={{neptune_secret.project}} \ --api_token={{neptune_secret.api_token}} ... Read more about ZenML Secrets in the ZenML documentation. For more up-to-date information on the Neptune Experiment Tracker implementation and its configuration, you can have a look at the SDK docs. How do you use it?
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/neptune
356
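After the tracker is registered and part of your active stack, steps can log to Neptune directly. A minimal sketch, assuming the component name neptune_experiment_tracker and the get_neptune_run helper from the integration (verify against the SDK docs):

```python
from zenml import step
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run


@step(experiment_tracker="neptune_experiment_tracker")
def train_model() -> None:
    neptune_run = get_neptune_run()
    # Log parameters and metrics under arbitrary namespaces.
    neptune_run["params/learning_rate"] = 0.001
    neptune_run["metrics/accuracy"] = 0.98
```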
registry β”‚ demozenmlcontainerregistry.azurecr.io ┃┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ zenml service-connector login azure-service-principal --resource-type docker-registry --resource-id demozenmlcontainerregistry.azurecr.io Example Command Output β Ή Attempting to configure local client using service connector 'azure-service-principal'... WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store The 'azure-service-principal' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK. The local Docker CLI can now be used to interact with the container registry: docker push demozenmlcontainerregistry.azurecr.io/zenml:example_pipeline Example Command Output The push refers to repository [demozenmlcontainerregistry.azurecr.io/zenml] d4aef4f5ed86: Pushed 2d69a4ce1784: Pushed 204066eca765: Pushed 2da74ab7b0c1: Pushed 75c35abda1d1: Layer already exists 415ff8f0f676: Layer already exists c14cb5b1ec91: Layer already exists a1d005f5264e: Layer already exists 3a3fd880aca3: Layer already exists 149a9c50e18e: Layer already exists 1f6d3424b922: Layer already exists 8402c959ae6f: Layer already exists 419599cb5288: Layer already exists 8553b91047da: Layer already exists connectors: digest: sha256:a4cfb18a5cef5b2201759a42dd9fe8eb2f833b788e9d8a6ebde194765b42fe46 size: 3256 It is also possible to update the local Azure CLI configuration with credentials extracted from the Azure Service Connector: zenml service-connector login azure-service-principal --resource-type azure-generic Example Command Output Updated the local Azure CLI configuration with the connector's service principal credentials. The 'azure-service-principal' Azure Service Connector connector was used to successfully configure the local Generic Azure resource client/SDK.
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector
525
_init__(self, name: str): self.name = name @step def my_first_step() -> MyObj: """Step that returns an object of type MyObj.""" return MyObj("my_object") @step def my_second_step(my_obj: MyObj) -> None: """Step that logs the input object and returns nothing.""" logging.info( f"The following object was passed to this step: `{my_obj.name}`" ) @pipeline def first_pipeline(): output_1 = my_first_step() my_second_step(output_1) first_pipeline() Running the above without a custom materializer will work but print the following warning: No materializer is registered for type MyObj, so the default Pickle materializer was used. Pickle is not production ready and should only be used for prototyping as the artifacts cannot be loaded when running with a different Python version. Please consider implementing a custom materializer for type MyObj. To get rid of this warning and make our pipeline more robust, we will subclass the BaseMaterializer class, listing MyObj in ASSOCIATED_TYPES, and overwriting load() and save(): import os from typing import Type from zenml.enums import ArtifactType from zenml.materializers.base_materializer import BaseMaterializer class MyMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MyObj,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[MyObj]) -> MyObj: """Read from artifact store.""" with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f: name = f.read() return MyObj(name=name) def save(self, my_obj: MyObj) -> None: """Write to artifact store.""" with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f: f.write(my_obj.name) Pro-tip: Use the self.artifact_store property to ensure your materialization logic works across artifact stores (local and remote like S3 buckets). Now, ZenML can use this materializer to handle the outputs and inputs of your custom objects. Edit the pipeline as follows to see this in action: my_first_step.configure(output_materializers=MyMaterializer) first_pipeline()
how-to
https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types
464
onSageMakerFullAccess managed policy permissions).If using a remote orchestrator: the remote environment in which the orchestrator runs needs to be able to implicitly authenticate to AWS and assume the IAM role specified when registering the SageMaker step operator. This is only possible if the orchestrator is also running in AWS and uses a form of implicit workload authentication like the IAM role of an EC2 instance. If this is not the case, you will need to use a service connector. zenml step-operator register <NAME> \ --flavor=sagemaker \ --role=<SAGEMAKER_ROLE> \ --instance_type=<INSTANCE_TYPE> \ # --experiment_name=<EXPERIMENT_NAME> # optionally specify an experiment to assign this run to zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set python run.py # Authenticates with `default` profile in `~/.aws/config` Once you added the step operator to your active stack, you can use it to execute individual steps of your pipeline by specifying it in the @step decorator as follows: from zenml import step @step(step_operator= <NAME>) def trainer(...) -> ...: """Train a model.""" # This step will be executed in SageMaker. ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your steps in SageMaker. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them. Additional configuration For additional configuration of the SageMaker step operator, you can pass SagemakerStepOperatorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings. For more information and a full list of configurable attributes of the SageMaker step operator, check out the SDK Docs . Enabling CUDA for GPU-backed hardware
stack-components
https://docs.zenml.io/stack-components/step-operators/sagemaker
403
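As a sketch of what passing such settings looks like at the step level (the estimator_args field is an assumption here; consult the SagemakerStepOperatorSettings SDK docs for the fields available in your version):

```python
from zenml import step
from zenml.integrations.aws.flavors.sagemaker_step_operator_flavor import (
    SagemakerStepOperatorSettings,
)

sagemaker_settings = SagemakerStepOperatorSettings(
    # Assumption: extra keyword arguments forwarded to the SageMaker Estimator.
    estimator_args={"volume_size": 30, "max_run": 3600},
)


@step(
    step_operator="sagemaker",  # hypothetical: the name used at registration
    settings={"step_operator.sagemaker": sagemaker_settings},
)
def trainer() -> None:
    """This step runs on SageMaker with the settings above."""
```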
Migration guide 0.39.1 → 0.41.0 How to migrate your ZenML pipelines and steps from version <=0.39.1 to 0.41.0. ZenML versions 0.40.0 to 0.41.0 introduced a new and more flexible syntax to define ZenML steps and pipelines. This page contains code samples that show you how to upgrade your steps and pipelines to the new syntax. Newer versions of ZenML still work with pipelines and steps defined using the old syntax, but the old syntax is deprecated and will be removed in the future. Overview from typing import Optional from zenml.steps import BaseParameters, Output, StepContext, step from zenml.pipelines import Schedule, pipeline # Define a Step class MyStepParameters(BaseParameters): param_1: int param_2: Optional[float] = None @step def my_step( params: MyStepParameters, context: StepContext, ) -> Output(int_output=int, str_output=str): result = int(params.param_1 * (params.param_2 or 1)) result_uri = context.get_output_artifact_uri() return result, result_uri # Run the Step separately my_step.entrypoint() # Define a Pipeline @pipeline def my_pipeline(my_step): my_step() step_instance = my_step(params=MyStepParameters(param_1=17)) pipeline_instance = my_pipeline(my_step=step_instance) # Configure and run the Pipeline pipeline_instance.configure(enable_cache=False) schedule = Schedule(...) pipeline_instance.run(schedule=schedule) # Fetch the Pipeline Run last_run = pipeline_instance.get_runs()[0] int_output = last_run.get_step("my_step").outputs["int_output"].read() from typing import Annotated, Optional, Tuple from zenml import get_step_context, pipeline, step from zenml.client import Client # Define a Step @step def my_step( param_1: int, param_2: Optional[float] = None ) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: result = int(param_1 * (param_2 or 1)) result_uri = get_step_context().get_output_artifact_uri() return result, result_uri # Run the Step separately my_step() # Define a Pipeline @pipeline
reference
https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-forty
487
Deploy with Helm Deploying ZenML in a Kubernetes cluster with Helm. If you wish to manually deploy and manage ZenML in a Kubernetes cluster of your choice, ZenML also includes a Helm chart among its available deployment options. You can find the chart on this ArtifactHub repository, along with the templates, default values and instructions on how to install it. Read on to find detailed explanations on prerequisites, configuration, and deployment scenarios. Prerequisites You'll need the following: a Kubernetes cluster; optional, but recommended: a MySQL-compatible database reachable from the Kubernetes cluster (e.g. one of the managed databases offered by Google Cloud, AWS, or Azure); a MySQL server version of 8.0 or higher is required; the Kubernetes client already installed on your machine and configured to access your cluster; Helm installed on your machine; optional: an external secrets management service (e.g. one of the managed secrets management services offered by Google Cloud, AWS, Azure, or HashiCorp Vault). By default, ZenML stores secrets inside the SQL database that it's connected to, but you also have the option of using an external cloud secrets management service if you already happen to use one of those cloud or service providers. ZenML Helm Configuration You can start by taking a look at the values.yaml file and familiarize yourself with some of the configuration settings that you can customize for your ZenML deployment. In addition to tools and infrastructure, you will also need to collect and prepare information related to your database and information related to your external secrets management service to be used for the Helm chart configuration, and you may also want to install additional optional services in your cluster. When you are ready, you can proceed to the installation section. Collect information from your SQL database service
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm
355
┃┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ EXPIRES IN β”‚ N/A ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ OWNER β”‚ default ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ WORKSPACE β”‚ default ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ SHARED β”‚ βž– ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ CREATED_AT β”‚ 2023-06-19 19:36:28.619751 ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ UPDATED_AT β”‚ 2023-06-19 19:36:28.619753 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠───────────────────────┼───────────┨ ┃ region β”‚ us-east-1 ┃ ┠───────────────────────┼───────────┨ ┃ aws_access_key_id β”‚ [HIDDEN] ┃ ┠───────────────────────┼───────────┨ ┃ aws_secret_access_key β”‚ [HIDDEN] ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┛ However, clients receive temporary STS tokens instead of the AWS Secret Key configured in the connector (note the authentication method, expiration time, and credentials): zenml service-connector describe aws-federation-token --resource-type s3-bucket --resource-id zenfiles --client Example Command Output
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
483
Evidently How to keep your data quality in check and guard against data and model drift with Evidently profiling The Evidently Data Validator flavor provided with the ZenML integration uses Evidently to perform data quality, data drift, model drift and model performance analyses, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation. When would you want to use it? Evidently is an open-source library that you can use to monitor and debug machine learning models by analyzing the data that they use through a powerful set of data profiling and visualization features, or to run a variety of data and model validation reports and tests, from data integrity tests that work with a single dataset to model evaluation tests to data drift analysis and model performance comparison tests. All this can be done with minimal configuration input from the user, or customized with specialized conditions that the validation tests should perform. Evidently currently works with tabular data in pandas.DataFrame or CSV file formats and can handle both regression and classification tasks. You should use the Evidently Data Validator when you need the following data and/or model validation features that are possible with Evidently: Data Quality reports and tests: provides detailed feature statistics and a feature behavior overview for a single dataset. It can also compare any two datasets, e.g. you can use it to compare train and test data, reference and current data, or two subgroups of one dataset. Data Drift reports and tests: helps detect and explore feature distribution changes in the input data by comparing two datasets with identical schema.
stack-components
https://docs.zenml.io/v/docs/stack-components/data-validators/evidently
336
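To make this concrete, here is a minimal, hedged sketch of wiring Evidently into a ZenML stack and then producing a data drift report with the Evidently library directly. The component name evidently_data_validator and the DataFrames ref_df / cur_df are illustrative assumptions, and the Report / DataDriftPreset API reflects recent Evidently releases, which may differ from the version pinned by your ZenML integration:

zenml integration install evidently -y
zenml data-validator register evidently_data_validator --flavor=evidently
zenml stack update -dv evidently_data_validator

# Minimal drift-check sketch using the Evidently library itself
# (illustrative; class names follow recent Evidently releases):
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

ref_df = pd.DataFrame({"feature": [0.1, 0.2, 0.3, 0.4]})  # reference data (placeholder)
cur_df = pd.DataFrame({"feature": [0.5, 0.6, 0.7, 0.8]})  # current data (placeholder)

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=ref_df, current_data=cur_df)
report.save_html("data_drift_report.html")  # interactive HTML for visual inspection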
added as your pipeline evolves in MLOps maturity.

Writing custom component flavors

You can take control of how ZenML behaves by creating your own components. This is done by writing custom component flavors. To learn more, head over to the general guide on writing component flavors, or read more specialized guides for specific component types (e.g. the custom orchestrator guide).

Integrations

Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by integrating with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas. For example, you can orchestrate your pipelines with Airflow or Kubeflow, track experiments using MLflow Tracking or Weights & Biases, and transition seamlessly from a local MLflow deployment to a deployed model on Kubernetes using Seldon Core.

There are lots of moving parts in all the MLOps tooling and infrastructure you require for ML in production, and ZenML brings them all together, enabling you to manage them in one place. This also allows you to delay the decision of which MLOps tool to use in your stack: with ZenML there is no vendor lock-in, so you can easily switch out tools as soon as your requirements change.

Available integrations

We have a dedicated webpage that indexes all supported ZenML integrations and their categories. Another easy way of seeing a list of integrations is to browse the list of directories in the integrations directory on our GitHub.

Installing ZenML integrations

Before you can use integrations, you first need to install them using zenml integration install, e.g., you can install Kubeflow, MLflow Tracking, and Seldon Core using:

zenml integration install kubeflow mlflow seldon -y

Under the hood, this simply installs the preferred versions of all integrations using pip, i.e., it executes a pip install in a sub-process call (a rough sketch of this follows below):
stack-components
https://docs.zenml.io/v/docs/stack-components/component-guide
420
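As a rough illustration of that sub-process call (a sketch under assumptions, not the actual ZenML source: the exact pinned requirements come from each integration's definition and are replaced with placeholder package names here), the behavior is approximately:

# Roughly what `zenml integration install kubeflow mlflow seldon -y` does:
# gather each integration's preferred requirements, then hand them to pip.
import subprocess
import sys

requirements = ["kfp", "mlflow", "seldon-core"]  # placeholders; ZenML pins exact versions
subprocess.check_call([sys.executable, "-m", "pip", "install", *requirements])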
Neptune

Logging and visualizing experiments with neptune.ai

The Neptune Experiment Tracker is an Experiment Tracker flavor provided with the Neptune-ZenML integration that uses neptune.ai to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).

When would you want to use it?

Neptune is a popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results, or as a model registry for your production-ready models. Neptune can also track and visualize the results produced by your automated pipeline runs as you make the transition towards a more production-oriented workflow.

You should use the Neptune Experiment Tracker:

if you have already been using neptune.ai to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML,
if you are looking for a more visually interactive way of navigating the results produced by your ZenML pipeline runs (e.g. models, metrics, datasets),
if you would like to connect ZenML to neptune.ai to share the artifacts and metrics logged by your pipelines with your team, organization, or external stakeholders.

You should consider one of the other Experiment Tracker flavors if you have never worked with neptune.ai before and would rather use another experiment tracking tool that you are more familiar with.

How do you deploy it?

The Neptune Experiment Tracker flavor is provided by the Neptune-ZenML integration. You need to install it on your local machine to be able to register the Neptune Experiment Tracker and add it to your stack:

zenml integration install neptune -y

The Neptune Experiment Tracker needs to be configured with the credentials required to connect to Neptune using an API token.

Authentication Methods

You need to configure the following credentials for authentication to Neptune (a hedged registration sketch follows below):
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/neptune
361
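As a minimal sketch of that configuration (assuming the credential parameters are named api_token and project, and using a hypothetical tracker name neptune_experiment_tracker), registering the tracker and adding it to a stack could look like this:

zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \
    --project=<PROJECT_NAME> --api_token=<API_TOKEN>
zenml stack register neptune_stack -e neptune_experiment_tracker ... --set

Inside a step, you would then enable the tracker via the decorator and log to the active Neptune run. The import path below is an assumption based on the integration's module layout and should be checked against the SDK docs:

from zenml import step
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run  # assumed import path

@step(experiment_tracker="neptune_experiment_tracker")
def train_model() -> None:
    neptune_run = get_neptune_run()  # handle to the Neptune run created for this step
    neptune_run["params/learning_rate"] = 0.001  # log a parameter to Neptune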