page_content | parent_section | url | token_count
stringlengths 74–2.86k | stringclasses 7 values | stringlengths 21–129 | int64 17–755
---|---|---|---|
ial to gauging the effectiveness of our retrieval.
Retrieval is only half the story. The true test of our system is the quality of the final answers it generates by combining retrieved content with LLM intelligence. In the next section, we'll dive into a parallel evaluation process for the generation component, exploring both automated metrics and human assessment to get a well-rounded picture of our RAG pipeline's end-to-end performance. By shining a light on both halves of the RAG architecture, we'll be well-equipped to iterate and optimize our way to an ever more capable and reliable question-answering system.
Code Example
To explore the full code, visit the Complete Guide repository; for this section, see in particular the eval_retrieval.py file.
| user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/retrieval | 170 |
should be accessible to larger audiences.
Terminology
As with any high-level abstraction, some terminology is needed to express the concepts and operations involved. Although Service Connectors cover an area as large as authentication and authorization for a variety of resources from a range of different vendors, we managed to keep this abstraction clean and simple. In the following expandable sections, you'll learn more about Service Connector Types, Resource Types, Resource Names, and Service Connectors.
This term is used to represent and identify a particular Service Connector implementation and answer questions about its capabilities such as "what types of resources does this Service Connector give me access to", "what authentication methods does it support" and "what credentials and other information do I need to configure for it". This is analogous to the role Flavors play for Stack Components in that the Service Connector Type acts as the template from which one or more Service Connectors are created.
For example, the built-in AWS Service Connector Type shipped with ZenML supports a rich variety of authentication methods and provides access to AWS resources such as S3 buckets, EKS clusters and ECR registries.
The zenml service-connector list-types and zenml service-connector describe-type CLI commands can be used to explore the Service Connector Types available with your ZenML deployment. Extensive documentation is included covering supported authentication methods and Resource Types. The following are just some examples:
zenml service-connector list-types
Example Command Output
┌──────┬──────┬────────────────┬──────────────┬───────┬────────┐
│ NAME │ TYPE │ RESOURCE TYPES │ AUTH METHODS │ LOCAL │ REMOTE │
├──────┼──────┼────────────────┼──────────────┼───────┼────────┤
| how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 425 |
Attach metadata to an artifact
Learn how to log metadata for artifacts and models in ZenML.
Metadata plays a critical role in ZenML, providing context and additional information about various entities within the platform. Anything that is logged as metadata in ZenML can be compared in the dashboard.
This guide will explain how to log metadata for artifacts and models in ZenML and detail the types of metadata that can be logged.
Logging Metadata for Artifacts
Artifacts in ZenML are outputs of steps within a pipeline, such as datasets, models, or evaluation results. Associating metadata with artifacts can help users understand the nature and characteristics of these outputs.
Here's an example of logging metadata for an artifact:
from typing import Annotated

import pandas as pd

from zenml import step, log_artifact_metadata
from zenml.metadata.metadata_types import StorageSize

@step
def process_data_step(dataframe: pd.DataFrame) -> Annotated[pd.DataFrame, "processed_data"]:
    """Process a dataframe and log metadata about the result."""
    # Perform processing on the dataframe...
    processed_dataframe = ...

    # Log metadata about the processed dataframe
    log_artifact_metadata(
        artifact_name="processed_data",
        metadata={
            "row_count": len(processed_dataframe),
            "columns": list(processed_dataframe.columns),
            "storage_size": StorageSize(processed_dataframe.memory_usage().sum()),
        },
    )
    return processed_dataframe
Fetching logged metadata
Once metadata has been logged for an artifact or step, we can easily fetch it with the ZenML Client:
from zenml.client import Client
client = Client()
artifact = client.get_artifact_version("my_artifact", "my_version")
print(artifact.run_metadata["metadata_key"].value)
Grouping Metadata in the Dashboard
When logging metadata, passing a dictionary of dictionaries in the metadata parameter will group the metadata into cards in the ZenML dashboard. This feature helps organize metadata into logical sections, making it easier to visualize and understand.
| how-to | https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/attach-metadata-to-an-artifact | 380 |
or register flavors.my_flavor.MyOrchestratorFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but it's usually better not to rely on this mechanism and to initialize zenml at the root instead.
Afterward, you should see the new flavor in the list of available flavors:
zenml orchestrator flavor list
See the docs on extensibility of the different components here or get inspired by the many integrations that are already implemented such as the MLflow experiment tracker.
Step 3: Create an integration class
Once you are finished with your flavor implementations, you can start the process of packaging them into your integration and ultimately the base ZenML package. Follow this checklist to prepare everything:
1. Clone Repo
Once your stack components work as a custom flavor, you can now clone the main zenml repository and follow the contributing guide to set up your local environment for development.
2. Create the integration directory
All integrations live within src/zenml/integrations/ in their own sub-folder. You should create a new folder in this directory with the name of your integration.
An example integration directory would be structured as follows:
/src/zenml/integrations/                    <- ZenML integration directory
    <example-integration>                   <- Root integration directory
    ├── artifact-stores                     <- Separated directory for every type
    │   ├── __init__.py
    │   └── <example-artifact-store>        <- Implementation class for the artifact store flavor
    ├── flavors
    │   ├── __init__.py
    │   └── <example-artifact-store-flavor> <- Config class and flavor
| how-to | https://docs.zenml.io/how-to/stack-deployment/implement-a-custom-integration | 403 |
│ 🌀 kubernetes-cluster │ zenhacks-cluster                             │
├───────────────────────┼──────────────────────────────────────────────┤
│ 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com │
└───────────────────────┴──────────────────────────────────────────────┘
The Service Connector configuration shows long-lived credentials were lifted from the local environment and the AWS Session Token authentication method was configured:
zenml service-connector describe aws-session-token
Example Command Output
Service connector 'aws-session-token' of type 'aws' with id '3ae3e595-5cbc-446e-be64-e54e854e0e3f' is owned by user 'default' and is 'private'.
'aws-session-token' aws Service Connector Details
┌────────────────┬─────────────────────────────────────────────────────────────────────────┐
│ PROPERTY       │ VALUE                                                                   │
├────────────────┼─────────────────────────────────────────────────────────────────────────┤
│ ID             │ c0f8e857-47f9-418b-a60f-c3b03023da54                                    │
│ NAME           │ aws-session-token                                                       │
│ TYPE           │ 🔶 aws                                                                  │
│ AUTH METHOD    │ session-token                                                           │
│ RESOURCE TYPES │ 🔶 aws-generic, 📦 s3-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry │
├────────────────┼─────────────────────────────────────────────────────────────────────────┤
| how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 481 |
🗂️ Handle Data/Artifacts
Step outputs in ZenML are stored in the artifact store. This enables caching, lineage and auditability. Using type annotations helps with transparency, passing data between steps, and serializing/deserializing the data.
For best results, use type annotations for your outputs. This is good coding practice for transparency, helps ZenML handle passing data between steps, and also enables ZenML to serialize and deserialize (referred to as 'materialize' in ZenML) the data.
from typing import Any, Dict

from zenml import step, pipeline

@step
def load_data(parameter: int) -> Dict[str, Any]:
    # do something with the parameter here
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}

@step
def train_model(data: Dict[str, Any]) -> None:
    total_features = sum(map(sum, data['features']))
    total_labels = sum(data['labels'])

    # Train some model here

    print(f"Trained model using {len(data['features'])} data points. "
          f"Feature sum is {total_features}, label sum is {total_labels}")

@pipeline
def simple_ml_pipeline(parameter: int):
    dataset = load_data(parameter=parameter)  # Get the output
    train_model(dataset)  # Pipe the previous step output into the downstream step
In this code, we define two steps: load_data and train_model. The load_data step takes an integer parameter and returns a dictionary containing training data and labels. The train_model step receives the dictionary from load_data, extracts the features and labels, and trains a model (not shown here).
Finally, we define a pipeline simple_ml_pipeline that chains the load_data and train_model steps together. The output from load_data is passed as input to train_model, demonstrating how data flows between steps in a ZenML pipeline.
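For completeness, here is a small, hedged sketch of triggering the pipeline above and reading its outputs back; it assumes a configured ZenML client and at least one completed run, with accessors that follow the ZenML client API:

from zenml.client import Client

# Trigger a run of the pipeline defined above
simple_ml_pipeline(parameter=42)

# Fetch the latest run and load the dictionary produced by load_data
run = Client().get_pipeline("simple_ml_pipeline").last_run
dataset = run.steps["load_data"].output.load()
print(dataset["labels"])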
| how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts | 409 |
'gcp-interactive' gcp Service Connector Details
┌──────────────────┬─────────────────┐
│ PROPERTY         │ VALUE           │
├──────────────────┼─────────────────┤
│ NAME             │ gcp-interactive │
│ TYPE             │ 🔵 gcp          │
│ AUTH METHOD      │ user-account    │
│ RESOURCE TYPES   │ 📦 gcs-bucket   │
│ RESOURCE NAME    │ <multiple>      │
│ SESSION DURATION │ N/A             │
│ EXPIRES IN       │ N/A             │
│ SHARED           │ ➖              │
└──────────────────┴─────────────────┘
Configuration
┌───────────────────┬────────────┐
│ PROPERTY          │ VALUE      │
├───────────────────┼────────────┤
│ project_id        │ zenml-core │
│ user_account_json │ [HIDDEN]   │
└───────────────────┴────────────┘
No labels are set for this service connector.
The service connector configuration has access to the following resources:
┌───────────────┬─────────────────────────────┐
│ RESOURCE TYPE │ RESOURCE NAMES              │
├───────────────┼─────────────────────────────┤
│ 📦 gcs-bucket │ gs://annotation-gcp-store   │
│               │ gs://zenml-bucket-sl        │
│               │ gs://zenml-core.appspot.com │
│               │ gs://zenml-core_cloudbuild  │
│               │ gs://zenml-datasets         │
└───────────────┴─────────────────────────────┘
Would you like to continue with the auto-discovered configuration or switch to manual? (auto, manual) [auto]:
The following GCP GCS bucket instances are reachable through this connector:
gs://annotation-gcp-store
gs://zenml-bucket-sl
gs://zenml-core.appspot.com
| how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 651 |
build to finish. More information: Build Timeout.
We can register the image builder and use it in our active stack:
zenml image-builder register <IMAGE_BUILDER_NAME> \
--flavor=gcp \
--cloud_builder_image=<BUILDER_IMAGE_NAME> \
--network=<DOCKER_NETWORK> \
--build_timeout=<BUILD_TIMEOUT_IN_SECONDS>
# Register and activate a stack with the new image builder
zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
You also need to set up authentication required to access the Cloud Build GCP services.
Authentication Methods
Integrating and using a GCP Image Builder in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the GCP cloud platform is through a GCP Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the GCP Image Builder with other remote stack components also running in GCP.
This method uses the implicit GCP authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure a GCP Image Builder. You don't need to supply credentials explicitly when you register the GCP Image Builder, as it leverages the local credentials and configuration that the Google Cloud CLI stores on your local machine. However, you will need to install and set up the Google Cloud CLI on your machine as a prerequisite, as covered in the Google Cloud documentation , before you register the GCP Image Builder.
Stacks using the GCP Image Builder set up with local authentication are not portable across environments. To make ZenML pipelines fully portable, it is recommended to use a GCP Service Connector to authenticate your GCP Image Builder to the GCP cloud platform. | stack-components | https://docs.zenml.io/stack-components/image-builders/gcp | 378 |
board.
The Great Expectations data profiler step
The standard Great Expectations data profiler step builds an Expectation Suite automatically by running a UserConfigurableProfiler on an input pandas.DataFrame dataset. The generated Expectation Suite is saved in the Great Expectations Expectation Store, but also returned as an ExpectationSuite artifact that is versioned and saved in the ZenML Artifact Store. The step automatically rebuilds the Data Docs.
At a minimum, the step configuration expects a name to be used for the Expectation Suite:
from zenml.integrations.great_expectations.steps import (
    great_expectations_profiler_step,
)

ge_profiler_step = great_expectations_profiler_step.with_options(
    parameters={
        "expectation_suite_name": "steel_plates_suite",
        "data_asset_name": "steel_plates_train_df",
    }
)
The step can then be inserted into your pipeline where it can take in a pandas dataframe, e.g.:
from zenml import pipeline
from zenml.config import DockerSettings
from zenml.integrations.constants import GREAT_EXPECTATIONS, SKLEARN

docker_settings = DockerSettings(required_integrations=[SKLEARN, GREAT_EXPECTATIONS])

@pipeline(settings={"docker": docker_settings})
def profiling_pipeline():
    """Data profiling pipeline for Great Expectations.

    The pipeline imports a reference dataset from a source then uses the builtin
    Great Expectations profiler step to generate an expectation suite (i.e.
    validation rules) inferred from the schema and statistical properties of the
    reference dataset.

    Args:
        importer: reference data importer step
        profiler: data profiler step
    """
    dataset, _ = importer()
    ge_profiler_step(dataset)

profiling_pipeline()
As can be seen from the step definition, the step takes in a pandas.DataFrame dataset, and it returns a Great Expectations ExpectationSuite object:
@step
def great_expectations_profiler_step(
    dataset: pd.DataFrame,
    expectation_suite_name: str,
    data_asset_name: Optional[str] = None,
    profiler_kwargs: Optional[Dict[str, Any]] = None,
    overwrite_existing_suite: bool = True,
) -> ExpectationSuite:
    ...
| stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/great-expectations | 403 |
io_utils.read_file_contents_as_string(artifact_uri)
using a temporary local file/folder to serialize and copy in-memory objects to/from the artifact store (heavily used in Materializers to transfer information between the Artifact Store and external libraries that don't support writing/reading directly to/from the artifact store backend):
import os
import tempfile

import external_lib

from zenml.io import fileio
from zenml.repository import Repository  # legacy import; newer versions use zenml.client.Client

root_path = Repository().active_stack.artifact_store.path
artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.json")

fileio.makedirs(artifact_path)

with tempfile.NamedTemporaryFile(
    mode="w", suffix=".json", delete=True
) as f:
    external_lib.external_object.save_to_file(f.name)
    # Copy it into artifact store
    fileio.copy(f.name, artifact_uri)

import os
import tempfile

import external_lib

from zenml.io import fileio
from zenml.repository import Repository  # legacy import; newer versions use zenml.client.Client

root_path = Repository().active_stack.artifact_store.path
artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.json")

with tempfile.NamedTemporaryFile(
    mode="w", suffix=".json", delete=True
) as f:
    # Copy the serialized object from the artifact store
    fileio.copy(artifact_uri, f.name)
    external_lib.external_object.load_from_file(f.name)
| stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores | 290 |
🐳 Customize Docker builds
Using Docker images to run your pipeline.
ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote orchestrators or step operators, ZenML builds Docker images to run your pipeline in an isolated, well-defined environment.
This section discusses how to control this dockerization process.
| how-to | https://docs.zenml.io/how-to/customize-docker-builds | 92 |
the case.
Well-known dependency resolution issues
Some of ZenML's integrations come with strict dependency and package version requirements. We try to keep these dependency requirement ranges as wide as possible for the integrations developed by ZenML, but it is not always possible to make this work completely smoothly. Here is one of the known issues:
click: ZenML currently requires click~=8.0.3 for its CLI. This is on account of another dependency of ZenML. Using versions of click in your own project that are greater than 8.0.3 may cause unanticipated behaviors.
Manually bypassing ZenML's integration installation
It is possible to skip ZenML's integration installation process and install dependencies manually. This is not recommended, but it is possible; do so at your own risk.
Note that the zenml integration install ... command runs a pip install ... under the hood as part of its implementation, taking the dependencies listed in the integration object and installing them. For example, zenml integration install gcp will run pip install "kfp==1.8.16" "gcsfs" "google-cloud-secret-manager" ... and so on, since they are specified in the integration definition.
To do this, you will need to install the dependencies for the integration you want to use manually. You can find the dependencies for the integrations by running the following:
# to have the requirements exported to a file
zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME
# to have the requirements printed to the console
zenml integration export-requirements INTEGRATION_NAME
You can then amend and tweak those requirements as you see fit. Note that if you are using a remote orchestrator, you would then have to place the updated versions for the dependencies in a DockerSettings object (described in detail here) which will then make sure everything is working as you need.
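A minimal, hedged sketch of that last step, assuming you exported and tweaked integration-requirements.txt as described above:

from zenml import pipeline
from zenml.config import DockerSettings

# Point the Docker build for remote runs at the amended requirements file
docker_settings = DockerSettings(requirements="integration-requirements.txt")

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...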
| how-to | https://docs.zenml.io/v/docs/how-to/configure-python-environments/handling-dependencies | 405 |
python file_that_runs_a_zenml_pipeline.py
Kubeflow UI
Kubeflow comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps. For any runs executed on Kubeflow, you can get the URL to the Kubeflow UI in Python using the following code snippet:
from zenml.client import Client
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
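If you want to jump straight to that UI from Python, a tiny hedged addition using only the standard library:

import webbrowser

# Open the Kubeflow UI for this run in the default browser
webbrowser.open(orchestrator_url)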
Additional configuration
For additional configuration of the Kubeflow orchestrator, you can pass KubeflowOrchestratorSettings which allows you to configure (among others) the following attributes:
client_args: Arguments to pass when initializing the KFP client.
user_namespace: The user namespace to use when creating experiments and runs.
pod_settings: Node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
from zenml import pipeline
from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings
from kubernetes.client.models import V1Toleration

kubeflow_settings = KubeflowOrchestratorSettings(
    client_args={},
    user_namespace="my_namespace",
    pod_settings={
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "node.kubernetes.io/name",
                                    "operator": "In",
                                    "values": ["my_powerful_node_group"],
                                }
                            ]
                        }
                    ]
                }
            }
        },
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule",
            )
        ],
    },
)

@pipeline(
    settings={
        "orchestrator.kubeflow": kubeflow_settings
    }
)
...
Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
Enabling CUDA for GPU-backed hardware | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/kubeflow | 433 |
Group metadata
Learn how to group key-value pairs in the dashboard.
When logging metadata, passing a dictionary of dictionaries in the metadata parameter will group the metadata into cards in the ZenML dashboard. This feature helps organize metadata into logical sections, making it easier to visualize and understand.
Here's an example of grouping metadata into cards:
from zenml import log_artifact_metadata
from zenml.metadata.metadata_types import StorageSize

log_artifact_metadata(
    metadata={
        "model_metrics": {
            "accuracy": 0.95,
            "precision": 0.92,
            "recall": 0.90
        },
        "data_details": {
            "dataset_size": StorageSize(1500000),
            "feature_columns": ["age", "income", "score"]
        }
    }
)
In the ZenML dashboard, "model_metrics" and "data_details" would appear as separate cards, each containing their respective key-value pairs.
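A hedged sketch of reading one of those grouped values back programmatically, mirroring the fetching pattern used elsewhere in this guide (the artifact name and version are placeholders):

from zenml.client import Client

artifact = Client().get_artifact_version("my_artifact", "my_version")
# The whole "model_metrics" card comes back as one metadata value
print(artifact.run_metadata["model_metrics"].value)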
| how-to | https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/grouping-metadata | 187 |
from zenml import pipeline
from zenml.client import Client

@pipeline
def feature_engineering_pipeline():
    dataset = load_data()
    # This returns artifacts called "iris_training_dataset" and "iris_testing_dataset"
    train_data, test_data = prepare_data()

@pipeline
def training_pipeline():
    client = Client()
    # Fetch by name alone - uses the latest version of this artifact
    train_data = client.get_artifact_version(name="iris_training_dataset")
    # For test, we want a particular version
    test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023")
    # We can now send these directly into ZenML steps
    sklearn_classifier = model_trainer(train_data)
    model_evaluator(model, sklearn_classifier)
materialized in memory in the
Pattern 2: Artifact exchange between pipelines through a Model
While passing around artifacts with IDs or names is very useful, it is often desirable to have the ZenML Model be the point of reference instead.
ZenML Model. Each time the
On the other side, the do_predictions pipeline simply picks up the latest promoted model and runs batch inference on it. It need not know of the IDs or names of any of the artifacts produced by the training pipeline's many runs. This way these two pipelines can independently be run, but can rely on each other's output.
In code, this is very simple. Once the pipelines are configured to use a particular model, we can use get_step_context to fetch the configured model within a step directly. Assuming there is a predict step in the do_predictions pipeline, we can fetch the production model like so:
from typing import Annotated

import pandas as pd

from zenml import step, get_step_context

# IMPORTANT: Cache needs to be disabled to avoid unexpected behavior
@step(enable_cache=False)
def predict(
    data: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    # model name and version are derived from pipeline context
    model = get_step_context().model

    # Fetch the model directly from the model control plane
    model = model.get_model_artifact("trained_model")

    # Make predictions
| how-to | https://docs.zenml.io/how-to/use-the-model-control-plane/connecting-artifacts-via-a-model | 420 |
encountered errors among users and solutions to each.
Error initializing rest store
Typically, the error presents itself as:
RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237': HTTPConnectionPool(host='127.0.0.1', port=8237): Max retries exceeded with url: /api/v1/login (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9abb198550>: Failed to establish a new connection: [Errno 61] Connection refused'))
If you restarted your machine after deploying ZenML then you have to run zenml up again after each restart. Local ZenML deployments don't survive machine restarts.
Column 'step_configuration' cannot be null
sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null")
This happens when a step configuration is too long. We changed the limit from 4K to 65K chars, but it could still happen if you have excessively long strings in your config.
'NoneType' object has no attribute 'name'
This is also a common error you might encounter when you do not have the necessary stack components registered on the stack. For example:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/dnth/Documents/zenml-projects/nba-pipeline/run_pipeline.py:24 in <module>                   │
│                                                                                                   │
│   21 │   reference_data_splitter,                                                                 │
│   22 │   TrainingSplitConfig,                                                                     │
│   23 )                                                                                            │
│ ❱ 24 from steps.trainer import random_forest_trainer                                              │
│   25 from steps.encoder import encode_columns_and_clean                                           │
| how-to | https://docs.zenml.io/how-to/debug-and-solve-issues | 394 |
h='/local/path/to/config.yaml'

# Run the pipeline
training_pipeline()
The reference to a local file will change depending on where you are executing the pipeline and code from, so please bear this in mind. It is best practice to put all config files in a configs directory at the root of your repository and check them into git history.
A simple version of such a YAML file could be:
parameters:
gamma: 0.01
Please note that this would take precedence over any parameters passed in the code.
If you are unsure how to format this config file, you can generate a template config file from a pipeline.
training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml')
Check out this section for advanced configuration options.
Full Code Example
This section combines all the code from this section into one simple script that you can use to run easily:
from typing_extensions import Tuple, Annotated
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC

from zenml import pipeline, step

@step
def training_data_loader() -> Tuple[
    Annotated[pd.DataFrame, "X_train"],
    Annotated[pd.DataFrame, "X_test"],
    Annotated[pd.Series, "y_train"],
    Annotated[pd.Series, "y_test"],
]:
    """Load the iris dataset as tuple of Pandas DataFrame / Series."""
    iris = load_iris(as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42
    )
    return X_train, X_test, y_train, y_test

@step
def svc_trainer(
    X_train: pd.DataFrame,
    y_train: pd.Series,
    gamma: float = 0.001,
) -> Tuple[
    Annotated[ClassifierMixin, "trained_model"],
    Annotated[float, "training_acc"],
]:
    """Train a sklearn SVC classifier and log to MLflow."""
    model = SVC(gamma=gamma)
    model.fit(X_train.to_numpy(), y_train.to_numpy())
    train_acc = model.score(X_train.to_numpy(), y_train.to_numpy())
    print(f"Train accuracy: {train_acc}")
| user-guide | https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline | 470 |
configuration and credentials passed as environment variables
some form of implicit authentication attached to the workload environment itself. This is only available in virtual environments that are already running inside the same cloud where other resources are available for use. This is called differently depending on the cloud provider in question, but they are essentially the same thing (see the AWS sketch after this list):
in AWS, if you're running on Amazon EC2, ECS, EKS, Lambda, or some other form of AWS cloud workload, credentials can be loaded directly from the instance metadata service. This uses the IAM role attached to your workload to authenticate to other AWS services without the need to configure explicit credentials.
in GCP, a similar metadata service allows accessing other GCP cloud resources via the service account attached to the GCP workload (e.g. GCP VMs or GKE clusters).
in Azure, the Azure Managed Identity services can be used to gain access to other Azure services without requiring explicit credentials
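To make the AWS variant concrete, here is a minimal, hedged sketch: on a workload with an attached IAM role (an assumption about your environment), boto3 resolves credentials from the instance metadata service without any keys in code.

import boto3

# No explicit credentials anywhere: on EC2/ECS/EKS/Lambda, boto3 falls back
# to the IAM role attached to the workload via the instance metadata service.
s3 = boto3.client("s3")
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])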
There are a few caveats that you should be aware of when choosing an implicit authentication method. It may seem like the easiest way out, but it carries with it some implications that may impact portability and usability later down the road:
when used with a local ZenML deployment, like the default deployment, or a local ZenML server started with zenml up, the implicit authentication method will use the configuration files and credentials or environment variables set up on your local machine. These will not be available to anyone else outside your local environment and will also not be accessible to workloads running in other environments on your local host. This includes for example local K3D Kubernetes clusters and local Docker containers. | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 324 |
ials by impersonating another GCP service account.
The connector needs to be configured with the email address of the target GCP service account to be impersonated, accompanied by a GCP service account key JSON for the primary service account. The primary service account must have permission to generate tokens for the target service account (i.e. the Service Account Token Creator role). The connector will generate temporary OAuth 2.0 tokens upon request by using GCP direct service account impersonation. The tokens have a configurable limited lifetime of up to 1 hour.
The best practice implemented with this authentication scheme is to keep the set of permissions associated with the primary service account down to the bare minimum and grant permissions to the privilege-bearing service account instead.
A GCP project is required and the connector may only be used to access GCP resources in the specified project.
If you already have the GOOGLE_APPLICATION_CREDENTIALS environment variable configured to point to the primary service account key JSON file, it will be automatically picked up when auto-configuration is used.
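For orientation, a hedged sketch of what direct service account impersonation looks like with the google-auth library; the target principal and scopes below are placeholders, not values prescribed by the connector:

import google.auth
from google.auth import impersonated_credentials

# Primary (low-privilege) credentials, e.g. resolved from GOOGLE_APPLICATION_CREDENTIALS
source_credentials, _project = google.auth.default()

# Short-lived token for the privilege-bearing target service account
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="<TARGET_SERVICE_ACCOUNT_EMAIL>",
    target_scopes=["https://www.googleapis.com/auth/devstorage.read_only"],
    lifetime=3600,  # seconds; matches the connector's up-to-1-hour tokens
)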
For this example, we have the following set up in GCP:
a primary [email protected] GCP service account with no permissions whatsoever aside from the "Service Account Token Creator" role that allows it to impersonate the secondary service account below. We also generate a service account key for this account.
a secondary [email protected] GCP service account that only has permission to access the zenml-bucket-sl GCS bucket
First, let's show that the empty-connectors service account has no permission to access any GCS buckets or any other resources for that matter. We'll register a regular GCP Service Connector that uses the service account key (long-lived credentials) directly: | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 364 |
Attach metadata to steps
You might want to log metadata and have it attached to a specific step during the course of your work. This is possible by using the log_step_metadata method. This method allows you to attach a dictionary of key-value pairs as metadata to a step. The metadata can be any JSON-serializable value, including custom classes such as Uri, Path, DType, and StorageSize.
You can call this method from within a step or from outside. If you call it from within, it will attach the metadata to the step and run that are currently being executed.
from zenml import step, log_step_metadata, ArtifactConfig, get_step_context
from typing import Annotated
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.base import ClassifierMixin

@step
def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier", is_model_artifact=True)]:
    """Train a model."""
    # Fit the model and compute metrics
    classifier = RandomForestClassifier().fit(dataset)
    accuracy, precision, recall = ...

    # Log metadata at the step level
    # This associates the metadata with the ZenML step run
    log_step_metadata(
        metadata={
            "evaluation_metrics": {
                "accuracy": accuracy,
                "precision": precision,
                "recall": recall,
            }
        },
    )
    return classifier
If you call it from outside, you can attach the metadata to a specific step run of any pipeline and step. This is useful if you want to attach the metadata after you've run the step.
from zenml import log_step_metadata

# run some step

# subsequently log the metadata for the step
log_step_metadata(
    metadata={
        "some_metadata": {"a_number": 3}
    },
    pipeline_name_id_or_prefix="my_pipeline",
    step_name="my_step",
    run_id="my_step_run_id",
)
Fetching logged metadata
Once metadata has been logged for an artifact or model, we can easily fetch it with the ZenML Client:
from zenml.client import Client
client = Client() | how-to | https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/attach-metadata-to-steps | 414 |
│ OWNER      │ default                    │
│ WORKSPACE  │ default                    │
│ SHARED     │ ➖                         │
│ CREATED_AT │ 2023-06-19 19:23:39.982950 │
│ UPDATED_AT │ 2023-06-19 19:23:39.982952 │
└────────────┴────────────────────────────┘
Configuration
┌───────────────────────┬───────────┐
│ PROPERTY              │ VALUE     │
├───────────────────────┼───────────┤
│ region                │ us-east-1 │
│ aws_access_key_id     │ [HIDDEN]  │
│ aws_secret_access_key │ [HIDDEN]  │
└───────────────────────┴───────────┘
AWS STS Token
Uses temporary STS tokens explicitly configured by the user or auto-configured from a local environment.
This method has the major limitation that the user must regularly generate new tokens and update the connector configuration as STS tokens expire. On the other hand, this method is ideal in cases where the connector only needs to be used for a short period of time, such as sharing access temporarily with someone else in your team. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 478 |
ources that the connector is configured to access.Note that the discovered credentials inherit the full set of permissions of the local Azure CLI configuration, environment variables or remote Azure managed identity. Depending on the extent of those permissions, this authentication method might not be recommended for production use, as it can lead to accidental privilege escalation. Instead, it is recommended to use the Azure service principal authentication method to limit the validity and/or permissions of the credentials being issued to connector clients.
The following assumes the local Azure CLI has already been configured with user account credentials by running the az login command:
zenml service-connector register azure-implicit --type azure --auth-method implicit --auto-configure
Example Command Output
Registering service connector 'azure-implicit'...
Successfully registered service connector `azure-implicit` with access to the following resources:
┌───────────────────────┬───────────────────────────────────────────────┐
│ RESOURCE TYPE         │ RESOURCE NAMES                                │
├───────────────────────┼───────────────────────────────────────────────┤
│ 🇦 azure-generic       │ ZenML Subscription                            │
│ 📦 blob-container     │ az://demo-zenmlartifactstore                  │
│ 🌀 kubernetes-cluster │ demo-zenml-demos/demo-zenml-terraform-cluster │
│ 🐳 docker-registry    │ demozenmlcontainerregistry.azurecr.io         │
└───────────────────────┴───────────────────────────────────────────────┘
No credentials are stored with the Service Connector:
zenml service-connector describe azure-implicit
Example Command Output | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector | 454 |
zenml logs -f
Fixing database connection problems
If you are using a MySQL database, you might face issues connecting to it. The logs from the zenml-db-init container should give you a good idea of what the problem is. Here are some common issues and how to fix them:
If you see an error like ERROR 1045 (28000): Access denied for user <USER> using password YES, it means that the username or password is incorrect. Make sure that the username and password are correctly set for whatever deployment method you are using.
If you see an error like ERROR 2003 (HY000): Can't connect to MySQL server on <HOST> (<IP>), it means that the host is incorrect. Make sure that the host is correctly set for whatever deployment method you are using.
You can test the connection and the credentials by running the following command from your machine:
mysql -h <HOST> -u <USER> -p
If you are using a Kubernetes deployment, you can use the kubectl port-forward command to forward the MySQL port to your local machine. This will allow you to connect to the database from your machine.
Fixing database initialization problems
If youโve migrated from a newer ZenML version to an older version and see errors like Revision not found in your zenml-db-init logs, one way out is to drop the database and create a new one with the same name.
Log in to your MySQL instance.
mysql -h <HOST> -u <NAME> -p
Drop the database for the server.
drop database <NAME>;
Create the database with the same name.
create database <NAME>;
Restart the Kubernetes pods or the docker container running your server to trigger the database initialization again.
| getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/manage-the-deployed-services/troubleshoot-your-deployed-server | 375 |
`aws-auto` with access to the following resources:
┌───────────────────────┬──────────────────────────────────────────────┐
│ RESOURCE TYPE         │ RESOURCE NAMES                               │
├───────────────────────┼──────────────────────────────────────────────┤
│ 🔶 aws-generic        │ us-east-1                                    │
│ 📦 s3-bucket          │ s3://zenbytes-bucket                         │
│                       │ s3://zenfiles                                │
│                       │ s3://zenml-demos                             │
│                       │ s3://zenml-generative-chat                   │
│ 🌀 kubernetes-cluster │ zenhacks-cluster                             │
│ 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com │
└───────────────────────┴──────────────────────────────────────────────┘
The Service Connector configuration shows how credentials have automatically been fetched from the local AWS CLI configuration:
zenml service-connector describe aws-auto
Example Command Output
Service connector 'aws-auto' of type 'aws' with id '9f3139fd-4726-421a-bc07-312d83f0c89e' is owned by user 'default' and is 'private'.
'aws-auto' aws Service Connector Details
┌──────────┬──────────────────────────────────────┐
│ PROPERTY │ VALUE                                │
├──────────┼──────────────────────────────────────┤
│ ID       │ 9cdc926e-55d7-49f0-838e-db5ac34bb7dc │
│ NAME     │ aws-auto                             │
| how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 548 |
)
print(model.run_metadata["metadata_key"].value)
| how-to | https://docs.zenml.io/v/docs/how-to/track-metrics-metadata/attach-metadata-to-a-model | 30 |
ary credentials to authenticate with the back-end.
The ZenML secrets store reuses the ZenML Service Connector authentication mechanisms to authenticate with the secrets store back-end. This means that the same authentication methods and configuration parameters that are supported by the available Service Connectors are also reflected in the ZenML secrets store configuration. It is recommended to practice the principle of least privilege when configuring the ZenML secrets store and to use credentials with the documented minimum required permissions to access the secrets store back-end.
The ZenML secrets store configured for the ZenML Server can be updated at any time by updating the ZenML Server configuration and redeploying the server. This allows you to easily switch between different secrets store back-ends and authentication mechanisms. However, it is recommended to follow the documented secret store migration strategy to minimize downtime and to ensure that existing secrets are also properly migrated, in case the location where secrets are stored in the back-end changes.
For more information on how to deploy a ZenML server and configure the secrets store back-end, refer to your deployment strategy inside the deployment guide.
Backup secrets store
The ZenML Server deployment may be configured to optionally connect to a second Secrets Store to provide additional features such as high-availability, backup and disaster recovery as well as an intermediate step in the process of migrating secrets from one secrets store location to another. For example, the primary Secrets Store may be configured to use the internal database, while the backup Secrets Store may be configured to use the AWS Secrets Manager. Or two different AWS Secrets Manager accounts or regions may be used. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/secret-management | 312 |
Service Connectors guide
The complete guide to managing Service Connectors and connecting ZenML to external resources.
This documentation section contains everything that you need to use Service Connectors to connect ZenML to external resources. A lot of information is covered, so it might be useful to use the following guide to navigate it:
if you're only getting started with Service Connectors, we suggest starting by familiarizing yourself with the terminology.
check out the section on Service Connector Types to understand the different Service Connector implementations that are available and when to use them.
jumping straight to the sections on Registering Service Connectors can get you set up quickly if you are only looking for a quick way to evaluate Service Connectors and their features.
if all you need to do is connect a ZenML Stack Component to an external resource or service like a Kubernetes cluster, a Docker container registry, or an object storage bucket, and you already have some Service Connectors available, the section on connecting Stack Components to resources is all you need.
In addition to this guide, there is an entire section dedicated to best security practices concerning the various authentication methods implemented by Service Connectors, such as which types of credentials to use in development or production and how to keep your security information safe. That section is particularly targeted at engineers with some knowledge of infrastructure, but it should be accessible to larger audiences.
Terminology | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 276 |
Schedule a pipeline
Learn how to set, pause and stop a schedule for pipelines.
Schedules don't work for all orchestrators. Here is a list of all supported orchestrators.
Orchestrator              Scheduling Support
LocalOrchestrator         ⛔️
LocalDockerOrchestrator   ⛔️
KubernetesOrchestrator    ✅
KubeflowOrchestrator      ✅
VertexOrchestrator        ✅
TektonOrchestrator        ⛔️
AirflowOrchestrator       ✅
Set a schedule
from zenml.config.schedule import Schedule
from zenml import pipeline
from datetime import datetime

@pipeline()
def my_pipeline(...):
    ...

# Use cron expressions
schedule = Schedule(cron_expression="5 14 * * 3")
# or alternatively use human-readable notations
schedule = Schedule(start_time=datetime.now(), interval_second=1800)

my_pipeline = my_pipeline.with_options(schedule=schedule)
my_pipeline()
Check out our SDK docs to learn more about the different scheduling options.
Pause/Stop a schedule
The way pipelines are scheduled depends on the orchestrator you are using. For example, if you are using Kubeflow, you can use the Kubeflow UI to stop or pause a scheduled run. However, the exact steps for stopping or pausing a scheduled run may vary depending on the orchestrator you are using. We recommend consulting the documentation for your orchestrator to learn the current method for stopping or pausing a scheduled run.
Note that ZenML only gets involved to schedule a run, but maintaining the lifecycle of the schedule (as explained above) is the responsibility of the user. If you run a pipeline containing a schedule two times, two scheduled pipelines (with different/unique names) will be created.
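As a hedged illustration of keeping those schedule names under control, the sketch below sets the Schedule's optional name field explicitly (assuming your ZenML version exposes it) so repeat registrations are easy to identify and clean up in the orchestrator:

from datetime import datetime
from zenml.config.schedule import Schedule

# Explicitly named schedule: registering again on another day yields a new, predictable name
schedule = Schedule(
    name=f"my_pipeline_{datetime.now():%Y_%m_%d}",
    cron_expression="5 14 * * 3",
)
my_pipeline = my_pipeline.with_options(schedule=schedule)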
See Also:
Schedules rely on remote orchestrators, learn about those here
| how-to | https://docs.zenml.io/how-to/build-pipelines/schedule-a-pipeline | 391 |
Deleting a Model
Learn how to delete models.
Deleting a model or a specific model version means removing all links between the Model entity and its artifacts and pipeline runs; it will also delete all metadata associated with that Model.
Deleting all versions of a model
zenml model delete <MODEL_NAME>
from zenml.client import Client
Client().delete_model(<MODEL_NAME>)
Delete a specific version of a model
zenml model version delete <MODEL_VERSION_NAME>
from zenml.client import Client
Client().delete_model_version(<MODEL_VERSION_ID>)
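Since delete_model_version expects an ID while the CLI works with names, here is a hedged sketch of resolving the ID first; it assumes the client's get_model_version lookup, with placeholders for the names:

from zenml.client import Client

client = Client()
# Resolve the version by model name and version name, then delete by ID
version = client.get_model_version("<MODEL_NAME>", "<MODEL_VERSION_NAME>")
client.delete_model_version(version.id)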
| how-to | https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/delete-a-model | 130 |
Displaying visualizations in the dashboard
Displaying visualizations in the dashboard.
In order for the visualizations to show up on the dashboard, the following must be true:
Configuring a Service Connector
Visualizations are usually stored alongside the artifact, in the artifact store. Therefore, if a user would like to see the visualization displayed on the ZenML dashboard, they must give the server access to connect to the artifact store.
The service connector documentation goes deeper into the concept of service connectors and how they can be configured to give the server permission to access the artifact store. For a concrete example, see the AWS S3 artifact store documentation.
When using the default/local artifact store with a deployed ZenML, the server naturally does not have access to your local files. In this case, the visualizations are also not displayed on the dashboard.
Please use a service connector enabled and remote artifact store alongside a deployed ZenML to view visualizations.
Configuring Artifact Stores
If all visualizations of a certain pipeline run are not showing up in the dashboard, it might be that your ZenML server does not have the required dependencies or permissions to access that artifact store. See the custom artifact store docs page for more information.
| how-to | https://docs.zenml.io/how-to/visualize-artifacts/visualizations-in-dashboard | 263 |
dashboard.
Warning! Usage in remote orchestrators
The current ZenML version has a limitation in its base Docker image that requires a workaround for all pipelines using Deepchecks with a remote orchestrator (e.g. Kubeflow, Vertex). The limitation is that the base Docker image needs to be extended to include binaries that are required by opencv2, which is a package that Deepchecks requires.
While these binaries might be available on most operating systems out of the box (and therefore not a problem with the default local orchestrator), we need to tell ZenML to add them to the containerization step when running in remote settings. Here is how:
First, create a file called deepchecks-zenml.Dockerfile and place it on the same level as your runner script (commonly called run.py). The contents of the Dockerfile are as follows:
ARG ZENML_VERSION=0.20.0
FROM zenmldocker/zenml:${ZENML_VERSION} AS base
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
Then, place the following snippet above your pipeline definition. Note that the path of the Dockerfile is relative to where the pipeline definition file is. Read the containerization guide for more details:
import zenml
from zenml import pipeline
from zenml.config import DockerSettings
from pathlib import Path
import sys

docker_settings = DockerSettings(
    dockerfile="deepchecks-zenml.Dockerfile",
    build_options={
        "buildargs": {
            "ZENML_VERSION": f"{zenml.__version__}"
        },
    },
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    # same code as always
    ...
From here on, you can continue to use the deepchecks integration as is explained below.
The Deepchecks standard steps
ZenML wraps the Deepchecks functionality for tabular data in the form of four standard steps:
DeepchecksDataIntegrityCheckStep: use it in your pipelines to run data integrity tests on a single dataset
DeepchecksDataDriftCheckStep: use it in your pipelines to run data drift tests on two datasets as input: target and reference. | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/deepchecks | 444 |
from typing import List, Tuple

def query_similar_docs(
    question: str,
    url_ending: str,
    use_reranking: bool = False,
    returned_sample_size: int = 5,
) -> Tuple[str, str, List[str]]:
    """Query similar documents for a given question and URL ending."""
    embedded_question = get_embeddings(question)
    db_conn = get_db_conn()
    num_docs = 20 if use_reranking else returned_sample_size
    # get (content, url) tuples for the top n similar documents
    top_similar_docs = get_topn_similar_docs(
        embedded_question, db_conn, n=num_docs, include_metadata=True
    )
    if use_reranking:
        reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[
            :returned_sample_size
        ]
        urls = [doc[1] for doc in reranked_docs_and_urls]
    else:
        urls = [doc[1] for doc in top_similar_docs]  # Unpacking URLs
    return (question, url_ending, urls)
We get the embeddings for the question passed into the function and connect to our PostgreSQL database. If we're using reranking, we fetch the top 20 documents similar to our query, rerank them with the rerank_documents helper function, extract the URLs from the reranked documents, and return them. Note that we always return five URLs; with reranking we simply pull a larger pool of documents and URLs from the database to pass to the reranker, and in the end we choose the top five reranked documents to return.
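For orientation, here is a hedged sketch of what a rerank_documents helper could look like; the guide's actual implementation lives in its repository, and the cross-encoder model plus the (content, url) tuple layout are assumptions:

from typing import List, Tuple

from sentence_transformers import CrossEncoder

def rerank_documents(question: str, docs: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Order (content, url) tuples by cross-encoder relevance to the question."""
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = model.predict([(question, content) for content, _url in docs])
    ranked = sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _score in ranked]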
Now that we've added reranking to our pipeline, we can evaluate the performance of our reranker and see how it affects the quality of the retrieved documents.
Code Example
To explore the full code, visit the Complete Guide repository; for this section, see in particular the eval_retrieval.py file.
| user-guide | https://docs.zenml.io/user-guide/llmops-guide/reranking/implementing-reranking | 397 |
Develop a custom container registry
Learning how to develop a custom container registry.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base Abstraction
In the current version of ZenML, container registries have a rather basic base abstraction. In essence, their base configuration only features a uri and their implementation features a non-abstract prepare_image_push method for validation.
from abc import abstractmethod
from typing import Type

from zenml.enums import StackComponentType
from zenml.stack import Flavor
from zenml.stack.authentication_mixin import (
    AuthenticationConfigMixin,
    AuthenticationMixin,
)
from zenml.utils import docker_utils

class BaseContainerRegistryConfig(AuthenticationConfigMixin):
    """Base config for a container registry."""

    uri: str

class BaseContainerRegistry(AuthenticationMixin):
    """Base class for all ZenML container registries."""

    def prepare_image_push(self, image_name: str) -> None:
        """Conduct necessary checks/preparations before an image gets pushed."""

    def push_image(self, image_name: str) -> str:
        """Pushes a Docker image."""
        if not image_name.startswith(self.config.uri):
            raise ValueError(
                f"Docker image `{image_name}` does not belong to container "
                f"registry `{self.config.uri}`."
            )
        self.prepare_image_push(image_name)
        return docker_utils.push_image(image_name)

class BaseContainerRegistryFlavor(Flavor):
    """Base flavor for container registries."""

    @property
    @abstractmethod
    def name(self) -> str:
        """Returns the name of the flavor."""

    @property
    def type(self) -> StackComponentType:
        """Returns the flavor type."""
        return StackComponentType.CONTAINER_REGISTRY

    @property
    def config_class(self) -> Type[BaseContainerRegistryConfig]:
        """Config class for this flavor."""
        return BaseContainerRegistryConfig
| stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/custom | 385 |
└───────────────────────┴───────────────────────────────────────────────┘
Local client provisioning
The local AWS CLI, Kubernetes kubectl CLI and the Docker CLI can be configured with credentials extracted from or generated by a compatible AWS Service Connector. Please note that unlike the configuration made possible through the AWS CLI, the Kubernetes and Docker credentials issued by the AWS Service Connector have a short lifetime and will need to be regularly refreshed. This is a byproduct of implementing a high-security profile.
Configuring the local AWS CLI with credentials issued by the AWS Service Connector results in a local AWS CLI configuration profile being created with the name inferred from the first digits of the Service Connector UUID in the form zenml-<uuid[:8]>. For example, a Service Connector with UUID 9f3139fd-4726-421a-bc07-312d83f0c89e will result in a local AWS CLI configuration profile named zenml-9f3139fd.
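As a tiny, hedged illustration of that naming rule (plain string slicing, no ZenML API involved):

connector_uuid = "9f3139fd-4726-421a-bc07-312d83f0c89e"
profile_name = f"zenml-{connector_uuid[:8]}"  # derive the AWS CLI profile name
print(profile_name)  # -> zenml-9f3139fd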
The following shows an example of configuring the local Kubernetes CLI to access an EKS cluster reachable through an AWS Service Connector:
zenml service-connector list --name aws-session-token
Example Command Output
โโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโฏโโโโโโโโโฏโโโโโโโโโโฏโโโโโโโโโโโโโฏโโโโโโโโโ
โ ACTIVE โ NAME โ ID โ TYPE โ RESOURCE TYPES โ RESOURCE NAME โ SHARED โ OWNER โ EXPIRES IN โ LABELS โ
โ โโโโโโโโโผโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโผโโโโโโโโโโผโโโโโโโโโโโโโผโโโโโโโโโจ
โ โ aws-session-token โ c0f8e857-47f9-418b-a60f-c3b03023da54 โ ๐ถ aws โ ๐ถ aws-generic โ <multiple> โ โ โ default โ โ โ
โ โ โ โ โ ๐ฆ s3-bucket โ โ โ โ โ โ | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 506 |
as the ZenML quickstart. You can clone it like so:

git clone --depth 1 [email protected]:zenml-io/zenml.git
cd zenml/examples/quickstart
pip install -r requirements.txt
zenml init
To run a pipeline using the new stack:
Set the stack as active on your client:

zenml stack set a_new_local_stack

Run your pipeline code:

python run.py --training-pipeline
Keep this code handy as we'll be using it in the next chapters!
โ๏ธBuild a pipeline
Building pipelines is as simple as adding the `@step` and `@pipeline` decorators to your code.
from zenml import pipeline, step


@step  # Just add this decorator
def load_data() -> dict:
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}


@step
def train_model(data: dict) -> None:
    total_features = sum(map(sum, data['features']))
    total_labels = sum(data['labels'])

    # Train some model here

    print(f"Trained model using {len(data['features'])} data points. "
          f"Feature sum is {total_features}, label sum is {total_labels}")


@pipeline  # This function combines steps together
def simple_ml_pipeline():
    dataset = load_data()
    train_model(dataset)
You can now run this pipeline by simply calling the function:
simple_ml_pipeline()
When this pipeline is executed, the run is logged to the ZenML dashboard, where you can inspect its DAG and all associated metadata. To access the dashboard, you need a ZenML server running either locally or remotely. See our documentation on this here.
Check below for more advanced ways to build and interact with your pipeline.
Configure pipeline/step parameters
Name and annotate step outputs
Control caching behavior
Run pipeline from a pipeline
Control the execution order of steps
Customize the step invocation ids
Name your pipeline runs
Use failure/success hooks
Hyperparameter tuning
Attach metadata to steps
Fetch metadata within steps
Fetch metadata during pipeline composition
Version pipelines
Enable or disable logs storing
Special Metadata Types
Access secrets in a step
Prodigy
Annotating data using Prodigy.
Prodigy is a modern annotation tool for creating training and evaluation data for machine learning models. You can also use Prodigy to help you inspect and clean your data, do error analysis and develop rule-based systems to use in combination with your statistical models.
Prodigy is a paid annotation tool. You will need a license to download and use it with ZenML.
The Prodigy Python library includes a range of pre-built workflows and command-line commands for various tasks, and well-documented components for implementing your own workflow scripts. Your scripts can specify how the data is loaded and saved, change which questions are asked in the annotation interface, and can even define custom HTML and JavaScript to change the behavior of the front-end. The web application is optimized for fast, intuitive and efficient annotation.
When would you want to use it?
If you need to label data as part of your ML workflow, that is the point at which you could consider adding the optional annotator stack component as part of your ZenML stack.
How to deploy it?
The Prodigy Annotator flavor is provided by the Prodigy ZenML integration. You need to install it to be able to register it as an Annotator and add it to your stack:
zenml integration export-requirements --output-file prodigy-requirements.txt prodigy
Note that you'll need to install Prodigy separately since it requires a license. Please visit the Prodigy docs for information on how to install it. Currently Prodigy also requires the urllib3<2 dependency, so make sure to install that.
Then register your annotator with ZenML:
zenml annotator register prodigy --flavor prodigy
# optionally also pass in --custom_config_path="<PATH_TO_CUSTOM_CONFIG_FILE>"
See https://prodi.gy/docs/install#config for more on custom Prodigy config files. Passing a custom_config_path allows you to override the default Prodigy config. | stack-components | https://docs.zenml.io/stack-components/annotators/prodigy | 418 |
HyperAI Orchestrator
Orchestrating your pipelines to run on HyperAI instances.
HyperAI is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an orchestrator flavor that allows you to easily deploy your pipelines on HyperAI instances.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the HyperAI orchestrator if:
you're looking for a managed solution for running your pipelines.
you're a HyperAI customer.
Prerequisites
You will need to do the following to start using the HyperAI orchestrator:
Have a running HyperAI instance. It must be accessible from the internet (or at least from the IP addresses of your ZenML users) and allow SSH key based access (passwords are not supported).
Ensure that a recent version of Docker is installed. This version must include Docker Compose, meaning that the command docker compose works.
Ensure that the appropriate NVIDIA Driver is installed on the HyperAI instance (if not already installed by the HyperAI team).
Ensure that the NVIDIA Container Toolkit is installed and configured on the HyperAI instance.
Note that it is possible to omit installing the NVIDIA Driver and NVIDIA Container Toolkit. However, you will then be unable to use the GPU from within your ZenML pipeline. Additionally, you will then need to disable GPU access within the container when configuring the Orchestrator component, or the pipeline will not start correctly.
How it works | stack-components | https://docs.zenml.io/stack-components/orchestrators/hyperai | 318 |
our ZenML instance. See here for more information.

Check out the documentation on fetching runs for more information on the various ways you can fetch and use the pipeline, pipeline run, step run, and artifact resources in code.
Stacks, Infrastructure, Authentication
Stack: The stacks registered in your ZenML instance.
Stack Components: The stack components registered in your ZenML instance, e.g., all orchestrators, artifact stores, model deployers, ...
Flavors: The stack component flavors available to you, including:

Built-in flavors like the local orchestrator,

Integration-enabled flavors like the Kubeflow orchestrator,

Custom flavors that you have created yourself.
User: The users registered in your ZenML instance. If you are running locally, there will only be a single default user.
Secrets: The infrastructure authentication secrets that you have registered in the ZenML Secret Store.
Service Connectors: The service connectors that you have set up to connect ZenML to your infrastructure.
Client Methods
Reading and Writing Resources
List Methods
Get a list of resources, e.g.:
from zenml.client import Client

client = Client()

client.list_pipeline_runs(
    stack_id=client.active_stack_model.id,  # filter by stack
    user_id=client.active_user.id,  # filter by user
    sort_by="desc:start_time",  # sort by start time descending
    size=10,  # limit page size to 10
)
These methods always return a Page of resources, which behaves like a standard Python list and contains, by default, the first 50 results. You can modify the page size by passing the size argument or fetch a subsequent page by passing the page argument to the list method. | reference | https://docs.zenml.io/v/docs/reference/python-client | 333 |
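As a small illustration of how paging works, the following sketch fetches two pages of runs; the page size here is arbitrary:

from zenml.client import Client

client = Client()

# Fetch the first two pages of pipeline runs, 20 runs per page
page_1 = client.list_pipeline_runs(size=20, page=1)
page_2 = client.list_pipeline_runs(size=20, page=2)

# A Page behaves like a standard Python list
for run in page_1:
    print(run.name)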
requirements and remain agnostic of their vendor.

The term Resource Type is used in ZenML everywhere resources accessible through Service Connectors are involved. For example, to list all Service Connector Types that can be used to broker access to Kubernetes Clusters, you can pass the --resource-type flag to the CLI command:
zenml service-connector list-types --resource-type kubernetes-cluster
Example Command Output
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโฏโโโโโโโโโ
โ NAME                          โ TYPE          โ RESOURCE TYPES        โ AUTH METHODS       โ LOCAL โ REMOTE โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโผโโโโโโโโผโโโโโโโโโจ
โ Kubernetes Service Connector โ ๐ kubernetes โ ๐ kubernetes-cluster โ password           โ โ
     โ โ
      โ
โ                              โ               โ                       โ token              โ       โ        โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโผโโโโโโโโผโโโโโโโโโจ
โ Azure Service Connector      โ ๐ฆ azure      โ ๐ฆ azure-generic      โ implicit           โ โ
     โ โ
      โ
โ                              โ               โ ๐ฆ blob-container     โ service-principal  โ       โ        โ
โ                              โ               โ ๐ kubernetes-cluster โ access-token       โ       โ        โ
โ                              โ               โ ๐ณ docker-registry    โ                    โ       โ        โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโผโโโโโโโโผโโโโโโโโโจ
โ AWS Service Connector        โ ๐ถ aws        โ ๐ถ aws-generic        โ implicit           โ โ
     โ โ
      โ
โ                              โ               โ ๐ฆ s3-bucket          โ secret-key         โ       โ        โ
โ                              โ               โ ๐ kubernetes-cluster โ sts-token          โ       โ        โ
onfig class and add your configuration parameters.Bring both the implementation and the configuration together by inheriting from the BaseModelDeployerFlavor class. Make sure that you give a name to the flavor through its abstract property.
Create a service class that inherits from the BaseService class and implements the abstract methods. This class will be used to represent the deployed model server in ZenML.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml model-deployer flavor register <path.to.MyModelDeployerFlavor>
For example, if your flavor class MyModelDeployerFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually, it's better to not have to rely on this mechanism and initialize zenml at the root.
Afterward, you should see the new flavor in the list of available flavors:
zenml model-deployer flavor list
It is important to understand when and how these base abstractions come into play in a ZenML workflow.
The CustomModelDeployerFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomModelDeployerConfig class is imported when someone tries to register/update a stack component with this custom flavor. Especially, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here. | stack-components | https://docs.zenml.io/stack-components/model-deployers/custom | 404 |
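To illustrate how these pieces fit together, here is a minimal skeleton of a custom flavor. All names are hypothetical, the import path may differ between ZenML versions, and the concrete deployer's abstract methods are omitted for brevity:

from typing import Type

# Import paths are illustrative and may differ between ZenML versions.
from zenml.model_deployers.base_model_deployer import (
    BaseModelDeployer,
    BaseModelDeployerConfig,
    BaseModelDeployerFlavor,
)


class MyModelDeployerConfig(BaseModelDeployerConfig):
    """Hypothetical config with a single custom parameter."""

    deployment_url: str


class MyModelDeployer(BaseModelDeployer):
    """Concrete deployer; abstract method implementations omitted here."""

    ...


class MyModelDeployerFlavor(BaseModelDeployerFlavor):
    """Flavor that ties the config and implementation together."""

    @property
    def name(self) -> str:
        return "my_model_deployer"

    @property
    def config_class(self) -> Type[MyModelDeployerConfig]:
        return MyModelDeployerConfig

    @property
    def implementation_class(self) -> Type[MyModelDeployer]:
        return MyModelDeployer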
e the AWS Service Connector authentication method.

ZENML_SECRETS_STORE_REGION_NAME: The AWS region to use. This must be set to the region where the AWS Secrets Manager service that you want to use is located.
ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID: The AWS access key ID to use for authentication. This must be set to a valid AWS access key ID that has access to the AWS Secrets Manager service that you want to use. If you are using an IAM role attached to an EKS cluster to authenticate, you can omit this variable.
ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY: The AWS secret access key to use for authentication. This must be set to a valid AWS secret access key that has access to the AWS Secrets Manager service that you want to use. If you are using an IAM role attached to an EKS cluster to authenticate, you can omit this variable.
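For example, a server container with an AWS secrets store configured through long-lived credentials might be started as follows. All values are placeholders, and depending on your ZenML version additional variables may be required:

docker run -it -d -p 8080:8080 --name zenml-server \
  -e ZENML_SECRETS_STORE_TYPE=aws \
  -e ZENML_SECRETS_STORE_REGION_NAME=us-east-1 \
  -e ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID> \
  -e ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY> \
  zenmldocker/zenml-server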
These configuration options are only relevant if you're using the GCP Secrets Manager as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to gcp in order to set this type of secret store.
The GCP Secrets Store uses the ZenML GCP Service Connector under the hood to authenticate with the GCP Secrets Manager API. This means that you can use any of the authentication methods supported by the GCP Service Connector to authenticate with the GCP Secrets Manager API.
The minimum set of permissions that must be attached to the implicit or configured GCP credentials are as follows:
secretmanager.secrets.create for the target GCP project (i.e. no condition on the name prefix)
secretmanager.secrets.get, secretmanager.secrets.update, secretmanager.versions.access, secretmanager.versions.add and secretmanager.secrets.delete for the target GCP project and for secrets that have a name starting with zenml- | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 375 |
zenml logs -f
Fixing database connection problems

If you are using a MySQL database, you might face issues connecting to it. The logs from the zenml-db-init container should give you a good idea of what the problem is. Here are some common issues and how to fix them:
If you see an error like ERROR 1045 (28000): Access denied for user <USER> using password YES, it means that the username or password is incorrect. Make sure that the username and password are correctly set for whatever deployment method you are using.
If you see an error like ERROR 2003 (HY000): Can't connect to MySQL server on <HOST> (<IP>), it means that the host is incorrect. Make sure that the host is correctly set for whatever deployment method you are using.
You can test the connection and the credentials by running the following command from your machine:
mysql -h <HOST> -u <USER> -p
If you are using a Kubernetes deployment, you can use the kubectl port-forward command to forward the MySQL port to your local machine. This will allow you to connect to the database from your machine.
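For instance, assuming the MySQL service in your cluster is named mysql (adjust the namespace and service name to your deployment):

kubectl -n <NAMESPACE> port-forward svc/mysql 3306:3306

# In a second terminal, connect through the forwarded port
mysql -h 127.0.0.1 -u <USER> -p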
Fixing database initialization problems
If youโve migrated from a newer ZenML version to an older version and see errors like Revision not found in your zenml-db-init logs, one way out is to drop the database and create a new one with the same name.
Log in to your MySQL instance:

mysql -h <HOST> -u <NAME> -p

Drop the database for the server:

drop database <NAME>;

Create the database with the same name:

create database <NAME>;
Restart the Kubernetes pods or the docker container running your server to trigger the database initialization again.
Kubeflow Orchestrator
Orchestrating your pipelines to run on Kubeflow.
The Kubeflow orchestrator is an orchestrator flavor provided by the ZenML kubeflow integration that uses Kubeflow Pipelines to run your pipelines.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Kubeflow orchestrator if:
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
you're willing to deploy and maintain Kubeflow Pipelines on your cluster.
How to deploy it
The Kubeflow orchestrator supports two different modes: Local and remote. In case you want to run the orchestrator on a local Kubernetes cluster running on your machine, there is no additional infrastructure setup necessary.
If you want to run your pipelines on a remote cluster instead, you'll need to set up a Kubernetes cluster and deploy Kubeflow Pipelines:
Have an existing AWS EKS cluster set up.
Make sure you have the AWS CLI set up.
Download and install kubectl and configure it to talk to your EKS cluster using the following command:

aws eks --region REGION update-kubeconfig --name CLUSTER_NAME
Install Kubeflow Pipelines onto your cluster.
(optional) set up an AWS Service Connector to grant ZenML Stack Components easy and secure access to the remote EKS cluster.
Have an existing GCP GKE cluster set up.
Make sure you have the Google Cloud CLI set up first.
Download and install kubectl and configure it to talk to your GKE cluster using the following command:

gcloud container clusters get-credentials CLUSTER_NAME
Install Kubeflow Pipelines onto your cluster.
(optional) set up a GCP Service Connector to grant ZenML Stack Components easy and secure access to the remote GKE cluster.
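Once Kubeflow Pipelines is reachable from your cluster, registering the orchestrator and adding it to a stack follows the usual flavor pattern; the names below are placeholders:

zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow

# Add the orchestrator to a stack alongside your other components
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set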
Deploy with Docker
Deploying ZenML in a Docker container.
The ZenML server container image is available at zenmldocker/zenml-server and can be used to deploy ZenML with a container management or orchestration tool like Docker and docker-compose, or a serverless platform like Cloud Run, Container Apps, and more! This guide walks you through the various configuration options that the ZenML server container expects as well as a few deployment use cases.
Try it out locally first
If you're just looking for a quick way to deploy the ZenML server using a container, without going through the hassle of interacting with a container management tool like Docker and manually configuring your container, you can use the ZenML CLI to do so. You only need to have Docker installed and running on your machine:
zenml up --docker
This command deploys a ZenML server locally in a Docker container, then connects your client to it. Similar to running plain zenml up, the server and the local ZenML client share the same SQLite database.
The rest of this guide is addressed to advanced users who are looking to manually deploy and manage a containerized ZenML server.
ZenML server configuration options
If you're planning on deploying a custom containerized ZenML server yourself, you probably need to configure some settings for it like the database it should use, the default user details, and more. The ZenML server container image uses sensible defaults, so you can simply start a container without worrying too much about the configuration. However, if you're looking to connect the ZenML server to an external MySQL database or secrets management service, to persist the internal SQLite database, or simply want to control other settings like the default account, you can do so by customizing the container's environment variables.
The following environment variables can be passed to the container: | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker | 370 |
Implement a custom integration
Creating an external integration and contributing to ZenML
One of the main goals of ZenML is to find some semblance of order in the ever-growing MLOps landscape. ZenML already provides numerous integrations into many popular tools, and allows you to come up with ways to implement your own stack component flavors in order to fill in any gaps that are remaining.
However, what if you want to make your extension of ZenML part of the main codebase, to share it with others? If you are such a person, e.g., a tooling provider in the ML/MLOps space, or just want to contribute a tooling integration to ZenML, this guide is intended for you.
Step 1: Plan out your integration
In the previous page, we looked at the categories and abstractions that core ZenML defines. In order to create a new integration into ZenML, you would need to first find the categories that your integration belongs to. The list of categories can be found here as well.
Note that one integration may belong to different categories: For example, the cloud integrations (AWS/GCP/Azure) contain container registries, artifact stores etc.
Step 2: Create individual stack component flavors
Each category selected above would correspond to a stack component type. You can now start developing individual stack component flavors for this type by following the detailed instructions on the respective pages.
Before you package your new components into an integration, you may want to use/test them as a regular custom flavor. For instance, if you are developing a custom orchestrator and your flavor class MyOrchestratorFlavor is defined in flavors/my_flavor.py, you can register it by using:
zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor | how-to | https://docs.zenml.io/v/docs/how-to/stack-deployment/implement-a-custom-integration | 365 |
๐ธSet up a project repository
Setting your team up for success with a project repository.
ZenML code typically lives in a git repository. Setting this repository up correctly can make a huge impact on collaboration and getting the maximum out of your ZenML deployment. This section walks users through some of the options available to create a project repository with ZenML.
Upgrade the version of the ZenML server
Learn how to upgrade your server to a new version of ZenML for the different deployment options.
The way to upgrade your ZenML server depends a lot on how you deployed it.
To upgrade your ZenML server that was deployed with the zenml deploy command to a newer version, you can follow the steps below.
In the config file, set zenmlserver_image_tag to the version that you want your ZenML server to be running.
Run the deploy command again with this config file:

zenml deploy --config=/PATH/TO/FILE
Any database schema updates are automatically handled by ZenML and unless mentioned otherwise, all of your data is migrated to the new version, intact.
To upgrade to a new version with docker, you have to delete the existing container and then run the new version of the zenml-server image.
Check that your data is persisted (either on persistent storage or on an external MySQL instance) before doing this.
Optionally also perform a backup before the upgrade.
Delete the existing ZenML container, for example like this:

# find your container ID
docker ps

# stop the container
docker stop <CONTAINER_ID>

# remove the container
docker rm <CONTAINER_ID>

Deploy the version of the zenml-server image that you want to use. Find all versions here:

docker run -it -d -p 8080:8080 --name <CONTAINER_NAME> zenmldocker/zenml-server:<VERSION>
To upgrade your ZenML server Helm release to a new version, follow the steps below:
Pull the latest version of the Helm chart from the ZenML GitHub repository, or a version of your choice, e.g.:
# If you haven't cloned the ZenML repository yet
git clone https://github.com/zenml-io/zenml.git
# Optional: checkout an explicit release tag
# git checkout 0.21.1
git pull
# Switch to the directory that hosts the helm chart
cd src/zenml/zen_server/deploy/helm/ | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/upgrade-the-version-of-the-zenml-server | 424 |
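From inside the chart directory, a typical upgrade command might look like the following; the release name and namespace are placeholders that must match your original installation, and --reuse-values keeps your existing configuration:

helm -n <NAMESPACE> upgrade <RELEASE_NAME> . --reuse-values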
e steps
from zenml.steps import StepContext, step
from zenml.environment import Environment


@step
def my_step(context: StepContext) -> Any:  # Old: `StepContext` class defined as arg
    env = Environment().step_environment
    output_uri = context.get_output_artifact_uri()
    step_name = env.step_name  # Old: Run info accessible via `StepEnvironment`
    ...

from zenml import get_step_context, step


@step
def my_step() -> Any:  # New: StepContext is no longer an argument of the step
    context = get_step_context()
    output_uri = context.get_output_artifact_uri()
    step_name = context.step_name  # New: StepContext now has ALL run/step info
    ...
Check out this page for more information on how to fetch run information inside your steps using get_step_context().
s/30267569827/locations/global/workloadIdentityPools/mypool/providers/myprovider",
  "subject_token_type": "urn:ietf:params:aws:token-type:aws4_request",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken",
  "token_url": "https://sts.googleapis.com/v1/token",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "http://169.254.169.254/latest/meta-data/placement/availability-zone",
    "url": "http://169.254.169.254/latest/meta-data/iam/security-credentials",
    "regional_cred_verification_url": "https://sts.{region}.amazonaws.com?Action=GetCallerIdentity&Version=2011-06-15"
  }
}
GCP OAuth 2.0 token | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 401 |
Evaluating reranking performance
Evaluate the performance of your reranking model.
We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML.
Evaluating Reranking Performance
The simplest first step in evaluating the reranking model is to compare the retrieval performance before and after reranking. You can use the same metrics we discussed in the evaluation section to assess the performance of the reranking model.
If you recall, we have a hand-crafted set of queries and relevant documents that we use to evaluate the performance of our retrieval system. We also have a set that was generated by LLMs. The actual retrieval test is implemented as follows:
import logging

from datasets import load_dataset


def perform_retrieval_evaluation(
    sample_size: int, use_reranking: bool
) -> float:
    """Helper function to perform the retrieval evaluation."""
    dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train")
    sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size))

    total_tests = len(sampled_dataset)
    failures = 0

    for item in sampled_dataset:
        generated_questions = item["generated_questions"]
        question = generated_questions[
            0
        ]  # Assuming only one question per item

        url_ending = item["filename"].split("/")[
            1
        ]  # Extract the URL ending from the filename

        # using the method above to query similar documents
        # we pass in whether we want to use reranking or not
        _, _, urls = query_similar_docs(question, url_ending, use_reranking)

        if all(url_ending not in url for url in urls):
            logging.error(
                f"Failed for question: {question}. Expected URL ending: {url_ending}. Got: {urls}"
            )
            failures += 1

    logging.info(f"Total tests: {total_tests}. Failures: {failures}")
    failure_rate = (failures / total_tests) * 100
    return round(failure_rate, 2)
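With this helper in place, comparing the two configurations is a matter of calling it twice on the same sample; the sample size below is arbitrary:

# Failure rates (in %) without and with reranking, on the same sample
baseline_failure_rate = perform_retrieval_evaluation(
    sample_size=50, use_reranking=False
)
reranked_failure_rate = perform_retrieval_evaluation(
    sample_size=50, use_reranking=True
)
print(f"Without reranking: {baseline_failure_rate}% failures")
print(f"With reranking: {reranked_failure_rate}% failures")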
โญIntroduction
Welcome to ZenML!
ZenML is an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines. By decoupling infrastructure from code, ZenML enables developers across your organization to collaborate more effectively as they develop to production.
ZenML enables MLOps infrastructure experts to define, deploy, and manage sophisticated production environments that are easy to share with colleagues.
ZenML Pro: ZenML Pro provides a control plane that allows you to deploy a managed ZenML instance and get access to exciting new features such as CI/CD, Model Control Plane, and RBAC.
Self-hosted deployment: ZenML can be deployed on any cloud provider and provides many Terraform-based utility functions to deploy other MLOps tools or even entire MLOps stacks:

# Deploy ZenML to any cloud
zenml deploy --provider aws
# Deploy MLOps tools and infrastructure to any cloud
zenml orchestrator deploy kfp --flavor kubeflow --provider gcp
# Deploy entire MLOps stacks at once
zenml stack deploy gcp-vertexai --provider gcp -o kubeflow ...
Standardization: With ZenML, you can standardize MLOps infrastructure and tooling across your organization. Simply register your staging and production environments as ZenML stacks and invite your colleagues to run ML workflows on them.

# Register MLOps tools and infrastructure
zenml orchestrator register kfp_orchestrator -f kubeflow
# Register your production environment
zenml stack register production --orchestrator kubeflow ...
# Make it available to your colleagues
zenml stack share production
Registering your environments as ZenML stacks also enables you to browse and explore them in a convenient user interface. Try it out at https://www.zenml.io/live-demo! | docs | https://docs.zenml.io/v/docs/ | 380 |
COUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com

Using the Azure Key Vault as a secrets store backend
The Azure Secrets Store uses the ZenML Azure Service Connector under the hood to authenticate with the Azure Key Vault API. This means that you can use any of the authentication methods supported by the Azure Service Connector to authenticate with the Azure Key Vault API.
Example configuration for the Azure Key Vault Secrets Store:
zenml:
  # ...

  # Secrets store settings. This is used to store centralized secrets.
  secretsStore:

    # Set to false to disable the secrets store.
    enabled: true

    # The type of the secrets store
    type: azure

    # Configuration for the Azure Key Vault secrets store
    azure:

      # The name of the Azure Key Vault. This must be set to point to the Azure
      # Key Vault instance that you want to use.
      key_vault_name:

      # The Azure Service Connector authentication method to use.
      authMethod: service-principal

      # The Azure Service Connector configuration.
      authConfig:

        # The Azure application service principal credentials to use to
        # authenticate with the Azure Key Vault API.
        client_id: <your Azure client ID>
        client_secret: <your Azure client secret>
        tenant_id: <your Azure tenant ID>
Using the HashiCorp Vault as a secrets store backend
To use the HashiCorp Vault service as a Secrets Store back-end, it must be configured in the Helm values:
zenml:
  # ...

  # Secrets store settings. This is used to store centralized secrets.
  secretsStore:

    # Set to false to disable the secrets store.
    enabled: true

    # The type of the secrets store
    type: hashicorp

    # Configuration for the HashiCorp Vault secrets store
    hashicorp:

      # The url of the HashiCorp Vault server to use
      vault_addr: https://vault.example.com

      # The token used to authenticate with the Vault server
      vault_token: <your Vault token>

      # The Vault Enterprise namespace. Not required for Vault OSS.
      vault_namespace: <your Vault namespace>
Using a custom secrets store backend implementation | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm | 419 |
ifact): # rather than pd.DataFrame
pass
Example

The following shows an example of how unmaterialized artifacts can be used in the steps of a pipeline. The pipeline we define will look like this:
s1 -> s3
s2 -> s4
from typing_extensions import Annotated  # or `from typing import Annotated` on Python 3.9+
from typing import Dict, List, Tuple

from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact
from zenml import pipeline, step


@step
def step_1() -> Tuple[
    Annotated[Dict[str, str], "dict_"],
    Annotated[List[str], "list_"],
]:
    return {"some": "data"}, []


@step
def step_2() -> Tuple[
    Annotated[Dict[str, str], "dict_"],
    Annotated[List[str], "list_"],
]:
    return {"some": "data"}, []


@step
def step_3(dict_: Dict, list_: List) -> None:
    assert isinstance(dict_, dict)
    assert isinstance(list_, list)


@step
def step_4(
    dict_: UnmaterializedArtifact,
    list_: UnmaterializedArtifact,
) -> None:
    print(dict_.uri)
    print(list_.uri)


@pipeline
def example_pipeline():
    step_3(*step_1())
    step_4(*step_2())


example_pipeline()
Interaction with custom artifact stores
When creating a custom artifact store, you may encounter a situation where the default materializers do not function properly. Specifically, the self.artifact_store.open method used by these materializers may not be compatible with your custom store.
In this case, you can create a modified version of the failing materializer by copying it and modifying it to copy the artifact to a local path, then opening it from there. For example, consider the following implementation of a custom PandasMaterializer that works with a custom artifact store. In this implementation, we copy the artifact to a local path because we want to use the pandas.read_csv method to read it. If we were to use the self.artifact_store.open method instead, we would not need to make this copy. | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types | 453 |
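As a rough sketch of that pattern, the load side of such a materializer could look like the following. The file name, the fileio.copy usage, and the class details are illustrative and may need adapting to your ZenML version and artifact layout:

import os
import tempfile

import pandas as pd

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyPandasMaterializer(BaseMaterializer):
    """Reads CSV artifacts by copying them to a local path first."""

    ASSOCIATED_TYPES = (pd.DataFrame,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type):
        with tempfile.TemporaryDirectory() as temp_dir:
            local_path = os.path.join(temp_dir, "df.csv")
            # Copy the artifact from the custom store to a local file,
            # then read it with pandas.read_csv
            fileio.copy(os.path.join(self.uri, "df.csv"), local_path)
            return pd.read_csv(local_path)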
Local Image Builder
Building container images locally.
The local image builder is an image builder flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
ZenML uses the official Docker Python library to build and push your images. This library loads its authentication credentials to push images from the default config location: $HOME/.docker/config.json. If your Docker configuration is stored in a different directory, you can use the environment variable DOCKER_CONFIG to override this behavior:
export DOCKER_CONFIG=/path/to/config_dir
The directory that you specify here must contain your Docker configuration in a file called config.json.
When to use it
You should use the local image builder if:
you're able to install and use Docker on your client machine.
you want to use remote components that require containerization without the additional hassle of configuring infrastructure for an additional component.
How to deploy it
The local image builder comes with ZenML and works without any additional setup.
How to use it
To use the Local image builder, we need:
Docker installed and running.
The Docker client authenticated to push to the container registry that you intend to use in the same stack.
We can then register the image builder and use it to create a new stack:
zenml image-builder register <NAME> --flavor=local
# Register and activate a stack with the new image builder
zenml stack register <STACK_NAME> -i <NAME> ... --set
For more information and a full list of configurable attributes of the local image builder, check out the SDK Docs .
ctivated by installing the respective integration:

| Integration | Materializer | Handled Data Types | Storage Format |
|---|---|---|---|
| bentoml | BentoMaterializer | bentoml.Bento | .bento |
| deepchecks | DeepchecksResultMaterializer | deepchecks.CheckResult, deepchecks.SuiteResult | .json |
| evidently | EvidentlyProfileMaterializer | evidently.Profile | .json |
| great_expectations | GreatExpectationsMaterializer | great_expectations.ExpectationSuite, great_expectations.CheckpointResult | .json |
| huggingface | HFDatasetMaterializer | datasets.Dataset, datasets.DatasetDict | Directory |
| huggingface | HFPTModelMaterializer | transformers.PreTrainedModel | Directory |
| huggingface | HFTFModelMaterializer | transformers.TFPreTrainedModel | Directory |
| huggingface | HFTokenizerMaterializer | transformers.PreTrainedTokenizerBase | Directory |
| lightgbm | LightGBMBoosterMaterializer | lgbm.Booster | .txt |
| lightgbm | LightGBMDatasetMaterializer | lgbm.Dataset | .binary |
| neural_prophet | NeuralProphetMaterializer | NeuralProphet | .pt |
| pillow | PillowImageMaterializer | Pillow.Image | .PNG |
| polars | PolarsMaterializer | pl.DataFrame, pl.Series | .parquet |
| pycaret | PyCaretMaterializer | Any sklearn, xgboost, lightgbm or catboost model | .pkl |
| pytorch | PyTorchDataLoaderMaterializer | torch.Dataset, torch.DataLoader | .pt |
| pytorch | PyTorchModuleMaterializer | torch.Module | .pt |
| scipy | SparseMaterializer | scipy.spmatrix | .npz |
| spark | SparkDataFrameMaterializer | pyspark.DataFrame | .parquet |
| spark | SparkModelMaterializer | pyspark.Transformer, pyspark.Estimator | |
| tensorflow | KerasMaterializer | tf.keras.Model | Directory |
| tensorflow | TensorflowDatasetMaterializer | tf.Dataset | Directory |
| whylogs | WhylogsMaterializer | whylogs.DatasetProfileView | .pb |
| xgboost | XgboostBoosterMaterializer | xgb.Booster | .json |
| xgboost | XgboostDMatrixMaterializer | xgb.DMatrix | .binary |
If you are running pipelines with a Docker-based orchestrator, you need to specify the corresponding integration as required_integrations in the DockerSettings of your pipeline in order to have the integration materializer available inside your Docker container. See the pipeline configuration documentation for more information. | how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/handle-custom-data-types | 425 |
ure Container Registry to the remote ACR registry.

To set up the Azure Container Registry to authenticate to Azure and access an ACR registry, it is recommended to leverage the many features provided by the Azure Service Connector such as auto-configuration, local login, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.
If you don't already have an Azure Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure an Azure Service Connector that can be used to access a ACR registry or even more than one type of Azure resource:
zenml service-connector register --type azure -i
A non-interactive CLI example that uses Azure Service Principal credentials to configure an Azure Service Connector targeting a single ACR registry is:
zenml service-connector register <CONNECTOR_NAME> --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type docker-registry --resource-id <REGISTRY_URI>
Example Command Output
$ zenml service-connector register azure-demo --type azure --auth-method service-principal --tenant_id=a79f3633-8f45-4a74-a42e-68871c17b7fb --client_id=8926254a-8c3f-430a-a2fd-bdab234d491e --client_secret=AzureSuperSecret --resource-type docker-registry --resource-id demozenmlcontainerregistry.azurecr.io
โ ธ Registering service connector 'azure-demo'...
Successfully registered service connector `azure-demo` with access to the following resources:
โโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ RESOURCE TYPE โ RESOURCE NAMES โ
โ โโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ ๐ณ docker-registry โ demozenmlcontainerregistry.azurecr.io โ
โโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | stack-components | https://docs.zenml.io/stack-components/container-registries/azure | 476 |
for GCS, Docker and Kubernetes Python clients. It also allows for the configuration of local Docker and Kubernetes CLIs.
The GCP Service Connector is part of the GCP ZenML integration. You can either
install the entire integration or use a pypi extra to install it independently
of the integration:
pip install "zenml[connectors-gcp]" installs only prerequisites for the GCP
Service Connector Type
zenml integration install gcp installs the entire GCP ZenML integration
It is not required to install and set up the GCP CLI on your local machine to
use the GCP Service Connector to link Stack Components to GCP resources and
services. However, it is recommended to do so if you are looking for a quick
setup that includes using the auto-configuration Service Connector features.
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
Fetching details about the GCP kubernetes-cluster resource type (i.e. the GKE cluster):
zenml service-connector describe-type gcp --resource-type kubernetes-cluster
Example Command Output
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ ๐ GCP GKE Kubernetes cluster (resource type: kubernetes-cluster) โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
Authentication methods: implicit, user-account, service-account, oauth2-token,
impersonation
Supports resource instances: True
Authentication methods:
๐ implicit
๐ user-account
๐ service-account
๐ oauth2-token
๐ impersonation
Allows Stack Components to access a GKE cluster as a standard Kubernetes
cluster resource. When used by Stack Components, they are provided a
pre-authenticated Python Kubernetes client instance.
The configured credentials must have at least the following GCP permissions
associated with the GKE clusters that it can access:
container.clusters.list
container.clusters.get
In addition to the above permissions, the credentials should include permissions | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 466 |
ine/reference/commandline/login/#credentials-store

The 'aws-us-east-1' Docker Service Connector was used to successfully configure the local Docker/OCI container registry client/SDK.
For more information and a full list of configurable attributes of the AWS container registry, check out the SDK Docs.
p a custom alerter as described on the Feast page,and where can I find the 'How to use it?' guide?". Expected URL ending: feature-stores.
Got: ['https://docs.zenml.io/stacks-and-components/component-guide/alerters/custom',
'https://docs.zenml.io/v/docs/stacks-and-components/component-guide/alerters/custom',
'https://docs.zenml.io/v/docs/reference/how-do-i', 'https://docs.zenml.io/stacks-and-components/component-guide/alerters',
'https://docs.zenml.io/stacks-and-components/component-guide/alerters/slack']
Loading default flashrank model for language en
Default Model: ms-marco-MiniLM-L-12-v2
Loading FlashRankRanker model ms-marco-MiniLM-L-12-v2
Loading model FlashRank model ms-marco-MiniLM-L-12-v2...
Running pairwise ranking..
Step retrieval_evaluation_full_with_reranking has finished in 4m20s.
We can see here a specific example of a failure in the reranking evaluation. It's quite a good one, because the question asked was actually an anomaly: the LLM generated two questions and included its meta-discussion of them. Obviously this is not a representative question for the dataset, and if we saw many of these we might want to take some time to understand both why the LLM is generating such questions and how we can filter them out.
Visualising our reranking performance
Since ZenML can display visualizations in its dashboard, we can showcase the results of our experiments in a visual format. For example, we can plot the failure rates of the retrieval system with and without reranking to see the impact of reranking on the performance.
Our documentation explains how to set up your outputs so that they appear as visualizations in the ZenML dashboard. You can find more information here. There are lots of options, but we've chosen to plot our failure rates as a bar chart and export them as a PIL.Image object. We also plotted the other evaluation scores so as to get a quick global overview of our performance. | user-guide | https://docs.zenml.io/user-guide/llmops-guide/reranking/evaluating-reranking-performance | 446 |
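As one possible sketch of such a step, the following renders the two failure rates as a bar chart and returns it as a PIL.Image, which ZenML can materialize and display in the dashboard; all names and the step signature are illustrative:

import io

import matplotlib.pyplot as plt
from PIL import Image

from zenml import step


@step
def visualize_failure_rates(
    failure_rate_without_reranking: float,
    failure_rate_with_reranking: float,
) -> Image.Image:
    """Plot retrieval failure rates with and without reranking."""
    fig, ax = plt.subplots()
    ax.bar(
        ["without reranking", "with reranking"],
        [failure_rate_without_reranking, failure_rate_with_reranking],
    )
    ax.set_ylabel("failure rate (%)")
    ax.set_title("Retrieval failure rate")

    # Export the figure as a PIL image so it shows up in the dashboard
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf)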
d generate results in the form of a Report object.

One of Evidently's notable characteristics is that it only requires datasets as input. Even when running model performance comparison analyses, no model needs to be present. However, that does mean that the input data needs to include additional target and prediction columns for some profiling reports, and you have to include additional information about the dataset columns in the form of column mappings. Depending on how your data is structured, you may also need to include additional steps in your pipeline before the data validation step to insert the additional target and prediction columns into your data. This may also require interacting with one or more models.
There are three ways you can use Evidently to generate data reports in your ZenML pipelines that allow different levels of flexibility:
instantiate, configure and insert the standard Evidently report step shipped with ZenML into your pipelines. This is the easiest way and the recommended approach.
call the data validation methods provided by the Evidently Data Validator in your custom step implementation. This method allows for more flexibility concerning what can happen in the pipeline step.
use the Evidently library directly in your custom step implementation. This gives you complete freedom in how you are using Evidently's features.
You can visualize Evidently reports in Jupyter notebooks or view them directly in the ZenML dashboard by clicking on the respective artifact in the pipeline run DAG.
The Evidently Report step
ZenML wraps the Evidently data profiling functionality in the form of a standard Evidently report pipeline step that you can simply instantiate and insert in your pipeline. Here you can see how instantiating and configuring the standard Evidently report step can be done:
from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
from zenml.integrations.evidently.steps import (
    EvidentlyColumnMapping,
    evidently_report_step,
)
@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    my_step()

# Or configure the pipeline's options
my_pipeline = my_pipeline.with_options(
    settings={"docker": docker_settings}
)
Configuring them on a step gives you more fine-grained control and enables you to build separate specialized Docker images for different steps of your pipelines:
docker_settings = DockerSettings()

# Either add it to the decorator
@step(settings={"docker": docker_settings})
def my_step() -> None:
    pass

# Or configure the step options
my_step = my_step.with_options(
    settings={"docker": docker_settings}
)
Using a YAML configuration file as described here:
settings:
  docker:
    ...
steps:
  step_name:
    settings:
      docker:
        ...
Check out this page for more information on the hierarchy and precedence of the various ways in which you can supply the settings.
Specifying Docker build options
You can specify build options that get passed to the build method of the image builder. For the default local image builder, these options get passed to the docker build command.
docker_settings = DockerSettings(build_config={"build_options": {...}})

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
If you're running your pipelines on MacOS with ARM architecture, the local Docker caching does not work unless you specify the target platform of the image:
docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}})

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
Using a custom parent image
By default, ZenML performs all the steps described above on top of the official ZenML image for the Python and ZenML version in the active Python environment. To have more control over the entire environment used to execute your pipelines, you can either specify a custom pre-built parent image or a Dockerfile that ZenML uses to build a parent image for you. | how-to | https://docs.zenml.io/v/docs/how-to/customize-docker-builds/docker-settings-on-a-pipeline | 379 |
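For instance, pointing ZenML at a pre-built parent image is a one-liner in the settings. The image name below is a placeholder; DockerSettings also accepts a dockerfile argument if you would rather have ZenML build the parent image for you:

docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag")

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...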
kubeflow_settings = KubeflowOrchestratorSettings(
    client_args={},
    user_namespace="my_namespace",
    pod_settings={
        "affinity": {...},
        "tolerations": [...],
    },
)

@pipeline(
    settings={
        "orchestrator.kubeflow": kubeflow_settings
    }
)
def my_pipeline():
    ...
This allows specifying client arguments, user namespace, pod affinity/tolerations, and more.
Multi-Tenancy Deployments
For multi-tenant Kubeflow deployments, specify the kubeflow_hostname ending in /pipeline when registering the orchestrator:
zenml orchestrator register <NAME> \
--flavor=kubeflow \
--kubeflow_hostname=<KUBEFLOW_HOSTNAME> # e.g. https://mykubeflow.example.com/pipeline
And provide the namespace, username and password in the orchestrator settings:
kubeflow_settings = KubeflowOrchestratorSettings(
    client_username="admin",
    client_password="abc123",
    user_namespace="namespace_name",
)

@pipeline(
    settings={
        "orchestrator.kubeflow": kubeflow_settings
    }
)
def my_pipeline():
    ...
For more advanced options and details, refer to the full Kubeflow Orchestrator documentation.
iple> โ โ โ default โ 40m58s โ โโ โ โ โ โ ๐ฆ blob-container โ โ โ โ โ โ
โ โ โ โ โ ๐ kubernetes-cluster โ โ โ โ โ โ
โ โ โ โ โ ๐ณ docker-registry โ โ โ โ โ โ
โโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโทโโโโโโโโโทโโโโโโโโโโทโโโโโโโโโโโโโทโโโโโโโโโ
Auto-configuration
The Azure Service Connector allows auto-discovering and fetching credentials and configuration set up by the Azure CLI on your local host.
The Azure service connector auto-configuration comes with two limitations:
it can only pick up temporary Azure access tokens and therefore cannot be used for long-term authentication scenarios
it doesn't support authenticating to the Azure blob storage service. The Azure service principal authentication method can be used instead.
For an auto-configuration example, please refer to the section about Azure access tokens.
Local client provisioning
The local Azure CLI, Kubernetes kubectl CLI and the Docker CLI can be configured with credentials extracted from or generated by a compatible Azure Service Connector.
Note that the Azure local CLI can only be configured with credentials issued by the Azure Service Connector if the connector is configured with the service principal authentication method.
The following shows an example of configuring the local Kubernetes CLI to access an AKS cluster reachable through an Azure Service Connector:
zenml service-connector list --name azure-service-principal
Example Command Output | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector | 412 |
and see how they react!
Conclusion and next steps

The production guide has hopefully left you with an end-to-end MLOps project, powered by a ZenML server connected to your cloud infrastructure. You are now ready to dive deep into writing your own pipelines and stacks. If you are looking to learn more advanced concepts, the how-to section is for you. Until then, we wish you the best of luck chasing your MLOps dreams!
โโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจโ b57f5f5c-0378-434c-8d50-34b492486f30 โ gcp-multi โ ๐ต gcp โ ๐ kubernetes-cluster โ zenml-test-cluster โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ d6fc6004-eb76-4fd7-8fa1-ec600cced680 โ azure-multi โ ๐ฆ azure โ ๐ kubernetes-cluster โ demo-zenml-demos/demo-zenml-terraform-cluster โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
After having set up or decided on a Service Connector to use to connect to the target Kubernetes cluster where Seldon Core is installed, you can register the Seldon Core Model Deployer as follows:
# Register the Seldon Core Model Deployer
zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=seldon \
--kubernetes_namespace=<KUBERNETES-NAMESPACE> \
--base_url=http://$INGRESS_HOST
# Connect the Seldon Core Model Deployer to the target cluster via a Service Connector
zenml model-deployer connect <MODEL_DEPLOYER_NAME> -i
A non-interactive version that connects the Seldon Core Model Deployer to a target Kubernetes cluster through a Service Connector:
zenml model-deployer connect <MODEL_DEPLOYER_NAME> --connector <CONNECTOR_ID> --resource-id <CLUSTER_NAME>
Example Command Output
$ zenml model-deployer connect seldon-test --connector gcp-multi --resource-id zenml-test-cluster
Successfully connected model deployer `seldon-test` to the following resources:
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโ
โ CONNECTOR ID โ CONNECTOR NAME โ CONNECTOR TYPE โ RESOURCE TYPE โ RESOURCE NAMES โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโจ | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon | 592 |
to your stack:

zenml integration install azure -y

Having trouble with this command? You can use poetry or pip to install the requirements of any ZenML integration directly. In order to obtain the exact requirements of the Azure integration you can use zenml integration requirements azure.
The only configuration parameter mandatory for registering an Azure Artifact Store is the root path URI, which needs to point to an Azure Blob Storage container and take the form az://container-name or abfs://container-name. Please read the Azure Blob Storage documentation on how to provision an Azure Blob Storage container.
With the URI to your Azure Blob Storage container known, registering an Azure Artifact Store can be done as follows:
# Register the Azure artifact store
zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name
For more information, read the dedicated Azure artifact store flavor guide.
You can create a remote artifact store in pretty much any environment, including other cloud providers using a cloud-agnostic artifact storage such as Minio.
It is also relatively simple to create a custom stack component flavor for your use case.
Having trouble with setting up infrastructure? Join the ZenML community and ask for help!
Configuring permissions with your first service connector
While you can go ahead and run your pipeline on your stack if your local client is configured to access it, it is best practice to use a service connector for this purpose. Service connectors are quite a complicated concept (We have a whole docs section on them) - but we're going to be starting with a very basic approach. | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/remote-storage | 313 |
โโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโจโ b57f5f5c-0378-434c-8d50-34b492486f30 โ gcp-multi โ ๐ต gcp โ ๐ kubernetes-cluster โ zenml-test-cluster โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโ
A similar experience is available when you configure the Seldon Core Model Deployer through the ZenML dashboard:
Managing Seldon Core Authentication
The Seldon Core Model Deployer requires access to the persistent storage where models are located. In most cases, you will use the Seldon Core model deployer to serve models that are trained through ZenML pipelines and stored in the ZenML Artifact Store, which implies that the Seldon Core model deployer needs to access the Artifact Store.
If Seldon Core is already running in the same cloud as the Artifact Store (e.g. S3 and an EKS cluster for AWS, or GCS and a GKE cluster for GCP), there are ways of configuring cloud workloads to have implicit access to other cloud resources like persistent storage without requiring explicit credentials. However, if Seldon Core is running in a different cloud, or on-prem, or if implicit in-cloud workload authentication is not enabled, then you need to configure explicit credentials for the Artifact Store to allow other components like the Seldon Core model deployer to authenticate to it. Every cloud Artifact Store flavor supports some way of configuring explicit credentials and this is documented for each individual flavor in the Artifact Store documentation. | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/seldon | 438 |
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโ
โ CONNECTOR ID                         โ CONNECTOR NAME   โ CONNECTOR TYPE โ RESOURCE TYPE      โ RESOURCE NAMES    โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโจ
โ ffc01795-0c0a-4f1d-af80-b84aceabcfcf โ gcp-implicit โ ๐ต gcp โ ๐ณ docker-registry โ gcr.io/zenml-core โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโจ
โ 561b776a-af8b-491c-a4ed-14349b440f30 โ gcp-zenml-core โ ๐ต gcp โ ๐ณ docker-registry โ gcr.io/zenml-core โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโ
After having set up or decided on a GCP Service Connector to use to connect to the target GCR registry, you can register the GCP Container Registry as follows:
# Register the GCP container registry and reference the target GCR registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f gcp \
--uri=<REGISTRY_URL>
# Connect the GCP container registry to the target GCR registry via a GCP Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the GCP Container Registry to a target GCR registry through a GCP Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Linking the GCP Container Registry to a Service Connector means that your local Docker client is no longer authenticated to access the remote registry. If you need to manually interact with the remote registry via the Docker CLI, you can use the local login Service Connector feature to temporarily authenticate your local Docker client to the remote registry:
zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry
Example Command Output
$ zenml service-connector login gcp-zenml-core --resource-type docker-registry | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/gcp | 556 |
Try it out at https://www.zenml.io/live-demo!

Automated Deployments: With ZenML, you no longer need to upload custom Docker images to the cloud whenever you want to deploy a new model to production. Simply define your ML workflow as a ZenML pipeline, let ZenML handle the containerization, and have your model automatically deployed to a highly scalable Kubernetes deployment service like Seldon.

from zenml import pipeline
from zenml.integrations.seldon.steps import seldon_model_deployer_step
from my_organization.steps import data_loader_step, model_trainer_step


@pipeline
def my_pipeline():
    data = data_loader_step()
    model = model_trainer_step(data)
    seldon_model_deployer_step(model)
๐ Learn More
Ready to manage your ML lifecycles end-to-end with ZenML? Here is a collection of pages you can take a look at next:
Get started with ZenML and learn how to build your first pipeline and stack.
Discover advanced ZenML features like config management and containerization.
Explore ZenML through practical use-case examples.
    predictions = pd.Series(model.predict(data))
    return predictions

However, this approach has the downside that if the step is cached, it could lead to unexpected results. You could simply disable the cache in the above step or the corresponding pipeline. Another way of achieving this would be to resolve the artifact at the pipeline level:
from typing_extensions import Annotated
from zenml import get_pipeline_context, pipeline, step, Model
from zenml.enums import ModelStages
import pandas as pd
from sklearn.base import ClassifierMixin


@step
def predict(
    model: ClassifierMixin,
    data: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    predictions = pd.Series(model.predict(data))
    return predictions


@pipeline(
    model=Model(
        name="iris_classifier",
        # Using the production stage
        version=ModelStages.PRODUCTION,
    ),
)
def do_predictions():
    # model name and version are derived from pipeline context
    model = get_pipeline_context().model
    inference_data = load_data()
    predict(
        # Here, we load in the `trained_model` from a trainer step
        model=model.get_model_artifact("trained_model"),
        data=inference_data,
    )


if __name__ == "__main__":
    do_predictions()
Ultimately, both approaches are fine. You should decide which one to use based on your own preferences.
# Fetch by name alone - uses the latest version of this artifact
train_data = client.get_artifact_version(name="iris_training_dataset")

# For test, we want a particular version
test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023")

# We can now send these directly into ZenML steps
sklearn_classifier = model_trainer(train_data)
model_evaluator(test_data, sklearn_classifier)
Pattern 2: Artifact exchange between pipelines through a Model
While passing around artifacts with IDs or names is very useful, it is often desirable to have the ZenML Model be the point of reference instead.
On the other side, the do_predictions pipeline simply picks up the latest promoted model and runs batch inference on it. It need not know of the IDs or names of any of the artifacts produced by the training pipeline's many runs. This way these two pipelines can independently be run, but can rely on each other's output.
In code, this is very simple. Once the pipelines are configured to use a particular model, we can use get_step_context to fetch the configured model within a step directly. Assuming there is a predict step in the do_predictions pipeline, we can fetch the production model like so:
import pandas as pd
from typing_extensions import Annotated

from zenml import step, get_step_context


# IMPORTANT: Cache needs to be disabled to avoid unexpected behavior
@step(enable_cache=False)
def predict(
    data: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    # model name and version are derived from pipeline context
    model = get_step_context().model

    # Fetch the model directly from the model control plane
    model = model.get_model_artifact("trained_model")

    # Make predictions
    predictions = pd.Series(model.predict(data))
    return predictions
@step(settings={"resources": ResourceSettings(...)})
def my_step() -> None:
    ...

Deprecating the requirements and required_integrations parameters

Users used to be able to pass requirements and required_integrations directly in the @pipeline decorator, but now need to pass them through settings:

How to migrate: Simply remove the parameters and use the DockerSettings instead

from zenml.config import DockerSettings

@step(settings={"docker": DockerSettings(requirements=[...], required_integrations=[...])})
def my_step() -> None:
    ...
Read more here.
A new pipeline intermediate representation
All the aforementioned configurations as well as additional information required to run a ZenML pipeline are now combined into an intermediate representation called PipelineDeployment. Instead of the user-facing BaseStep and BasePipeline classes, all the ZenML orchestrators and step operators now use this intermediate representation to run pipelines and steps.
How to migrate: If you have written a custom orchestrator or step operator, then you should see the new base abstractions (seen in the links). You can adjust your stack component implementations accordingly.
PipelineSpec now uniquely defines pipelines
Once a pipeline has been executed, it is represented by a PipelineSpec that uniquely identifies it. Therefore, users are no longer able to edit a pipeline once it has been run once. There are now three options to get around this:
Pipeline runs can be created without being associated with a pipeline explicitly: We call these unlisted runs. Read more about unlisted runs here.
Pipelines can be deleted and created again.
Pipelines can be given unique names each time they are run to uniquely identify them.
How to migrate: No code changes, but rather keep in mind the behavior (e.g. in a notebook setting) when quickly iterating over pipelines as experiments.
New post-execution workflow
The Post-execution workflow has changed as follows: | reference | https://docs.zenml.io/reference/migration-guide/migration-zero-twenty | 370 |
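The excerpt ends before the list of changes, but one documented change in this release is that the get_pipeline / get_pipelines helpers moved out of the Repository class into the zenml.post_execution module. A minimal sketch (the run ordering is an assumption):

from zenml.post_execution import get_pipeline

# Fetch a pipeline by name and inspect its latest run
pipeline = get_pipeline("my_pipeline")
last_run = pipeline.runs[-1]  # assumption: runs are ordered oldest to newest
print(last_run.status)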
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโ
โ CONNECTOR ID                         โ CONNECTOR NAME โ CONNECTOR TYPE โ RESOURCE TYPE  โ RESOURCE NAMES โ
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโจ
โ bfdb657d-d808-47e7-9974-9ba6e4919d83 โ gcp-generic โ ๐ต gcp โ ๐ต gcp-generic โ zenml-core โ
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโ
As a final step, you can use the GCP Image Builder in a ZenML Stack:
# Register and set a stack with the new image builder
zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
When you register the GCP Image Builder, you can generate a GCP Service Account Key, save it to a local file and then reference it in the Image Builder configuration.
This method has the advantage that you don't need to install and configure the GCP CLI on your host, but it's still not as secure as using a GCP Service Connector and the stack component configuration is not portable to other hosts.
For this method, you need to create a user-managed GCP service account, and grant it privileges to access the Cloud Build API and to run Cloud Builder jobs (e.g. the Cloud Build Editor IAM role).
With the service account key downloaded to a local file, you can register the GCP Image Builder as follows:
zenml image-builder register <IMAGE_BUILDER_NAME> \
--flavor=gcp \
--project=<GCP_PROJECT_ID> \
--service_account_path=<PATH_TO_SERVICE_ACCOUNT_KEY> \
--cloud_builder_image=<BUILDER_IMAGE_NAME> \
--network=<DOCKER_NETWORK> \
--build_timeout=<BUILD_TIMEOUT_IN_SECONDS>
# Register and set a stack with the new image builder
zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
Caveats | stack-components | https://docs.zenml.io/v/docs/stack-components/image-builders/gcp | 496 |
artifact-store-flavor>    <- Config class and flavor
โโโ __init__.py           <- Integration class
3. Define the name of your integration in constants
In zenml/integrations/constants.py, add:
EXAMPLE_INTEGRATION = "<name-of-integration>"
This will be the name of the integration when you run:
zenml integration install <name-of-integration>
4. Create the integration class __init__.py
In src/zenml/integrations/<YOUR_INTEGRATION>/__init__.py you must now create a new class, which is a subclass of the Integration class, set some important attributes (NAME and REQUIREMENTS), and overwrite the flavors class method.
from typing import List, Type

from zenml.integrations.constants import <EXAMPLE_INTEGRATION>
from zenml.integrations.integration import Integration
from zenml.stack import Flavor

# This is the flavor that will be used when registering this stack component
# `zenml <type-of-stack-component> register ... -f example-orchestrator-flavor`
EXAMPLE_ORCHESTRATOR_FLAVOR = <"example-orchestrator-flavor">


# Create a Subclass of the Integration Class
class ExampleIntegration(Integration):
    """Definition of Example Integration for ZenML."""

    NAME = <EXAMPLE_INTEGRATION>
    REQUIREMENTS = ["<INSERT PYTHON REQUIREMENTS HERE>"]

    @classmethod
    def flavors(cls) -> List[Type[Flavor]]:
        """Declare the stack component flavors for the <EXAMPLE> integration."""
        from zenml.integrations.<example_flavor> import <ExampleFlavor>

        return [<ExampleFlavor>]


ExampleIntegration.check_installation()  # this checks if the requirements are installed
Have a look at the MLflow Integration as an example for how it is done.
5. Import in all the right places
The Integration itself must be imported within src/zenml/integrations/__init__.py.
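For example (a sketch; the module and class names follow the hypothetical integration above):

# src/zenml/integrations/__init__.py
from zenml.integrations.example_integration import ExampleIntegration  # noqa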
Step 4: Create a PR and celebrate ๐
You can now create a PR to ZenML and wait for the core maintainers to take a look. Thank you so much for your contribution to the codebase, rock on! ๐
        topdown: bool = True,
        onerror: Optional[Callable[..., None]] = None,
    ) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]:
        """Return an iterator that walks the contents of the given directory."""


class BaseArtifactStoreFlavor(Flavor):
    """Base class for artifact store flavors."""

    @property
    @abstractmethod
    def name(self) -> str:
        """Returns the name of the flavor."""

    @property
    def type(self) -> StackComponentType:
        """Returns the flavor type."""
        return StackComponentType.ARTIFACT_STORE

    @property
    def config_class(self) -> Type[StackComponentConfig]:
        """Config class."""
        return BaseArtifactStoreConfig

    @property
    @abstractmethod
    def implementation_class(self) -> Type["BaseArtifactStore"]:
        """Implementation class."""
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the SDK docs.
The effect on the zenml.io.fileio
If you created an instance of an artifact store, added it to your stack, and activated the stack, it will create a filesystem each time you run a ZenML pipeline and make it available to the zenml.io.fileio module.
This means that when you utilize a method such as fileio.open(...) with a file path that starts with one of the SUPPORTED_SCHEMES within your steps or materializers, it will be able to use the open(...) method that you defined within your artifact store.
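For illustration, a minimal step that writes through the active artifact store (the my:// scheme is a hypothetical entry in SUPPORTED_SCHEMES):

from zenml import step
from zenml.io import fileio


@step
def write_to_artifact_store() -> None:
    # `fileio.open` dispatches to the active artifact store's `open`
    # implementation whenever the path uses one of its SUPPORTED_SCHEMES.
    with fileio.open("my://bucket/some-folder/hello.txt", "w") as f:
        f.write("Hello from a ZenML step!")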
Build your own custom artifact store
If you want to implement your own custom Artifact Store, you can follow the following steps:
Create a class that inherits from the BaseArtifactStore class and implements the abstract methods.
Create a class that inherits from the BaseArtifactStoreConfig class and fill in the SUPPORTED_SCHEMES based on your file system.
Bring both of these classes together by inheriting from the BaseArtifactStoreFlavor class. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/custom | 398 |
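A minimal sketch tying the three classes together (the my:// scheme and all class names are illustrative; only one abstract method is shown):

from typing import Any, ClassVar, Set, Type

from zenml.artifact_stores import (
    BaseArtifactStore,
    BaseArtifactStoreConfig,
    BaseArtifactStoreFlavor,
)


class MyArtifactStoreConfig(BaseArtifactStoreConfig):
    """Config for a hypothetical `my://` scheme."""

    SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"my://"}


class MyArtifactStore(BaseArtifactStore):
    """Artifact store sketch: only `open` is shown; the remaining
    abstract methods (copyfile, exists, glob, ...) must be implemented
    in the same way."""

    def open(self, name, mode: str = "r") -> Any:
        # Translate the `my://` path to whatever your storage backend
        # expects and return a file-like object.
        raise NotImplementedError

    # ... implement the other abstract methods here ...


class MyArtifactStoreFlavor(BaseArtifactStoreFlavor):
    """Flavor that glues the config and implementation together."""

    @property
    def name(self) -> str:
        return "my_artifact_store"

    @property
    def config_class(self) -> Type[BaseArtifactStoreConfig]:
        return MyArtifactStoreConfig

    @property
    def implementation_class(self) -> Type[BaseArtifactStore]:
        return MyArtifactStore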
โ ID                โ -b9d478e1bcfc โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ NAME โ aws-iam-role โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ TYPE โ ๐ถ aws โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ AUTH METHOD โ iam-role โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ RESOURCE TYPES โ ๐ถ aws-generic, ๐ฆ s3-bucket, ๐ kubernetes-cluster, ๐ณ docker-registry โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ RESOURCE NAME โ <multiple> โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SECRET ID โ 87795fdd-b70e-4895-b0dd-8bca5fd4d10e โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SESSION DURATION โ 3600s โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ EXPIRES IN โ N/A โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ OWNER โ default โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ WORKSPACE โ default โ | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 377 |
Develop a Custom Alerter
Learning how to develop a custom alerter.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base Abstraction
The base abstraction for alerters is very basic, as it only defines two abstract methods that subclasses should implement:
post() takes a string, posts it to the desired chat service, and returns True if the operation succeeded, else False.
ask() does the same as post(), but after sending the message, it waits until someone approves or rejects the operation from within the chat service (e.g., by sending "approve" / "reject" to the bot as a response). ask() then only returns True if the operation succeeded and was approved, else False.
The base abstraction looks something like this:

class BaseAlerter(StackComponent, ABC):
    """Base class for all ZenML alerters."""

    def post(
        self, message: str, params: Optional[BaseAlerterStepParameters]
    ) -> bool:
        """Post a message to a chat service."""
        return True

    def ask(
        self, question: str, params: Optional[BaseAlerterStepParameters]
    ) -> bool:
        """Post a message to a chat service and wait for approval."""
        return True
This is a slimmed-down version of the base implementation. To see the full docstrings and imports, please check the source code on GitHub.
Building your own custom alerter
Creating your own custom alerter can be done in three steps:
Create a class that inherits from the BaseAlerter and implement the post() and ask() methods.

from typing import Optional

from zenml.alerter import BaseAlerter, BaseAlerterStepParameters


class MyAlerter(BaseAlerter):
    """My alerter class."""
configuration documentation for more information.

Custom materializers
Configuring a step/pipeline to use a custom materializer
Defining which step uses what materializer
ZenML automatically detects if your materializer is imported in your source code and registers them for the corresponding data type (defined in ASSOCIATED_TYPES). Therefore, just having a custom materializer definition in your code is enough to enable the respective data type to be used in your pipelines.
However, it is best practice to explicitly define which materializer to use for a specific step and not rely on the ASSOCIATED_TYPES to make that connection:
class MyObj:
    ...


class MyMaterializer(BaseMaterializer):
    """Materializer to read data to and from MyObj."""

    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    # Read below to learn how to implement this materializer


# You can define it at the decorator level
@step(output_materializers=MyMaterializer)
def my_first_step() -> MyObj:
    return MyObj()


# No need to explicitly specify materializer here:
# it is coupled with Artifact Version generated by
# `my_first_step` already.
@step
def my_second_step(a: MyObj):
    print(a)


# or you can use the `configure()` method of the step. E.g.:
my_first_step.configure(output_materializers=MyMaterializer)
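As a sketch of the implementation itself, load() and save() might look like this, assuming MyObj wraps a single string stored on a name attribute (the data.txt file name is illustrative; newer ZenML versions expose self.artifact_store on the materializer):

import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.materializers.base_materializer import BaseMaterializer


class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        """Re-create a MyObj from the file stored in the artifact store."""
        with self.artifact_store.open(os.path.join(self.uri, "data.txt"), "r") as f:
            return MyObj(f.read())

    def save(self, my_obj: MyObj) -> None:
        """Persist a MyObj to the artifact store."""
        with self.artifact_store.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)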
When there are multiple outputs, a dictionary of type {<OUTPUT_NAME>: <MATERIALIZER_CLASS>} can be supplied to the decorator or the .configure(...) method:
class MyObj1:
    ...


class MyObj2:
    ...


class MyMaterializer1(BaseMaterializer):
    """Materializer to read data to and from MyObj1."""

    ASSOCIATED_TYPES = (MyObj1,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA


class MyMaterializer2(BaseMaterializer):
    """Materializer to read data to and from MyObj2."""

    ASSOCIATED_TYPES = (MyObj2,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA


# This is where we connect the objects to the materializer
@step(output_materializers={"1": MyMaterializer1, "2": MyMaterializer2})
version must be a valid registered model version.

silent_daemon: set to True to suppress the output of the daemon (i.e., redirect stdout and stderr to /dev/null). If False, the daemon output will be redirected to a log file.
blocking: set to True to run the service in the context of the current process and block until the service is stopped instead of running the service as a daemon process. Useful for operating systems that do not support daemon processes.
model_uri: The URI of the model to be deployed. This can be a local file path, a run ID, or a model name and version.
workers: The number of workers to be used by the MLflow prediction server.
mlserver: If True, the MLflow prediction server will be started as a MLServer instance.
timeout: The timeout in seconds to wait for the MLflow prediction server to start or stop.
Run inference on a deployed model
The following code example shows how you can load a deployed model in Python and run inference against it:
Load a prediction service deployed in another pipeline
import json

import requests

from zenml import step
from zenml.integrations.mlflow.model_deployers.mlflow_model_deployer import (
    MLFlowModelDeployer,
)
from zenml.integrations.mlflow.services import MLFlowDeploymentService


# Load a prediction service deployed in another pipeline
@step(enable_cache=False)
def prediction_service_loader(
    pipeline_name: str,
    pipeline_step_name: str,
    model_name: str = "model",
) -> None:
    """Get the prediction service started by the deployment pipeline.

    Args:
        pipeline_name: name of the pipeline that deployed the MLflow prediction
            server
        step_name: the name of the step that deployed the MLflow prediction
            server
        running: when this flag is set, the step only returns a running service
        model_name: the name of the model that is deployed
    """
    # get the MLflow model deployer stack component
    model_deployer = MLFlowModelDeployer.get_active_model_deployer()

    # fetch existing services with same pipeline name, step name and model name
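    # A hedged continuation sketch: `find_model_server` is the documented
    # lookup method on model deployers; the error handling is illustrative.
    existing_services = model_deployer.find_model_server(
        pipeline_name=pipeline_name,
        pipeline_step_name=pipeline_step_name,
        model_name=model_name,
    )
    if not existing_services:
        raise RuntimeError(
            f"No MLflow prediction service found for pipeline "
            f"'{pipeline_name}', step '{pipeline_step_name}' and model "
            f"'{model_name}'."
        )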
Basic Service Connector Types

Service Connector Types like the Kubernetes Service Connector and Docker Service Connector can only handle one resource at a time: a Kubernetes cluster and a Docker container registry respectively. These basic Service Connector Types are the easiest to instantiate and manage, as each Service Connector instance is tied exactly to one resource (i.e. they are single-instance connectors).
The following output shows two Service Connector instances configured from basic Service Connector Types:
a Docker Service Connector that grants authenticated access to the DockerHub registry and allows pushing/pulling images that are stored in private repositories belonging to a DockerHub account
a Kubernetes Service Connector that authenticates access to a Kubernetes cluster running on-premise and allows managing containerized workloads running there.
$ zenml service-connector list
โโโโโโโโโโฏโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโฏโโโโโโโโโฏโโโโโโโโโโฏโโโโโโโโโโโโโฏโโโโโโโโโ
โ ACTIVE โ NAME โ ID โ TYPE โ RESOURCE TYPES โ RESOURCE NAME โ SHARED โ OWNER โ EXPIRES IN โ LABELS โ
โ โโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโผโโโโโโโโโโผโโโโโโโโโโโโโผโโโโโโโโโจ
โ โ dockerhub โ b485626e-7fee-4525-90da-5b26c72331eb โ ๐ณ docker โ ๐ณ docker-registry โ docker.io โ โ โ default โ โ โ
โ โโโโโโโโโผโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโผโโโโโโโโโผโโโโโโโโโโผโโโโโโโโโโโโโผโโโโโโโโโจ
โ โ kube-on-prem โ 4315e8eb-fcbd-4938-a4d7-a9218ab372a1 โ ๐ kubernetes โ ๐ kubernetes-cluster โ 192.168.0.12 โ โ โ default โ โ โ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 519 |
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ TYPE โ ๐ถ aws โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ AUTH METHOD โ iam-role โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ RESOURCE TYPES โ ๐ถ aws-generic, ๐ฆ s3-bucket, ๐ kubernetes-cluster, ๐ณ docker-registry โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ RESOURCE NAME โ <multiple> โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SECRET ID โ a137151e-1778-4f50-b64b-7cf6c1f715f5 โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SESSION DURATION โ 3600s โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ EXPIRES IN โ N/A โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ OWNER โ default โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ WORKSPACE โ default โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SHARED โ โ โ | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 368 |
Handle custom data types
Using materializers to pass custom data types through steps.
A ZenML pipeline is built in a data-centric way. The outputs and inputs of steps define how steps are connected and the order in which they are executed. Each step should be considered as its very own process that reads and writes its inputs and outputs from and to the artifact store. This is where materializers come into play.
A materializer dictates how a given artifact can be written to and retrieved from the artifact store and also contains all serialization and deserialization logic. Whenever you pass artifacts as outputs from one pipeline step to other steps as inputs, the corresponding materializer for the respective data type defines how this artifact is first serialized and written to the artifact store, and then deserialized and read in the next step.
Built-In Materializers
ZenML already includes built-in materializers for many common data types. These are always enabled and are used in the background without requiring any user interaction / activation:
| Materializer | Handled Data Types | Storage Format |
| --- | --- | --- |
| BuiltInMaterializer | bool, float, int, str, None | .json |
| BytesMaterializer | bytes | .txt |
| BuiltInContainerMaterializer | dict, list, set, tuple | Directory |
| NumpyMaterializer | np.ndarray | .npy |
| PandasMaterializer | pd.DataFrame, pd.Series | .csv (or .gzip if parquet is installed) |
| PydanticMaterializer | pydantic.BaseModel | .json |
| ServiceMaterializer | zenml.services.service.BaseService | .json |
| StructuredStringMaterializer | zenml.types.CSVString, zenml.types.HTMLString, zenml.types.MarkdownString | .csv / .html / .md (depending on type) |
Kubernetes
Learn how to deploy ZenML pipelines on a Kubernetes cluster.
The ZenML Kubernetes Orchestrator allows you to run your ML pipelines on a Kubernetes cluster without writing Kubernetes code. It's a lightweight alternative to more complex orchestrators like Airflow or Kubeflow.
Prerequisites
To use the Kubernetes Orchestrator, you'll need:
ZenML kubernetes integration installed (zenml integration install kubernetes)
Docker installed and running
kubectl installed
A remote artifact store and container registry in your ZenML stack
A deployed Kubernetes cluster
A configured kubectl context pointing to the cluster (optional, see below)
Deploying the Orchestrator
You can deploy the orchestrator from the ZenML CLI:
zenml orchestrator deploy k8s_orchestrator --flavor=kubernetes --provider=<YOUR_PROVIDER>
Configuring the Orchestrator
There are two ways to configure the orchestrator:
1. Using a Service Connector to connect to the remote cluster. This is the recommended approach, especially for cloud-managed clusters. No local kubectl context is needed:
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
zenml service-connector list-resources --resource-type kubernetes-cluster -e
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
2. Configuring kubectl with a context pointing to the remote cluster and setting the kubernetes_context in the orchestrator config:
zenml orchestrator register <ORCHESTRATOR_NAME> \
--flavor=kubernetes \
--kubernetes_context=<KUBERNETES_CONTEXT>
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
Running a Pipeline
Once configured, you can run any ZenML pipeline using the Kubernetes Orchestrator:
python your_pipeline.py
This will create a Kubernetes pod for each step in your pipeline. You can interact with the pods using kubectl commands. | how-to | https://docs.zenml.io/how-to/popular-integrations/kubernetes | 418 |
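For reference, a minimal your_pipeline.py could look like this (a generic sketch; step and pipeline names are illustrative):

from zenml import pipeline, step


@step
def say_hello() -> str:
    return "Hello from a Kubernetes pod!"


@step
def print_message(message: str) -> None:
    print(message)


@pipeline
def hello_pipeline():
    message = say_hello()
    print_message(message)


if __name__ == "__main__":
    # Runs each step as a pod on the orchestrator configured in the active stack.
    hello_pipeline()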
Run pipelines asynchronously
The best way to trigger a pipeline run so that it runs in the background
By default, your pipelines run synchronously: your terminal streams the logs as the pipeline is built and runs.

This behavior can be changed in two ways: either configure the orchestrator to always run asynchronously by setting synchronous=False, or temporarily set this at the pipeline configuration level during runtime.
from zenml import pipeline
@pipeline(settings = {"orchestrator.<STACK_NAME>": {"synchronous": False}})
def my_pipeline():
...
or in a yaml config file:
settings:
orchestrator.<STACK_NAME>:
synchronous: false
Learn more about orchestrators here
from abc import abstractmethod
from typing import Any, Callable, ClassVar, Iterable, List, Optional, Set, Tuple, Union

from zenml.enums import StackComponentType
from zenml.stack import StackComponent, StackComponentConfig

PathType = Union[bytes, str]


class BaseArtifactStoreConfig(StackComponentConfig):
    """Config class for `BaseArtifactStore`."""

    path: str

    SUPPORTED_SCHEMES: ClassVar[Set[str]]


class BaseArtifactStore(StackComponent):
    """Base class for all ZenML artifact stores."""

    @abstractmethod
    def open(self, name: PathType, mode: str = "r") -> Any:
        """Open a file at the given path."""

    @abstractmethod
    def copyfile(
        self, src: PathType, dst: PathType, overwrite: bool = False
    ) -> None:
        """Copy a file from the source to the destination."""

    @abstractmethod
    def exists(self, path: PathType) -> bool:
        """Returns `True` if the given path exists."""

    @abstractmethod
    def glob(self, pattern: PathType) -> List[PathType]:
        """Return the paths that match a glob pattern."""

    @abstractmethod
    def isdir(self, path: PathType) -> bool:
        """Returns whether the given path points to a directory."""

    @abstractmethod
    def listdir(self, path: PathType) -> List[PathType]:
        """Returns a list of files under a given directory in the filesystem."""

    @abstractmethod
    def makedirs(self, path: PathType) -> None:
        """Make a directory at the given path, recursively creating parents."""

    @abstractmethod
    def mkdir(self, path: PathType) -> None:
        """Make a directory at the given path; parent directory must exist."""

    @abstractmethod
    def remove(self, path: PathType) -> None:
        """Remove the file at the given path. Dangerous operation."""

    @abstractmethod
    def rename(
        self, src: PathType, dst: PathType, overwrite: bool = False
    ) -> None:
        """Rename source file to destination file."""

    @abstractmethod
    def rmtree(self, path: PathType) -> None:
        """Deletes dir recursively. Dangerous operation."""

    @abstractmethod
    def stat(self, path: PathType) -> Any:
        """Return the stat descriptor for a given file path."""

    @abstractmethod
    def walk(
        self,
        top: PathType,
        topdown: bool = True,
        onerror: Optional[Callable[..., None]] = None,
    ) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]:
        """Return an iterator that walks the contents of the given directory."""
AWS Service Connector
Configuring AWS Service Connectors to connect ZenML to AWS resources like S3 buckets, EKS Kubernetes clusters and ECR container registries.
The ZenML AWS Service Connector facilitates the authentication and access to managed AWS services and resources. These encompass a range of resources, including S3 buckets, ECR container repositories, and EKS clusters. The connector provides support for various authentication methods, including explicit long-lived AWS secret keys, IAM roles, short-lived STS tokens, and implicit authentication.
To ensure heightened security measures, this connector also enables the generation of temporary STS security tokens that are scoped down to the minimum permissions necessary for accessing the intended resource. Furthermore, it includes automatic configuration and detection of credentials locally configured through the AWS CLI.
This connector serves as a general means of accessing any AWS service by issuing pre-authenticated boto3 sessions. Additionally, the connector can handle specialized authentication for S3, Docker, and Kubernetes Python clients. It also allows for the configuration of local Docker and Kubernetes CLIs.
$ zenml service-connector list-types --type aws
โโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโฏโโโโโโโโฏโโโโโโโโโ
โ NAME โ TYPE โ RESOURCE TYPES โ AUTH METHODS โ LOCAL โ REMOTE โ
โ โโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโผโโโโโโโโผโโโโโโโโโจ
โ AWS Service Connector โ ๐ถ aws โ ๐ถ aws-generic          โ implicit     โ โ
     โ โ
      โ
โ โ โ ๐ฆ s3-bucket โ secret-key โ โ โ
โ โ โ ๐ kubernetes-cluster โ sts-token โ โ โ
โ โ โ ๐ณ docker-registry โ iam-role โ โ โ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 435 |
metadata stores you want to migrate, then upgrade ZenML.

Decide the ZenML deployment model that you want to follow for your projects. See the ZenML deployment documentation for available deployment scenarios. If you decide on using a local or remote ZenML server to manage your pipelines, make sure that you first connect your client to it by running zenml connect.
Use the zenml pipeline runs migrate CLI command to migrate your old pipeline runs:
If you want to migrate from a local SQLite metadata store, you only need to pass the path to the metadata store to the command, e.g.:
zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db
If you would like to migrate any other store, you will need to set --database_type=mysql and provide the MySQL host, username, and password in addition to the database, e.g.:
zenml pipeline runs migrate DATABASE_NAME \
--database_type=mysql \
--mysql_host=URL/TO/MYSQL \
--mysql_username=MYSQL_USERNAME \
--mysql_password=MYSQL_PASSWORD
๐พ The New Way (CLI Command Cheat Sheet)
Deploy the server
zenml deploy --aws (maybe don't do this :) since it spins up infrastructure on AWS…)
Spin up a local ZenML Server
zenml up
Connect to a pre-existing server
zenml connect (pass in URL / etc, or zenml connect --config + yaml file)
List your deployed server details
zenml status
The ZenML Dashboard is now available
The new ZenML Dashboard is now bundled into the ZenML Python package and can be launched directly from Python. The source code lives in the ZenML Dashboard repository.
To launch it locally, simply run zenml up on your machine and follow the instructions:
$ zenml up
Deploying a local ZenML server with name 'local'.
Connecting ZenML to the 'local' local ZenML server (http://127.0.0.1:8237).
Updated the global store configuration.
Connected ZenML to the 'local' local ZenML server (http://127.0.0.1:8237).
The local ZenML dashboard is available at 'http://127.0.0.1:8237'. You can
connect to it using the 'default' username and an empty password. | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty | 467 |
Load artifacts from Model
One of the more common use-cases for a Model is to pass artifacts between pipelines (a pattern we have seen before). However, when and how to load these artifacts is important to know as well.
As an example, let's have a look at a two-pipeline project, where the first pipeline is running training logic and the second runs batch inference leveraging trained model artifact(s):
from typing_extensions import Annotated
from zenml import get_pipeline_context, pipeline, step, Model
from zenml.enums import ModelStages
import pandas as pd
from sklearn.base import ClassifierMixin


@step
def predict(
    model: ClassifierMixin,
    data: pd.DataFrame,
) -> Annotated[pd.Series, "predictions"]:
    predictions = pd.Series(model.predict(data))
    return predictions


@pipeline(
    model=Model(
        name="iris_classifier",
        # Using the production stage
        version=ModelStages.PRODUCTION,
    ),
)
def do_predictions():
    # model name and version are derived from pipeline context
    model = get_pipeline_context().model
    inference_data = load_data()
    predict(
        # Here, we load in the `trained_model` from a trainer step
        model=model.get_model_artifact("trained_model"),
        data=inference_data,
    )


if __name__ == "__main__":
    do_predictions()
In the example above, we used the get_pipeline_context().model property to acquire the model context in which the pipeline is running. During pipeline compilation this context has not yet been evaluated, because Production is not a stable version name: another model version may be promoted to Production before the step actually executes. The same applies to calls like model.get_model_artifact("trained_model"); the call is stored in the step configuration for delayed materialization, which only happens during the step run itself.

It is also possible to achieve the same using bare Client methods by reworking the pipeline code as follows:
from zenml.client import Client
@pipeline
def do_predictions(): | how-to | https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/load-artifacts-from-model | 396 |
Reference secrets in stack configuration
Reference secrets in stack component attributes and settings
Some of the components in your stack require you to configure them with sensitive information like passwords or tokens, so they can connect to the underlying infrastructure. Secret references allow you to configure these components securely, by not specifying the value directly but instead referencing a secret via its name and key. To reference a secret in the value of any string attribute of your stack components, simply specify the attribute using the following syntax: {{<SECRET_NAME>.<SECRET_KEY>}}
For example:
# Register a secret called `mlflow_secret` with key-value pairs for the
# username and password to authenticate with the MLflow tracking server
# Using central secrets management
zenml secret create mlflow_secret \
--username=admin \
--password=abc123
# Then reference the username and password in our experiment tracker component
zenml experiment-tracker register mlflow \
--flavor=mlflow \
--tracking_username={{mlflow_secret.username}} \
--tracking_password={{mlflow_secret.password}} \
...
When using secret references in your stack, ZenML will validate that all secrets and keys referenced in your stack components exist before running a pipeline. This helps us fail early so your pipeline doesn't fail after running for some time due to some missing secret.
This validation by default needs to fetch and read every secret to make sure that both the secret and the specified key-value pair exist. This can take quite some time and might fail if you don't have permission to read secrets.
You can use the environment variable ZENML_SECRET_VALIDATION_LEVEL to disable or control the degree to which ZenML validates your secrets:
Setting it to NONE disables any validation. | how-to | https://docs.zenml.io/how-to/stack-deployment/reference-secrets-in-stack-configuration | 347 |
A model version is associated with a model registration.

ModelVersionStage: A model version stage is a state that a model version can be in. It can be one of the following: None, Staging, Production, Archived. The model version stage is used to track the lifecycle of a model version. For example, a model version can be in the Staging stage while it is being tested and then moved to the Production stage once it is ready for deployment.
When to use it
ZenML provides a built-in mechanism for storing and versioning pipeline artifacts through its mandatory Artifact Store. While this is a powerful way to manage artifacts programmatically, it can be challenging to use without a visual interface.
Model registries, on the other hand, offer a visual way to manage and track model metadata, particularly when using a remote orchestrator. They make it easy to retrieve and load models from storage, thanks to built-in integrations. A model registry is an excellent choice for interacting with all the models in your pipeline and managing their state in a centralized way.
Using a model registry in your stack is particularly useful if you want to interact with all the logged models in your pipeline, or if you need to manage the state of your models in a centralized way and make it easy to retrieve, load, and deploy these models.
How model registries fit into the ZenML stack
Here is an architecture diagram that shows how a model registry fits into the overall story of a remote stack.
Model Registry Flavors
Model Registries are optional stack components provided by integrations:
| Model Registry | Flavor | Integration | Notes |
| --- | --- | --- | --- |
| MLflow | mlflow | mlflow | Add MLflow as Model Registry to your stack |
| Custom Implementation | custom | custom | |
If you would like to see the available flavors of Model Registry, you can use the command:
zenml model-registry flavor list
How to use it | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries | 370 |
S3 data access in ZenML steps

In Sagemaker jobs, it is possible to access data that is located in S3. Similarly, it is possible to write data from a job to a bucket. The ZenML Sagemaker orchestrator supports this via the SagemakerOrchestratorSettings and hence at component, pipeline, and step levels.
Import: S3 -> job
Importing data can be useful when large datasets are available in S3 for training, for which manual copying can be cumbersome. Sagemaker supports File (default) and Pipe mode, with which data is either fully copied before the job starts or piped on the fly. See the Sagemaker documentation referenced above for more information about these modes.
Note that data import and export can be used jointly with processor_args for maximum flexibility.
A simple example of importing data from S3 to the Sagemaker job is as follows:
sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    input_data_s3_mode="File",
    input_data_s3_uri="s3://some-bucket-name/folder",
)
In this case, data will be available at /opt/ml/processing/input/data within the job.
It is also possible to split your input over channels. This can be useful if the dataset is already split in S3, or maybe even located in different buckets.
sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    input_data_s3_mode="File",
    input_data_s3_uri={
        "train": "s3://some-bucket-name/training_data",
        "val": "s3://some-bucket-name/validation_data",
        "test": "s3://some-other-bucket-name/testing_data",
    },
)
Here, the data will be available in /opt/ml/processing/input/data/train, /opt/ml/processing/input/data/val and /opt/ml/processing/input/data/test.
In the case of using Pipe for input_data_s3_mode, a file path specifying the pipe will be available as per the description written here . An example of using this pipe file within a Python script can be found here .
Export: job -> S3 | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/sagemaker | 450 |
Token or AWS Session Token authentication methods.

Note that the discovered credentials inherit the full set of permissions of the local AWS client configuration, environment variables, or remote AWS IAM role. Depending on the extent of those permissions, this authentication method might not be recommended for production use, as it can lead to accidental privilege escalation. It is recommended to also configure an IAM role when using the implicit authentication method, or to use the AWS IAM Role, AWS Session Token, or AWS Federation Token authentication methods instead to limit the validity and/or permissions of the credentials being issued to connector clients.
If you need to access an EKS Kubernetes cluster with this authentication method, please be advised that the EKS cluster's aws-auth ConfigMap may need to be manually configured to allow authentication with the implicit IAM user or role picked up by the Service Connector. For more information, see this documentation.
An AWS region is required and the connector may only be used to access AWS resources in the specified region.
The following assumes the local AWS CLI has a connectors AWS CLI profile already configured with credentials:
AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1
Example Command Output
โ ธ Registering service connector 'aws-implicit'...
Successfully registered service connector `aws-implicit` with access to the following resources:
โโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ RESOURCE TYPE โ RESOURCE NAMES โ
โ โโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ ๐ถ aws-generic โ us-east-1 โ
โ โโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ ๐ฆ s3-bucket โ s3://zenfiles โ
โ โ s3://zenml-demos โ | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 419 |
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ EXPIRES IN โ N/A โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ OWNER โ default โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ WORKSPACE โ default โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SHARED โ โ โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ CREATED_AT โ 2023-05-19 09:15:12.882929 โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ UPDATED_AT โ 2023-05-19 09:15:12.882930 โ
โโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
Configuration
โโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโ
โ PROPERTY โ VALUE โ
โ โโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโจ
โ project_id โ zenml-core โ
โ โโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโจ
โ user_account_json โ [HIDDEN] โ
โโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโ
Local client provisioning
The local gcloud CLI, the Kubernetes kubectl CLI and the Docker CLI can be configured with credentials extracted from or generated by a compatible GCP Service Connector. Please note that unlike the configuration made possible through the GCP CLI, the Kubernetes and Docker credentials issued by the GCP Service Connector have a short lifetime and will need to be regularly refreshed. This is a byproduct of implementing a high-security profile. | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 471 |
contents of file [email protected].

Successfully registered service connector `gcp-service-account` with access to the following resources:
โโโโโโโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ RESOURCE TYPE โ RESOURCE NAMES โ
โ โโโโโโโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ ๐ฆ gcs-bucket โ gs://zenml-bucket-sl โ
โ โ gs://zenml-core.appspot.com โ
โ โ gs://zenml-core_cloudbuild โ
โ โ gs://zenml-datasets โ
โ โ gs://zenml-internal-artifact-store โ
โ โ gs://zenml-kubeflow-artifact-store โ
โ โ gs://zenml-project-time-series-bucket โ
โโโโโโโโโโโโโโโโโโโโโโโโโทโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
The GCP service connector configuration and service account credentials:
zenml service-connector describe gcp-service-account
Example Command Output
Service connector 'gcp-service-account' of type 'gcp' with id '4b3d41c9-6a6f-46da-b7ba-8f374c3f49c5' is owned by user 'default' and is 'private'.
'gcp-service-account' gcp Service Connector Details
โโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ PROPERTY โ VALUE โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ ID โ 4b3d41c9-6a6f-46da-b7ba-8f374c3f49c5 โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ NAME โ gcp-service-account โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ TYPE โ ๐ต gcp โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ AUTH METHOD โ service-account โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 565 |
Feast
Managing data in Feast feature stores.
Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
When would you want to use it?
There are two core functions that feature stores enable:
access to data from an offline / batch store for training.
access to online data at inference time.
Feast integration currently supports your choice of offline data sources and a Redis backend for your online feature serving. We encourage users to check out Feast's documentation and guides on how to set up your offline and online data sources via the configuration yaml file.
COMING SOON: While the ZenML integration has an interface to access online feature store data, it currently is not usable in production settings with deployed models. We will update the docs when we enable this functionality.
How to deploy it?
ZenML assumes that users already have a Feast feature store that they just need to connect with. If you don't have a feature store yet, follow the Feast Documentation to deploy one first.
To use the feature store as a ZenML stack component, you also need to install the corresponding feast integration in ZenML:
zenml integration install feast
Now you can register your feature store as a ZenML stack component and add it into a corresponding stack:
zenml feature-store register feast_store --flavor=feast --feast_repo="<PATH/TO/FEAST/REPO>"
zenml stack register ... -f feast_store
How do you use it?
Online data retrieval is possible in a local setting, but we don't currently support using the online data serving in the context of a deployed model or as part of model deployment. We will update this documentation as we develop this feature. | stack-components | https://docs.zenml.io/stack-components/feature-stores/feast | 386 |
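For offline (historical) retrieval, a step can query the feature store component of the active stack. A hedged sketch, assuming the Feast integration exposes get_historical_features and that the entity dataframe carries the event_timestamp column Feast expects:

import pandas as pd

from zenml import step
from zenml.client import Client


@step
def get_historical_features(entity_dict: dict, features: list) -> pd.DataFrame:
    """Query the active stack's Feast feature store for historical features."""
    feature_store = Client().active_stack.feature_store
    if feature_store is None:
        raise RuntimeError("The active stack needs a feature store component.")

    entity_df = pd.DataFrame.from_dict(entity_dict)
    # Feast requires an event_timestamp column on the entity dataframe.
    entity_df["event_timestamp"] = pd.to_datetime(entity_df["event_timestamp"])

    return feature_store.get_historical_features(
        entity_df=entity_df,
        features=features,
    )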
zenml service-connector describe gcp-auto

Example Command Output

Service connector 'gcp-auto' of type 'gcp' with id 'fe16f141-7406-437e-a579-acebe618a293' is owned by user 'default' and is 'private'.
'gcp-auto' gcp Service Connector Details
โโโโโโโโโโโโโโโโโโโโฏโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ
โ PROPERTY โ VALUE โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ ID โ fe16f141-7406-437e-a579-acebe618a293 โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ NAME โ gcp-auto โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ TYPE โ ๐ต gcp โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ AUTH METHOD โ user-account โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ RESOURCE TYPES โ ๐ต gcp-generic, ๐ฆ gcs-bucket, ๐ kubernetes-cluster, ๐ณ docker-registry โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ RESOURCE NAME โ <multiple> โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SECRET ID โ 5eca8f6e-291f-4958-ae2d-a3e847a1ad8a โ
โ โโโโโโโโโโโโโโโโโโโผโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโจ
โ SESSION DURATION โ N/A โ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 450 |
in the From Local to Cloud with bentoctl section.

The bentoctl integration implementation is still in progress and will be available soon. The integration will allow you to deploy your models to a specific cloud provider with just a few lines of code using ZenML built-in steps.
How do you deploy it?
Within ZenML you can quickly get started with BentoML by simply creating Model Deployer Stack Component with the BentoML flavor. To do so you'll need to install the required Python packages on your local machine to be able to deploy your models:
zenml integration install bentoml -y
To register the BentoML model deployer with ZenML you need to run the following command:
zenml model-deployer register bentoml_deployer --flavor=bentoml
The ZenML integration will provision a local HTTP deployment server as a daemon process that will continue to run in the background to serve the latest models and Bentos.
How do you use it?
The recommended flow to use the BentoML model deployer is to first create a BentoML Service, then use the bento_builder_step to build the model and service into a bento bundle, and finally deploy the bundle with the bentoml_model_deployer_step.
BentoML Service and Runner
The first step to being able to deploy your models and use BentoML is to create a bento service which is the main logic that defines how your model will be served, and a bento runner which represents a unit of execution for your model on a remote Python worker.
The following example shows how to create a basic bento service and runner that will be used to serve a basic scikit-learn model.
import numpy as np
import bentoml
from bentoml.io import NumpyNdarray
iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])
@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series: np.ndarray) -> np.ndarray:
    result = iris_clf_runner.predict.run(input_series)
    return result
ZenML Bento Builder step | stack-components | https://docs.zenml.io/stack-components/model-deployers/bentoml | 453 |
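A hedged sketch of how the builder step is typically wired into a pipeline (the trainer step and parameter values are illustrative; consult the step's signature for the full list of options):

from sklearn.base import ClassifierMixin
from sklearn.datasets import load_iris
from sklearn.svm import SVC

from zenml import pipeline, step
from zenml.integrations.bentoml.steps import bento_builder_step


@step
def train_model() -> ClassifierMixin:
    X, y = load_iris(return_X_y=True)
    return SVC(gamma=0.001).fit(X, y)


@pipeline
def bento_builder_pipeline():
    model = train_model()
    bento = bento_builder_step(
        model=model,
        model_name="iris_clf",     # must match the tag used by the runner above
        model_type="sklearn",      # selects the bentoml framework module for saving
        service="service.py:svc",  # import path of the bento service shown above
    )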
using the 'default' username and an empty password.

The Dashboard will be available at http://localhost:8237 by default:
For more details on other possible deployment options, see the ZenML deployment documentation, and/or follow the starter guide to learn more.
Removal of Profiles and the local YAML database
Prior to 0.20.0, ZenML used a set of local YAML files to store information about the Stacks and Stack Components that were registered on your machine. In addition to that, these Stacks could be grouped together and organized under individual Profiles.
Profiles and the local YAML database have both been deprecated and removed in ZenML 0.20.0. Stack, Stack Components as well as all other information that ZenML tracks, such as Pipelines and Pipeline Runs, are now stored in a single SQL database. These entities are no longer organized into Profiles, but they can be scoped into different Projects instead.
Since the local YAML database is no longer used by ZenML 0.20.0, you will lose all the Stacks and Stack Components that you currently have configured when you update to ZenML 0.20.0. If you still want to use these Stacks, you will need to manually migrate them after the update.
๐ฃ How to migrate your Profiles
If you're already using ZenML, you can migrate your existing Profiles to the new ZenML 0.20.0 paradigm by following these steps:
first, update ZenML to 0.20.0. This will automatically invalidate all your existing Profiles.
decide the ZenML deployment model that you want to follow for your projects. See the ZenML deployment documentation for available deployment scenarios. If you decide on using a local or remote ZenML server to manage your pipelines, make sure that you first connect your client to it by running zenml connect. | reference | https://docs.zenml.io/reference/migration-guide/migration-zero-twenty | 378 |
Security best practices
Best practices concerning the various authentication methods implemented by Service Connectors.
Service Connector Types, especially those targeted at cloud providers, offer a plethora of authentication methods matching those supported by remote cloud platforms. While there is no single authentication standard that unifies this process, there are some patterns that are easily identifiable and can be used as guidelines when deciding which authentication method to use to configure a Service Connector.
This section explores some of those patterns and gives some advice regarding which authentication methods are best suited for your needs.
This section may require some general knowledge about authentication and authorization to be properly understood. We tried to keep it simple and limit ourselves to talking about high-level concepts, but some areas may get a bit too technical.
Username and password
The key takeaway is this: you should avoid using your primary account password as authentication credentials as much as possible. If there are alternative authentication methods that you can use or other types of credentials (e.g. session tokens, API keys, API tokens), you should always try to use those instead.
Ultimately, if you have no choice, be cognizant of the third parties you share your passwords with. If possible, they should never leave the premises of your local host or development environment.
This is the typical authentication method that uses a username or account name plus the associated password. While this is the de facto method used to log in with web consoles and local CLIs, this is the least secure of all authentication methods and never something you want to share with other members of your team or organization or use to authenticate automated workloads. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices | 317 |