page_content | parent_section | url | token_count
---|---|---|---|
ighly recommended. See an end-to-end example here. In order to benefit from the advantages of having a code repository in a project, you need to make sure that the relevant integrations are installed for your ZenML installation. For instance, let's assume you are working on a project with ZenML and one of your team members has already registered a corresponding code repository of type github for it. If you run zenml code-repository list, you would also be able to see this repository. However, in order to fully use this repository, you still need to install the corresponding integration for it, in this example the github integration.
zenml integration install github
Detecting local code repository checkouts
Once you have registered one or more code repositories, ZenML will check whether the files you use when running a pipeline are tracked inside one of those code repositories. This happens as follows:
First, the source root is computed
Next, ZenML checks whether this source root directory is included in a local checkout of one of the registered code repositories
Tracking code version for pipeline runs
If a local code repository checkout is detected when running a pipeline, ZenML will store a reference to the current commit for the pipeline run, so you'll be able to know exactly which code was used. Note that this reference is only tracked if your local checkout is clean (i.e. it does not contain any untracked or uncommitted files). This is to ensure that your pipeline is actually running with the exact code stored at the specific code repository commit.
Tips and best practices
It is also important to take some additional points into consideration:
The file download is only possible if the local checkout is clean (i.e. it does not contain any untracked or uncommitted files) and the latest commit has been pushed to the remote repository. This is necessary as otherwise, the file download inside the Docker container will fail. | how-to | https://docs.zenml.io/v/docs/how-to/customize-docker-builds/use-code-repositories-to-speed-up-docker-build-times | 382 |
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ ID                     ┃ 37b6000e-3f7f-483e-b2c5-7a5db44fe66b                                      ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ NAME                   ┃ gcp-workload-identity                                                     ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ TYPE                   ┃ 🔵 gcp                                                                    ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ AUTH METHOD            ┃ external-account                                                          ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE TYPES         ┃ 🔵 gcp-generic, 📦 gcs-bucket, 🌀 kubernetes-cluster, 🐳 docker-registry  ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE NAME          ┃ <multiple>                                                                ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ SECRET ID              ┃ 1ff6557f-7f60-4e63-b73d-650e64f015b5                                      ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ SESSION DURATION       ┃ N/A                                                                       ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ EXPIRES IN             ┃ N/A                                                                       ┃
┠────────────────────────╂───────────────────────────────────────────────────────────────────────────┨
┃ EXPIRES_SKEW_TOLERANCE ┃ N/A                                                                       ┃ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 392 |
┃ region    ┃ us-east-1 ┃
┗━━━━━━━━━━━┻━━━━━━━━━━━┛

Verifying access to resources (note that the AWS_PROFILE environment variable points to the same AWS CLI profile used during registration, but may yield different results with a different profile, which is why this method is not suitable for reproducible results):
AWS_PROFILE=connectors zenml service-connector verify aws-implicit --resource-type s3-bucket
Example Command Output
⠸ Verifying service connector 'aws-implicit'...
Service connector 'aws-implicit' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE ┃ RESOURCE NAMES                         ┃
┠───────────────╂────────────────────────────────────────┨
┃ 📦 s3-bucket  ┃ s3://zenfiles                          ┃
┃               ┃ s3://zenml-demos                       ┃
┃               ┃ s3://zenml-generative-chat             ┃
┃               ┃ s3://zenml-public-datasets             ┃
┗━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
zenml service-connector verify aws-implicit --resource-type s3-bucket
Example Command Output
⠸ Verifying service connector 'aws-implicit'...
Service connector 'aws-implicit' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE ┃ RESOURCE NAMES                                  ┃
┠───────────────╂─────────────────────────────────────────────────┨
┃ 📦 s3-bucket  ┃ s3://sagemaker-studio-907999144431-m11qlsdyqr8  ┃
┃               ┃ s3://sagemaker-studio-d8a14tvjsmb               ┃
┗━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Depending on the environment, clients are issued either temporary STS tokens or long-lived credentials, which is a reason why this method isn't well suited for production:
AWS_PROFILE=zenml zenml service-connector describe aws-implicit --resource-type s3-bucket --resource-id zenfiles --client | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 578 |
-registry              ┃ iam-role               ┃        ┃         ┃
┃                               ┃                ┃                        ┃ session-token          ┃        ┃         ┃
┃                               ┃                ┃                        ┃ federation-token       ┃        ┃         ┃
┠───────────────────────────────╂────────────────╂────────────────────────╂────────────────────────╂────────╂─────────┨
┃ GCP Service Connector         ┃ 🔵 gcp         ┃ 🔵 gcp-generic         ┃ implicit               ┃ ✅     ┃ ✅      ┃
┃                               ┃                ┃ 📦 gcs-bucket          ┃ user-account           ┃        ┃         ┃
┃                               ┃                ┃ 🌀 kubernetes-cluster  ┃ service-account        ┃        ┃         ┃
┃                               ┃                ┃ 🐳 docker-registry     ┃ oauth2-token           ┃        ┃         ┃
┃                               ┃                ┃                        ┃ impersonation          ┃        ┃         ┃
┠───────────────────────────────╂────────────────╂────────────────────────╂────────────────────────╂────────╂─────────┨
┃ HyperAI Service Connector     ┃ 🤖 hyperai     ┃ 🤖 hyperai-instance    ┃ rsa-key                ┃ ✅     ┃ ✅      ┃
┃                               ┃                ┃                        ┃ dsa-key                ┃        ┃         ┃
┃                               ┃                ┃                        ┃ ecdsa-key              ┃        ┃         ┃
┃                               ┃                ┃                        ┃ ed25519-key            ┃        ┃         ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━┻━━━━━━━━━┛
Service Connector Types are also displayed in the dashboard during the configuration of a new Service Connector:
The cloud provider of choice for our example is AWS and we're looking to hook up an S3 bucket to an S3 Artifact Store Stack Component. We'll use the AWS Service Connector Type. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 466 |
ut_materializers=MyMaterializer)

first_pipeline()

Due to the typing of the inputs and outputs and the ASSOCIATED_TYPES attribute of the materializer, you won't necessarily have to add .configure(output_materializers=MyMaterializer) to the step. It should be detected automatically. It doesn't hurt to be explicit though.
This will now work as expected and yield the following output:
Creating run for pipeline: `first_pipeline`
Cache enabled for pipeline `first_pipeline`
Using stack `default` to run pipeline `first_pipeline`...
Step `my_first_step` has started.
Step `my_first_step` has finished in 0.081s.
Step `my_second_step` has started.
The following object was passed to this step: `my_object`
Step `my_second_step` has finished in 0.048s.
Pipeline run `first_pipeline-22_Apr_22-10_58_51_135729` has finished in 0.153s.
import logging
import os
from typing import Type

from zenml import step, pipeline
from zenml.enums import ArtifactType
from zenml.materializers.base_materializer import BaseMaterializer

class MyObj:
    def __init__(self, name: str):
        self.name = name

class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        """Read from artifact store."""
        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f:
            name = f.read()
        return MyObj(name=name)

    def save(self, my_obj: MyObj) -> None:
        """Write to artifact store."""
        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f:
            f.write(my_obj.name)

@step
def my_first_step() -> MyObj:
    """Step that returns an object of type MyObj."""
    return MyObj("my_object")

my_first_step.configure(output_materializers=MyMaterializer)

@step
def my_second_step(my_obj: MyObj) -> None:
    """Step that logs the input object and returns nothing."""
    logging.info(
        f"The following object was passed to this step: `{my_obj.name}`"
    )

@pipeline
def first_pipeline():
    output_1 = my_first_step()
    my_second_step(output_1) | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types | 502 |
┠───────────────────────╂────────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster ┃ demo-zenml-demos/demo-zenml-terraform-cluster  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
The login CLI command can be used to configure the local Kubernetes CLI to access a Kubernetes cluster reachable through an Azure Service Connector:
zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id demo-zenml-demos/demo-zenml-terraform-cluster
Example Command Output
Attempting to configure local client using service connector 'azure-service-principal'...
Updated local kubeconfig with the cluster details. The current kubectl context was set to 'demo-zenml-terraform-cluster'.
The 'azure-service-principal' Kubernetes Service Connector was used to successfully configure the local Kubernetes cluster client/SDK.
The local Kubernetes CLI can now be used to interact with the Kubernetes cluster:
kubectl cluster-info
Example Command Output
Kubernetes control plane is running at https://demo-43c5776f7.hcp.westeurope.azmk8s.io:443
CoreDNS is running at https://demo-43c5776f7.hcp.westeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://demo-43c5776f7.hcp.westeurope.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
A similar process is possible with ACR container registries:
zenml service-connector verify azure-service-principal --resource-type docker-registry
Example Command Output
⠦ Verifying service connector 'azure-service-principal'...
Service connector 'azure-service-principal' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE      ┃ RESOURCE NAMES                        ┃
┠────────────────────╂───────────────────────────────────────┨
┃ 🐳 docker-registry ┃ demozenmlcontainerregistry.azurecr.io ┃ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector | 513 |
Default Container Registry
Storing container images locally.
The Default container registry is a container registry flavor that comes built-in with ZenML and allows container registry URIs of any format.
When to use it
You should use the Default container registry if you want to use a local container registry or when using a remote container registry that is not covered by other container registry flavors.
Local registry URI format
To specify a URI for a local container registry, use the following format:
localhost:<PORT>
# Examples:
localhost:5000
localhost:8000
localhost:9999
How to use it
To use the Default container registry, we need:
Docker installed and running.
The registry URI. If you're using a local container registry, check out
the previous section on the URI format.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=default \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
You may also need to set up authentication required to log in to the container registry.
Authentication Methods
If you are using a private container registry, you will need to configure some form of authentication to login to the registry. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to a remote private container registry is through a Docker Service Connector.
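For illustration, a rough sketch of the Service Connector route (connector and component names are placeholders; check zenml service-connector register --help for the exact Docker connector options):

# Register a Docker Service Connector that holds the registry credentials
zenml service-connector register docker_connector --type docker \
    --username=<USERNAME> --password=<PASSWORD> --registry=<REGISTRY_URI>

# Attach the connector to the container registry component registered above
zenml container-registry connect <NAME> --connector docker_connector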
If your target private container registry comes from a cloud provider like AWS, GCP or Azure, you should use the container registry flavor targeted at that cloud provider. For example, if you're using AWS, you should use the AWS Container Registry flavor. These cloud provider flavors also use specialized cloud provider Service Connectors to authenticate to the container registry. | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/default | 371 |
active guide:

zenml service-connector register -i

A quick glance into the Service Connector configuration that was automatically detected gives a better idea of what happened:
zenml service-connector describe aws-s3
Example Command Output
Service connector 'aws-s3' of type 'aws' with id '96a92154-4ec7-4722-bc18-21eeeadb8a4f' is owned by user 'default' and is 'private'.
'aws-s3' aws Service Connector Details
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY         ┃ VALUE                                ┃
┠──────────────────╂──────────────────────────────────────┨
┃ ID               ┃ 96a92154-4ec7-4722-bc18-21eeeadb8a4f ┃
┠──────────────────╂──────────────────────────────────────┨
┃ NAME             ┃ aws-s3                               ┃
┠──────────────────╂──────────────────────────────────────┨
┃ TYPE             ┃ 🔶 aws                               ┃
┠──────────────────╂──────────────────────────────────────┨
┃ AUTH METHOD      ┃ session-token                        ┃
┠──────────────────╂──────────────────────────────────────┨
┃ RESOURCE TYPES   ┃ 📦 s3-bucket                         ┃
┠──────────────────╂──────────────────────────────────────┨
┃ RESOURCE NAME    ┃ <multiple>                           ┃
┠──────────────────╂──────────────────────────────────────┨
┃ SECRET ID        ┃ a8c6d0ff-456a-4b25-8557-f0d7e3c12c5f ┃
┠──────────────────╂──────────────────────────────────────┨
┃ SESSION DURATION ┃ 43200s                               ┃
┠──────────────────╂──────────────────────────────────────┨
┃ EXPIRES IN       ┃ N/A                                  ┃
┠──────────────────╂──────────────────────────────────────┨
┃ OWNER            ┃ default                              ┃
┠──────────────────╂──────────────────────────────────────┨
┃ WORKSPACE        ┃ default                              ┃
┠──────────────────╂──────────────────────────────────────┨
┃ SHARED           ┃ ➖                                   ┃ | how-to | https://docs.zenml.io/how-to/auth-management | 538 |
Use the Model Control Plane

A Model is simply an entity that groups pipelines, artifacts, metadata, and other crucial business data into a unified entity. A ZenML Model is a concept that more broadly encapsulates your ML product's business logic. You may even think of a ZenML Model as a "project" or a "workspace".

Please note that one of the most common artifacts associated with a Model in ZenML is the so-called technical model, which is the actual model file (or files) that holds the weights and parameters of a machine learning training result. However, this is not the only artifact that is relevant; artifacts such as the training data and the predictions this model produces in production are also linked inside a ZenML Model.
Models are first-class citizens in ZenML, and as such, viewing and using them is unified and centralized in the ZenML API and client, as well as on the ZenML Pro dashboard.
A Model captures lineage information and more. Within a Model, different Model versions can be staged. For example, you can rely on your predictions at a specific stage, like Production, and decide whether the Model version should be promoted based on your business rules during training. Plus, accessing data from other Models and their versions is just as simple.
The Model Control Plane is how you manage your models through this unified interface. It allows you to combine the logic of your pipelines, artifacts and crucial business data along with the actual 'technical model'.
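As a minimal sketch of what this looks like in code (the model name is a placeholder), attaching a pipeline to a Model is a one-line change:

from zenml import pipeline, Model

# Every run of this pipeline, and the artifacts it produces, will be
# grouped under the "my_model" Model in the Model Control Plane
@pipeline(model=Model(name="my_model"))
def training_pipeline():
    ...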
To see an end-to-end example, please refer to the starter guide.
| how-to | https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane | 326 |
s to provide additional tags for your experiments:

from zenml.integrations.comet.flavors.comet_experiment_tracker_flavor import CometExperimentTrackerSettings

comet_settings = CometExperimentTrackerSettings(
    tags=["some_tag"]
)

@step(
    experiment_tracker="<COMET_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.comet": comet_settings
    }
)
def my_step():
    ...
Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
| stack-components | https://docs.zenml.io/v/docs/stack-components/experiment-trackers/comet | 117 |
┠───────────────────────╂──────────┨
┃ aws_access_key_id     ┃ [HIDDEN] ┃
┠───────────────────────╂──────────┨
┃ aws_secret_access_key ┃ [HIDDEN] ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━┛
However, clients receive temporary STS tokens instead of the AWS Secret Key configured in the connector (note the authentication method, expiration time, and credentials):
zenml service-connector describe aws-session-token --resource-type s3-bucket --resource-id zenfiles --client
Example Command Output
Service connector 'aws-session-token (s3-bucket | s3://zenfiles client)' of type 'aws' with id '3ae3e595-5cbc-446e-be64-e54e854e0e3f' is owned by user 'default' and is 'private'.
'aws-session-token (s3-bucket | s3://zenfiles client)' aws Service Connector Details
┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY          ┃ VALUE                                                ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ ID                ┃ c0f8e857-47f9-418b-a60f-c3b03023da54                 ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ NAME              ┃ aws-session-token (s3-bucket | s3://zenfiles client) ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ TYPE              ┃ 🔶 aws                                               ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ AUTH METHOD       ┃ sts-token                                            ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ RESOURCE TYPES    ┃ 📦 s3-bucket                                         ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ RESOURCE NAME     ┃ s3://zenfiles                                        ┃
┠───────────────────╂──────────────────────────────────────────────────────┨
┃ SECRET ID         ┃                                                      ┃
┠───────────────────╂──────────────────────────────────────────────────────┨ | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 536 |
d:

zenml step-operator flavor list

How to use it

You don't need to directly interact with any ZenML step operator in your code. As long as the step operator that you want to use is part of your active ZenML stack, you can simply specify it in the @step decorator of your step.
from zenml import step
@step(step_operator="<STEP_OPERATOR_NAME>")
def my_step(...) -> ...:
    ...
Specifying per-step resources
If your steps require additional hardware resources, you can specify them on your steps as described here.
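For illustration, a sketch of such a resource request (the step operator name is a placeholder, and whether these resources are honored depends on the step operator in use):

from zenml import step
from zenml.config import ResourceSettings

# Ask the step operator for extra hardware for this step only
@step(
    step_operator="<STEP_OPERATOR_NAME>",
    settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")},
)
def my_training_step() -> None:
    ...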
Enabling CUDA for GPU-backed hardware
Note that if you wish to use step operators to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
| stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators | 195 |
>' \
    --client_secret='<YOUR_AZURE_CLIENT_SECRET>'

# Alternatively for providing key-value pairs, you can utilize the '--values' option by specifying a file path containing
# key-value pairs in either JSON or YAML format.
# File content example: {"account_name":"<YOUR_AZURE_ACCOUNT_NAME>",...}
zenml secret create az_secret \
    --values=@path/to/file.txt
# Register the Azure artifact store and reference the ZenML secret
zenml artifact-store register az_store -f azure \
--path='az://your-container' \
--authentication_secret=az_secret
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a az_store ... --set
For more up-to-date information on the Azure Artifact Store implementation and its configuration, you can have a look at the SDK docs.
How do you use it?
Aside from the fact that the artifacts are stored in Azure Blob Storage, using the Azure Artifact Store is no different from using any other flavor of Artifact Store.
| stack-components | https://docs.zenml.io/stack-components/artifact-stores/azure | 232 |
e going to be starting with a very basic approach.

First, let's understand what a service connector does. In simple words, a service connector contains credentials that grant stack components access to cloud infrastructure. These credentials are stored in the form of a secret, and are available to the ZenML server to use. Using these credentials, the service connector brokers a short-lived token and grants temporary permissions to the stack component to access that infrastructure. This diagram represents this process:
There are many ways to create an AWS service connector, but for the sake of this guide, we recommend creating one by using the IAM method.
AWS_PROFILE=<AWS_PROFILE> zenml service-connector register cloud_connector --type aws --auto-configure
There are many ways to create a GCP service connector, but for the sake of this guide, we recommend creating one by using the Service Account method.
zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@<PATH_TO_SERVICE_ACCOUNT_JSON> --project_id=<PROJECT_ID> --generate_temporary_tokens=False
There are many ways to create an Azure service connector, but for the sake of this guide, we recommend creating one by using the Service Principal method.
zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
Once we have our service connector, we can now attach it to stack components. In this case, we are going to connect it to our remote artifact store:
zenml artifact-store connect cloud_artifact_store --connector cloud_connector
Now, every time you (or anyone else with access) uses the cloud_artifact_store, they will be granted a temporary token that will grant them access to the remote storage. Therefore, your colleagues don't need to worry about setting up credentials and installing clients locally!
Running a pipeline on a cloud stack | user-guide | https://docs.zenml.io/user-guide/production-guide/remote-storage | 394 |
lowing configuration values in custom-values.yaml:

- the database configuration, if you mean to use an external database:
  - the database URL, formatted as mysql://<username>:<password>@<hostname>:<port>/<database>
  - CA and/or client TLS certificates, if you're using SSL to secure the connection to the database
- the Ingress configuration, if enabled:
  - enabling TLS
  - enabling self-signed certificates
  - configuring the hostname that will be used to access the ZenML server, if different from the IP address or hostname associated with the Ingress service installed in your cluster
Note All the file paths that you use in your helm chart (e.g. for certificates like database.sslCa) must be relative to the ./zenml helm chart directory, meaning that you also have to copy these files there.
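As a rough sketch, a custom-values.yaml covering both points could look like the following (the exact keys are defined by the chart's values.yaml, so treat these names as assumptions to verify against the chart version you downloaded):

# custom-values.yaml
zenml:
  database:
    # external MySQL database, formatted as described above
    url: "mysql://zenml_user:password@mysql.example.com:3306/zenml"
    # path relative to the ./zenml chart directory (copy the file there first)
    sslCa: "ca.pem"
  ingress:
    enabled: true
    host: "zenml.example.com"
    tls:
      enabled: true
      generateCerts: true  # self-signed certificates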
Install the Helm chart
Once everything is configured, you can run the following command in the ./zenml folder to install the Helm chart.
helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml
Connect to the deployed ZenML server
Immediately after deployment, the ZenML server needs to be activated before it can be used. The activation process includes creating an initial admin user account and configuring some server settings. You can do this only by visiting the ZenML server URL in your browser and following the on-screen instructions. Connecting your local ZenML client to the server is not possible until the server is properly initialized.
The Helm chart should print out a message with the URL of the deployed ZenML server. You can use the URL to open the ZenML UI in your browser.
To connect your local client to the ZenML server, you can either pass the configuration as command line arguments or as a YAML file:
zenml connect --url=https://zenml.example.com:8080 --no-verify-ssl
or
zenml connect --config=/path/to/zenml_server_config.yaml
The YAML file should have the following structure when connecting to a ZenML server:
url: <The URL of the ZenML server>
verify_ssl: | | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm | 421 |
ht want to use torch.save() and torch.load() here.

(Optional) How to Visualize the Artifact
Optionally, you can override the save_visualizations() method to automatically save visualizations for all artifacts saved by your materializer. These visualizations are then shown next to your artifacts in the dashboard:
Currently, artifacts can be visualized either as CSV table, embedded HTML, image or Markdown. For more information, see zenml.enums.VisualizationType.
To create visualizations, you need to:
Compute the visualizations based on the artifact
Save all visualizations to paths inside self.uri
Return a dictionary mapping visualization paths to visualization types.
As an example, check out the implementation of the zenml.materializers.NumpyMaterializer, which uses matplotlib to automatically save or plot certain arrays.
Read more about visualizations here.
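To make this concrete, here is a minimal sketch of such an override (the materializer and the Markdown summary are illustrative assumptions; the save_visualizations() hook itself comes from BaseMaterializer):

import os
from typing import Any, Dict

from zenml.enums import VisualizationType
from zenml.materializers.base_materializer import BaseMaterializer

class MyVisualMaterializer(BaseMaterializer):
    # ASSOCIATED_TYPES, load() and save() omitted for brevity

    def save_visualizations(self, data: Any) -> Dict[str, VisualizationType]:
        """Save a Markdown summary next to the artifact and register it."""
        visualization_uri = os.path.join(self.uri, "summary.md")
        with self.artifact_store.open(visualization_uri, "w") as f:
            f.write(f"# Artifact summary\n\nType: `{type(data).__name__}`\n")
        # Map each saved visualization file to how the dashboard renders it
        return {visualization_uri: VisualizationType.MARKDOWN}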
(Optional) Which Metadata to Extract for the Artifact
Optionally, you can override the extract_metadata() method to track custom metadata for all artifacts saved by your materializer. Anything you extract here will be displayed in the dashboard next to your artifacts.
Note that values must be of one of the special types defined in zenml.metadata.metadata_types, which are displayed in a dedicated way in the dashboard. See zenml.metadata.metadata_types.MetadataType for more details.
By default, this method will only extract the storage size of an artifact, but you can override it to track anything you wish. E.g., the zenml.materializers.NumpyMaterializer overrides this method to track the shape, dtype, and some statistical properties of each np.ndarray that it saves.
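For example, a sketch of such an override (the materializer and the tracked property are illustrative assumptions):

from typing import Any, Dict

from zenml.materializers.base_materializer import BaseMaterializer
from zenml.metadata.metadata_types import MetadataType

class MyMetadataMaterializer(BaseMaterializer):
    # ASSOCIATED_TYPES, load() and save() omitted for brevity

    def extract_metadata(self, data: Any) -> Dict[str, MetadataType]:
        """Track a custom property alongside the default storage size."""
        return {"num_elements": len(data)}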
If you would like to disable artifact visualization altogether, you can set enable_artifact_visualization at either pipeline or step level via @pipeline(enable_artifact_visualization=False) or @step(enable_artifact_visualization=False).
| how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/handle-custom-data-types | 358 |
the Tekton orchestrator, check out the SDK Docs.

Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
| stack-components | https://docs.zenml.io/stack-components/orchestrators/tekton | 97 |
:
container.clusters.list
container.clusters.get

In addition to the above permissions, the credentials should include permissions to connect to and use the GKE cluster (i.e. some or all permissions in the Kubernetes Engine Developer role).
If set, the resource name must identify a GKE cluster using one of the following formats:
GKE cluster name: {cluster-name}
GKE cluster names are project scoped. The connector can only be used to access GKE clusters in the GCP project that it is configured to use.
GCR container registry
Allows Stack Components to access a GCR registry as a standard Docker registry resource. When used by Stack Components, they are provided a pre-authenticated Python Docker client instance.
The configured credentials must have at least the following GCP permissions:
storage.buckets.get
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.list
The Storage Legacy Bucket Writer role includes all of the above permissions while at the same time restricting access to only the GCR buckets.
The resource name associated with this resource type identifies the GCR container registry associated with the GCP-configured project (the repository name is optional):
GCR repository URI: [https://]gcr.io/{project-id}[/{repository-name}]
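For instance, a Service Connector could be registered and scoped to this resource type as follows (a sketch; the connector name is a placeholder and --auto-configure assumes locally configured GCP credentials):

zenml service-connector register gcr-connector --type gcp \
    --resource-type docker-registry --auto-configure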
Authentication Methods
Implicit authentication
Implicit authentication to GCP services using Application Default Credentials.
This method may constitute a security risk, because it can give users access to the same cloud resources and services that the ZenML Server itself is configured to access. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 373 |
the creation of the custom flavor through the CLI.

The CustomModelRegistryConfig class is imported when someone tries to register/update a stack component with this custom flavor. Most importantly, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are pydantic objects under the hood, you can also add your own custom validators here.
The CustomModelRegistry only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomModelRegistryFlavor and the CustomModelRegistryConfig are implemented in a different module/path than the actual CustomModelRegistry).
For a full implementation example, please check out the MLFlowModelRegistry.
| stack-components | https://docs.zenml.io/stack-components/model-registries/custom | 195 |
ment_uri", is_deployment_artifact=True)],
]:
    ...

The ArtifactConfig object allows configuring model linkage directly on the artifact. You specify whether it is for a model or a deployment by using the is_model_artifact and is_deployment_artifact flags (as shown above); otherwise it is assumed to be a data artifact.
Saving intermediate artifacts
It is often handy to save some of your work half-way: steps like epoch-based training can be running slow, and you don't want to lose any checkpoints along the way if an error occurs. You can use the save_artifact utility function to save your data assets as ZenML artifacts. Moreover, if your step has the Model context configured in the @pipeline or @step decorator it will be automatically linked to it, so you can get easy access to it using the Model Control Plane features.
from zenml import step, Model
from zenml.artifacts.utils import save_artifact
import pandas as pd
from sklearn.base import ClassifierMixin
from typing_extensions import Annotated
from zenml.artifacts.artifact_config import ArtifactConfig

@step(model=Model(name="MyModel", version="1.2.42"))
def trainer(
    trn_dataset: pd.DataFrame,
) -> Annotated[
    ClassifierMixin, ArtifactConfig("trained_model", is_model_artifact=True)
]:  # this configuration will be applied to `model` output
    """Step running slow training."""
    ...

    for epoch in epochs:
        checkpoint = model.train(epoch)
        # this will save each checkpoint in `training_checkpoint` artifact
        # with distinct version e.g. `1.2.42_0`, `1.2.42_1`, etc.
        # Checkpoint artifacts will be linked to `MyModel` version `1.2.42`
        # implicitly.
        save_artifact(
            data=checkpoint,
            name="training_checkpoint",
            version=f"1.2.42_{epoch}",
        )

    ...
    return model
Link artifacts explicitly
If you would like to link an artifact to a model outside of the step context, or even outside a step entirely, you can use the link_artifact_to_model function. All you need is the artifact that is ready to be linked and the configuration of a model.
from zenml import step, Model, link_artifact_to_model, save_artifact
from zenml.client import Client | how-to | https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/linking-model-binaries-data-to-models | 463 |
have a look at the SDK docs.

How do you use it?

To log information from a ZenML pipeline step using the Neptune Experiment Tracker component in the active stack, you need to enable an experiment tracker using the @step decorator. Then fetch the Neptune run object and use logging capabilities as you would normally do. For example:
import numpy as np
import tensorflow as tf
from neptune_tensorflow_keras import NeptuneCallback

from zenml.integrations.neptune.experiment_trackers.run_state import (
    get_neptune_run,
)
from zenml import step

@step(experiment_tracker="<NEPTUNE_TRACKER_STACK_COMPONENT_NAME>")
def tf_trainer(
    x_train: np.ndarray,
    y_train: np.ndarray,
    x_val: np.ndarray,
    y_val: np.ndarray,
    epochs: int = 5,
    lr: float = 0.001
) -> tf.keras.Model:
    ...
    neptune_run = get_neptune_run()
    model.fit(
        x_train,
        y_train,
        epochs=epochs,
        validation_data=(x_val, y_val),
        callbacks=[
            NeptuneCallback(run=neptune_run),
        ],
    )

    metric = ...

    neptune_run["<METRIC_NAME>"] = metric
Instead of hardcoding an experiment tracker name, you can also use the Client to dynamically use the experiment tracker of your active stack:
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
    ...
Additional configuration
You can pass a set of tags to the Neptune run by using the NeptuneExperimentTrackerSettings class, like in the example below:
import numpy as np
import tensorflow as tf

from zenml import step
from zenml.integrations.neptune.experiment_trackers.run_state import (
    get_neptune_run,
)
from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"})

@step(
    experiment_tracker="<NEPTUNE_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.neptune": neptune_settings
    }
)
def my_step(
    x_test: np.ndarray,
    y_test: np.ndarray,
    model: tf.keras.Model,
) -> float:
    """Log metadata to Neptune run""" | stack-components | https://docs.zenml.io/stack-components/experiment-trackers/neptune | 465 |
n this mechanism and initialize zenml at the root.

Afterward, you should see the new flavor in the list of available flavors:
zenml orchestrator flavor list
It is important to draw attention to when and how these base abstractions are coming into play in a ZenML workflow.
The CustomOrchestratorFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomOrchestratorConfig class is imported when someone tries to register/update a stack component with this custom flavor. Especially during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The CustomOrchestrator only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomOrchestratorFlavor and the CustomOrchestratorConfig are implemented in a different module/path than the actual CustomOrchestrator).
Implementation guide
Create your orchestrator class: This class should either inherit from BaseOrchestrator, or more commonly from ContainerizedOrchestrator. If your orchestrator uses container images to run code, you should inherit from ContainerizedOrchestrator, which handles building all Docker images for the pipeline to be executed. If your orchestrator does not use container images, you'll be responsible for ensuring that the execution environment contains all the necessary requirements and code files to run the pipeline. | stack-components | https://docs.zenml.io/stack-components/orchestrators/custom | 339 |
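A bare-bones sketch of such a class could look like the following (the method names come from ZenML's base orchestrator interface; treat the exact signatures as assumptions to check against your ZenML version):

from typing import Any, Dict

from zenml.models import PipelineDeploymentResponse
from zenml.orchestrators import ContainerizedOrchestrator
from zenml.stack import Stack

class MyOrchestrator(ContainerizedOrchestrator):
    def get_orchestrator_run_id(self) -> str:
        # Return an ID that is unique per pipeline run but identical for
        # all steps of the same run (e.g. read from an environment variable
        # that your orchestration backend sets).
        ...

    def prepare_or_run_pipeline(
        self,
        deployment: PipelineDeploymentResponse,
        stack: Stack,
        environment: Dict[str, str],
    ) -> Any:
        # Translate every step in the deployment into a job/task of your
        # orchestration backend, wire up the dependencies between them and
        # submit the result for execution.
        ...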
nML, namely an orchestrator and an artifact store.

Keep in mind that each one of these components is built on top of base abstractions and is completely extensible.
Orchestrator
An Orchestrator is a workhorse that coordinates all the steps to run in a pipeline. Since pipelines can be set up with complex combinations of steps with various asynchronous dependencies between them, the orchestrator acts as the component that decides what steps to run and when to run them.
ZenML comes with a default local orchestrator designed to run on your local machine. This is useful, especially during the exploration phase of your project. You don't have to rent a cloud instance just to try out basic things.
Artifact Store
An Artifact Store is a component that houses all the data that passes through the pipeline as inputs and outputs. Each artifact that gets stored in the artifact store is tracked and versioned, and this allows for extremely useful features like data caching, which speeds up your workflows.
Similar to the orchestrator, ZenML comes with a default local artifact store designed to run on your local machine. This is useful, especially during the exploration phase of your project. You don't have to set up a cloud storage system to try out basic things.
Flavor
ZenML provides a dedicated base abstraction for each stack component type. These abstractions are used to develop solutions, called Flavors, tailored to specific use cases/tools. With ZenML installed, you get access to a variety of built-in and integrated Flavors for each component type, but users can also leverage the base abstractions to create their own custom flavors.
Stack Switching
When it comes to production-grade solutions, it is rarely enough to just run your workflow locally without including any cloud infrastructure. | getting-started | https://docs.zenml.io/v/docs/getting-started/core-concepts | 352 |
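For instance (a sketch; the stack names are placeholders), switching the active stack is a single CLI command:

# Iterate locally while experimenting
zenml stack set local_stack
# Switch to a cloud stack when you need production-grade infrastructure
zenml stack set cloud_stack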
mponent. We'll use the AWS Service Connector Type.

A lot more is hidden behind a Service Connector Type than a name and a simple list of resource types. Before using a Service Connector Type to configure a Service Connector, you probably need to understand what it is, what it can offer and what the supported authentication methods and their requirements are. All this can be accessed on-site directly through the CLI or in the dashboard. Some examples are included here.
Showing information about the AWS Service Connector Type:
zenml service-connector describe-type aws
Example Command Output
╔══════════════════════════════════════════════════════════════════════════════╗
║ 🔶 AWS Service Connector (connector type: aws)                                ║
╚══════════════════════════════════════════════════════════════════════════════╝
Authentication methods:
🔒 implicit
🔒 secret-key
🔒 sts-token
🔒 iam-role
🔒 session-token
🔒 federation-token
Resource types:
🔶 aws-generic
📦 s3-bucket
🌀 kubernetes-cluster
🐳 docker-registry
Supports auto-configuration: True
Available locally: True
Available remotely: True
The ZenML AWS Service Connector facilitates the authentication and access to
managed AWS services and resources. These encompass a range of resources,
including S3 buckets, ECR repositories, and EKS clusters. The connector provides
support for various authentication methods, including explicit long-lived AWS
secret keys, IAM roles, short-lived STS tokens and implicit authentication.
To ensure heightened security measures, this connector also enables the
generation of temporary STS security tokens that are scoped down to the minimum
permissions necessary for accessing the intended resource. Furthermore, it
includes automatic configuration and detection of credentials locally configured
through the AWS CLI.
This connector serves as a general means of accessing any AWS service by issuing
pre-authenticated boto3 sessions to clients. Additionally, the connector can | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 455 |
import pandas as pd
import whylogs as why
from sklearn import datasets
from typing import Tuple
from typing_extensions import Annotated
from whylogs.core import DatasetProfileView

def data_loader() -> Tuple[
    Annotated[pd.DataFrame, "data"],
    Annotated[DatasetProfileView, "profile"],
]:
    """Load the diabetes dataset."""
    X, y = datasets.load_diabetes(return_X_y=True, as_frame=True)

    # merge X and y together
    df = pd.merge(X, y, left_index=True, right_index=True)

    profile = why.log(pandas=df).profile().view()
    return df, profile
How do you use it?
Whylogs's profiling functions take in a pandas.DataFrame dataset and generate a DatasetProfileView object containing all the relevant information extracted from the dataset.
There are three ways you can use whylogs in your ZenML pipelines that allow different levels of flexibility:
instantiate, configure and insert the standard WhylogsProfilerStep shipped with ZenML into your pipelines. This is the easiest way and the recommended approach, but can only be customized through the supported step configuration parameters.
call the data validation methods provided by the whylogs Data Validator in your custom step implementation. This method allows for more flexibility concerning what can happen in the pipeline step, but you are still limited to the functionality implemented in the Data Validator.
use the whylogs library directly in your custom step implementation. This gives you complete freedom in how you are using whylogs's features.
You can visualize whylogs profiles in Jupyter notebooks or view them directly in the ZenML dashboard.
The whylogs standard step
ZenML wraps the whylogs/WhyLabs functionality in the form of a standard WhylogsProfilerStep step. The only field in the step config is a dataset_timestamp attribute which is only relevant when you upload the profiles to WhyLabs that uses this field to group and merge together profiles belonging to the same dataset. The helper function get_whylogs_profiler_step used to create an instance of this standard step takes in an optional dataset_id parameter that is also used only in the context of WhyLabs upload to identify the model in the context of which the profile is uploaded, e.g.: | stack-components | https://docs.zenml.io/stack-components/data-validators/whylogs | 405 |
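A sketch of what this looks like with the ZenML whylogs integration (the dataset_id value is a placeholder):

from zenml.integrations.whylogs.steps import get_whylogs_profiler_step

# The optional dataset_id is only used to identify the model when the
# resulting profiles are uploaded to WhyLabs
train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")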
-dimensional vector in the high-dimensional space.

We can use dimensionality reduction functionality in umap and scikit-learn to represent the 384 dimensions of our embeddings in two-dimensional space. This allows us to visualize the embeddings and see how similar chunks are clustered together based on their semantic meaning and context. We can also use this visualization to identify patterns and relationships in the data that can help us improve the retrieval performance of our RAG pipeline. It's worth trying both UMAP and t-SNE to see which one works best for our use case, since they have somewhat different representations of the data and reduction algorithms, as you'll see.
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE
import umap
from zenml.client import Client
artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID_GOES_HERE')
documents = artifact.load()
embeddings = np.array([doc.embedding for doc in documents])
parent_sections = [doc.parent_section for doc in documents]
# Get unique parent sections
unique_parent_sections = list(set(parent_sections))
# Tol color palette
tol_colors = [
    "#4477AA",
    "#EE6677",
    "#228833",
    "#CCBB44",
    "#66CCEE",
    "#AA3377",
    "#BBBBBB",
]

# Create a colormap with Tol colors
tol_colormap = ListedColormap(tol_colors)
# Assign colors to each unique parent section
section_colors = tol_colors[: len(unique_parent_sections)]
# Create a dictionary mapping parent sections to colors
section_color_dict = dict(zip(unique_parent_sections, section_colors))
# Dimensionality reduction using t-SNE
def tsne_visualization(embeddings, parent_sections):
    tsne = TSNE(n_components=2, random_state=42)
    embeddings_2d = tsne.fit_transform(embeddings)

    plt.figure(figsize=(8, 8))
    for section in unique_parent_sections:
        if section in section_color_dict:
            mask = [section == ps for ps in parent_sections]
            plt.scatter(
                embeddings_2d[mask, 0],
                embeddings_2d[mask, 1], | user-guide | https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/embeddings-generation | 441 |
_model = bentoml_model_deployer_step(
    bento=bento,
    model_name="pytorch_mnist",  # Name of the model
    port=3001,  # Port to be used by the http server
)
ZenML BentoML Pipeline examples
Once all the steps have been defined, we can create a ZenML pipeline and run it. The bento builder step expects to get the trained model as an input, so we need to make sure either we have a previous step that trains the model and outputs it or loads the model from a previous run. Then the deployer step expects to get the bento bundle as an input, so we need to make sure either we have a previous step that builds the bento bundle and outputs it or load the bento bundle from a previous run or external source.
The following example shows how to create a ZenML pipeline that trains a model, builds a bento bundle, and deploys it to a local HTTP server.
# Import the pipeline to use the pipeline decorator
from zenml.pipelines import pipeline

# Pipeline definition
@pipeline
def bentoml_pipeline(
    importer,
    trainer,
    evaluator,
    deployment_trigger,
    bento_builder,
    deployer,
):
    """Link all the steps and artifacts together"""
    train_dataloader, test_dataloader = importer()
    model = trainer(train_dataloader)
    accuracy = evaluator(test_dataloader=test_dataloader, model=model)
    decision = deployment_trigger(accuracy=accuracy)
    bento = bento_builder(model=model)
    deployer(deploy_decision=decision, bento=bento)
In more complex scenarios, you might want to build a pipeline that trains a model and builds a bento bundle in a remote environment. Then creates a new pipeline that retrieves the bento bundle and deploys it to a local http server, or to a cloud provider. The following example shows a pipeline example that does exactly that.
# Import the pipeline to use the pipeline decorator
from zenml.pipelines import pipeline

# Pipeline definition
@pipeline
def remote_train_pipeline(
    importer,
    trainer,
    evaluator,
    bento_builder,
):
    """Link all the steps and artifacts together"""
    train_dataloader, test_dataloader = importer() | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/bentoml | 447 |
┃ 📦 blob-container ┃ az://demo-zenmlartifactstore ┃
┗━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Note: Please remember to grant the Azure service principal permissions to read and write to your Azure Blob storage container as well as to list accessible storage accounts and Blob containers. For a full list of permissions required to use an Azure Service Connector to access one or more Blob storage containers, please refer to the Azure Service Connector Blob storage container resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The Azure Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more Azure Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the Azure Blob storage container you want to use for your Azure Artifact Store by running e.g.:
zenml service-connector list-resources --resource-type blob-container
Example Command Output
The following 'blob-container' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         ┃ CONNECTOR NAME          ┃ CONNECTOR TYPE ┃ RESOURCE TYPE     ┃ RESOURCE NAMES               ┃
┠──────────────────────────────────────╂─────────────────────────╂────────────────╂───────────────────╂──────────────────────────────┨
┃ 273d2812-2643-4446-82e6-6098b8ccdaa4 ┃ azure-service-principal ┃ 🇦 azure       ┃ 📦 blob-container ┃ az://demo-zenmlartifactstore ┃
┠──────────────────────────────────────╂─────────────────────────╂────────────────╂───────────────────╂──────────────────────────────┨
┃ f6b329e1-00f7-4392-94c9-264119e672d0 ┃ azure-blob-demo         ┃ 🇦 azure       ┃ 📦 blob-container ┃ az://demo-zenmlartifactstore ┃ | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 527 |
d_data_loader_step()

    # Load the model to finetune
    # If no version is specified, the latest version of "my_model" is used
    model = client.get_artifact_version(
        name_id_or_prefix="my_model", version=model_version
    )

    # Finetune the model
    # This automatically creates a new version of "my_model"
    model_finetuner_step(model=model, dataset=dataset)

def main():
    # Save an untrained model as first version of "my_model"
    untrained_model = SVC(gamma=0.001)
    save_artifact(
        untrained_model, name="my_model", version="1", tags=["SVC", "untrained"]
    )

    # Create a first version of "my_dataset" and train the model on it
    model_finetuning_pipeline()

    # Finetune the latest model on an older version of the dataset
    model_finetuning_pipeline(dataset_version="1")

    # Run inference with the latest model on an older version of the dataset
    latest_trained_model = load_artifact("my_model")
    old_dataset = load_artifact("my_dataset", version="1")
    latest_trained_model.predict(old_dataset[0])

if __name__ == "__main__":
    main()
This would create the following pipeline run DAGs:
Run 1:
Run 2:
| user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/manage-artifacts | 281 |
The step creates and returns a FacetsComparison.

When the step finishes, ZenML will search for a materializer class that can handle this type, finds the FacetsMaterializer, and calls the save_visualizations() method, which creates the visualization and saves it into your artifact store as an HTML file.
When you open your dashboard and click on the artifact inside the run DAG, the visualization HTML file is loaded from the artifact store and displayed.
| how-to | https://docs.zenml.io/v/docs/how-to/visualize-artifacts/creating-custom-visualizations | 109 |
Configuration hierarchy
When things can be configured on both the pipeline and step level, the step configuration overrides the pipeline-level configuration.
There are a few general rules when it comes to settings and configurations that are applied in multiple places. Generally the following is true:
Configurations in code override configurations made inside of the yaml file
Configurations at the step level override those made at the pipeline level
In the case of dictionary attributes, the dictionaries are merged
from zenml import pipeline, step
from zenml.config import ResourceSettings

@step
def load_data(parameter: int) -> dict:
    ...

@step(settings={"resources": ResourceSettings(gpu_count=1, memory="2GB")})
def train_model(data: dict) -> None:
    ...

@pipeline(settings={"resources": ResourceSettings(cpu_count=2, memory="1GB")})
def simple_ml_pipeline(parameter: int):
    ...

# ZenML merges the two configurations and uses the step configuration to override
# values defined on the pipeline level
train_model.configuration.settings["resources"]
# -> cpu_count: 2, gpu_count=1, memory="2GB"

simple_ml_pipeline.configuration.settings["resources"]
# -> cpu_count: 2, memory="1GB"
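Expressed as a run configuration YAML (a sketch following the settings/steps layout used in this guide; values set in code still take precedence), the same hierarchy would look like:

settings:
  resources:
    cpu_count: 2
    memory: "1GB"
steps:
  train_model:
    settings:
      resources:
        gpu_count: 1
        memory: "2GB"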
| how-to | https://docs.zenml.io/v/docs/how-to/use-configuration-files/configuration-hierarchy | 272 |
ing resources:

┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE ┃ RESOURCE NAMES ┃
┠───────────────╂────────────────┨
┃ 📦 s3-bucket  ┃ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┛
The following might help understand the difference between scopes:
the difference between a multi-instance and a multi-type Service Connector is that the Resource Type scope is locked to a particular value during configuration for the multi-instance Service Connector
similarly, the difference between a multi-instance and a single-instance Service Connector is that the Resource Name (Resource ID) scope is locked to a particular value during configuration for the single-instance Service Connector
Service Connector Verification
When registering Service Connectors, the authentication configuration and credentials are automatically verified to ensure that they can indeed be used to gain access to the target resources:
for multi-type Service Connectors, this verification means checking that the configured credentials can be used to authenticate successfully to the remote service, as well as listing all resources that the credentials have permission to access for each Resource Type supported by the Service Connector Type.
for multi-instance Service Connectors, this verification step means listing all resources that the credentials have permission to access in addition to validating that the credentials can be used to authenticate to the target service or platform.
for single-instance Service Connectors, the verification step simply checks that the configured credentials have permission to access the target resource.
The verification can also be performed later on an already registered Service Connector. Furthermore, for multi-type and multi-instance Service Connectors, the verification operation can be scoped to a Resource Type and a Resource Name.
The following shows how a multi-type, a multi-instance and a single-instance Service Connector can be verified with multiple scopes after registration. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 389 |
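For example, using the same CLI verbs shown earlier in this guide (the connector name is a placeholder), a verification scoped down to a single bucket looks like:

zenml service-connector verify aws-multi-type --resource-type s3-bucket --resource-id s3://zenfiles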
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         ┃ CONNECTOR NAME ┃ CONNECTOR TYPE ┃ RESOURCE TYPE      ┃ RESOURCE NAMES    ┃
┠──────────────────────────────────────╂────────────────╂────────────────╂────────────────────╂───────────────────┨
┃ ffc01795-0c0a-4f1d-af80-b84aceabcfcf ┃ gcp-implicit   ┃ 🔵 gcp         ┃ 🐳 docker-registry ┃ gcr.io/zenml-core ┃
┠──────────────────────────────────────╂────────────────╂────────────────╂────────────────────╂───────────────────┨
┃ 561b776a-af8b-491c-a4ed-14349b440f30 ┃ gcp-zenml-core ┃ 🔵 gcp         ┃ 🐳 docker-registry ┃ gcr.io/zenml-core ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━┛
After having set up or decided on a GCP Service Connector to use to connect to the target GCR registry, you can register the GCP Container Registry as follows:
# Register the GCP container registry and reference the target GCR registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f gcp \
--uri=<REGISTRY_URL>
# Connect the GCP container registry to the target GCR registry via a GCP Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the GCP Container Registry to a target GCR registry through a GCP Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Linking the GCP Container Registry to a Service Connector means that your local Docker client is no longer authenticated to access the remote registry. If you need to manually interact with the remote registry via the Docker CLI, you can use the local login Service Connector feature to temporarily authenticate your local Docker client to the remote registry:
zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry
Example Command Output
$ zenml service-connector login gcp-zenml-core --resource-type docker-registry | stack-components | https://docs.zenml.io/stack-components/container-registries/gcp | 556 |
Set up CI/CD
Managing the lifecycle of a ZenML pipeline with Continuous Integration and Delivery
Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in production it is often desirable to mediate runs through a central workflow engine baked into your CI.
This allows data scientists to experiment with data processing and model training locally and then have code changes automatically tested and validated through the standard pull request/merge request peer review process. Changes that pass the CI and code review are then deployed automatically to production. Here is what this could look like:
Breaking it down
To illustrate this, let's walk through how this process might be set up on a GitHub Repository.
A data scientist wants to make improvements to the ML pipeline. They clone the repository, create a new branch, and experiment with new models or data processing steps on their local machine.
Once the data scientist thinks they have improved the pipeline, they create a pull request for their branch on GitHub. This automatically triggers a GitHub Action that will run the same pipeline in the staging environment (e.g. a pipeline running on a cloud stack in GCP), potentially with different test data. As long as the pipeline does not run successfully in the staging environment, the PR cannot be merged. The pipeline also generates a set of metrics and test results that are automatically published to the PR, where they can be peer-reviewed to decide if the changes should be merged.
Once the PR has been reviewed and passes all checks, the branch is merged into main. This automatically triggers another GitHub Action that now runs a pipeline in the production environment, which trains the same model on production data, runs some checks to compare its performance with the model currently served in production and then, if all checks pass, automatically deploys the new model. | user-guide | https://docs.zenml.io/user-guide/production-guide/ci-cd | 364 |
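A minimal sketch of the trigger side of such a setup as a GitHub Actions workflow (the file name, stack name, secrets and pipeline entrypoint are illustrative assumptions, not fixed ZenML conventions):
# .github/workflows/staging.yml (illustrative)
name: staging-pipeline
on:
  pull_request:
    branches: [main]
jobs:
  run-pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run the pipeline on the staging stack
        env:
          ZENML_URL: ${{ secrets.ZENML_URL }}
          ZENML_API_KEY: ${{ secrets.ZENML_API_KEY }}
        run: |
          zenml connect --url "$ZENML_URL" --api-key "$ZENML_API_KEY"
          zenml stack set staging_stack
          python run.py --training-pipeline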
zers={"1": MyMaterializer1, "2": MyMaterializer2})def my_first_step() -> Tuple[Annotated[MyObj1, "1"], Annotated[MyObj2, "2"]]:
return 1
Also, as briefly outlined in the configuration docs section, which materializer to use for the output of what step can also be configured within YAML config files.
For each output of your steps, you can define custom materializers to handle the loading and saving. You can configure them like this in the config:
...
steps:
<STEP_NAME>:
...
outputs:
<OUTPUT_NAME>:
materializer_source: run.MyMaterializer
Check out this page for information on your step output names and how to customize them.
Defining a materializer globally
Sometimes, you would like to configure ZenML to use a custom materializer globally for all pipelines, and override the default materializers that come built-in with ZenML. A good example of this would be to build a materializer for a pandas.DataFrame to handle the reading and writing of that dataframe in a different way than the default mechanism.
An easy way to do that is to use the internal materializer registry of ZenML and override its behavior:
# Entrypoint file where we run pipelines (i.e. run.py)
import pandas as pd

from zenml.materializers.base_materializer import BaseMaterializer
from zenml.materializers.materializer_registry import materializer_registry

# Create a new materializer
class FastPandasMaterializer(BaseMaterializer):
...
# Register the FastPandasMaterializer for pandas DataFrame objects
materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer)
# Run your pipelines: They will now all use the custom materializer
Developing a custom materializer
Now that we know how to configure a pipeline to use a custom materializer, let us briefly discuss how materializers in general are implemented.
Base implementation
In the following, you can see the implementation of the abstract base class BaseMaterializer, which defines the interface of all materializers:
class BaseMaterializer(metaclass=BaseMaterializerMeta):
"""Base Materializer to realize artifact data.""" | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types | 433 |
It can be managed through Hugging Face settings. The namespace parameter is used for listing and creating the inference endpoints. It can take any of the following values: a username, an organization name, or *, depending on where the inference endpoint should be created.
We can now use the model deployer in our stack.
zenml stack update <CUSTOM_STACK_NAME> --model-deployer=<MODEL_DEPLOYER_NAME>
See the huggingface_model_deployer_step for an example of using the Hugging Face Model Deployer to deploy a model inside a ZenML pipeline step.
Configuration
Within the HuggingFaceServiceConfig you can configure:
model_name: the name of the model in ZenML.
endpoint_name: the name of the inference endpoint. We add a zenml- prefix and the first 8 characters of the service UUID as a suffix to the endpoint name.
repository: The repository name in the userβs namespace ({username}/{model_id}) or in the organization namespace ({organization}/{model_id}) from the Hugging Face hub.
framework: The machine learning framework used for the model (e.g. "custom", "pytorch" )
accelerator: The hardware accelerator to be used for inference. (e.g. "cpu", "gpu")
instance_size: The size of the instance to be used for hosting the model (e.g. "large", "xxlarge")
instance_type: Inference Endpoints offers a selection of curated CPU and GPU instances. (e.g. "c6i", "g5.12xlarge")
region: The cloud region in which the Inference Endpoint will be created (e.g. "us-east-1" or "eu-west-1" for the aws vendor, "eastus" for Microsoft Azure).
vendor: The cloud provider or vendor where the Inference Endpoint will be hosted (e.g. "aws").
token: The Hugging Face authentication token. It can be managed through Hugging Face settings. The same token can be passed when registering the Hugging Face model deployer.
account_id: (Optional) The account ID used to link a VPC to a private Inference Endpoint (if applicable).
min_replica: (Optional) The minimum number of replicas (instances) to keep running for the Inference Endpoint. Defaults to 0. | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/huggingface | 465 |
Successfully registered service connector `gcp-impersonate-sa` with access to the following resources:
βββββββββββββββββ―βββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌβββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://zenml-bucket-sl β
βββββββββββββββββ·βββββββββββββββββββββββ
Short-lived credentials
This category of authentication methods uses temporary credentials explicitly configured in the Service Connector or generated by the Service Connector during auto-configuration. Of all available authentication methods, this is probably the least useful and you will likely never have to use it because it is terribly impractical: when short-lived credentials expire, Service Connectors become unusable and need to either be manually updated or replaced.
On the other hand, this authentication method is ideal if you're looking to grant someone else in your team temporary access to some resources without exposing your long-lived credentials.
A previous section described how temporary credentials can be automatically generated from other, long-lived credentials by most cloud provider Service Connectors. It only stands to reason that temporary credentials can also be generated manually by external means such as cloud provider CLIs and used directly to configure Service Connectors, or automatically generated during Service Connector auto-configuration.
This may be used as a way to grant an external party temporary access to some resources and have the Service Connector automatically become unusable (i.e. expire) after some time. Your long-lived credentials are kept safe, while the Service Connector only stores a short-lived credential.
The following is an example of using Service Connector auto-configuration to automatically generate a short-lived token from long-lived credentials configured for the local cloud provider CLI (AWS in this case): | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 405 |
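A sketch of what this looks like with the AWS CLI (the profile and connector names are illustrative):
AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token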
Deepchecks
How to test the data and models used in your pipelines with Deepchecks test suites
The Deepchecks Data Validator flavor provided with the ZenML integration uses Deepchecks to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
When would you want to use it?
Deepchecks is an open-source library that you can use to run a variety of data and model validation tests, from data integrity tests that work with a single dataset to model evaluation tests to data drift analyses and model performance comparison tests. All this can be done with minimal configuration input from the user, or customized with specialized conditions that the validation tests should perform.
Deepchecks works with both tabular data and computer vision data (currently in beta). For tabular data, the supported dataset format is pandas.DataFrame and the supported model format is sklearn.base.ClassifierMixin. For computer vision, the supported dataset format is torch.utils.data.dataloader.DataLoader and the supported model format is torch.nn.Module.
You should use the Deepchecks Data Validator when you need the following data and/or model validation features that are possible with Deepchecks:
Data Integrity Checks for tabular or computer vision data: detect data integrity problems within a single dataset (e.g. missing values, conflicting labels, mixed data types etc.).
Data Drift Checks for tabular or computer vision data: detect data skew and data drift problems by comparing a target dataset against a reference dataset (e.g. feature drift, label drift, new labels etc.). | stack-components | https://docs.zenml.io/stack-components/data-validators/deepchecks | 335 |
er Image Builder stack component, or the Vertex AI Orchestrator and Step Operator. It should be accompanied by a matching set of
GCP permissions that allow access to the set of remote resources required by the
client and Stack Component.
The resource name represents the GCP project that the connector is authorized to
access.
π¦ GCP GCS bucket (resource type: gcs-bucket)
Authentication methods: implicit, user-account, service-account, oauth2-token,
impersonation
Supports resource instances: True
Authentication methods:
π implicit
π user-account
π service-account
π oauth2-token
π impersonation
Allows Stack Components to connect to GCS buckets. When used by Stack
Components, they are provided a pre-configured GCS Python client instance.
The configured credentials must have at least the following GCP permissions
associated with the GCS buckets that it can access:
storage.buckets.list
storage.buckets.get
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list
storage.objects.update
For example, the GCP Storage Admin role includes all of the required
permissions, but it also includes additional permissions that are not required
by the connector.
If set, the resource name must identify a GCS bucket using one of the following
formats:
GCS bucket URI: gs://{bucket-name}
GCS bucket name: {bucket-name}
[...]
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Please select a resource type or leave it empty to create a connector that can be used to access any of the supported resource types (gcp-generic, gcs-bucket, kubernetes-cluster, docker-registry). []: gcs-bucket
Would you like to attempt auto-configuration to extract the authentication configuration from your local environment ? [y/N]: y
Service connector auto-configured successfully with the following configuration:
Service connector 'gcp-interactive' of type 'gcp' is 'private'.
'gcp-interactive' gcp Service
Connector Details
ββββββββββββββββββββ―ββββββββββββββββββ | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 450 |
MLflow
Logging and visualizing experiments with MLflow.
The MLflow Experiment Tracker is an Experiment Tracker flavor provided with the MLflow ZenML integration that uses the MLflow tracking service to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
When would you want to use it?
MLflow Tracking is a very popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean that it cannot be repurposed to track and visualize the results produced by your automated pipeline runs, as you make the transition toward a more production-oriented workflow.
You should use the MLflow Experiment Tracker:
if you have already been using MLflow to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML.
if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets)
if you or your team already have a shared MLflow Tracking service deployed somewhere on-premise or in the cloud, and you would like to connect ZenML to it to share the artifacts and metrics logged by your pipelines
You should consider one of the other Experiment Tracker flavors if you have never worked with MLflow before and would rather use another experiment tracking tool that you are more familiar with.
How do you deploy it?
The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration, you need to install it on your local machine to be able to register an MLflow Experiment Tracker and add it to your stack:
zenml integration install mlflow -y
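Once installed, you can register the tracker and add it to your active stack (names are placeholders; remote MLflow deployments require additional authentication settings, as described below):
zenml experiment-tracker register mlflow_tracker --flavor=mlflow
zenml stack update -e mlflow_tracker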
The MLflow Experiment Tracker can be configured to accommodate the following MLflow deployment scenarios: | stack-components | https://docs.zenml.io/stack-components/experiment-trackers/mlflow | 358 |
Verifying the multi-type Service Connector displays all resources that can be accessed through the Service Connector. This is like asking "are these credentials valid? Can they be used to authenticate to AWS? And if so, what resources can they access?":
zenml service-connector verify aws-multi-type
Example Command Output
Service connector 'aws-multi-type' is correctly configured with valid credentials and has access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://aws-ia-mwaa-715803424590 β
β β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
You can scope the verification down to a particular Resource Type or all the way down to a Resource Name. This is the equivalent of asking "are these credentials valid and which S3 buckets are they authorized to access?" and "can these credentials be used to access this particular Kubernetes cluster in AWS?":
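For example (a sketch reusing the connector and resource names from the listing above):
# Scope the verification to a Resource Type
zenml service-connector verify aws-multi-type --resource-type s3-bucket
# Scope the verification all the way down to a Resource Name
zenml service-connector verify aws-multi-type --resource-type kubernetes-cluster --resource-id zenhacks-cluster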
is removed with docker rm.
Docker MySQL database
As a recommended alternative to the SQLite database, you can run a MySQL database service as another Docker container and connect the ZenML server container to it.
A command like the following can be run to start the containerized MySQL database service:
docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0
If you also wish to persist the MySQL database data, you can mount a persistent volume or directory from the host into the container using the --mount flag, e.g.:
mkdir mysql-data
docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password \
--mount type=bind,source=$PWD/mysql-data,target=/var/lib/mysql \
mysql:8.0
Configuring the ZenML server container to connect to the MySQL database is just a matter of setting the ZENML_STORE_URL environment variable. We use the special host.docker.internal DNS name that is resolved from within the Docker containers to the gateway IP address used by the Docker network (see the Docker documentation for more details). On Linux, this needs to be explicitly enabled in the docker run command with the --add-host argument:
docker run -it -d -p 8080:8080 --name zenml \
--add-host host.docker.internal:host-gateway \
--env ZENML_STORE_URL=mysql://root:[email protected]/zenml \
zenmldocker/zenml-server
You need to visit the ZenML dashboard at http://localhost:8080 and activate the server by creating an initial admin user account. You can then connect your client to the server with the web login flow:
zenml connect --url http://localhost:8080
Direct MySQL database connection
This scenario is similar to the previous one, but instead of running a ZenML server, the client is configured to connect directly to a MySQL database running in a Docker container.
As previously covered, the containerized MySQL database service can be started with a command like the following:
docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 451 |
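The ZenML client can then be connected straight to the database instead of to a ZenML server (a sketch; adjust the host and credentials to match your MySQL container):
zenml connect --url mysql://127.0.0.1/zenml --username root --password password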
zenml service-connector list --name aws-sts-token
Example Command Output
ββββββββββ―ββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββ―βββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββ―βββββββββ―ββββββββββ―βββββββββββββ―βββββββββ
β ACTIVE β NAME β ID β TYPE β RESOURCE TYPES β RESOURCE NAME β SHARED β OWNER β EXPIRES IN β LABELS β
β βββββββββΌββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββΌβββββββββΌββββββββββββββββββββββββΌββββββββββββββββΌβββββββββΌββββββββββΌβββββββββββββΌβββββββββ¨
β β aws-sts-token β a05ef4ef-92cb-46b2-8a3a-a48535adccaf β πΆ aws β πΆ aws-generic β <multiple> β β β default β 11h57m51s β β
β β β β β π¦ s3-bucket β β β β β β
β β β β β π kubernetes-cluster β β β β β β
β β β β β π³ docker-registry β β β β β β
ββββββββββ·ββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββ·βββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββ·βββββββββ
AWS IAM Role
Generates temporary STS credentials by assuming an AWS IAM role.
This authentication method still requires credentials to be explicitly configured. If your ZenML server is running in AWS and you're looking for an alternative that uses implicit credentials while at the same time benefits from all the security advantages of assuming an IAM role, you should use the implicit authentication method with a configured IAM role instead. | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 526 |
container_registry --flavor=aws --provider=aws ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the AWS container registry, we need:
The ZenML aws integration installed. If you haven't done so, run:
zenml integration install aws
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=aws \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
You also need to set up authentication required to log in to the container registry.
Authentication Methods
Integrating and using an AWS Container Registry in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the AWS cloud platform is through an AWS Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the AWS Container Registry with other remote stack components also running in AWS. | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/aws | 295 |
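A sketch of the Service Connector route (the connector and registry names are illustrative):
# Auto-configure an AWS Service Connector scoped to Docker registry (ECR) access
zenml service-connector register aws-ecr --type aws --resource-type docker-registry --auto-configure
# Connect the AWS container registry registered above to it
zenml container-registry connect <NAME> --connector aws-ecr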
Project templates
Rocketstart your ZenML journey!
What would you need to get a quick understanding of the ZenML framework and start building your ML pipelines? The answer is one of the ZenML project templates covering the major use cases of ZenML: a collection of steps and pipelines and, to top it all off, a simple but useful CLI. This is exactly what the ZenML templates are all about!
List of available project templates
Starter template [starter] (tags: basic, scikit-learn): All the basic ML ingredients you need to get you started with ZenML: parameterized steps, a model training pipeline, a flexible configuration and a simple CLI. All created around a representative and versatile model training use-case implemented with the scikit-learn library.
E2E Training with Batch Predictions [e2e_batch] (tags: etl, hp-tuning, model-promotion, drift-detection, batch-prediction, scikit-learn): This project template is a good starting point for anyone starting with ZenML. It consists of two pipelines with the following high-level steps: load, split, and preprocess data; run HP tuning; train and evaluate model performance; promote model to production; detect data drift; run batch inference.
NLP Training Pipeline [nlp] (tags: nlp, hp-tuning, model-promotion, training, pytorch, gradio, huggingface): This project template is a simple NLP training pipeline that walks through tokenization, training, HP tuning, evaluation and deployment for a BERT or GPT-2 based model, and tests it locally with Gradio.
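To instantiate one of these templates, pass its short name to zenml init, e.g. for the starter template (the directory name is arbitrary):
mkdir zenml_starter
cd zenml_starter
zenml init --template starter --template-with-defaults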
Do you have a personal project powered by ZenML that you would like to see here? At ZenML, we are looking for design partnerships and collaboration to help us better understand the real-world scenarios in which MLOps is being used and to build the best possible experience for our users. If you are interested in sharing all or parts of your project with us in the form of a ZenML project template, please join our Slack and leave us a message! | how-to | https://docs.zenml.io/v/docs/how-to/setting-up-a-project-repository/using-project-templates | 407 |
zenml orchestrator connect ${ORCHESTRATOR_NAME} -i
Head on over to our docs to learn more about orchestrators and how to configure them.
Container Registry
export CONTAINER_REGISTRY_NAME=gcp_container_registry
zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri=<GCR-URI>
# Connect the GCS orchestrator to the target gcp project via a GCP Service Connector
zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i
Head on over to our docs to learn more about container registries and how to configure them.
7) Create Stack
export STACK_NAME=gcp_stack
zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \
-a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
In case you want to also add any other stack components to this stack, feel free to do so.
And you're already done!
Just like that, you now have a fully working GCP stack ready to go. Feel free to take it for a spin by running a pipeline on it.
Cleanup
If you do not want to use any of the created resources in the future, simply delete the project you created.
gcloud projects delete <PROJECT_ID_OR_NUMBER>
Develop a custom experiment tracker
Learning how to develop a custom experiment tracker.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base abstraction in progress!
We are actively working on the base abstraction for the Experiment Tracker, which will be available soon. As a result, their extension is not recommended at the moment. When you are selecting an Experiment Tracker for your stack, you can use one of the existing flavors.
If you need to implement your own Experiment Tracker flavor, you can still do so, but keep in mind that you may have to refactor it when the base abstraction is released.
Build your own custom experiment tracker
If you want to create your own custom flavor for an experiment tracker, you can follow the following steps:
Create a class that inherits from the BaseExperimentTracker class and implements the abstract methods.
If you need any configuration, create a class that inherits from the BaseExperimentTrackerConfig class and add your configuration parameters.
Bring both the implementation and the configuration together by inheriting from the BaseExperimentTrackerFlavor class.
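A minimal sketch of how these three pieces fit together, mirroring the flavor pattern used by other custom components (the import paths and example config field are assumptions, especially while the base abstraction is still in progress):
from typing import Optional, Type, cast

from zenml.experiment_trackers.base_experiment_tracker import (
    BaseExperimentTracker,
    BaseExperimentTrackerConfig,
    BaseExperimentTrackerFlavor,
)

class MyExperimentTrackerConfig(BaseExperimentTrackerConfig):
    """Config for the custom experiment tracker."""
    tracking_uri: Optional[str] = None  # example configuration parameter

class MyExperimentTracker(BaseExperimentTracker):
    """Custom experiment tracker implementation."""
    @property
    def config(self) -> MyExperimentTrackerConfig:
        """Return the typed configuration of the tracker."""
        return cast(MyExperimentTrackerConfig, self._config)

class MyExperimentTrackerFlavor(BaseExperimentTrackerFlavor):
    """Flavor tying the implementation and configuration together."""
    @property
    def name(self) -> str:
        """Globally unique name of the flavor."""
        return "my_experiment_tracker"

    @property
    def config_class(self) -> Type[MyExperimentTrackerConfig]:
        """Configuration class for this flavor."""
        return MyExperimentTrackerConfig

    @property
    def implementation_class(self) -> Type[MyExperimentTracker]:
        """Implementation class for this flavor."""
        return MyExperimentTracker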
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml experiment-tracker flavor register <path.to.MyExperimentTrackerFlavor>
For example, if your flavor class MyExperimentTrackerFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor | stack-components | https://docs.zenml.io/v/docs/stack-components/experiment-trackers/custom | 328 |
Develop a Custom Model Registry
Learning how to develop a custom model registry.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Base abstraction in progress!
The Model registry stack component is relatively new in ZenML. While it is fully functional, it can be challenging to cover all the ways ML systems deal with model versioning. This means that the API might change in the future. We will keep this page up-to-date with the latest changes.
If you are writing a custom model registry flavor, and you found that the base abstraction is lacking or not flexible enough, please let us know by messaging us on Slack, or by opening an issue on GitHub
Base Abstraction
The BaseModelRegistry is the abstract base class that needs to be subclassed in order to create a custom component that can be used to register and retrieve models. As model registries can come in many shapes and forms, the base class exposes a deliberately basic and generic interface:
from abc import ABC, abstractmethod
from enum import Enum
from typing import Any, Dict, List, Optional, Type, cast
from pydantic import BaseModel, Field, root_validator
from zenml.enums import StackComponentType
from zenml.stack import Flavor, StackComponent
from zenml.stack.stack_component import StackComponentConfig
class BaseModelRegistryConfig(StackComponentConfig):
"""Base config for model registries."""
class BaseModelRegistry(StackComponent, ABC):
"""Base class for all ZenML model registries."""
@property
def config(self) -> BaseModelRegistryConfig:
"""Returns the config of the model registry."""
return cast(BaseModelRegistryConfig, self._config)
# ---------
# Model Registration Methods
# ---------
@abstractmethod
def register_model(
self,
name: str,
description: Optional[str] = None, | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries/custom | 391 |
re functionality altogether.
Backup secrets store
A backup secrets store back-end may be configured for high-availability and backup purposes, or as an intermediate step in the process of migrating secrets to a different external location or secrets manager provider.
To configure a backup secrets store in the Docker container, use the same approach and instructions documented for the primary secrets store, but set the ZENML_BACKUP_SECRETS_STORE_* environment variables instead of ZENML_SECRETS_STORE_*, e.g.:
ZENML_BACKUP_SECRETS_STORE_TYPE: aws
ZENML_BACKUP_SECRETS_STORE_AUTH_METHOD: secret-key
ZENML_BACKUP_SECRETS_STORE_AUTH_CONFIG: '{"aws_access_key_id":"<aws-key-id>","aws_secret_access_key":"<aws-secret-key>","role_arn":"<aws-role-arn>"}'
Advanced server configuration options
These configuration options are not required for most use cases, but can be useful in certain scenarios that require mirroring the same ZenML server configuration across multiple container instances (e.g. a Kubernetes deployment with multiple replicas):
ZENML_SERVER_JWT_SECRET_KEY: This is a secret key used to sign JWT tokens used for authentication. If not explicitly set, a random key is generated automatically by the server on startup and stored in the server's global configuration. This should be set to a random string with a recommended length of at least 32 characters, e.g. generated with:
from secrets import token_hex
token_hex(32)
or:
openssl rand -hex 32
The environment variables starting with ZENML_SERVER_SECURE_HEADERS_* can be used to enable, disable or set custom values for security headers in the ZenML server's HTTP responses. The following values can be set for any of the supported secure headers configuration options:
enabled, on, true or yes - enables the secure header with the default value.
disabled, off, false, none or no - disables the secure header entirely, so that it is not set in the ZenML server's HTTP responses.
any other value - sets the secure header to the specified value. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 435 |
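For example (the exact set of supported header variables is documented with the server; the two names below are illustrative):
ZENML_SERVER_SECURE_HEADERS_XFO: disabled
ZENML_SERVER_SECURE_HEADERS_HSTS: max-age=63072000; includeSubdomains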
onfig class and add your configuration parameters.
Bring both the implementation and the configuration together by inheriting from the BaseModelDeployerFlavor class. Make sure that you give a name to the flavor through its abstract property.
Create a service class that inherits from the BaseService class and implements the abstract methods. This class will be used to represent the deployed model server in ZenML.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml model-deployer flavor register <path.to.MyModelDeployerFlavor>
For example, if your flavor class MyModelDeployerFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually, it's better to not have to rely on this mechanism and initialize zenml at the root.
Afterward, you should see the new flavor in the list of available flavors:
zenml model-deployer flavor list
It is important to draw attention to when and how these base abstractions are coming into play in a ZenML workflow.
The CustomModelDeployerFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomModelDeployerConfig class is imported when someone tries to register/update a stack component with this custom flavor. Especially, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here. | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers/custom | 404 |
S Secrets Manager accounts or regions may be used.
Always make sure that the backup Secrets Store is configured to use a different location than the primary Secrets Store. The location can be different in terms of the Secrets Store back-end type (e.g. internal database vs. AWS Secrets Manager) or the actual location of the Secrets Store back-end (e.g. different AWS Secrets Manager account or region, GCP Secret Manager project or Azure Key Vault's vault).
Using the same location for both the primary and backup Secrets Store will not provide any additional benefits and may even result in unexpected behavior.
When a backup secrets store is in use, the ZenML Server will always attempt to read and write secret values from/to the primary Secrets Store first while ensuring to keep the backup Secrets Store in sync. If the primary Secrets Store is unreachable, if the secret values are not found there or any otherwise unexpected error occurs, the ZenML Server falls back to reading and writing from/to the backup Secrets Store. Only if the backup Secrets Store is also unavailable, the ZenML Server will return an error.
In addition to the hidden backup operations, users can also explicitly trigger a backup operation by using the zenml secret backup CLI command. This command will attempt to read all secrets from the primary Secrets Store and write them to the backup Secrets Store. Similarly, the zenml secret restore CLI command can be used to restore secrets from the backup Secrets Store to the primary Secrets Store. These CLI commands are useful for migrating secrets from one Secrets Store to another.
Secrets migration strategy
Sometimes you may need to change the external provider or location where secrets values are stored by the Secrets Store. The immediate implication of this is that the ZenML server will no longer be able to access existing secrets with the new configuration until they are also manually copied to the new location. Some examples of such changes include: | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/manage-the-deployed-services/secret-management | 373 |
ent using service connector 'aws-session-token'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
The 'aws-session-token' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.
# Verify that the local Docker client is now configured to access the remote Docker container registry
$ docker pull 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server
Using default tag: latest
latest: Pulling from zenml-server
e9995326b091: Pull complete
f3d7f077cdde: Pull complete
0db71afa16f3: Pull complete
6f0b5905c60c: Pull complete
9d2154d50fd1: Pull complete
d072bba1f611: Pull complete
20e776588361: Pull complete
3ce69736a885: Pull complete
c9c0554c8e6a: Pull complete
bacdcd847a66: Pull complete
482033770844: Pull complete
Digest: sha256:bf2cc3895e70dfa1ee1cd90bbfa599fa4cd8df837e27184bac1ce1cc239ecd3f
Status: Downloaded newer image for 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server:latest
715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server:latest
Discover available resources
One of the questions that you may have as a ZenML user looking to register and connect a Stack Component to an external resource is "what resources do I even have access to?". Sure, you can browse through all the registered Service Connectors and manually verify each one to find a particular resource that you are looking for, but this is counterproductive.
A better way is to ask ZenML directly questions such as:
what are the Kubernetes clusters that I can get access to through Service Connectors?
can I access this particular S3 bucket through one of the Service Connectors? Which one?
The zenml service-connector list-resources CLI command can be used exactly for this purpose. | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 487 |
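For example (a sketch mirroring the two questions above):
# What Kubernetes clusters can I get access to through Service Connectors?
zenml service-connector list-resources --resource-type kubernetes-cluster
# Can I access this particular S3 bucket through one of the Service Connectors? Which one?
zenml service-connector list-resources --resource-type s3-bucket --resource-id s3://zenfiles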
ons. Try it out at https://www.zenml.io/live-demo!
Automated Deployments: With ZenML, you no longer need to upload custom Docker images to the cloud whenever you want to deploy a new model to production. Simply define your ML workflow as a ZenML pipeline, let ZenML handle the containerization, and have your model automatically deployed to a highly scalable Kubernetes deployment service like Seldon.
from zenml.integrations.seldon.steps import seldon_model_deployer_step
from my_organization.steps import data_loader_step, model_trainer_step
@pipeline
def my_pipeline():
data = data_loader_step()
model = model_trainer_step(data)
seldon_model_deployer_step(model)
π Learn More
Ready to manage your ML lifecycles end-to-end with ZenML? Here is a collection of pages you can take a look at next:
Get started with ZenML and learn how to build your first pipeline and stack.
Discover advanced ZenML features like config management and containerization.
Explore ZenML through practical use-case examples.
same credentials across multiple stack components.
If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a GCP Service Connector that can be used to access more than one GCS bucket or even more than one type of GCP resource:
zenml service-connector register --type gcp -i
A non-interactive CLI example that leverages the Google Cloud CLI configuration on your local machine to auto-configure a GCP Service Connector targeting a single GCS bucket is:
zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcs-bucket --resource-id <GCS_BUCKET_NAME> --auto-configure
Example Command Output
$ zenml service-connector register gcs-zenml-bucket-sl --type gcp --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl --auto-configure
β Έ Registering service connector 'gcs-zenml-bucket-sl'...
Successfully registered service connector `gcs-zenml-bucket-sl` with access to the following resources:
βββββββββββββββββ―βββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌβββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://zenml-bucket-sl β
βββββββββββββββββ·βββββββββββββββββββββββ
Note: Please remember to grant the entity associated with your GCP credentials permissions to read and write to your GCS bucket as well as to list accessible GCS buckets. For a full list of permissions required to use a GCP Service Connector to access one or more GCS buckets, please refer to the GCP Service Connector GCS bucket resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The GCP Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/gcp | 448 |
he context of which the profile is uploaded, e.g.:
from zenml.integrations.whylogs.steps import get_whylogs_profiler_step
train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")
test_data_profiler = get_whylogs_profiler_step(dataset_id="model-3")
The step can then be inserted into your pipeline where it can take in a pandas.DataFrame dataset, e.g.:
from zenml import pipeline
@pipeline
def data_profiling_pipeline():
data, _ = data_loader()
train, test = data_splitter(data)
train_data_profiler(train)
test_data_profiler(test)
data_profiling_pipeline()
As can be seen from the step definition, the step takes in a dataset and returns a whylogs DatasetProfileView object:
@step
def whylogs_profiler_step(
dataset: pd.DataFrame,
dataset_timestamp: Optional[datetime.datetime] = None,
) -> DatasetProfileView:
...
You should consult the official whylogs documentation for more information on what you can do with the collected profiles.
You can view the complete list of configuration parameters in the SDK docs.
The whylogs Data Validator
The whylogs Data Validator implements the same interface as all other Data Validators, so using this method keeps you compatible with the overall Data Validator abstraction, which guarantees an easier migration in case you decide to switch to another Data Validator.
All you have to do is call the whylogs Data Validator methods when you need to interact with whylogs to generate data profiles. You may optionally enable whylabs logging to automatically upload the returned whylogs profile to WhyLabs, e.g.:
import pandas as pd
from whylogs.core import DatasetProfileView
from zenml.integrations.whylogs.data_validators.whylogs_data_validator import (
    WhylogsDataValidator,
)
from zenml.integrations.whylogs.flavors.whylogs_data_validator_flavor import (
    WhylogsDataValidatorSettings,
)
from zenml import step

whylogs_settings = WhylogsDataValidatorSettings(
    enable_whylabs=True, dataset_id="<WHYLABS_DATASET_ID>"
)

@step(
    settings={
same credentials across multiple stack components.
If you don't already have an AWS Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure an AWS Service Connector that can be used to access more than one S3 bucket or even more than one type of AWS resource:
zenml service-connector register --type aws -i
A non-interactive CLI example that leverages the AWS CLI configuration on your local machine to auto-configure an AWS Service Connector targeting a single S3 bucket is:
zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type s3-bucket --resource-id <S3_BUCKET_NAME> --auto-configure
Example Command Output
$ zenml service-connector register s3-zenfiles --type aws --resource-type s3-bucket --resource-id s3://zenfiles --auto-configure
β Έ Registering service connector 's3-zenfiles'...
Successfully registered service connector `s3-zenfiles` with access to the following resources:
βββββββββββββββββ―βββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββΌβββββββββββββββββ¨
β π¦ s3-bucket β s3://zenfiles β
βββββββββββββββββ·βββββββββββββββββ
Note: Please remember to grant the entity associated with your AWS credentials permissions to read and write to your S3 bucket as well as to list accessible S3 buckets. For a full list of permissions required to use an AWS Service Connector to access one or more S3 buckets, please refer to the AWS Service Connector S3 bucket resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The AWS Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more AWS Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the S3 bucket you want to use for your S3 Artifact Store by running e.g.: | stack-components | https://docs.zenml.io/stack-components/artifact-stores/s3 | 464 |
aset_id="<WHYLABS_DATASET_ID>"
)

@step(
    settings={"data_validator.whylogs": whylogs_settings}
)
def data_profiler(
dataset: pd.DataFrame,
) -> DatasetProfileView:
"""Custom data profiler step with whylogs
Args:
dataset: a Pandas DataFrame
Returns:
Whylogs profile generated for the data
"""
# validation pre-processing (e.g. dataset preparation) can take place here
data_validator = WhylogsDataValidator.get_active_data_validator()
profile = data_validator.data_profiling(
    dataset,
)
# optionally upload the profile to WhyLabs, if WhyLabs credentials are configured
data_validator.upload_profile_view(profile)
# validation post-processing (e.g. interpret results, take actions) can happen here
return profile
Have a look at the complete list of methods and parameters available in the WhylogsDataValidator API in the SDK docs.
Call whylogs directly
You can use the whylogs library directly in your custom pipeline steps, and only leverage ZenML's capability of serializing, versioning and storing the DatasetProfileView objects in its Artifact Store. You may optionally enable whylabs logging to automatically upload the returned whylogs profile to WhyLabs, e.g.:
import pandas as pd
from whylogs.core import DatasetProfileView
import whylogs as why
from zenml import step
from zenml.integrations.whylogs.flavors.whylogs_data_validator_flavor import (
    WhylogsDataValidatorSettings,
)

whylogs_settings = WhylogsDataValidatorSettings(
    enable_whylabs=True, dataset_id="<WHYLABS_DATASET_ID>"
)

@step(
    settings={
        "data_validator.whylogs": whylogs_settings
    }
)
def data_profiler(
dataset: pd.DataFrame,
) -> DatasetProfileView:
"""Custom data profiler step with whylogs
Args:
dataset: a Pandas DataFrame
Returns:
Whylogs Profile generated for the dataset
"""
# validation pre-processing (e.g. dataset preparation) can take place here
results = why.log(dataset)
profile = results.profile()
# validation post-processing (e.g. interpret results, take actions) can happen here
return profile.view() | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/whylogs | 438 |
python run.py
Read more in the production guide.
Cleanup
Make sure you no longer need the resources before deleting them. The instructions and commands that follow are DESTRUCTIVE.
Delete any AWS resources you no longer use to avoid additional charges. You'll want to do the following:
# delete the S3 bucket
aws s3 rm s3://your-bucket-name --recursive
aws s3api delete-bucket --bucket your-bucket-name
# delete the SageMaker domain
aws sagemaker delete-domain --domain-id <DOMAIN_ID>
# delete the ECR repository
aws ecr delete-repository --repository-name zenml-repository --force
# detach policies from the IAM role
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
# delete the IAM role
aws iam delete-role --role-name zenml-role
Make sure to run these commands in the same AWS region where you created the resources.
By running these cleanup commands, you will delete the S3 bucket, SageMaker domain, ECR repository, and IAM role, along with their associated policies. This will help you avoid any unnecessary charges for resources you no longer need.
Remember to be cautious when deleting resources and ensure that you no longer require them before running the deletion commands.
Conclusion
In this guide, we walked through the process of setting up an AWS stack with ZenML to run your machine learning pipelines in a scalable and production-ready environment. The key steps included:
Setting up credentials and the local environment by creating an IAM role with the necessary permissions.
Creating a ZenML service connector to authenticate with AWS services using the IAM role.
Configuring stack components, including an S3 artifact store, a SageMaker Pipelines orchestrator, and an ECR container registry. | how-to | https://docs.zenml.io/how-to/popular-integrations/aws-guide | 441 |
nity and ask!
Running a pipeline on a cloud stack
Now that we have our orchestrator and container registry registered, we can register a new stack, just like we did in the previous chapter:
zenml stack register minimal_cloud_stack -o skypilot_orchestrator -a cloud_artifact_store -c cloud_container_registry
Now, using the code from the previous chapter, we can run a training pipeline. First, set the minimal cloud stack active:
zenml stack set minimal_cloud_stack
and then, run the training pipeline:
python run.py --training-pipeline
You will notice this time your pipeline behaves differently. After it has built the Docker image with all your code, it will push that image, and run a VM on the cloud. Here is where your pipeline will execute, and the logs will be streamed back to you. So with a few commands, we were able to ship our entire code to the cloud!
Curious to see what other stacks you can create? The Component Guide has an exhaustive list of various artifact stores, container registries, and orchestrators that are integrated with ZenML. Try playing around with more stack components to see how easy it is to switch between MLOps stacks with ZenML.
An end-to-end project
Put your new knowledge in action with an end-to-end project
That was awesome! We learned so many advanced MLOps production concepts:
The value of deploying ZenML
Abstracting infrastructure configuration into stacks
Connecting remote storage
Orchestrating on the cloud
Configuring the pipeline to scale compute
Connecting a git repository
We will now combine all of these concepts into an end-to-end MLOps project powered by ZenML.
Get started
Start with a fresh virtual environment with no dependencies. Then let's install our dependencies:
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
We will then use ZenML templates to help us get the code we need for the project:
mkdir zenml_batch_e2e
cd zenml_batch_e2e
zenml init --template e2e_batch --template-with-defaults
# Just in case, we install the requirements again
pip install -r requirements.txt
The e2e template is also available as a ZenML example. You can clone it:
git clone --depth 1 [email protected]:zenml-io/zenml.git
cd zenml/examples/e2e
pip install -r requirements.txt
zenml init
What you'll learn
The e2e project is a comprehensive project template to cover major use cases of ZenML: a collection of steps and pipelines and, to top it all off, a simple but useful CLI. It showcases the core ZenML concepts for supervised ML with batch predictions. It builds on top of the starter project with more advanced concepts.
As you progress through the e2e batch template, try running the pipelines on a remote cloud stack on a tracked git repository to practice some of the concepts we have learned in this guide.
At the end, don't forget to share the ZenML e2e template with your colleagues and see how they react!
Conclusion and next steps | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/end-to-end | 398 |
fer. Run the following command to create the role:
aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json
Be sure to take note of the information that is output to the terminal, as you will need it in the next steps, especially the Role ARN.
Attach policies to the role
Attach the following policies to the role to grant access to the necessary AWS services:
AmazonS3FullAccess
AmazonEC2ContainerRegistryFullAccess
AmazonSageMakerFullAccess
aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
If you have not already, install the AWS and S3 ZenML integrations:
zenml integration install aws s3 -y
2) Create a Service Connector within ZenML
Create an AWS Service Connector within ZenML. The service connector will allow ZenML and other ZenML components to authenticate themselves with AWS using the IAM role.
zenml service-connector register aws_connector \
--type aws \
--auth-method iam-role \
--role_arn=<ROLE_ARN> \
--region=<YOUR_REGION> \
--aws_access_key_id=<YOUR_ACCESS_KEY_ID> \
--aws_secret_access_key=<YOUR_SECRET_ACCESS_KEY>
Replace <ROLE_ARN> with the ARN of the IAM role you created in the previous step, <YOUR_REGION> with the respective value and use your AWS access key ID and secret access key that we noted down earlier.
3) Create Stack Components
Artifact Store (S3)
An artifact store is used for storing and versioning data flowing through your pipelines.
Before you run anything within the ZenML CLI, create an AWS S3 bucket. If you already have one, you can skip this step. (Note: the bucket name should be unique, so you might need to try a few times to find a unique name.)
aws s3api create-bucket --bucket your-bucket-name | how-to | https://docs.zenml.io/how-to/popular-integrations/aws-guide | 474 |
principal, please consult the Azure documentation.
This method uses the implicit Azure authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure an Azure Artifact Store. You don't need to supply credentials explicitly when you register the Azure Artifact Store; instead, you have to set one of the following sets of environment variables:
to use an Azure storage account key , set AZURE_STORAGE_ACCOUNT_NAME to your account name and one of AZURE_STORAGE_ACCOUNT_KEY or AZURE_STORAGE_SAS_TOKEN to the Azure key value.
to use an Azure storage account key connection string , set AZURE_STORAGE_CONNECTION_STRING to your Azure Storage Key connection string
to use Azure Service Principal credentials , create an Azure Service Principal and then set AZURE_STORAGE_ACCOUNT_NAME to your account name and AZURE_STORAGE_CLIENT_ID , AZURE_STORAGE_CLIENT_SECRET and AZURE_STORAGE_TENANT_ID to the client ID, secret and tenant ID of your service principal
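For example, for the service principal option on a local machine (values are placeholders):
export AZURE_STORAGE_ACCOUNT_NAME=<your-account-name>
export AZURE_STORAGE_CLIENT_ID=<service-principal-client-id>
export AZURE_STORAGE_CLIENT_SECRET=<service-principal-client-secret>
export AZURE_STORAGE_TENANT_ID=<service-principal-tenant-id>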
Certain dashboard functionality, such as visualizing or deleting artifacts, is not available when using an implicitly authenticated artifact store together with a deployed ZenML server because the ZenML server will not have permission to access the filesystem.
The implicit authentication method also needs to be coordinated with other stack components that are highly dependent on the Artifact Store and need to interact with it directly to the function. If these components are not running on your machine, they do not have access to the local environment variables and will encounter authentication failures while trying to access the Azure Artifact Store:
Orchestrators need to access the Artifact Store to manage pipeline artifacts
Step Operators need to access the Artifact Store to manage step-level artifacts
Model Deployers need to access the Artifact Store to load served models | stack-components | https://docs.zenml.io/stack-components/artifact-stores/azure | 346 |
Google Cloud VertexAI Orchestrator
Orchestrating your pipelines to run on Vertex AI.
Vertex AI Pipelines is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Vertex orchestrator if:
you're already using GCP.
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're looking for a managed solution for running your pipelines.
you're looking for a serverless solution for running your pipelines.
How to deploy it
In order to use a Vertex AI orchestrator, you need to first deploy ZenML to the cloud. It is recommended to deploy ZenML in the same Google Cloud project where the Vertex infrastructure is deployed, although that is not required. You must ensure that you are connected to the remote ZenML server before using this stack component.
The only other thing necessary to use the ZenML Vertex orchestrator is enabling Vertex-relevant APIs on the Google Cloud project.
In order to quickly enable APIs, and create other resources necessary for using this integration, you can also consider using mlstacks, which helps you set up the infrastructure with one click.
How to use it
The Vertex Orchestrator (and GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration.
To use the Vertex orchestrator, we need:
The ZenML gcp integration installed. If you haven't done so, run:

zenml integration install gcp
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π¦ gcs-bucket β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β <multiple> β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β 0d0a42bb-40a4-4f43-af9e-6342eeca3f28 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-05-19 08:15:48.056937 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-05-19 08:15:48.056940 β
ββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββ
Configuration
ββββββββββββββββββββββββ―βββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββββββΌβββββββββββββ¨
β project_id β zenml-core β
β βββββββββββββββββββββββΌβββββββββββββ¨
β service_account_json β [HIDDEN] β
ββββββββββββββββββββββββ·βββββββββββββ
GCP Service Account impersonation
Generates temporary STS credentials by impersonating another GCP service account. | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 502 |
em as type SecretField in the configuration class.
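As a reference point, a minimal configuration class along these lines might look as follows. This is a sketch: the field names are assumptions chosen to mirror the s3fs arguments used in the implementation below, and SUPPORTED_SCHEMES declares which URI schemes the store handles:

from typing import Any, ClassVar, Dict, Optional, Set

from zenml.artifact_stores import BaseArtifactStoreConfig
from zenml.utils.secret_utils import SecretField

class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig):
    """Custom artifact store configuration."""

    SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"}

    # Sensitive values are marked as SecretField so they can be stored securely
    key: Optional[str] = SecretField(default=None)
    secret: Optional[str] = SecretField(default=None)
    token: Optional[str] = SecretField(default=None)
    client_kwargs: Optional[Dict[str, Any]] = None
    config_kwargs: Optional[Dict[str, Any]] = None
    s3_additional_kwargs: Optional[Dict[str, Any]] = None

With the configuration defined, we can move on to the implementation class, which will use the S3 file system to implement the abstract methods of the BaseArtifactStore: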
from typing import Optional

import s3fs

from zenml.artifact_stores import BaseArtifactStore

class MyS3ArtifactStore(BaseArtifactStore):
    """Custom artifact store implementation."""

    _filesystem: Optional[s3fs.S3FileSystem] = None

    @property
    def filesystem(self) -> s3fs.S3FileSystem:
        """Get the underlying S3 file system."""
        if self._filesystem:
            return self._filesystem
        self._filesystem = s3fs.S3FileSystem(
            key=self.config.key,
            secret=self.config.secret,
            token=self.config.token,
            client_kwargs=self.config.client_kwargs,
            config_kwargs=self.config.config_kwargs,
            s3_additional_kwargs=self.config.s3_additional_kwargs,
        )
        return self._filesystem

    def open(self, path, mode: str = "r"):
        """Custom logic goes here."""
        return self.filesystem.open(path=path, mode=mode)

    def exists(self, path):
        """Custom logic goes here."""
        return self.filesystem.exists(path=path)
The configuration values defined in the corresponding configuration class are always available in the implementation class under self.config.
Finally, let's define a custom flavor that brings these two classes together. Make sure that you give your flavor a globally unique name here.
from zenml.artifact_stores import BaseArtifactStoreFlavor

class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor):
    """Custom artifact store flavor."""

    @property
    def name(self):
        """The name of the flavor."""
        return 'my_s3_artifact_store'

    @property
    def implementation_class(self):
        """Implementation class for this flavor."""
        from ... import MyS3ArtifactStore

        return MyS3ArtifactStore

    @property
    def config_class(self):
        """Configuration class for this flavor."""
        from ... import MyS3ArtifactStoreConfig

        return MyS3ArtifactStoreConfig
r_api: true
The project_id is required to be set. The database_username and database_password from the general config are used to set those variables for the CloudSQL instance.
SSL is disabled by default on the database and the option to enable it is coming soon!
# The Azure resource_group to deploy to.
resource_group: zenml
# The name of the Flexible MySQL instance to create.
db_instance_name: zenmlserver
# Name of the MySQL database to create.
db_name: zenmlserver
# Version of MySQL database to create.
db_version: 5.7
# The sku_name for the database resource.
db_sku_name: B_Standard_B1s
# Allocated storage of MySQL database to create.
db_disk_size: 20
The database_username and database_password from the general config are used to set those variables for the Azure Flexible MySQL server.
Connecting to deployed ZenML
Immediately after deployment, the ZenML server needs to be activated before it can be used. The activation process includes creating an initial admin user account and configuring some server settings. You can do this only by visiting the ZenML server URL in your browser and following the on-screen instructions. Connecting your local ZenML client to the server is not possible until the server is properly initialized.
Once ZenML is deployed, one or multiple users can connect to it with the zenml connect command.
zenml connect
If no arguments are supplied, ZenML will attempt to connect to the last ZenML server deployed from the local host using the zenml deploy command.
In order to connect to a specific ZenML server, you can either pass the configuration as command line arguments or as a YAML file:
zenml connect --url=https://zenml.example.com:8080 --no-verify-ssl
or
zenml connect --config=/path/to/zenml_server_config.yaml
The YAML file should have the following structure when connecting to a ZenML server:
# The URL of the ZenML server
url:
# Either a boolean, in which case it controls whether the server's TLS
# certificate is verified, or a string, in which case it must be a path | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-zenml-cli | 440 |
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-05-19 08:04:51.037955 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-05-19 08:04:51.037958 β
ββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Configuration
ββββββββββββββ―βββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββΌβββββββββββββ¨
β project_id β zenml-core β
ββββββββββββββ·βββββββββββββ
GCP User Account
Long-lived GCP credentials consist of a GCP user account and its credentials.
This method requires GCP user account credentials like those generated by the gcloud auth application-default login command. | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 421 |
Basic RAG inference pipeline
Use your RAG components to generate responses to prompts.
Now that we have our index store, we can use it to make queries based on the documents in the index store. We use some utility functions to make this happen but no external libraries are needed beyond an interface to the index store as well as the LLM itself.
If you've been following along with the guide, you should have some documents ingested already and you can pass a query in as a flag to the Python command used to run the pipeline:
python run.py --rag-query "how do I use a custom materializer inside my own zenml
steps? i.e. how do I set it? inside the @step decorator?" --model=gpt4
This inference query itself is not a ZenML pipeline, but rather a function call which uses the outputs and components of our pipeline to generate the response. For a more complex inference setup, there might be even more going on here, but for the purposes of this initial guide we will keep it simple.
Bringing everything together, the code for the inference pipeline is as follows:
def process_input_with_retrieval(
    input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5
) -> str:
    delimiter = "```"

    # Step 1: Get documents related to the user input from database
    related_docs = get_topn_similar_docs(
        get_embeddings(input), get_db_conn(), n=n_items_retrieved
    )

    # Step 2: Get completion from OpenAI API
    # Set system message to help set appropriate tone and context for model
    system_message = f"""
    You are a friendly chatbot. \
    You can answer questions about ZenML, its features and its use cases. \
    You respond in a concise, technically credible tone. \
    You ONLY use the context from the ZenML documentation to provide relevant answers. \
    You do not make up answers or provide opinions that you don't have information to support. \
    If you are unsure or don't know, just say so. \
    """

    # Prepare messages to pass to model
Docker Service Connector
Configuring Docker Service Connectors to connect ZenML to Docker container registries.
The ZenML Docker Service Connector allows authenticating with a Docker or OCI container registry and managing Docker clients for the registry. This connector provides pre-authenticated python-docker Python clients to Stack Components that are linked to it.
$ zenml service-connector list-types --type docker
ββββββββββββββββββββββββββββ―ββββββββββββ―βββββββββββββββββββββ―βββββββββββββββ―ββββββββ―βββββββββ
β NAME β TYPE β RESOURCE TYPES β AUTH METHODS β LOCAL β REMOTE β
β βββββββββββββββββββββββββββΌββββββββββββΌβββββββββββββββββββββΌβββββββββββββββΌββββββββΌβββββββββ¨
β Docker Service Connector β π³ docker β π³ docker-registry β password β β
 β β
 β
ββββββββββββββββββββββββββββ·ββββββββββββ·βββββββββββββββββββββ·βββββββββββββββ·ββββββββ·βββββββββ
Prerequisites
No Python packages are required for this Service Connector. All prerequisites are included in the base ZenML Python package. Docker needs to be installed on environments where container images are built and pushed to the target container registry.
Resource Types
The Docker Service Connector only supports authenticating to and granting access to a Docker/OCI container registry. This type of resource is identified by the docker-registry Resource Type.
The resource name identifies a Docker/OCI registry using one of the following formats (the repository name is optional and ignored).
DockerHub: docker.io or [https://]index.docker.io/v1/[/<repository-name>]
generic OCI registry URI: http[s]://host[:port][/<repository-name>]
Authentication Methods
Authenticating to Docker/OCI container registries is done with a username and password or access token. It is recommended to use API tokens instead of passwords, wherever this is available, for example in the case of DockerHub:
zenml service-connector register dockerhub --type docker -in
Example Command Output | how-to | https://docs.zenml.io/how-to/auth-management/docker-service-connector | 493 |
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-05-19 08:09:44.102936 β
ββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Configuration
βββββββββββββββββββββ―βββββββββββββ
β PROPERTY β VALUE β
β ββββββββββββββββββββΌβββββββββββββ¨
β project_id β zenml-core β
β ββββββββββββββββββββΌβββββββββββββ¨
β user_account_json β [HIDDEN] β
βββββββββββββββββββββ·βββββββββββββ
GCP Service Account
Long-lived GCP credentials consisting of a GCP service account and its credentials.
This method requires a GCP service account and a service account key JSON created for it.
By default, the GCP connector generates temporary OAuth 2.0 tokens from the service account credentials and distributes them to clients. The tokens have a limited lifetime of 1 hour. This behavior can be disabled by setting the generate_temporary_tokens configuration option to False, in which case, the connector will distribute the service account credentials JSON to clients instead (not recommended).
A GCP project is required and the connector may only be used to access GCP resources in the specified project.
If you already have the GOOGLE_APPLICATION_CREDENTIALS environment variable configured to point to a service account key JSON file, it will be automatically picked up when auto-configuration is used.
The following assumes a GCP service account was created, granted permissions to access GCS buckets in the target project and a service account key JSON was generated and saved locally in the [email protected] file:
zenml service-connector register gcp-service-account --type gcp --auth-method service-account --resource-type gcs-bucket --project_id=zenml-core --service_account_json=@[email protected]
Example Command Output
Expanding argument value service_account_json to contents of file [email protected]. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 484 |
er": docker_settings})
def my_pipeline(...):
    ...

Specify a list of apt packages in code:

docker_settings = DockerSettings(apt_packages=["git"])

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...

Prevent ZenML from automatically installing the requirements of your stack:

docker_settings = DockerSettings(install_stack_requirements=False)

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
In some cases the steps of your pipeline will have conflicting requirements or some steps of your pipeline will require large dependencies that don't need to be installed to run the remaining steps of your pipeline. For this case, ZenML allows you to specify custom Docker settings for steps in your pipeline.
docker_settings = DockerSettings(requirements=["tensorflow"])
@step(settings={"docker": docker_settings})
def my_training_step(...):
...
You can combine these methods but do make sure that your list of requirements does not overlap with the ones specified explicitly in the Docker settings.
Depending on the options specified in your Docker settings, ZenML installs the requirements in the following order (each step optional):
The packages installed in your local Python environment
The packages specified via the requirements attribute (step level overwrites pipeline level)
The packages specified via the required_integrations and potentially stack requirements
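For instance, a configuration that combines a ZenML integration with additional packages might look like the sketch below; SKLEARN is one of the integration constants shipped with ZenML, and the extra requirement is only an example:

from zenml.config import DockerSettings
from zenml.integrations.constants import SKLEARN

docker_settings = DockerSettings(
    # Installs the requirements of the sklearn integration ...
    required_integrations=[SKLEARN],
    # ... plus extra packages that don't overlap with them
    requirements=["pandas"],
)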
You can specify additional arguments for the installer used to install your Python packages as follows:
# This will result in a `pip install --timeout=1000 ...` call when installing packages in the
# Docker image
docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000})
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
Experimental: If you want to use uv for faster resolving and installation of your Python packages, you can use it as follows:
docker_settings = DockerSettings(python_package_installer="uv") | how-to | https://docs.zenml.io/v/docs/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages | 378 |
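As with the other Docker settings shown above, this would then be attached to your pipeline following the same pattern:

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...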
e helm chart
cd src/zenml/zen_server/deploy/helm/

Simply reuse the custom-values.yaml file that you used during the previous installation or upgrade. If you don't have it handy, you can extract the values from the ZenML Helm deployment using the following command:

helm -n <namespace> get values zenml-server > custom-values.yaml

Upgrade the release using your modified values file. Make sure you are in the directory that hosts the helm chart:

helm -n <namespace> upgrade zenml-server . -f custom-values.yaml
It is not recommended to change the container image tag in the Helm chart to custom values, since every Helm chart version is tested to work only with the default image tag. However, if you know what you're doing you can change the zenml.image.tag value in your custom-values.yaml file to the desired ZenML version (e.g. 0.32.0).
Downgrading the server to an older version is not supported and can lead to unexpected behavior.
The version of the Python client that connects to the server should be kept at the same version as the server.
Adding it to a stack is as simple as running e.g.:

# Register the whylogs data validator
zenml data-validator register whylogs_data_validator --flavor=whylogs
# Register and set a stack with the new data validator
zenml stack register custom_stack -dv whylogs_data_validator ... --set
Adding WhyLabs logging capabilities to your whylogs Data Validator is just slightly more complicated, as you also need to create a ZenML Secret to store the sensitive WhyLabs authentication information in a secure location and then reference the secret in the Data Validator configuration. To generate a WhyLabs access token, you can follow the official WhyLabs instructions documented here.
Then, you can register the whylogs Data Validator with WhyLabs logging capabilities as follows:
# Create the secret referenced in the data validator
zenml secret create whylabs_secret \
--whylabs_default_org_id=<YOUR-WHYLOGS-ORGANIZATION-ID> \
--whylabs_api_key=<YOUR-WHYLOGS-API-KEY>
# Register the whylogs data validator
zenml data-validator register whylogs_data_validator --flavor=whylogs \
--authentication_secret=whylabs_secret
You'll also need to enable whylabs logging for your custom pipeline steps if you want to upload the whylogs data profiles that they return as artifacts to the WhyLabs platform. This is enabled by default for the standard whylogs step. For custom steps, you can enable WhyLabs logging by setting enable_whylabs=True in the step's whylogs settings, e.g.:
from typing_extensions import Annotated  # or `from typing import Annotated` on Python 3.9+
from typing import Tuple
import pandas as pd
import whylogs as why
from sklearn import datasets
from whylogs.core import DatasetProfileView
from zenml.integrations.whylogs.flavors.whylogs_data_validator_flavor import (
    WhylogsDataValidatorSettings,
)
from zenml import step
@step(
    settings={
        "data_validator.whylogs": WhylogsDataValidatorSettings(
            enable_whylabs=True, dataset_id="model-1"
        )
    }
)
def data_loader() -> Tuple[
    Annotated[pd.DataFrame, "data"],
when the server is first deployed. Defaults to 0.

ZENML_DEFAULT_USER_NAME: The name of the initial admin user account created by the server on the first deployment, during database initialization. Defaults to default.
ZENML_DEFAULT_USER_PASSWORD: The password to use for the initial admin user account. Defaults to an empty password value, if not set.
Run the ZenML server with Docker
As previously mentioned, the ZenML server container image uses sensible defaults for most configuration options. This means that you can simply run the container with Docker without any additional configuration and it will work out of the box for most use cases:
docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server
Note: It is recommended to use a ZenML container image version that matches the version of your client, to avoid any potential API incompatibilities (e.g. zenmldocker/zenml-server:0.21.1 instead of zenmldocker/zenml-server).
The above command will start a containerized ZenML server running on your machine that uses a temporary SQLite database file stored in the container. Temporary means that the database and all its contents (stacks, pipelines, pipeline runs, etc.) will be lost when the container is removed with docker rm.
You need to visit the ZenML dashboard at http://localhost:8080 and activate the server by creating an initial admin user account. You can then connect your client to the server with the web login flow:
$ zenml connect --url http://localhost:8080
Connecting to: 'http://localhost:8080'...
If your browser did not open automatically, please open the following URL into your browser to proceed with the authentication:
http://localhost:8080/devices/verify?device_id=f7a7333a-3ef0-4f39-85a9-f190279456d3&user_code=9375f5cdfdaf36772ce981fe3ee6172c
Successfully logged in.
Creating default stack for user 'default' in workspace default...
Updated the global store configuration. | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker | 438 |
thon script can be found here.
Export: job -> S3

Data from within the job (e.g. produced by the training process, or when preprocessing large data) can be exported as well. The structure is highly similar to that of importing data. Copying data to S3 can be configured with output_data_s3_mode, which supports EndOfJob (default) and Continuous.
In the simple case, data in /opt/ml/processing/output/data will be copied to S3 at the end of a job:
sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    output_data_s3_mode="EndOfJob",
    output_data_s3_uri="s3://some-results-bucket-name/results",
)
In a more complex case, data in /opt/ml/processing/output/data/metadata and /opt/ml/processing/output/data/checkpoints will be written away continuously:
sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    output_data_s3_mode="Continuous",
    output_data_s3_uri={
        "metadata": "s3://some-results-bucket-name/metadata",
        "checkpoints": "s3://some-results-bucket-name/checkpoints",
    },
)
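As a sketch of how these settings are attached, the same settings-dictionary pattern used elsewhere in ZenML applies here, with the key following the <component-type>.<flavor> convention:

from zenml import pipeline

@pipeline(settings={"orchestrator.sagemaker": sagemaker_orchestrator_settings})
def my_pipeline(...):
    ...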
Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
access temporarily with someone else in your team.

Using other authentication methods like IAM role, Session Token, or Federation Token will automatically generate and refresh STS tokens for clients upon request.
An AWS region is required and the connector may only be used to access AWS resources in the specified region.
Fetching STS tokens from the local AWS CLI is possible if the AWS CLI is already configured with valid credentials. In our example, the connectors AWS CLI profile is configured with an IAM user Secret Key. We need to force the ZenML CLI to use the STS token authentication by passing the --auth-method sts-token option, otherwise it would automatically use the session token authentication method:
AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token
Example Command Output
β Έ Registering service connector 'aws-sts-token'...
Successfully registered service connector `aws-sts-token` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 482 |
OUP> --project=<PROJECT> \
--token=<GITLAB_TOKEN>

where <NAME> is the name of the code repository you are registering, <GROUP> is the group of the project, <PROJECT> is the name of the project, <GITLAB_TOKEN> is your GitLab Personal Access Token, and <GITLAB_URL> is the URL of the GitLab instance which defaults to https://gitlab.com. You will need to set a URL if you have a self-hosted GitLab instance.
After registering the GitLab code repository, ZenML will automatically detect if your source files are being tracked by GitLab and store the commit hash for each pipeline run.
Go to your GitLab account settings and click on Access Tokens.
Name the token and select the scopes that you need (e.g. read_repository, read_user, read_api)
Click on "Create personal access token" and copy the token to a safe place.
Developing a custom code repository
If you're using some other platform to store your code, and you still want to use a code repository in ZenML, you can implement and register a custom code repository.
First, you'll need to subclass and implement the abstract methods of the zenml.code_repositories.BaseCodeRepository class:
from abc import ABC, abstractmethod
from typing import Optional

class BaseCodeRepository(ABC):
    """Base class for code repositories."""

    @abstractmethod
    def login(self) -> None:
        """Logs into the code repository."""

    @abstractmethod
    def download_files(
        self, commit: str, directory: str, repo_sub_directory: Optional[str]
    ) -> None:
        """Downloads files from the code repository to a local directory.

        Args:
            commit: The commit hash to download files from.
            directory: The directory to download files to.
            repo_sub_directory: The subdirectory in the repository to
                download files from.
        """

    @abstractmethod
    def get_local_context(
        self, path: str
    ) -> Optional["LocalRepositoryContext"]:
        """Gets a local repository context from a path.

        Args:
            path: The path to the local repository.

        Returns:
            The local repository context object.
        """
After you're finished implementing this, you can register it as follows: | how-to | https://docs.zenml.io/how-to/setting-up-a-project-repository/connect-your-git-repository | 433 |
ace. Try it out at https://www.zenml.io/live-demo!

No Vendor Lock-In: Since infrastructure is decoupled from code, ZenML gives you the freedom to switch to a different tooling stack whenever it suits you. By avoiding vendor lock-in, you have the flexibility to transition between cloud providers or services, ensuring that you receive the best performance and pricing available in the market at any time.

zenml stack set gcp
python run.py # Run your ML workflows in GCP
zenml stack set aws
python run.py # Now your ML workflow runs in AWS
π Learn More
Ready to deploy and manage your MLOps infrastructure with ZenML? Here is a collection of pages you can take a look at next:
Set up and manage production-ready infrastructure with ZenML.
Explore the existing infrastructure and tooling integrations of ZenML.
Find answers to the most frequently asked questions.
ZenML gives data scientists the freedom to fully focus on modeling and experimentation while writing code that is production-ready from the get-go.
Develop Locally: ZenML allows you to develop ML models in any environment using your favorite tools. This means you can start developing locally, and simply switch to a production environment once you are satisfied with your results.

python run.py # develop your code locally with all your favorite tools
zenml stack set production
python run.py # run on production infrastructure without any code changes
Pythonic SDK: ZenML is designed to be as unintrusive as possible. Adding a ZenML @step or @pipeline decorator to your Python functions is enough to turn your existing code into ZenML pipelines:

from zenml import pipeline, step

@step
def step_1() -> str:
    return "world"

@step
def step_2(input_one: str, input_two: str) -> None:
    combined_str = input_one + ' ' + input_two
    print(combined_str)

@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

my_pipeline()
Default visualizations
Types of visualizations in ZenML.
ZenML automatically saves visualizations of many common data types and allows you to view these visualizations in the ZenML dashboard.
Alternatively, any of these visualizations can also be displayed in Jupyter notebooks using the artifact.visualize() method:
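A minimal sketch of what this could look like is shown below; the pipeline and step names are placeholders for your own project:

from zenml.client import Client

# Fetch an output artifact of the most recent run of a pipeline
run = Client().get_pipeline("my_pipeline").last_run
artifact = run.steps["my_step"].output  # assumes a single-output step

# Render the artifact's visualizations inline in the notebook
artifact.visualize()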
Currently, the following visualization types are supported:
HTML: Embedded HTML visualizations such as data validation reports,
Image: Visualizations of image data such as Pillow images or certain numeric numpy arrays,
CSV: Tables, such as the pandas DataFrame .describe() output,
Markdown: Markdown strings or pages.
ββββββββββ·ββββββββββββββββββββββββββββββββββββββββ

Note: Please remember to grant the entity associated with your Azure credentials permissions to read and write to your ACR registry as well as to list accessible ACR registries. For a full list of permissions required to use an Azure Service Connector to access an ACR registry, please refer to the Azure Service Connector ACR registry resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The Azure Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more Azure Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the ACR registry you want to use for your Azure Container Registry by running e.g.:
zenml service-connector list-resources --connector-type azure --resource-type docker-registry
Example Command Output
The following 'docker-registry' resources can be accessed by 'azure' service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββ¨
β db5821d0-a658-4504-ae96-04c3302d8f85 β azure-demo β π¦ azure β π³ docker-registry β demozenmlcontainerregistry.azurecr.io β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββ
After having set up or decided on an Azure Service Connector to use to connect to the target ACR registry, you can register the Azure Container Registry as follows: | stack-components | https://docs.zenml.io/stack-components/container-registries/azure | 530 |
un.py
Read more in the production guide.
Cleanup

Make sure you no longer need the resources before deleting them. The instructions and commands that follow are DESTRUCTIVE.
Delete any AWS resources you no longer use to avoid additional charges. You'll want to do the following:
# delete the S3 bucket
aws s3 rm s3://your-bucket-name --recursive
aws s3api delete-bucket --bucket your-bucket-name
# delete the SageMaker domain
aws sagemaker delete-domain --domain-id <DOMAIN_ID>
# delete the ECR repository
aws ecr delete-repository --repository-name zenml-repository --force
# detach policies from the IAM role
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
# delete the IAM role
aws iam delete-role --role-name zenml-role
Make sure to run these commands in the same AWS region where you created the resources.
By running these cleanup commands, you will delete the S3 bucket, SageMaker domain, ECR repository, and IAM role, along with their associated policies. This will help you avoid any unnecessary charges for resources you no longer need.
Remember to be cautious when deleting resources and ensure that you no longer require them before running the deletion commands.
Conclusion
In this guide, we walked through the process of setting up an AWS stack with ZenML to run your machine learning pipelines in a scalable and production-ready environment. The key steps included:
Setting up credentials and the local environment by creating an IAM role with the necessary permissions.
Creating a ZenML service connector to authenticate with AWS services using the IAM role.
Configuring stack components, including an S3 artifact store, a SageMaker Pipelines orchestrator, and an ECR container registry. | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations/aws-guide | 441 |
Google Cloud Storage (GCS)
Storing artifacts using GCP Cloud Storage.
The GCS Artifact Store is an Artifact Store flavor provided with the GCP ZenML integration that uses the Google Cloud Storage managed object storage service to store ZenML artifacts in a GCP Cloud Storage bucket.
When would you want to use it?
Running ZenML pipelines with the local Artifact Store is usually sufficient if you just want to evaluate ZenML or get started quickly without incurring the trouble and the cost of employing cloud storage services in your stack. However, the local Artifact Store becomes insufficient or unsuitable if you have more elaborate needs for your project:
if you want to share your pipeline run results with other team members or stakeholders inside or outside your organization
if you have other components in your stack that are running remotely (e.g. a Kubeflow or Kubernetes Orchestrator running in a public cloud).
if you outgrow what your local machine can offer in terms of storage space and need to use some form of private or public storage service that is shared with others
if you are running pipelines at scale and need an Artifact Store that can handle the demands of production-grade MLOps
In all these cases, you need an Artifact Store that is backed by a form of public cloud or self-hosted shared object storage service.
You should use the GCS Artifact Store when you decide to keep your ZenML artifacts in a shared object storage and if you have access to the Google Cloud Storage managed service. You should consider one of the other Artifact Store flavors if you don't have access to the GCP Cloud Storage service.
How do you deploy it?
The GCP artifact store (and GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration. | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/gcp | 389 |
hestrator_url"].value
Run pipelines on a schedule

The Vertex Pipelines orchestrator supports running pipelines on a schedule using its native scheduling capability.
How to schedule a pipeline
import datetime

from zenml.config.schedule import Schedule

# Run a pipeline every 5th minute
pipeline_instance.run(
    schedule=Schedule(cron_expression="*/5 * * * *")
)

# Run a pipeline every hour
# starting in one day from now and ending in three days from now
pipeline_instance.run(
    schedule=Schedule(
        cron_expression="0 * * * *",
        start_time=datetime.datetime.now() + datetime.timedelta(days=1),
        end_time=datetime.datetime.now() + datetime.timedelta(days=3),
    )
)
The Vertex orchestrator only supports the cron_expression, start_time (optional) and end_time (optional) parameters in the Schedule object, and will ignore all other parameters supplied to define the schedule.
The start_time and end_time timestamp parameters are both optional and are to be specified in local time. They define the time window in which the pipeline runs will be triggered. If they are not specified, the pipeline will run indefinitely.
The cron_expression parameter supports timezones. For example, the expression TZ=Europe/Paris 0 10 * * * will trigger runs at 10:00 in the Europe/Paris timezone.
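So, to trigger runs at 10:00 Paris time regardless of where the orchestrator runs, a schedule along these lines could be used:

from zenml.config.schedule import Schedule

pipeline_instance.run(
    schedule=Schedule(cron_expression="TZ=Europe/Paris 0 10 * * *")
)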
How to delete a scheduled pipeline
Note that ZenML only gets involved to schedule a run, but maintaining the lifecycle of the schedule is the responsibility of the user.
In order to cancel a scheduled Vertex pipeline, you need to manually delete the schedule in VertexAI (via the UI or the CLI).
Additional configuration
For additional configuration of the Vertex orchestrator, you can pass VertexOrchestratorSettings which allows you to configure node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings
from kubernetes.client.models import V1Toleration | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/vertex | 416 |
ource-id zenfiles --client
Example Command Output

Service connector 'aws-federation-token (s3-bucket | s3://zenfiles client)' of type 'aws' with id '868b17d4-b950-4d89-a6c4-12e520e66610' is owned by user 'default' and is 'private'.
'aws-federation-token (s3-bucket | s3://zenfiles client)' aws Service
Connector Details
ββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β e28c403e-8503-4cce-9226-8a7cd7934763 β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-federation-token (s3-bucket | s3://zenfiles client) β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β sts-token β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π¦ s3-bucket β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β s3://zenfiles β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β 11h59m56s β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector | 484 |
Evaluating reranking performance
Evaluate the performance of your reranking model.
We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML.
Evaluating Reranking Performance
The simplest first step in evaluating the reranking model is to compare the retrieval performance before and after reranking. You can use the same metrics we discussed in the evaluation section to assess the performance of the reranking model.
If you recall, we have a hand-crafted set of queries and relevant documents that we use to evaluate the performance of our retrieval system. We also have a set that was generated by LLMs. The actual retrieval test is implemented as follows:
import logging

from datasets import load_dataset

def perform_retrieval_evaluation(
    sample_size: int, use_reranking: bool
) -> float:
    """Helper function to perform the retrieval evaluation."""
    dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train")
    sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size))

    total_tests = len(sampled_dataset)
    failures = 0

    for item in sampled_dataset:
        generated_questions = item["generated_questions"]
        question = generated_questions[
            0
        ]  # Assuming only one question per item

        url_ending = item["filename"].split("/")[
            1
        ]  # Extract the URL ending from the filename

        # using the method above to query similar documents
        # we pass in whether we want to use reranking or not
        _, _, urls = query_similar_docs(question, url_ending, use_reranking)

        if all(url_ending not in url for url in urls):
            logging.error(
                f"Failed for question: {question}. Expected URL ending: {url_ending}. Got: {urls}"
            )
            failures += 1

    logging.info(f"Total tests: {total_tests}. Failures: {failures}")
    failure_rate = (failures / total_tests) * 100
    return round(failure_rate, 2)
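With this helper in place, comparing the two configurations is a matter of two calls. A small usage sketch (the sample size is arbitrary):

failure_rate_without = perform_retrieval_evaluation(
    sample_size=50, use_reranking=False
)
failure_rate_with = perform_retrieval_evaluation(
    sample_size=50, use_reranking=True
)
print(f"Failure rate without reranking: {failure_rate_without}%")
print(f"Failure rate with reranking: {failure_rate_with}%")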
the executor_args attribute of the image builder.

zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--executor_args='["--label", "key=value"]' # Adds a label to the final image
List of some possible additional flags:
--cache: Set to false to disable caching. Defaults to true.
--cache-dir: Set the directory where to store cached layers. Defaults to /cache.
--cache-repo: Set the repository where to store cached layers. Defaults to gcr.io/kaniko-project/executor.
--cache-ttl: Set the cache expiration time. Defaults to 24h.
--cleanup: Set to false to disable cleanup of the working directory. Defaults to true.
--compressed-caching: Set to false to disable compressed caching. Defaults to true.
For a full list of possible flags, check out the Kaniko additional flags
strator supports specifying resources in what way. If you're using an orchestrator which does not support this feature or its underlying infrastructure does not cover your requirements, you can also take a look at step operators, which allow you to execute individual steps of your pipeline in environments independent of your orchestrator.
Ensure your container is CUDA-enabled
To run steps or pipelines on GPUs, it's crucial to have the necessary CUDA tools installed in the environment. This section will guide you on how to configure your environment to utilize GPU capabilities effectively.
Note that these configuration changes are required for the GPU hardware to be properly utilized. If you don't update the settings, your steps might run, but they will not see any boost in performance from the custom hardware.
All steps running on GPU-backed hardware will be executed within a containerized environment, whether you're using the local Docker orchestrator or a cloud instance of Kubeflow. Therefore, you need to make two amendments to your Docker settings for the relevant steps:
1. Specify a CUDA-enabled parent image in your DockerSettings
For complete details, refer to the containerization page that explains how to do this. As an example, if you want to use the latest CUDA-enabled official PyTorch image for your entire pipeline run, you can include the following code:
from zenml import pipeline
from zenml.config import DockerSettings
docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime")
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
For TensorFlow, you might use the tensorflow/tensorflow:latest-gpu image, as detailed in the official TensorFlow documentation or their DockerHub overview.
2. Add ZenML as an explicit pip requirement | how-to | https://docs.zenml.io/v/docs/how-to/training-with-gpus | 359 |
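Since the custom parent image does not ship with ZenML, the container also needs ZenML installed for remote orchestration to work. A sketch of how the settings from step 1 can be extended (the version pins below are illustrative; pin the ZenML version you are actually running):

from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime",
    requirements=["zenml==0.56.3", "torchvision"],  # illustrative pins
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...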
_mapping=column_mapping,
Let's break this down...

We configure the evidently_report_step using parameters that you would normally pass to the Evidently Report object to configure and run an Evidently report. It consists of the following fields:
column_mapping: This is an EvidentlyColumnMapping object that is the exact equivalent of the ColumnMapping object in Evidently. It is used to describe the columns in the dataset and how they should be treated (e.g. as categorical, numerical, or text features).
metrics: This is a list of EvidentlyMetricConfig objects that are used to configure the metrics that should be used to generate the report in a declarative way. This is the same as configuring the metrics that go in the Evidently Report.
download_nltk_data: This is a boolean that is used to indicate whether the NLTK data should be downloaded. This is only needed if you are using Evidently reports that handle text data, which require the NLTK data to be downloaded ahead of time.
There are several ways you can reference the Evidently metrics when configuring EvidentlyMetricConfig items:
by class name: this is the easiest way to reference an Evidently metric. You can use the name of a metric or metric preset class as it appears in the Evidently documentation (e.g. "DataQualityPreset", "DatasetDriftMetric").
by full class path: you can also use the full Python class path of the metric or metric preset class (e.g. "evidently.metric_preset.DataQualityPreset", "evidently.metrics.DatasetDriftMetric"). This is useful if you want to use metrics or metric presets that are not included in the Evidently library.
by passing in the class itself: you can also import and pass in an Evidently metric or metric preset class itself, e.g.:

from evidently.metrics import DatasetDriftMetric

...

evidently_report_step.with_options(
    parameters=dict(
        metrics=[EvidentlyMetricConfig.metric(DatasetDriftMetric)]
    ),
)
test suite. It consists of the following fields:

column_mapping: This is an EvidentlyColumnMapping object that is the exact equivalent of the ColumnMapping object in Evidently. It is used to describe the columns in the dataset and how they should be treated (e.g. as categorical, numerical, or text features).
tests: This is a list of EvidentlyTestConfig objects that are used to configure the tests that will be run as part of your test suite in a declarative way. This is the same as configuring the tests that go in the Evidently TestSuite.
download_nltk_data: This is a boolean that is used to indicate whether the NLTK data should be downloaded. This is only needed if you are using Evidently tests or test presets that handle text data, which require the NLTK data to be downloaded ahead of time.
There are several ways you can reference the Evidently tests when configuring EvidentlyTestConfig items, similar to how you reference them in an EvidentlyMetricConfig object:
by class name: this is the easiest way to reference an Evidently test. You can use the name of a test or test preset class as it appears in the Evidently documentation (e.g. "DataQualityTestPreset", "TestColumnRegExp").
by full class path: you can also use the full Python class path of the test or test preset class (e.g. "evidently.test_preset.DataQualityTestPreset", "evidently.tests.TestColumnRegExp"). This is useful if you want to use tests or test presets that are not included in the Evidently library.
by passing in the class itself: you can also import and pass in an Evidently test or test preset class itself, e.g.:

from evidently.tests import TestColumnRegExp

...

evidently_test_step.with_options(
    parameters=dict(
        tests=[EvidentlyTestConfig.test(TestColumnRegExp)]
    ),
)
As can be seen in the example, there are two basic ways of adding tests to your Evidently test step configuration: | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/evidently | 425 |
ββ βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β METADATA β {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline', 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', β
β β 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'lr': '0.001', 'epochs': '5', 'optimizer': 'Adam'} β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β MODEL_SOURCE_URI β file:///Users/safoine-zenml/Library/Application Support/zenml/local_stores/0902a511-117d-4152-a098-b2f1124c4493/mlruns/489728212459131640/293a0d2e71e046999f77a79639f6eac2/artifacts/model β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β STAGE β None β | stack-components | https://docs.zenml.io/stack-components/model-registries/mlflow | 376 |
Azure Container Registry
Storing container images in Azure.
The Azure container registry is a container registry flavor that comes built-in with ZenML and uses the Azure Container Registry to store container images.
When to use it
You should use the Azure container registry if:
one or more components of your stack need to pull or push container images.
you have access to Azure. If you're not using Azure, take a look at the other container registry flavors.
How to deploy it
Go here and choose a subscription, resource group, location, and registry name. Then click on Review + Create to create your container registry.
How to find the registry URI
The Azure container registry URI should have the following format:
<REGISTRY_NAME>.azurecr.io
# Examples:
zenmlregistry.azurecr.io
myregistry.azurecr.io
To figure out the URI for your registry:
Go to the Azure portal.
In the search bar, enter container registries and select the container registry you want to use. If you don't have any container registries yet, check out the deployment section on how to create one.
Use the name of your registry to fill the template <REGISTRY_NAME>.azurecr.io and get your URI.
How to use it
To use the Azure container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=azure \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
You also need to set up authentication required to log in to the container registry.
Authentication Methods | stack-components | https://docs.zenml.io/stack-components/container-registries/azure | 365 |
Version pipelines
Understanding how and when the version of a pipeline is incremented.
You might have noticed that when you run a pipeline in ZenML with the same name, but with different steps, it creates a new version of the pipeline. Consider our example pipeline:
from zenml import pipeline
@pipeline
def first_pipeline(gamma: float = 0.002):
    X_train, X_test, y_train, y_test = training_data_loader()
    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)

if __name__ == "__main__":
    first_pipeline()
Running this the first time will create a single run for version 1 of the pipeline called first_pipeline.
$ python run.py
...
Registered pipeline first_pipeline (version 1).
...
Running it again (python run.py) will create yet another run for version 1 of the pipeline called first_pipeline. So now the same pipeline has two runs. You can also verify this in the dashboard.
However, now let's change the pipeline configuration itself. You can do this by modifying the step connections within the @pipeline function or by replacing a concrete step with another one. For example, let's create an alternative step called digits_data_loader which loads a different dataset.
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from typing import Tuple
from typing_extensions import Annotated

from zenml import step

@step
def digits_data_loader() -> Tuple[
    Annotated[pd.DataFrame, "X_train"],
    Annotated[pd.DataFrame, "X_test"],
    Annotated[pd.Series, "y_train"],
    Annotated[pd.Series, "y_test"],
]:
    """Loads the digits dataset and splits it into train and test data."""
    # Load data from the digits dataset
    digits = load_digits(as_frame=True)
    # Split into datasets
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, shuffle=True
    )
    return X_train, X_test, y_train, y_test
@pipeline
def first_pipeline(gamma: float = 0.002):
    X_train, X_test, y_train, y_test = digits_data_loader()
    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
-client-id","client_secret": "my-client-secret"}).

Note: The remaining configuration options are deprecated and may be removed in a future release. Instead, you should set the ZENML_SECRETS_STORE_AUTH_METHOD and ZENML_SECRETS_STORE_AUTH_CONFIG variables to use the Azure Service Connector authentication method.
ZENML_SECRETS_STORE_AZURE_CLIENT_ID: The Azure application service principal client ID to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET: The Azure application service principal client secret to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
ZENML_SECRETS_STORE_AZURE_TENANT_ID: The Azure application service principal tenant ID to use to authenticate with the Azure Key Vault API. If you are running the ZenML server hosted in Azure and are using a managed identity to access the Azure Key Vault service, you can omit this variable.
These configuration options are only relevant if you're using Hashicorp Vault as the secrets store backend.
ZENML_SECRETS_STORE_TYPE: Set this to hashicorp in order to set this type of secret store.
ZENML_SECRETS_STORE_VAULT_ADDR: The URL of the HashiCorp Vault server to connect to. NOTE: this is the same as setting the VAULT_ADDR environment variable.
ZENML_SECRETS_STORE_VAULT_TOKEN: The token to use to authenticate with the HashiCorp Vault server. NOTE: this is the same as setting the VAULT_TOKEN environment variable.
ZENML_SECRETS_STORE_VAULT_NAMESPACE: The Vault Enterprise namespace. Not required for Vault OSS. NOTE: this is the same as setting the VAULT_NAMESPACE environment variable. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 416 |
er": docker_settings})
def my_pipeline(...):
...Specify a list of apt packages in code:Copydocker_settings = DockerSettings(apt_packages=["git"])
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
Prevent ZenML from automatically installing the requirements of your stack:Copydocker_settings = DockerSettings(install_stack_requirements=False)
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
...
In some cases the steps of your pipeline will have conflicting requirements or some steps of your pipeline will require large dependencies that don't need to be installed to run the remaining steps of your pipeline. For this case, ZenML allows you to specify custom Docker settings for steps in your pipeline.
docker_settings = DockerSettings(requirements=["tensorflow"])
@step(settings={"docker": docker_settings})
def my_training_step(...):
...
You can combine these methods but do make sure that your list of requirements does not overlap with the ones specified explicitly in the Docker settings.
Depending on the options specified in your Docker settings, ZenML installs the requirements in the following order (each step optional):
The packages installed in your local Python environment
The packages specified via the requirements attribute (step level overwrites pipeline level)
The packages specified via the required_integrations and potentially stack requirements
You can specify additional arguments for the installer used to install your Python packages as follows:
# This will result in a `pip install --timeout=1000 ...` call when installing packages in the
# Docker image
docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000})
@pipeline(settings={"docker": docker_settings})
def my_pipeline(...):
    ...
Experimental: If you want to use uv for faster resolving and installation of your Python packages, you can use it as follows:
docker_settings = DockerSettings(python_package_installer="uv") | how-to | https://docs.zenml.io/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages | 378 |
not scoped to a single ECR repository. Instead, a connector configured with this resource type will grant access to all the ECR repositories that the credentials are allowed to access under the configured AWS region (i.e. all repositories under the Docker registry URL https://{account-id}.dkr.ecr.{region}.amazonaws.com).

The resource name associated with this resource type uniquely identifies an ECR registry using one of the following formats (the repository name is ignored, only the registry URL/ARN is used):

ECR repository URI (canonical resource name):

[https://]{account}.dkr.ecr.{region}.amazonaws.com[/{repository-name}]

ECR repository ARN:

arn:aws:ecr:{region}:{account-id}:repository[/{repository-name}]

ECR repository names are region scoped. The connector can only be used to access ECR repositories in the AWS region that it is configured to use.
────────────────────────────────────────────────────────────────────────────────
The Service Connector is how you configure ZenML to authenticate and connect to one or more external resources. It stores the required configuration and security credentials and can optionally be scoped with a Resource Type and a Resource Name.
Depending on the Service Connector Type implementation, a Service Connector instance can be configured in one of the following modes with regards to the types and number of resources that it has access to:
a multi-type Service Connector instance that can be configured once and used to gain access to multiple types of resources. This is only possible with Service Connector Types that support multiple Resource Types to begin with, such as those that target multi-service cloud providers like AWS, GCP and Azure. In contrast, a single-type Service Connector can only be used with a single Resource Type. To configure a multi-type Service Connector, you can simply skip scoping its Resource Type during registration. | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 376 |
he context of which the profile is uploaded, e.g.:

from zenml.integrations.whylogs.steps import get_whylogs_profiler_step
train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")
test_data_profiler = get_whylogs_profiler_step(dataset_id="model-3")
The step can then be inserted into your pipeline where it can take in a pandas.DataFrame dataset, e.g.:
from zenml import pipeline

@pipeline
def data_profiling_pipeline():
    data, _ = data_loader()
    train, test = data_splitter(data)
    train_data_profiler(train)
    test_data_profiler(test)

data_profiling_pipeline()
As can be seen from the step definition, the step takes in a dataset and returns a whylogs DatasetProfileView object:
import datetime
from typing import Optional

import pandas as pd
from whylogs.core import DatasetProfileView

from zenml import step

@step
def whylogs_profiler_step(
    dataset: pd.DataFrame,
    dataset_timestamp: Optional[datetime.datetime] = None,
) -> DatasetProfileView:
    ...
You should consult the official whylogs documentation for more information on what you can do with the collected profiles.
You can view the complete list of configuration parameters in the SDK docs.
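For instance, a downstream step could flatten a collected profile into a summary table for quick inspection — a minimal sketch, assuming whylogs v1's DatasetProfileView.to_pandas() API:

import pandas as pd
from whylogs.core import DatasetProfileView

from zenml import step

@step
def profile_summary_step(profile: DatasetProfileView) -> pd.DataFrame:
    # to_pandas() flattens the profile's per-column metrics into a
    # DataFrame: one row per column, one column per tracked metric.
    return profile.to_pandas()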
The whylogs Data Validator
The whylogs Data Validator implements the same interface as all other Data Validators, so using it maintains compatibility with the overall Data Validator abstraction, which guarantees an easier migration in case you decide to switch to another Data Validator.
All you have to do is call the whylogs Data Validator methods when you need to interact with whylogs to generate data profiles. You may optionally enable whylabs logging to automatically upload the returned whylogs profile to WhyLabs, e.g.:
import pandas as pd
from whylogs.core import DatasetProfileView
from zenml.integrations.whylogs.data_validators.whylogs_data_validator import (
    WhylogsDataValidator,
)
from zenml.integrations.whylogs.flavors.whylogs_data_validator_flavor import (
    WhylogsDataValidatorSettings,
)
from zenml import step

whylogs_settings = WhylogsDataValidatorSettings(
    enable_whylabs=True, dataset_id="<WHYLABS_DATASET_ID>"
)
@step(
settings={ | stack-components | https://docs.zenml.io/stack-components/data-validators/whylogs | 441 |
┠───────────────────┼──────────────────────────────────────────────────────────────────────┨
┃ SHARED            │ ➖                                                                    ┃
┠───────────────────┼──────────────────────────────────────────────────────────────────────┨
┃ CREATED_AT        │ 2023-06-19 18:12:42.066053                                            ┃
┠───────────────────┼──────────────────────────────────────────────────────────────────────┨
┃ UPDATED_AT        │ 2023-06-19 18:12:42.066055                                            ┃
┗━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Configuration
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┓
┃ PROPERTY              │ VALUE     ┃
┠───────────────────────┼───────────┨
┃ region                │ us-east-1 ┃
┠───────────────────────┼───────────┨
┃ aws_access_key_id     │ [HIDDEN]  ┃
┠───────────────────────┼───────────┨
┃ aws_secret_access_key │ [HIDDEN]  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┛
AWS Secret Key
Long-lived AWS credentials consisting of an AWS access key ID and secret access key associated with an AWS IAM user or AWS account root user (not recommended).
This method is preferred during development and testing due to its simplicity and ease of use. It is not recommended as a direct authentication method for production use cases because the clients have direct access to long-lived credentials and are granted the full set of permissions of the IAM user or AWS account root user associated with the credentials. For production, it is recommended to use the AWS IAM Role, AWS Session Token, or AWS Federation Token authentication method instead.
An AWS region is required and the connector may only be used to access AWS resources in the specified region.
If you already have the local AWS CLI set up with these credentials, they will be automatically picked up when auto-configuration is used (see the example below). | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 502 |
artifact_version(name_id_or_prefix="iris_dataset")
artifact.download_files("path/to/save.zip")
Take note that the path must have the .zip extension, as the artifact data will be saved as a zip file. Make sure to handle any exceptions that may arise from this operation.
Managing artifacts not produced by ZenML pipelines
Sometimes, artifacts can be produced completely outside of ZenML. A good example of this is the predictions produced by a deployed model.
# A model is deployed, running in a FastAPI container
# Let's use the ZenML client to fetch the latest model and make predictions
from zenml.client import Client
from zenml import save_artifact
# Fetch the model from a registry or a previous pipeline
model = ...
# Let's make a prediction
prediction = model.predict([[1, 1, 1, 1]])
# We now store this prediction in ZenML as an artifact
# This will create a new artifact version
save_artifact(prediction, name="iris_predictions")
You can also load any artifact stored within ZenML using the load_artifact method:
# Loads the latest version
load_artifact("iris_predictions")
load_artifact is simply short-hand for the following Client call:
from zenml.client import Client
client = Client()
client.get_artifact("iris_predictions").load()
Even if an artifact is created externally, it can be treated like any other artifact produced by ZenML steps - with all the functionalities described above!
It is also possible to use these functions inside your ZenML steps. However, it is usually cleaner to return the artifacts as outputs of your step to save them, or to use External Artifacts to load them instead.
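As a rough sketch, an External Artifact could feed the externally saved predictions into a pipeline like this (assuming your ZenML version supports referencing External Artifacts by artifact name; the step and pipeline names are illustrative):

from typing import Any

from zenml import pipeline, step
from zenml.artifacts.external_artifact import ExternalArtifact

@step
def consume_predictions(predictions: Any) -> None:
    # The externally saved predictions arrive here like any step input.
    print(predictions)

@pipeline
def consume_external_artifact_pipeline():
    # Resolved to the latest version of the named artifact at runtime.
    consume_predictions(ExternalArtifact(name="iris_predictions"))

consume_external_artifact_pipeline()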
Logging metadata for an artifact
One of the most useful ways of interacting with artifacts in ZenML is the ability to associate metadata with them. As mentioned before, artifact metadata is an arbitrary dictionary of key-value pairs that are useful for understanding the nature of the data. | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/manage-artifacts | 396 |
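For example, here is a minimal sketch of attaching metadata to the predictions artifact created earlier — this assumes the log_artifact_metadata utility available in recent ZenML releases, and the metadata keys are illustrative:

from zenml import log_artifact_metadata

# Attach arbitrary key-value pairs to the latest version of the
# "iris_predictions" artifact.
log_artifact_metadata(
    artifact_name="iris_predictions",
    metadata={"num_predictions": 1, "model_family": "sklearn"},
)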
from zenml.integrations.evidently.steps import (
    EvidentlyColumnMapping,
    evidently_test_step,
)
from zenml.integrations.evidently.tests import EvidentlyTestConfig
text_data_test = evidently_test_step.with_options(
    parameters=dict(
        column_mapping=EvidentlyColumnMapping(
            target="Rating",
            numerical_features=["Age", "Positive_Feedback_Count"],
            categorical_features=[
                "Division_Name",
                "Department_Name",
                "Class_Name",
            ],
            text_features=["Review_Text", "Title"],
        ),
        tests=[
            EvidentlyTestConfig.test("DataQualityTestPreset"),
            EvidentlyTestConfig.test_generator(
                "TestColumnRegExp",
                columns=["Review_Text", "Title"],
                reg_exp=r"[A-Z][A-Za-z0-9 ]*",
            ),
        ],
        # We need to download the NLTK data for the TestColumnRegExp test
        download_nltk_data=True,
    ),
)
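The configured step can then be wired into a pipeline like any other step. Here is a minimal sketch, assuming a hypothetical data_loader step and the reference_dataset/comparison_dataset input names used by the Evidently test step:

from zenml import pipeline

@pipeline
def text_data_test_pipeline():
    # `data_loader` is a hypothetical upstream step returning the two
    # DataFrames to compare.
    reference_dataset, comparison_dataset = data_loader()
    text_data_test(
        reference_dataset=reference_dataset,
        comparison_dataset=comparison_dataset,
    )

text_data_test_pipeline()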
The configuration shown in the example is the equivalent of running the following Evidently code inside the step:
from evidently.tests import TestColumnRegExp
from evidently.test_preset import DataQualityTestPreset
from evidently import ColumnMapping
from evidently.test_suite import TestSuite
from evidently.tests.base_test import generate_column_tests

import nltk

nltk.download("words")
nltk.download("wordnet")
nltk.download("omw-1.4")

column_mapping = ColumnMapping(
    target="Rating",
    numerical_features=["Age", "Positive_Feedback_Count"],
    categorical_features=[
        "Division_Name",
        "Department_Name",
        "Class_Name",
    ],
    text_features=["Review_Text", "Title"],
)

test_suite = TestSuite(
    tests=[
        DataQualityTestPreset(),
        generate_column_tests(
            TestColumnRegExp,
            columns=["Review_Text", "Title"],
            parameters={"reg_exp": r"[A-Z][A-Za-z0-9 ]*"},
        ),
    ],
)

# The datasets are those that are passed to the Evidently step
# as input artifacts
test_suite.run(
    current_data=current_dataset,
    reference_data=reference_dataset,
    column_mapping=column_mapping,
)
Let's break this down...
We configure the evidently_test_step using parameters that you would normally pass to the Evidently TestSuite object to configure and run an Evidently test suite. It consists of the following fields: