Dataset columns: page_content (string, 74-2.86k chars) · parent_section (string, 7 classes) · url (string, 21-129 chars) · token_count (int64, 17-755)
┠──────────────────────────────────────┼─────────────────┼────────────────┼───────────────────────┼───────────────────────────────────────────────────────────────────────────────────┨
┃ 19edc05b-92db-49de-bc84-aa9b3fb8261a │ aws-s3-zenfiles │ 🔶 aws         │ 📦 s3-bucket          │ s3://zenfiles                                                                     ┃
┠──────────────────────────────────────┼─────────────────┼────────────────┼───────────────────────┼───────────────────────────────────────────────────────────────────────────────────┨
┃ c732c768-3992-4cbd-8738-d02cd7b6b340 │ kubernetes-auto │ 🌀 kubernetes  │ 🌀 kubernetes-cluster │ 💥 error: connector 'kubernetes-auto' authorization failure: failed to verify     ┃
┃                                      │                 │                │                       │ Kubernetes cluster access: (401)                                                  ┃
┃                                      │                 │                │                       │ Reason: Unauthorized                                                              ┃
┃                                      │                 │                │                       │ HTTP response headers: HTTPHeaderDict({'Audit-Id':                                ┃
┃                                      │                 │                │                       │ '20c96e65-3e3e-4e08-bae3-bcb72c527fbf', 'Cache-Control': 'no-cache, private',     ┃
┃                                      │                 │                │                       │ 'Content-Type': 'application/json', 'Date': 'Fri, 09 Jun 2023 18:52:56 GMT',      ┃
┃                                      │                 │                │                       │ 'Content-Length': '129'})                                                         ┃
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
376
@step(step_operator="nameofstepoperator", settings={"step_operator.sagemaker": {"estimator_args": {"instance_type": "m7g.medium"}}})
def my_step():
    ...

or in YAML:

steps:
  my_step:
    step_operator: "nameofstepoperator"
    settings:
      step_operator.sagemaker:
        estimator_args:
          instance_type: m7g.medium
how-to
https://docs.zenml.io/v/docs/how-to/use-configuration-files/runtime-configuration
58
tracker register wandb_tracker \
    --flavor=wandb \
    --entity={{wandb_secret.entity}} \
    --project_name={{wandb_secret.project_name}} \
    --api_key={{wandb_secret.api_key}}
...

Read more about ZenML Secrets in the ZenML documentation. For more up-to-date information on the Weights & Biases Experiment Tracker implementation and its configuration, have a look at the SDK docs.

How do you use it?

To log information from a ZenML pipeline step using the Weights & Biases Experiment Tracker component in the active stack, you need to enable an experiment tracker using the @step decorator. Then use the Weights & Biases logging or auto-logging capabilities as you normally would, e.g.:

import wandb
from wandb.integration.keras import WandbCallback

@step(experiment_tracker="<WANDB_TRACKER_STACK_COMPONENT_NAME>")
def tf_trainer(
    config: TrainerConfig,
    x_train: np.ndarray,
    y_train: np.ndarray,
    x_val: np.ndarray,
    y_val: np.ndarray,
) -> tf.keras.Model:
    ...
    model.fit(
        x_train,
        y_train,
        epochs=config.epochs,
        validation_data=(x_val, y_val),
        callbacks=[
            WandbCallback(
                log_evaluation=True,
                validation_steps=16,
                validation_data=(x_val, y_val),
            )
        ],
    )

    metric = ...

    wandb.log({"<METRIC_NAME>": metric})

Instead of hardcoding an experiment tracker name, you can also use the Client to dynamically use the experiment tracker of your active stack:

from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
    ...

Weights & Biases UI

Weights & Biases comes with a web-based UI that you can use to find further details about your tracked experiments. Every ZenML step that uses Weights & Biases should create a separate experiment run, which you can inspect in the Weights & Biases UI.

You can find the URL of the Weights & Biases experiment linked to a specific ZenML run via the metadata of the step in which the experiment tracker was used:

from zenml.client import Client
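A sketch of how that lookup could look, assuming the Weights & Biases tracker records the same experiment_tracker_url metadata key that ZenML's other experiment trackers (e.g. MLflow) expose:

last_run = Client().get_pipeline("<PIPELINE_NAME>").last_run
trainer_step = last_run.get_step("<STEP_NAME>")
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)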
stack-components
https://docs.zenml.io/v/docs/stack-components/experiment-trackers/wandb
452
other remote stack components also running in GCP.

This method uses the implicit GCP authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure a GCS Artifact Store. You don't need to supply credentials explicitly when you register the GCS Artifact Store, as it leverages the local credentials and configuration that the Google Cloud CLI stores on your local machine. However, you will need to install and set up the Google Cloud CLI on your machine as a prerequisite, as covered in the Google Cloud documentation, before you register the GCS Artifact Store.

Certain dashboard functionality, such as visualizing or deleting artifacts, is not available when using an implicitly authenticated artifact store together with a deployed ZenML server, because the ZenML server will not have permission to access the filesystem.

The implicit authentication method also needs to be coordinated with other stack components that are highly dependent on the Artifact Store and need to interact with it directly in order to function. If these components are not running on your machine, they do not have access to the local Google Cloud CLI configuration and will encounter authentication failures while trying to access the GCS Artifact Store:

Orchestrators need to access the Artifact Store to manage pipeline artifacts
Step Operators need to access the Artifact Store to manage step-level artifacts
Model Deployers need to access the Artifact Store to load served models

To enable these use cases, it is recommended to use a GCP Service Connector to link your GCS Artifact Store to the remote GCS bucket. To set up the GCS Artifact Store to authenticate to GCP and access a GCS bucket, it is recommended to leverage the many features provided by the GCP Service Connector, such as auto-configuration, best security practices for long-lived credentials, and reuse of the same credentials across multiple stack components.
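A minimal sketch of that recommended setup, using placeholder connector, store, and bucket names:

# Register a GCP Service Connector auto-configured from your local gcloud credentials
zenml service-connector register gcp-connector --type gcp --resource-type gcs-bucket --auto-configure

# Register the GCS Artifact Store and link it to the connector
zenml artifact-store register gcs_store --flavor gcp --path=gs://your-bucket
zenml artifact-store connect gcs_store --connector gcp-connector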
stack-components
https://docs.zenml.io/stack-components/artifact-stores/gcp
366
┠────────────────────┼──────────────────────────────────────────────────────────────────────────────┨
┃ UUID               │ 2b7773eb-d371-4f24-96f1-fad15e74fd6e                                         ┃
┠────────────────────┼──────────────────────────────────────────────────────────────────────────────┨
┃ PATH               │ /home/stefan/.config/zenml/local_stores/2b7773eb-d371-4f24-96f1-fad15e74fd6e ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

As shown by the PATH value in the zenml artifact-store describe output, the artifacts are stored inside a folder on your local filesystem.

You can create additional instances of local Artifact Stores and use them in your stacks as you see fit, e.g.:

# Register the local artifact store
zenml artifact-store register custom_local --flavor local

# Register and set a stack with the new artifact store
zenml stack register custom_stack -o default -a custom_local --set

Like all other Artifact Store flavors, the local Artifact Store accepts a path configuration parameter that can be set during registration to point to a custom path on your machine. However, it is highly recommended that you rely on the default path value; otherwise, you may get unexpected results, since other local stack components depend on the convention used for the default path to access the local Artifact Store.

For more up-to-date information on the local Artifact Store implementation and its configuration, have a look at the SDK docs.

How do you use it?

Aside from the fact that the artifacts are stored locally, using the local Artifact Store is no different from using any other flavor of Artifact Store.
stack-components
https://docs.zenml.io/v/docs/stack-components/artifact-stores/local
424
πŸ“œOverview Overview of categories of MLOps components and third-party integrations. If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner. ZenML tackles this problem by introducing the concept of Stacks and Stack Components. These stack components represent categories, each of which has a particular function in your MLOps pipeline. ZenML realizes these stack components as base abstractions that standardize the entire workflow for your team. In order to then realize the benefit, one can write a concrete implementation of the abstraction, or use one of the many built-in integrations that implement these abstractions for you. Here is a full list of all stack components currently supported in ZenML, with a description of the role of that component in the MLOps process: Type of Stack Component Description Orchestrator Orchestrating the runs of your pipeline Artifact Store Storage for the artifacts created by your pipelines Container Registry Store for your containers Data Validator Data and model validation Experiment Tracker Tracking your ML experiments Model Deployer Services/platforms responsible for online model serving Step Operator Execution of individual steps in specialized runtime environments Alerter Sending alerts through specified channels Image Builder Builds container images. Annotator Labeling and annotating data Model Registry Manage and interact with ML Models Feature Store Management of your data/features Each pipeline run that you execute with ZenML will require a stack and each stack will be required to include at least an orchestrator and an artifact store. Apart from these two, the other components are optional and to be added as your pipeline evolves in MLOps maturity.
stack-components
https://docs.zenml.io/v/docs/stack-components/component-guide
367
Evaluation in 65 lines of code

Learn how to implement evaluation for RAG in just 65 lines of code.

Our RAG guide included a short example for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository here. The code that follows requires the functions from the earlier RAG pipeline code to work.

# ...previous RAG pipeline code here...
# see https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_rag_pipeline.py

eval_data = [
    {
        "question": "What creatures inhabit the luminescent forests of ZenML World?",
        "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots.",
    },
    {
        "question": "What do Fractal Fungi do in the melodic caverns of ZenML World?",
        "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World.",
    },
    {
        "question": "Where do Gravitational Geckos live in ZenML World?",
        "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World.",
    },
]

def evaluate_retrieval(question, expected_answer, corpus, top_n=2):
    relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n)
    score = any(
        any(word in chunk for word in tokenize(expected_answer))
        for chunk in relevant_chunks
    )
    return score

def evaluate_generation(question, expected_answer, generated_answer):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    chat_completion = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, your task is to determine if the generated answer is relevant and accurate. Respond with 'YES' if the generated answer is satisfactory, or 'NO' if it is not.",
            },
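A hypothetical driver loop over eval_data could then look like the sketch below; it assumes the corpus variable and the retrieve_relevant_chunks/tokenize helpers defined in the earlier RAG pipeline code:

retrieval_scores = [
    evaluate_retrieval(item["question"], item["expected_answer"], corpus)
    for item in eval_data
]
# each score is a boolean, so the mean is the retrieval accuracy
print(f"Retrieval accuracy: {sum(retrieval_scores) / len(retrieval_scores):.2f}")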
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/evaluation-in-65-loc
458
ge or if the ZenML version doesn't change at all).

A backup file or database is created before every database migration attempt (i.e. during every Helm upgrade). If a backup already exists (i.e. persisted in a persistent volume or backup database), it is overwritten.

The persistent backup file or database is cleaned up after the migration completes successfully or if the database doesn't need to undergo a migration. This includes backups created by previous failed migration attempts.

The persistent backup file or database is NOT cleaned up after a failed migration. This allows you to manually inspect and/or apply the backup if the automatic recovery fails.

The following example shows how to configure the ZenML server to use a persistent volume to store the database dump file:

zenml:
  # ...
  database:
    url: "mysql://admin:[email protected]:3306/zenml"

    # Configure the database backup strategy
    backupStrategy: dump-file
    backupPVStorageSize: 1Gi

podSecurityContext:
  fsGroup: 1000  # if you're using a PVC for backup, this should necessarily be set.
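Applying such values during an upgrade is a standard Helm operation; release, chart, and namespace names in this sketch are placeholders:

helm upgrade <RELEASE_NAME> <ZENML_CHART> --namespace <NAMESPACE> -f custom-values.yaml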
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm
242
Load a Model in code

There are a few different ways to load a ZenML Model in code:

Load the active model in a pipeline

You can also use the active model to get the model metadata, or the associated artifacts directly, as described in the starter guide:

from zenml import step, pipeline, get_step_context, Model

@pipeline(model=Model(name="my_model"))
def my_pipeline():
    ...

@step
def my_step():
    # Get model from active step context
    mv = get_step_context().model

    # Get metadata
    print(mv.run_metadata["metadata_key"].value)

    # Directly fetch an artifact that is attached to the model
    output = mv.get_artifact("my_dataset", "my_version")
    output.run_metadata["accuracy"].value

Load any model via the Client

Alternatively, you can use the Client:

from zenml import step
from zenml.client import Client
from zenml.enums import ModelStages

@step
def model_evaluator_step():
    ...
    # Get staging model version
    try:
        staging_zenml_model = Client().get_model_version(
            model_name_or_id="<INSERT_MODEL_NAME>",
            model_version_name_or_number_or_id=ModelStages.STAGING,
        )
    except KeyError:
        staging_zenml_model = None
    ...
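A related sketch, assuming the Model class also accepts a version argument: pin a pipeline to a specific model version directly in the decorator instead of resolving it inside a step:

from zenml import Model, pipeline

# Hypothetical: pin the pipeline to the staging version of the model
@pipeline(model=Model(name="my_model", version="staging"))
def my_pipeline():
    ...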
how-to
https://docs.zenml.io/how-to/use-the-model-control-plane/load-a-model-in-code
281
mlflow_training_pipeline',
┃                        │               │                                         │ 'zenml_pipeline_run_uuid': 'a5d4faae-ef70-48f2-9893-6e65d5e51e98', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.005'} ┃
┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨
┃ tensorflow-mnist-model │ 2             │ Run #2 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_09_08_467212', 'zenml_pipeline_name': 'mlflow_training_pipeline',                          ┃
┃                        │               │                                         │ 'zenml_pipeline_run_uuid': '11858dcf-3e47-4b1a-82c5-6fa25ba4e037', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.003'} ┃
┠────────────────────────┼───────────────┼─────────────────────────────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨
┃ tensorflow-mnist-model │ 1             │ Run #1 of the mlflow_training_pipeline. │ {'zenml_version': '0.34.0', 'zenml_run_name': 'mlflow_training_pipeline-2023_03_01-08_08_52_398499', 'zenml_pipeline_name': 'mlflow_training_pipeline',                          ┃
┃                        │               │                                         │ 'zenml_pipeline_run_uuid': '29fb22c1-6e0b-4431-9e04-226226506d16', 'zenml_workspace': '10e060b3-2f7e-463d-9ec8-3a211ef4e1f6', 'epochs': '5', 'optimizer': 'Adam', 'lr': '0.001'} ┃
stack-components
https://docs.zenml.io/v/docs/stack-components/model-registries/mlflow
558
d9fe8eb2f833b788e9d8a6ebde194765b42fe46 size: 3256

It is also possible to update the local gcloud CLI configuration with credentials extracted from the GCP Service Connector:

zenml service-connector login gcp-user-account --resource-type gcp-generic

Example Command Output

Updated the local gcloud default application credentials file at '/home/user/.config/gcloud/application_default_credentials.json'
The 'gcp-user-account' GCP Service Connector connector was used to successfully configure the local Generic GCP resource client/SDK.

Stack Components use

The GCS Artifact Store Stack Component can be connected to a remote GCS bucket through a GCP Service Connector. The Google Cloud Image Builder Stack Component, VertexAI Orchestrator, and VertexAI Step Operator can be connected to and use the resources of a target GCP project through a GCP Service Connector.

The GCP Service Connector can also be used with any Orchestrator or Model Deployer stack component flavor that relies on Kubernetes clusters to manage workloads. This allows GKE Kubernetes container workloads to be managed without the need to configure and maintain explicit GCP or Kubernetes kubectl configuration contexts and credentials in the target environment or in the Stack Component itself.

Similarly, Container Registry Stack Components can be connected to a GCR Container Registry through a GCP Service Connector. This allows container images to be built and published to GCR container registries without the need to configure explicit GCP credentials in the target environment or the Stack Component.

End-to-end examples

This is an example of an end-to-end workflow involving Service Connectors that uses a single multi-type GCP Service Connector to give access to multiple resources for multiple Stack Components. A complete ZenML Stack is registered and composed of the following Stack Components, all connected through the same Service Connector (the connection pattern itself is sketched after the list):

a Kubernetes Orchestrator connected to a GKE Kubernetes cluster
a GCS Artifact Store connected to a GCS bucket
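For any of the Stack Components listed above, linking to the shared connector follows the same CLI pattern; component and connector names in this sketch are placeholders:

zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml artifact-store connect <ARTIFACT_STORE_NAME> --connector <CONNECTOR_NAME>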
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
400
rray": [[1,2,3,4]] } }'

Using a Service Connector

To set up the Seldon Core Model Deployer to authenticate to a remote Kubernetes cluster, it is recommended to leverage the many features provided by Service Connectors, such as auto-configuration, local client login, best security practices regarding long-lived credentials and fine-grained access control, and reusing the same credentials across multiple stack components.

Depending on where your target Kubernetes cluster is running, you can use one of the following Service Connectors:

the AWS Service Connector, if you are using an AWS EKS cluster.
the GCP Service Connector, if you are using a GKE cluster.
the Azure Service Connector, if you are using an AKS cluster.
the generic Kubernetes Service Connector for any other Kubernetes cluster.

If you don't already have a Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a Service Connector that can be used to access more than one Kubernetes cluster or even more than one type of cloud resource:

zenml service-connector register -i

A non-interactive CLI example that leverages the AWS CLI configuration on your local machine to auto-configure an AWS Service Connector targeting a single EKS cluster is:

zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type kubernetes-cluster --resource-name <EKS_CLUSTER_NAME> --auto-configure

Example Command Output

$ zenml service-connector register eks-zenhacks --type aws --resource-type kubernetes-cluster --resource-id zenhacks-cluster --auto-configure
⠼ Registering service connector 'eks-zenhacks'...
Successfully registered service connector `eks-zenhacks` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
┃     RESOURCE TYPE     │  RESOURCE NAMES  ┃
┠───────────────────────┼──────────────────┨
┃ 🌀 kubernetes-cluster │ zenhacks-cluster ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛
stack-components
https://docs.zenml.io/stack-components/model-deployers/seldon
460
stored and managed by the ZenML Pro Control Plane.

In the ZenML Pro SaaS deployment, only ML metadata (e.g. pipeline and model tracking and versioning information) is stored on our infrastructure. All the actual ML data artifacts (e.g. data produced or consumed by pipeline steps, logs and visualizations, models) are stored on the customer cloud. This can be set up quite easily by configuring an artifact store with your MLOps stack.

Your tenant only needs permission to read from this data to display artifacts on the ZenML dashboard. The tenant also needs direct access to parts of the customer infrastructure services to support dashboard control plane features such as CI/CD, triggering and running pipelines, triggering model deployments, etc.

This scenario is meant for customers who want to quickly get started with ZenML and can, to a certain extent, allow ingress connections into their infrastructure from an external SaaS provider.

Scenario 2: Hybrid SaaS with Customer Secret Store managed by ZenML

This scenario is a version of Scenario 1, modified to store all sensitive information on the customer side. In this case, the customer connects their own secret store directly to the ZenML server that is managed by us. All ZenML secrets used by running pipelines to access infrastructure services and resources are stored in the customer secret store. This allows users to use service connectors and the secrets API to authenticate ZenML pipelines and ZenML Pro to 3rd-party services and infrastructure while ensuring that credentials are always stored on the customer side.
getting-started
https://docs.zenml.io/v/docs/getting-started/zenml-pro/system-architectures
300
e ZenML CLI to install the right version directly.

The zenml integration install sklearn command is simply doing a pip install of sklearn behind the scenes. If something goes wrong, you can always use zenml integration requirements sklearn to see which requirements are compatible and install them using pip (or any other tool) directly. (If no specific requirements are mentioned for an integration, this means we support all possible versions of that integration/package.)

Define a data loader with multiple outputs

A typical start of an ML pipeline is usually loading data from some source. This step will sometimes have multiple outputs. To define such a step, use a Tuple type annotation. Additionally, you can use the Annotated annotation to assign custom output names. Here we load an open-source dataset and split it into a train and a test dataset.

import logging

# imports needed by this snippet (as used in the full example later in this guide)
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from typing_extensions import Annotated, Tuple

from zenml import step

@step
def training_data_loader() -> Tuple[
    # Notice we use a Tuple and Annotated to return
    # multiple named outputs
    Annotated[pd.DataFrame, "X_train"],
    Annotated[pd.DataFrame, "X_test"],
    Annotated[pd.Series, "y_train"],
    Annotated[pd.Series, "y_test"],
]:
    """Load the iris dataset as a tuple of Pandas DataFrame / Series."""
    logging.info("Loading iris...")
    iris = load_iris(as_frame=True)
    logging.info("Splitting train and test...")
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42
    )
    return X_train, X_test, y_train, y_test

ZenML records the root Python logging handler's output into the artifact store as a side effect of running a step. Therefore, when writing steps, use the logging module to record logs, to ensure that these logs show up in the ZenML dashboard.

Create a parameterized training step

Here we are creating a training step for a support vector machine classifier with sklearn. As we might want to adjust the hyperparameter gamma later on, we define it as an input value to the step as well.

@step
def svc_trainer(
    X_train: pd.DataFrame,
user-guide
https://docs.zenml.io/v/docs/user-guide/starter-guide/create-an-ml-pipeline
443
Handle custom data types

Using materializers to pass custom data types through steps.

A ZenML pipeline is built in a data-centric way. The outputs and inputs of steps define how steps are connected and the order in which they are executed. Each step should be considered as its very own process that reads and writes its inputs and outputs from and to the artifact store. This is where materializers come into play.

A materializer dictates how a given artifact can be written to and retrieved from the artifact store and also contains all serialization and deserialization logic. Whenever you pass artifacts as outputs from one pipeline step to other steps as inputs, the corresponding materializer for the respective data type defines how this artifact is first serialized and written to the artifact store, and then deserialized and read in the next step.

Built-In Materializers

ZenML already includes built-in materializers for many common data types. These are always enabled and are used in the background without requiring any user interaction or activation:

Materializer │ Handled Data Types │ Storage Format
BuiltInMaterializer │ bool, float, int, str, None │ .json
BytesMaterializer │ bytes │ .txt
BuiltInContainerMaterializer │ dict, list, set, tuple │ Directory
NumpyMaterializer │ np.ndarray │ .npy
PandasMaterializer │ pd.DataFrame, pd.Series │ .csv (or .gzip if parquet is installed)
PydanticMaterializer │ pydantic.BaseModel │ .json
ServiceMaterializer │ zenml.services.service.BaseService │ .json
StructuredStringMaterializer │ zenml.types.CSVString, zenml.types.HTMLString, zenml.types.MarkdownString │ .csv / .html / .md (depending on type)
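To make the mechanism concrete, here is a minimal sketch of a custom materializer, assuming ZenML's BaseMaterializer interface (ASSOCIATED_TYPES, load, save, self.uri) and a hypothetical MyObj data class:

import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyObj:
    """Hypothetical custom data type with a single string field."""
    def __init__(self, name: str):
        self.name = name


class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        # Read the serialized payload back from the artifact store
        with fileio.open(os.path.join(self.uri, "data.txt"), "r") as f:
            return MyObj(name=f.read())

    def save(self, my_obj: MyObj) -> None:
        # Write the payload to the artifact store
        with fileio.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)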
how-to
https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types
330
eneric │ implicit │ ✅ │ ✅ ┃
┃                              │               │ 📦 gcs-bucket         │ user-account    │   │   ┃
┃                              │               │ 🌀 kubernetes-cluster │ service-account │   │   ┃
┃                              │               │ 🐳 docker-registry    │ oauth2-token    │   │   ┃
┃                              │               │                       │ impersonation   │   │   ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛

zenml service-connector describe-type aws

Example Command Output

╔══════════════════════════════════════════════════════════════════════════════╗
║                🔶 AWS Service Connector (connector type: aws)                 ║
╚══════════════════════════════════════════════════════════════════════════════╝

Authentication methods:

🔒 implicit
🔒 secret-key
🔒 sts-token
🔒 iam-role
🔒 session-token
🔒 federation-token

Resource types:

🔶 aws-generic
📦 s3-bucket
🌀 kubernetes-cluster
🐳 docker-registry

Supports auto-configuration: True
Available locally: True
Available remotely: False

The ZenML AWS Service Connector facilitates the authentication and access to managed AWS services and resources. These encompass a range of resources, including S3 buckets, ECR repositories, and EKS clusters. The connector provides support for various authentication methods, including explicit long-lived AWS secret keys, IAM roles, short-lived STS tokens, and implicit authentication.

To ensure heightened security measures, this connector also enables the generation of temporary STS security tokens that are scoped down to the minimum permissions necessary for accessing the intended resource. Furthermore, it includes automatic configuration and detection of credentials locally configured through the AWS CLI.

This connector serves as a general means of accessing any AWS service by issuing
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
514
────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcr-zenml-core               │ 9fddfaba-6d46-4806-ad96-9dcabef74639 │ 🔵 gcp │ 🐳 docker-registry │ gcr.io/zenml-core │ ➖ │ default │ │ ┃
┠────────┼──────────────────────────────┼──────────────────────────────────────┼────────┼────────────────────┼──────────────────────┼────────┼─────────┼────────────┼────────┨
┃ │ vertex-ai-zenml-core         │ f97671b9-8c73-412b-bf5e-4b7c48596f5f │ 🔵 gcp │ 🔵 gcp-generic     │ zenml-core        │ ➖ │ default │ │ ┃
┠────────┼──────────────────────────────┼──────────────────────────────────────┼────────┼────────────────────┼──────────────────────┼────────┼─────────┼────────────┼────────┨
┃ │ gcp-cloud-builder-zenml-core │ 648c1016-76e4-4498-8de7-808fd20f057b │ 🔵 gcp │ 🔵 gcp-generic     │ zenml-core        │ ➖ │ default │ │ ┃
┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛
```

register and connect a GCS Artifact Store Stack Component to the GCS bucket:

```sh
zenml artifact-store register gcs-zenml-bucket-sl --flavor gcp --path=gs://zenml-bucket-sl
```

Example Command Output

```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered artifact_store `gcs-zenml-bucket-sl`.
```

```sh
zenml artifact-store connect gcs-zenml-bucket-sl --connector gcs-zenml-bucket-sl
```

Example Command Output

```text
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully connected artifact store `gcs-zenml-bucket-sl` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME      │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES       ┃
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
687
out the SDK Docs.

S3 data access in ZenML steps

In SageMaker jobs, it is possible to access data that is located in S3. Similarly, it is possible to write data from a job to a bucket. The ZenML SageMaker orchestrator supports this via the SagemakerOrchestratorSettings, and hence at component, pipeline, and step levels.

Import: S3 -> job

Importing data can be useful when large datasets are available in S3 for training, for which manual copying can be cumbersome. SageMaker supports File (default) and Pipe mode, with which data is either fully copied before the job starts or piped on the fly. See the SageMaker documentation referenced above for more information about these modes.

Note that data import and export can be used jointly with processor_args for maximum flexibility.

A simple example of importing data from S3 to the SageMaker job is as follows:

sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    input_data_s3_mode="File",
    input_data_s3_uri="s3://some-bucket-name/folder",
)

In this case, data will be available at /opt/ml/processing/input/data within the job.

It is also possible to split your input over channels. This can be useful if the dataset is already split in S3, or maybe even located in different buckets:

sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    input_data_s3_mode="File",
    input_data_s3_uri={
        "train": "s3://some-bucket-name/training_data",
        "val": "s3://some-bucket-name/validation_data",
        "test": "s3://some-other-bucket-name/testing_data",
    },
)

Here, the data will be available in /opt/ml/processing/input/data/train, /opt/ml/processing/input/data/val and /opt/ml/processing/input/data/test.

In the case of using Pipe for input_data_s3_mode, a file path specifying the pipe will be available as per the description written here. An example of using this pipe file within a Python script can be found here.

Export: job -> S3
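These settings are attached like any other ZenML settings object. A sketch, assuming the settings class lives under the ZenML AWS integration's flavors module (mirroring how other orchestrator settings are imported elsewhere in these docs):

from zenml import pipeline
from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
    SagemakerOrchestratorSettings,
)

sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    input_data_s3_mode="File",
    input_data_s3_uri="s3://some-bucket-name/folder",
)

# Apply at the pipeline level; the same dict also works on @step for step-level overrides
@pipeline(settings={"orchestrator.sagemaker": sagemaker_orchestrator_settings})
def my_pipeline():
    ...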
stack-components
https://docs.zenml.io/stack-components/orchestrators/sagemaker
450
This is us if you want to put faces to the names!

However, in order to improve ZenML and understand how it is being used, we need to use analytics to have an overview of how it is used 'in the wild'. This not only helps us find bugs but also helps us prioritize features and commands that might be useful in future releases. If we did not have this information, all we would really have is pip download statistics and direct conversations with users, which, while valuable, are not enough to seriously improve the tool as a whole.

How does ZenML collect these statistics?

We use Segment as the data aggregation library for all our analytics. However, before any events get sent to Segment, they first go through a central ZenML analytics server. This added layer allows us to put in place various countermeasures against incidents such as getting spammed with events, and enables us to have a more optimized tracking process. The client code is entirely visible and can be seen in the analytics module of our main repository.

If I share my email, will you spam me?

No, we won't. Our sole purpose in contacting you will be to ask for feedback (e.g. in the shape of a user interview). These interviews help the core team understand usage better and prioritize feature requests. If you have any concerns about data privacy and the usage of personal information, please contact us, and we will try to alleviate any concerns as soon as possible.

Version mismatch (downgrading)

If you've recently downgraded your ZenML version to an earlier release or installed a newer version in a different environment on the same machine, you might encounter an error message when running ZenML that says:

`The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).`

We generally recommend using the latest ZenML version. However, there might be cases where you need to match the global configuration version with the version of ZenML installed in the current environment. To do this, run the following command:

zenml downgrade
reference
https://docs.zenml.io/v/docs/reference/global-settings
411
━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY         │ VALUE                                     ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ ID               │ 96a92154-4ec7-4722-bc18-21eeeadb8a4f      ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ NAME             │ aws-s3 (s3-bucket | s3://zenfiles client) ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ TYPE             │ 🔶 aws                                    ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ AUTH METHOD      │ sts-token                                 ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ RESOURCE TYPES   │ 📦 s3-bucket                              ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ RESOURCE NAME    │ s3://zenfiles                             ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ SECRET ID        │                                           ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ SESSION DURATION │ N/A                                       ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ EXPIRES IN       │ 11h59m56s                                 ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ OWNER            │ default                                   ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ WORKSPACE        │ default                                   ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ SHARED           │ ➖                                        ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ CREATED_AT       │ 2023-06-15 18:56:33.880081                ┃
┠──────────────────┼───────────────────────────────────────────┨
┃ UPDATED_AT       │ 2023-06-15 18:56:33.880082                ┃
┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Configuration
how-to
https://docs.zenml.io/how-to/auth-management
555
ace. Try it out at https://www.zenml.io/live-demo!

No Vendor Lock-In: Since infrastructure is decoupled from code, ZenML gives you the freedom to switch to a different tooling stack whenever it suits you. By avoiding vendor lock-in, you have the flexibility to transition between cloud providers or services, ensuring that you receive the best performance and pricing available in the market at any time.

zenml stack set gcp
python run.py  # Run your ML workflows in GCP
zenml stack set aws
python run.py  # Now your ML workflow runs in AWS

🚀 Learn More

Ready to deploy and manage your MLOps infrastructure with ZenML? Here is a collection of pages you can take a look at next:

Set up and manage production-ready infrastructure with ZenML.
Explore the existing infrastructure and tooling integrations of ZenML.
Find answers to the most frequently asked questions.

ZenML gives data scientists the freedom to fully focus on modeling and experimentation while writing code that is production-ready from the get-go.

Develop Locally: ZenML allows you to develop ML models in any environment using your favorite tools. This means you can start developing locally, and simply switch to a production environment once you are satisfied with your results.

python run.py  # develop your code locally with all your favorite tools
zenml stack set production
python run.py  # run on production infrastructure without any code changes

Pythonic SDK: ZenML is designed to be as unintrusive as possible. Adding a ZenML @step or @pipeline decorator to your Python functions is enough to turn your existing code into ZenML pipelines:

from zenml import pipeline, step

@step
def step_1() -> str:
    return "world"

@step
def step_2(input_one: str, input_two: str) -> None:
    combined_str = input_one + ' ' + input_two
    print(combined_str)

@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

my_pipeline()
null
https://docs.zenml.io
437
client using service connector 'aws-multi-type'...
Updated local kubeconfig with the cluster details. The current kubectl context was set to 'arn:aws:eks:us-east-1:715803424590:cluster/zenhacks-cluster'.
The 'aws-multi-type' Kubernetes Service Connector connector was used to successfully configure the local Kubernetes cluster client/SDK.

# Verify that the local kubectl client is now configured to access the remote Kubernetes cluster
$ kubectl cluster-info
Kubernetes control plane is running at https://A5F8F4142FB12DDCDE9F21F6E9B07A18.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://A5F8F4142FB12DDCDE9F21F6E9B07A18.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

The same is possible with the local Docker client:

zenml service-connector verify aws-session-token --resource-type docker-registry

Example Command Output

Service connector 'aws-session-token' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME    │ CONNECTOR TYPE │ RESOURCE TYPE      │ RESOURCE NAMES                               ┃
┠──────────────────────────────────────┼───────────────────┼────────────────┼────────────────────┼──────────────────────────────────────────────┨
┃ 3ae3e595-5cbc-446e-be64-e54e854e0e3f │ aws-session-token │ 🔶 aws         │ 🐳 docker-registry │ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

zenml service-connector login aws-session-token --resource-type docker-registry

Example Command Output

$ zenml service-connector login aws-session-token --resource-type docker-registry
⠏ Attempting to configure local client using service connector 'aws-session-token'...
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
594
th the appropriate label config with Label Studio.

get_labeled_data step - This step will get all labeled data available for a particular dataset. Note that these are output in a Label Studio annotation format, which will subsequently be converted into a format appropriate for your specific use case.

sync_new_data_to_label_studio step - This step ensures that ZenML is handling the annotations and that the files being used are stored and synced with the ZenML artifact store. This is an important step as part of a continuous annotation workflow, since you want all the subsequent steps of your workflow to remain in sync with whatever new annotations are being made or have been created.

Helper Functions

Label Studio requires the use of what it calls 'label config' when you are creating/registering your dataset. These are strings containing HTML-like syntax that allow you to define a custom interface for your annotation. ZenML provides three helper functions that construct these label config strings for object detection, image classification, and OCR. See the integrations.label_studio.label_config_generators module for those three functions.
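As an illustrative sketch only (the actual generator names and signatures live in the module mentioned above; the one used here is an assumption), generating an image-classification label config could look like:

# Hypothetical usage; check integrations.label_studio.label_config_generators
# for the actual function names and signatures.
from zenml.integrations.label_studio.label_config_generators import (
    generate_image_classification_label_config,
)

label_config, label_config_type = generate_image_classification_label_config(
    ["cat", "dog"]  # your class labels
)
print(label_config)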
stack-components
https://docs.zenml.io/stack-components/annotators/label-studio
232
our active stack:

from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
    ...

MLflow UI

MLflow comes with its own UI that you can use to find further details about your tracked experiments. You can find the URL of the MLflow experiment linked to a specific ZenML run via the metadata of the step in which the experiment tracker was used:

from zenml.client import Client

client = Client()
last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
trainer_step = last_run.get_step("<STEP_NAME>")
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)

This will be the URL of the corresponding experiment in your deployed MLflow instance, or a link to the corresponding MLflow experiment file if you are using local MLflow.

If you are using local MLflow, you can use the mlflow ui command to start MLflow at localhost:5000, where you can then explore the UI in your browser:

mlflow ui --backend-store-uri <TRACKING_URL>

Additional configuration

For additional configuration of the MLflow experiment tracker, you can pass MLFlowExperimentTrackerSettings to create nested runs or add additional tags to your MLflow runs:

import mlflow
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings

mlflow_settings = MLFlowExperimentTrackerSettings(
    nested=True,
    tags={"key": "value"}
)

@step(
    experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.mlflow": mlflow_settings
    }
)
def step_one(
    data: np.ndarray,
) -> np.ndarray:
    ...

Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
stack-components
https://docs.zenml.io/stack-components/experiment-trackers/mlflow
391
ervice Connector credentials are actually working.

When configuring local CLI utilities with credentials extracted from Service Connectors, keep in mind that most Service Connectors, particularly those used with cloud platforms, usually exercise the security best practice of issuing temporary credentials such as API tokens. The implication is that your local CLI may only be allowed access to the remote service for a short time before those credentials expire, after which you need to fetch another set of credentials from the Service Connector.

The following examples show how the local Kubernetes kubectl CLI can be configured with credentials issued by a Service Connector and then used to access a Kubernetes cluster directly:

zenml service-connector list-resources --resource-type kubernetes-cluster

Example Command Output

The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME       │ CONNECTOR TYPE │ RESOURCE TYPE         │ RESOURCE NAMES                      ┃
┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────┨
┃ 9d953320-3560-4a78-817c-926a3898064d │ gcp-user-multi       │ 🔵 gcp         │ 🌀 kubernetes-cluster │ zenml-test-cluster                  ┃
┠──────────────────────────────────────┼──────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────┨
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
424
Migration guide 0.39.1 → 0.41.0

How to migrate your ZenML pipelines and steps from version <=0.39.1 to 0.41.0.

ZenML versions 0.40.0 to 0.41.0 introduced a new and more flexible syntax to define ZenML steps and pipelines. This page contains code samples that show you how to upgrade your steps and pipelines to the new syntax.

Newer versions of ZenML still work with pipelines and steps defined using the old syntax, but the old syntax is deprecated and will be removed in the future.

Overview

from typing import Optional

from zenml.steps import BaseParameters, Output, StepContext, step
from zenml.pipelines import pipeline

# Define a Step
class MyStepParameters(BaseParameters):
    param_1: int
    param_2: Optional[float] = None

@step
def my_step(
    params: MyStepParameters, context: StepContext,
) -> Output(int_output=int, str_output=str):
    result = int(params.param_1 * (params.param_2 or 1))
    result_uri = context.get_output_artifact_uri()
    return result, result_uri

# Run the Step separately
my_step.entrypoint()

# Define a Pipeline
@pipeline
def my_pipeline(my_step):
    my_step()

step_instance = my_step(params=MyStepParameters(param_1=17))
pipeline_instance = my_pipeline(my_step=step_instance)

# Configure and run the Pipeline
pipeline_instance.configure(enable_cache=False)
schedule = Schedule(...)
pipeline_instance.run(schedule=schedule)

# Fetch the Pipeline Run
last_run = pipeline_instance.get_runs()[0]
int_output = last_run.get_step("my_step").outputs["int_output"].read()

from typing import Annotated, Optional, Tuple

from zenml import get_step_context, pipeline, step
from zenml.client import Client

# Define a Step
@step
def my_step(
    param_1: int, param_2: Optional[float] = None
) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]:
    result = int(param_1 * (param_2 or 1))
    result_uri = get_step_context().get_output_artifact_uri()
    return result, result_uri

# Run the Step separately
my_step()

# Define a Pipeline
@pipeline
reference
https://docs.zenml.io/reference/migration-guide/migration-zero-forty
487
ed to update the way they are registered in ZenML.

The updated ZenML server provides a new and improved collaborative experience. When connected to a ZenML server, you can now share your ZenML Stacks and Stack Components with other users. If you were previously using ZenML Profiles or the ZenML server to share your ZenML Stacks, you should switch to the new ZenML server and Dashboard and update your existing workflows to reflect the new features.

ZenML takes over the Metadata Store role

ZenML can now run as a server that can be accessed via a REST API and also comes with a visual user interface (called the ZenML Dashboard). This server can be deployed in arbitrary environments (local, on-prem, via Docker, on AWS, GCP, Azure, etc.) and supports user management, workspace scoping, and more.

The release introduces a series of commands to facilitate managing the lifecycle of the ZenML server and to access the pipeline and pipeline run information:

zenml connect / disconnect / down / up / logs / status can be used to configure your client to connect to a ZenML server, to start a local ZenML Dashboard, or to deploy a ZenML server to a cloud environment. For more information on how to use these commands, see the ZenML deployment documentation.

zenml pipeline list / runs / delete can be used to display information about and manage your pipelines and pipeline runs.

In ZenML 0.13.2 and earlier versions, information about pipelines and pipeline runs used to be stored in a separate stack component called the Metadata Store. Starting with 0.20.0, the role of the Metadata Store is taken over by ZenML itself. This means that the Metadata Store is no longer a separate component in the ZenML architecture, but rather a part of the ZenML core, located wherever ZenML is deployed: locally on your machine or running remotely as a server.
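For instance, connecting a client to a deployed server uses the zenml connect command mentioned above; a sketch with a placeholder URL:

zenml connect --url https://zenml.example.com:8080 --username default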
reference
https://docs.zenml.io/reference/migration-guide/migration-zero-twenty
389
racking import MlflowClient, artifact_utils

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    experiment_tracker = zenml_client.active_stack.experiment_tracker

    # Let's get the run id of the current pipeline
    mlflow_run_id = experiment_tracker.get_run_id(
        experiment_name=get_step_context().pipeline_name,
        run_name=get_step_context().run_name,
    )

    # Once we have the run id, we can get the model URI using the mlflow client
    experiment_tracker.configure_mlflow()
    client = MlflowClient()
    model_name = "model"  # set the model name that was logged
    model_uri = artifact_utils.get_artifact_uri(
        run_id=mlflow_run_id, artifact_path=model_name
    )

    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri=model_uri,
        model_name=model_name,
        workers=1,
        mlserver=False,
        timeout=300,
    )
    service = model_deployer.deploy_model(mlflow_deployment_config)
    return service

Configuration

Within the MLFlowDeploymentService you can configure:

name: The name of the deployment.
description: The description of the deployment.
pipeline_name: The name of the pipeline that deployed the MLflow prediction server.
pipeline_step_name: The name of the step that deployed the MLflow prediction server.
model_name: The name of the model that is deployed. In the case of a model registry, the name must be a valid registered model name.
model_version: The version of the model that is deployed. In the case of a model registry, the version must be a valid registered model version.
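Once returned, the service can be invoked from other steps. The following is only a sketch, assuming the MLFlowDeploymentService exposes start and predict methods taking a NumPy batch:

import numpy as np

# Hypothetical invocation of the deployed prediction server
batch = np.array([[1.0, 2.0, 3.0, 4.0]])
service.start(timeout=60)  # ensure the prediction server is running
prediction = service.predict(batch)
print(prediction)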
stack-components
https://docs.zenml.io/stack-components/model-deployers/mlflow
412
h='/local/path/to/config.yaml'

# Run the pipeline
training_pipeline()

The reference to a local file will change depending on where you are executing the pipeline and code from, so please bear this in mind. It is best practice to put all config files in a configs directory at the root of your repository and check them into git history.

A simple version of such a YAML file could be:

parameters:
  gamma: 0.01

Please note that this would take precedence over any parameters passed in the code. If you are unsure how to format this config file, you can generate a template config file from a pipeline:

training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml')

Check out this section for advanced configuration options.

Full Code Example

This section combines all the code from this section into one simple script that you can use to run easily:

from typing_extensions import Tuple, Annotated
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC

from zenml import pipeline, step

@step
def training_data_loader() -> Tuple[
    Annotated[pd.DataFrame, "X_train"],
    Annotated[pd.DataFrame, "X_test"],
    Annotated[pd.Series, "y_train"],
    Annotated[pd.Series, "y_test"],
]:
    """Load the iris dataset as a tuple of Pandas DataFrame / Series."""
    iris = load_iris(as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42
    )
    return X_train, X_test, y_train, y_test

@step
def svc_trainer(
    X_train: pd.DataFrame,
    y_train: pd.Series,
    gamma: float = 0.001,
) -> Tuple[
    Annotated[ClassifierMixin, "trained_model"],
    Annotated[float, "training_acc"],
]:
    """Train a sklearn SVC classifier and log to MLflow."""
    model = SVC(gamma=gamma)
    model.fit(X_train.to_numpy(), y_train.to_numpy())
    train_acc = model.score(X_train.to_numpy(), y_train.to_numpy())
    print(f"Train accuracy: {train_acc}")
user-guide
https://docs.zenml.io/v/docs/user-guide/starter-guide/create-an-ml-pipeline
470
N -x mlflow_bucket=gs://my_bucket

Artifact Stores

For an artifact store, you can pass bucket_name as an argument to the command:

zenml artifact-store deploy s3_artifact_store --flavor=s3 --provider=aws -r YOUR_REGION -x bucket_name=my_bucket

Container Registries

For container registries, you can pass the repository name using repo_name:

zenml container-registry deploy aws_registry --flavor=aws -p aws -r YOUR_REGION -x repo_name=my_repo

This is only useful for the AWS case, since AWS requires a repository to be created before pushing images to it, and the deploy command ensures that a repository with the name you provide is created. In the case of GCP and other providers, you can choose the repository name at the same time as you are pushing the image via code. This is achieved by setting the target_repo attribute of the DockerSettings object, as sketched below.

Other configuration

In the case of GCP components, you are required to pass a project ID to the command as extra configuration when you're creating any GCP resource.
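A sketch of the DockerSettings route mentioned above (the target_repo value is a placeholder):

from zenml import pipeline
from zenml.config import DockerSettings

# Choose the repository name at image push time instead of at deploy time
docker_settings = DockerSettings(target_repo="my_repo")

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...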
how-to
https://docs.zenml.io/how-to/stack-deployment/deploy-a-stack-component
242
View logs on the dashboard

By default, ZenML uses a logging handler to capture the logs that occur during the execution of a step. Users are free to use the default Python logging module or print statements, and ZenML's logging handler will catch these logs and store them.

import logging

from zenml import step

@step
def my_step() -> None:
    logging.warning("`Hello`")  # You can use the regular `logging` module.
    print("World.")  # You can utilize `print` statements as well.

These logs are stored within the respective artifact store of your stack. This means that you can only view these logs in the dashboard if the deployed ZenML server has direct access to the underlying artifact store. There are two cases in which this will be true:

In the case of a local ZenML server (via zenml up), both local and remote artifact stores may be accessible, depending on the configuration of the client.

In the case of a deployed ZenML server, logs for runs on a local artifact store will not be accessible. Logs for runs using a remote artifact store may be accessible if the artifact store has been configured with a service connector. Please read this chapter of the production guide to learn how to configure a remote artifact store with a service connector.

If configured correctly, the logs are displayed in the dashboard as follows:

If you do not want to store the logs for your pipeline (for example, due to performance reduction or storage limits), you can follow these instructions.
how-to
https://docs.zenml.io/v/docs/how-to/control-logging/view-logs-on-the-dasbhoard
319
Kubeflow

Run your ML pipelines on Kubeflow Pipelines.

The ZenML Kubeflow Orchestrator allows you to run your ML pipelines on Kubeflow Pipelines without writing Kubeflow code.

Prerequisites

To use the Kubeflow Orchestrator, you'll need:

ZenML kubeflow integration installed (zenml integration install kubeflow)
Docker installed and running
kubectl installed (optional, see below)
A Kubernetes cluster with Kubeflow Pipelines installed (see deployment guide for your cloud provider)
A remote artifact store and container registry in your ZenML stack
A remote ZenML server deployed to the cloud
The name of your Kubernetes context pointing to the remote cluster (optional, see below)

Configuring the Orchestrator

There are two ways to configure the orchestrator:

1. Using a Service Connector to connect to the remote cluster (recommended for cloud-managed clusters). No local kubectl context needed:

zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubeflow
zenml service-connector list-resources --resource-type kubernetes-cluster -e
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack update -o <ORCHESTRATOR_NAME>

2. Configuring kubectl with a context pointing to the remote cluster and setting kubernetes_context in the orchestrator config:

zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=kubeflow \
    --kubernetes_context=<KUBERNETES_CONTEXT>

zenml stack update -o <ORCHESTRATOR_NAME>

Running a Pipeline

Once configured, you can run any ZenML pipeline using the Kubeflow Orchestrator:

python your_pipeline.py

This will create a Kubernetes pod for each step in your pipeline. You can view pipeline runs in the Kubeflow UI.

Additional Configuration

You can further configure the orchestrator using KubeflowOrchestratorSettings:

from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings

kubeflow_settings = KubeflowOrchestratorSettings(
    client_args={},
how-to
https://docs.zenml.io/how-to/popular-integrations/kubeflow
455
fully utilize the platform. Maximum data security At ZenML Pro, your data security and privacy are our top priority. The platform enables a secure connection to your infrastructure, tracking only metadata via an encrypted connection to maintain the confidentiality of your sensitive information. ZenML Pro integrates smoothly with your cloud services via service connectors, allowing a straightforward connection with various cloud resources without sacrificing data security. We hold your confidential information in a secure and isolated environment, offering an extra degree of protection. If desired, you can even supply your own secret store. Click here to learn more about the ZenML Pro system architecture. PreviousSystem Architectures NextUser Management Last updated 12 days ago
getting-started
https://docs.zenml.io/v/docs/getting-started/zenml-pro/zenml-cloud
133
in our active stack. This can be done in two ways: If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:

$ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.
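To finish wiring this up, the registered orchestrator is then attached to the Service Connector and added to the stack. The commands below mirror the ones shown in the Kubeflow section of these docs; the connector name is a placeholder:

zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack update -o <ORCHESTRATOR_NAME>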
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/kubernetes
125
ons configuration is a file on your local machine. migrate your existing Great Expectations configuration to ZenML. This is a compromise between 1. and 2. that allows you to continue to use your existing Data Sources, Metadata Stores and Data Docs sites even when running pipelines remotely. Some Great Expectations CLI commands will not work well with the deployment methods that put ZenML in charge of your Great Expectations configuration (i.e. 1. and 3.). You will be required to use Python code to manage your Expectations and you will have to edit the Jupyter notebooks generated by the Great Expectations CLI to connect them to your ZenML-managed configuration. The default Data Validator setup plugs Great Expectations directly into the Artifact Store component that is part of the same stack. As a result, the Expectation Suites, Validation Results and Data Docs are stored in the ZenML Artifact Store and you don't have to configure Great Expectations at all; ZenML takes care of that for you:

# Register the Great Expectations data validator
zenml data-validator register ge_data_validator --flavor=great_expectations

# Register and set a stack with the new data validator
zenml stack register custom_stack -dv ge_data_validator ... --set

If you have an existing Great Expectations configuration that you would like to reuse with your ZenML pipelines, the Data Validator allows you to do so. All you need is to point it to the folder where your local great_expectations.yaml configuration file is located:

# Register the Great Expectations data validator
zenml data-validator register ge_data_validator --flavor=great_expectations \
    --context_root_dir=/path/to/my/great_expectations

# Register and set a stack with the new data validator
zenml stack register custom_stack -dv ge_data_validator ... --set
stack-components
https://docs.zenml.io/stack-components/data-validators/great-expectations
370
ent using service connector 'aws-session-token'...

WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

The 'aws-session-token' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.

# Verify that the local Docker client is now configured to access the remote Docker container registry
$ docker pull 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server

Using default tag: latest
latest: Pulling from zenml-server
e9995326b091: Pull complete
f3d7f077cdde: Pull complete
0db71afa16f3: Pull complete
6f0b5905c60c: Pull complete
9d2154d50fd1: Pull complete
d072bba1f611: Pull complete
20e776588361: Pull complete
3ce69736a885: Pull complete
c9c0554c8e6a: Pull complete
bacdcd847a66: Pull complete
482033770844: Pull complete
Digest: sha256:bf2cc3895e70dfa1ee1cd90bbfa599fa4cd8df837e27184bac1ce1cc239ecd3f
Status: Downloaded newer image for 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server:latest
715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml-server:latest

Discover available resources One of the questions that you may have as a ZenML user looking to register and connect a Stack Component to an external resource is "what resources do I even have access to?". Sure, you can browse through all the registered Service Connectors and manually verify each one to find a particular resource that you are looking for, but this is counterproductive. A better way is to ask ZenML directly questions such as: what are the Kubernetes clusters that I can get access to through Service Connectors? can I access this particular S3 bucket through one of the Service Connectors? Which one? The zenml service-connector list-resources CLI command can be used exactly for this purpose (see the examples below).
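For instance, both questions can be answered directly with that command (a sketch; the bucket name is taken from the examples earlier in this guide, and the actual output depends on your registered connectors):

# List all Kubernetes clusters reachable through any registered Service Connector
zenml service-connector list-resources --resource-type kubernetes-cluster

# Check which connector, if any, can access a particular S3 bucket
zenml service-connector list-resources --resource-type s3-bucket --resource-id s3://zenfiles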
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
487
β”‚ β”‚ β”‚ β”‚ ┃
┃ β”‚ β”‚ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ β”‚ β”‚ β”‚ β”‚ ┃
┃ β”‚ β”‚ β”‚ β”‚ 🐳 docker-registry β”‚ β”‚ β”‚ β”‚ β”‚ ┃
┗━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━┛

This checks the Kubernetes clusters that the AWS Service Connector has access to:

zenml service-connector verify aws-session-token --resource-type kubernetes-cluster

Example Command Output

Service connector 'aws-session-token' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠───────────────────────┼──────────────────┨
┃ πŸŒ€ kubernetes-cluster β”‚ zenhacks-cluster ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛

Running the login CLI command will configure the local kubectl CLI to access the Kubernetes cluster:

zenml service-connector login aws-session-token --resource-type kubernetes-cluster --resource-id zenhacks-cluster

Example Command Output

β ‡ Attempting to configure local client using service connector 'aws-session-token'...
Cluster "arn:aws:eks:us-east-1:715803424590:cluster/zenhacks-cluster" set.
Context "arn:aws:eks:us-east-1:715803424590:cluster/zenhacks-cluster" modified.
Updated local kubeconfig with the cluster details.
The current kubectl context was set to 'arn:aws:eks:us-east-1:715803424590:cluster/zenhacks-cluster'.
The 'aws-session-token' Kubernetes Service Connector connector was used to successfully configure the local Kubernetes cluster client/SDK.

The following can be used to check that the local kubectl CLI is correctly configured:

kubectl cluster-info

Example Command Output
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
527
ate output with `ArtifactConfig` giving it a name,

# below we annotate output with `ArtifactConfig` giving it a name,
# run_metadata and tags. As a result, the created artifact
# `artifact_name` will get configured with metadata and tags
@step
def annotation_approach() -> (
    Annotated[
        str,
        ArtifactConfig(
            name="artifact_name",
            run_metadata={"metadata_key": "metadata_value"},
            tags=["tag_name"],
        ),
    ]
):
    return "string"

# below we annotate output using the functional approach with
# run_metadata and tags. As a result, the created artifact
# `artifact_name` will get configured with metadata and tags
@step
def annotation_approach() -> Annotated[str, "artifact_name"]:
    step_context = get_step_context()
    step_context.add_output_metadata(
        output_name="artifact_name", metadata={"metadata_key": "metadata_value"}
    )
    step_context.add_output_tags(output_name="artifact_name", tags=["tag_name"])
    return "string"

# below we combine both approaches, so the artifact will get
# metadata and tags from both sources
@step
def annotation_approach() -> (
    Annotated[
        str,
        ArtifactConfig(
            name="artifact_name",
            run_metadata={"metadata_key": "metadata_value"},
            tags=["tag_name"],
        ),
    ]
):
    step_context = get_step_context()
    step_context.add_output_metadata(
        output_name="artifact_name", metadata={"metadata_key2": "metadata_value2"}
    )
    step_context.add_output_tags(output_name="artifact_name", tags=["tag_name2"])
    return "string"

Consuming external artifacts within a pipeline While most pipelines start with a step that produces an artifact, it is often the case that you want to consume artifacts external to the pipeline. The ExternalArtifact class can be used to initialize an artifact within ZenML with any arbitrary data type. For example, let's say we have a Snowflake query that produces a dataframe, or a CSV file that we need to read. External artifacts can be used for this, to pass values to steps that are neither JSON serializable nor produced by an upstream step:

import numpy as np

from zenml import ExternalArtifact, pipeline, step

@step
def print_data(data: np.ndarray):
    print(data)
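To complete the picture, here is a minimal sketch of how such a step is typically wired up with an ExternalArtifact (the pipeline name and the array value are illustrative assumptions):

@pipeline
def printing_pipeline():
    # An `ExternalArtifact` wraps an in-memory value so it can be passed
    # to a step just like an upstream artifact would be
    data = ExternalArtifact(value=np.array([0]))
    print_data(data=data)

if __name__ == "__main__":
    printing_pipeline()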
user-guide
https://docs.zenml.io/v/docs/user-guide/starter-guide/manage-artifacts
441
ge or if the ZenML version doesn't change at all). a backup file or database is created before every database migration attempt (i.e. during every Helm upgrade). If a backup already exists (i.e. persisted in a persistent volume or backup database), it is overwritten. the persistent backup file or database is cleaned up after the migration is completed successfully or if the database doesn't need to undergo a migration. This includes backups created by previous failed migration attempts. the persistent backup file or database is NOT cleaned up after a failed migration. This allows the user to manually inspect and/or apply the backup if the automatic recovery fails. The following example shows how to configure the ZenML server to use a persistent volume to store the database dump file:

zenml:
  # ...
  database:
    url: "mysql://admin:[email protected]:3306/zenml"
    # Configure the database backup strategy
    backupStrategy: dump-file
    backupPVStorageSize: 1Gi
  podSecurityContext:
    fsGroup: 1000  # if you're using a PVC for backup, this should necessarily be set.

PreviousDeploy with Docker NextDeploy using HuggingFace Spaces Last updated 15 days ago
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm
242
to get a quick global overview of our performance.

# passing the results from all our previous evaluation steps
@step(enable_cache=False)
def visualize_evaluation_results(
    small_retrieval_eval_failure_rate: float,
    small_retrieval_eval_failure_rate_reranking: float,
    full_retrieval_eval_failure_rate: float,
    full_retrieval_eval_failure_rate_reranking: float,
    failure_rate_bad_answers: float,
    failure_rate_bad_immediate_responses: float,
    failure_rate_good_responses: float,
    average_toxicity_score: float,
    average_faithfulness_score: float,
    average_helpfulness_score: float,
    average_relevance_score: float,
) -> Optional[Image.Image]:
    """Visualizes the evaluation results."""
    step_context = get_step_context()
    pipeline_run_name = step_context.pipeline_run.name

    normalized_scores = [
        score / 20
        for score in [
            small_retrieval_eval_failure_rate,
            small_retrieval_eval_failure_rate_reranking,
            full_retrieval_eval_failure_rate,
            full_retrieval_eval_failure_rate_reranking,
            failure_rate_bad_answers,
        ]
    ]

    scores = normalized_scores + [
        failure_rate_bad_immediate_responses,
        failure_rate_good_responses,
        average_toxicity_score,
        average_faithfulness_score,
        average_helpfulness_score,
        average_relevance_score,
    ]

    labels = [
        "Small Retrieval Eval Failure Rate",
        "Small Retrieval Eval Failure Rate Reranking",
        "Full Retrieval Eval Failure Rate",
        "Full Retrieval Eval Failure Rate Reranking",
        "Failure Rate Bad Answers",
        "Failure Rate Bad Immediate Responses",
        "Failure Rate Good Responses",
        "Average Toxicity Score",
        "Average Faithfulness Score",
        "Average Helpfulness Score",
        "Average Relevance Score",
    ]

    # Create a new figure and axis
    fig, ax = plt.subplots(figsize=(10, 6))

    # Plot the horizontal bar chart
    y_pos = np.arange(len(labels))
    ax.barh(y_pos, scores, align="center")
    ax.set_yticks(y_pos)
    ax.set_yticklabels(labels)
    ax.invert_yaxis()  # Labels read top-to-bottom
    ax.set_xlabel("Score")
    ax.set_xlim(0, 5)
    ax.set_title(f"Evaluation Metrics for {pipeline_run_name}")

    # Adjust the layout
    plt.tight_layout()
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/reranking/evaluating-reranking-performance
454
nswer is satisfactory, or 'NO' if it is not.",
            },
            {
                "role": "user",
                "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?",
            },
        ],
        model="gpt-3.5-turbo",
    )
    judgment = chat_completion.choices[0].message.content.strip().lower()
    return judgment == "yes"

retrieval_scores = []
generation_scores = []

for item in eval_data:
    retrieval_score = evaluate_retrieval(
        item["question"], item["expected_answer"], corpus
    )
    retrieval_scores.append(retrieval_score)

    generated_answer = answer_question(item["question"], corpus)
    generation_score = evaluate_generation(
        item["question"], item["expected_answer"], generated_answer
    )
    generation_scores.append(generation_score)

retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores)
generation_accuracy = sum(generation_scores) / len(generation_scores)

print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}")
print(f"Generation Accuracy: {generation_accuracy:.2f}")

As you can see, we've added two evaluation functions: evaluate_retrieval and evaluate_generation. The evaluate_retrieval function checks if the retrieved chunks contain any words from the expected answer. The evaluate_generation function uses OpenAI's chat completion LLM to evaluate the quality of the generated answer. We then loop through the evaluation data, which contains questions and expected answers, and evaluate the retrieval and generation components of our RAG pipeline. Finally, we calculate the accuracy of both components and print the results. As you can see, we get 100% accuracy for both retrieval and generation in this example. Not bad! The sections that follow will provide a more detailed and sophisticated implementation of RAG evaluation, but this example shows how you can think about it at a high level! PreviousEvaluation and metrics NextRetrieval evaluation Last updated 19 days ago
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/evaluation-in-65-loc
411
━━━━━━┷━━━━━━━━┷━━━━━━━━━┛

Other stack components There are many more components that you can add to your stacks, like experiment trackers, model deployers, and more. You can see all supported stack component types in a single table view here. Perhaps the most important stack component after the orchestrator and the artifact store is the container registry. A container registry stores all your containerized images, which hold all your code and the environment needed to execute them. We will learn more about them in the next section! Registering a stack Just to illustrate how to interact with stacks, let's create an alternate local stack. We start by first creating a local artifact store. Create an artifact store:

zenml artifact-store register my_artifact_store --flavor=local

Let's understand the individual parts of this command: artifact-store: This describes the top-level group; to find other stack components, simply run zenml --help. register: Here we want to register a new component; instead, we could also update, delete, and more. zenml artifact-store --help will give you all possibilities. my_artifact_store: This is the unique name that the stack component will have. --flavor=local: A flavor is a possible implementation for a stack component. So in the case of an artifact store, this could be an s3-bucket or a local filesystem. You can find out all possibilities with zenml artifact-store flavor --list. This will be the output that you can expect from the command above:

Using the default local database.
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered artifact_store `my_artifact_store`.

To see the new artifact store that you just registered, just run:

zenml artifact-store describe my_artifact_store

Create a local stack With the artifact store created, we can now create a new stack with this artifact store:

zenml stack register a_new_local_stack -o default -a my_artifact_store
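As a quick follow-up sketch (the stack name comes from the command above; zenml stack describe is the standard inspection command, analogous to the artifact-store describe call shown earlier):

# Inspect the newly registered stack and its components
zenml stack describe a_new_local_stack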
user-guide
https://docs.zenml.io/v/docs/user-guide/production-guide/understand-stacks
420
. If not set, the cluster will not be autostopped. down: Tear down the cluster after all jobs finish (successfully or abnormally). If idle_minutes_to_autostop is also set, the cluster will be torn down after the specified idle time. Note that if errors occur during provisioning/data syncing/setting up, the cluster will not be torn down for debugging purposes. stream_logs: If True, show the logs in the terminal as they are generated while the cluster is running. docker_run_args: Additional arguments to pass to the docker run command. For example, ['--gpus=all'] to use all GPUs available on the VM. The following code snippets show how to configure the orchestrator settings for each cloud provider: Code Example:

from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings

skypilot_settings = SkypilotAWSOrchestratorSettings(
    cpus="2",
    memory="16",
    accelerators="V100:2",
    accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"},
    use_spot=True,
    spot_recovery="recovery_strategy",
    region="us-west-1",
    zone="us-west1-a",
    image_id="ami-1234567890abcdef0",
    disk_size=100,
    disk_tier="high",
    cluster_name="my_cluster",
    retry_until_up=True,
    idle_minutes_to_autostop=60,
    down=True,
    stream_logs=True,
    docker_run_args=["--gpus=all"],
)

@pipeline(
    settings={
        "orchestrator.vm_aws": skypilot_settings
    }
)

Code Example:

from zenml.integrations.skypilot_gcp.flavors.skypilot_orchestrator_gcp_vm_flavor import SkypilotGCPOrchestratorSettings

skypilot_settings = SkypilotGCPOrchestratorSettings(
    cpus="2",
    memory="16",
    accelerators="V100:2",
    accelerator_args={"tpu_vm": True, "runtime_version": "tpu-vm-base"},
    use_spot=True,
    spot_recovery="recovery_strategy",
    region="us-west1",
    zone="us-west1-a",
    image_id="ubuntu-pro-2004-focal-v20231101",
    disk_size=100,
    disk_tier="high",
    cluster_name="my_cluster",
    retry_until_up=True,
    idle_minutes_to_autostop=60,
    down=True,
    stream_logs=True,
)

@pipeline(
    settings={
        "orchestrator.vm_gcp": skypilot_settings
    }
)
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/skypilot-vm
533
ucket β”‚ user-account β”‚ β”‚ ┃
┃ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ service-account β”‚ β”‚ ┃
┃ β”‚ β”‚ 🐳 docker-registry β”‚ oauth2-token β”‚ β”‚ ┃
┃ β”‚ β”‚ β”‚ impersonation β”‚ β”‚ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
```

Register an individual single-instance GCP Service Connector using auto-configuration for each of the resources that will be needed for the Stack Components: a GCS bucket, a GCR registry, and generic GCP access for the VertexAI orchestrator and another one for the GCP Cloud Builder:

```sh
zenml service-connector register gcs-zenml-bucket-sl --type gcp --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl --auto-configure
```

Example Command Output

```text
Successfully registered service connector `gcs-zenml-bucket-sl` with access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠───────────────┼──────────────────────┨
┃ πŸ“¦ gcs-bucket β”‚ gs://zenml-bucket-sl ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━┛
```

```sh
zenml service-connector register gcr-zenml-core --type gcp --resource-type docker-registry --auto-configure
```

Example Command Output

```text
Successfully registered service connector `gcr-zenml-core` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠────────────────────┼───────────────────┨
┃ 🐳 docker-registry β”‚ gcr.io/zenml-core ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┛
```

```sh
zenml service-connector register vertex-ai-zenml-core --type gcp --resource-type gcp-generic --auto-configure
```

Example Command Output

```text
Successfully registered service connector `vertex-ai-zenml-core` with access to the following resources:
┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
618
user or auto-configured from a local environment. This method has the major limitation that the user must regularly generate new tokens and update the connector configuration as API tokens expire. On the other hand, this method is ideal in cases where the connector only needs to be used for a short period of time, such as sharing access temporarily with someone else in your team. This is the authentication method used during auto-configuration, if you have the local Azure CLI set up with credentials. The connector will generate an access token from the Azure CLI credentials and store it in the connector configuration. Given that Azure access tokens are scoped to a particular Azure resource and the access token generated during auto-configuration is scoped to the Azure Management API, this method does not work with Azure blob storage resources. You should use the Azure service principal authentication method for blob storage resources instead. Fetching Azure session tokens from the local Azure CLI is possible if the Azure CLI is already configured with valid credentials (i.e. by running az login):

zenml service-connector register azure-session-token --type azure --auto-configure

Example Command Output

β ™ Registering service connector 'azure-session-token'...
connector authorization failure: the 'access-token' authentication method is not supported for blob storage resources
Successfully registered service connector `azure-session-token` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨
how-to
https://docs.zenml.io/how-to/auth-management/azure-service-connector
394
─────────────────────────────────────────────────┨┃ β”‚ β”‚ β”‚ πŸŒ€ kubernetes-cluster β”‚ zenhacks-cluster ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ β”‚ β”‚ β”‚ 🐳 docker-registry β”‚ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ fa9325ab-ce01-4404-aec3-61a3af395d48 β”‚ aws-s3-multi-instance β”‚ πŸ”Ά aws β”‚ πŸ“¦ s3-bucket β”‚ s3://aws-ia-mwaa-715803424590 ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenfiles ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenml-demos ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenml-generative-chat ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenml-public-datasets ┃
how-to
https://docs.zenml.io/how-to/auth-management/service-connectors-guide
301
e pipeline will not start correctly. How it works The HyperAI orchestrator works with Docker Compose, which can be used to construct machine learning pipelines. Under the hood, it creates a Docker Compose file which it then deploys and executes on the configured HyperAI instance. For each ZenML pipeline step, it creates a service in this file. It uses the service_completed_successfully condition to ensure that pipeline steps will only run if their connected upstream steps have successfully finished. If configured for it, the HyperAI orchestrator will connect the HyperAI instance to the stack's container registry to ensure a smooth transfer of Docker images. Scheduled pipelines Scheduled pipelines are supported by the HyperAI orchestrator. Currently, the HyperAI orchestrator supports the following inputs to Schedule (see the sketch below for typical usage): Cron expressions via cron_expression. When pipeline runs are scheduled, they are added as a crontab entry on the HyperAI instance. Use this when you want pipelines to run at intervals. Using cron expressions assumes that crontab is available on your instance and that its daemon is running. Scheduled runs via run_once_start_time. When pipeline runs are scheduled this way, they are added as an at entry on the HyperAI instance. Use this when you want pipelines to run just once and at a specified time. This assumes that at is available on your instance. How to deploy it To use the HyperAI orchestrator, you must configure a HyperAI Service Connector in ZenML and link it to the HyperAI orchestrator component. The service connector contains the credentials with which ZenML connects to the HyperAI instance. Additionally, the HyperAI orchestrator must be used in a stack that contains a container registry and an image builder. How to use it To use the HyperAI orchestrator, we must first configure a HyperAI Service Connector using one of its supported authentication methods. For example, for authentication with an RSA-based key, create the service connector as follows:
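A hedged sketch of what scheduling a pipeline on this orchestrator typically looks like, using ZenML's standard Schedule class (the pipeline name and cron string are illustrative assumptions):

from zenml import pipeline
from zenml.config.schedule import Schedule

@pipeline
def my_pipeline() -> None:
    ...

# Added as a crontab entry on the HyperAI instance: runs at the top of every hour
scheduled_pipeline = my_pipeline.with_options(
    schedule=Schedule(cron_expression="0 * * * *")
)
scheduled_pipeline()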
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/hyperai
391
messages. Registering a Discord Alerter in ZenML Next, you need to register a discord alerter in ZenML and link it to the bot you just created. You can do this with the following command:

zenml alerter register discord_alerter \
    --flavor=discord \
    --discord_token=<DISCORD_TOKEN> \
    --default_discord_channel_id=<DISCORD_CHANNEL_ID>

After you have registered the discord_alerter, you can add it to your stack like this:

zenml stack register ... -al discord_alerter

Here is where you can find the required parameters: DISCORD_CHANNEL_ID: Open the Discord server, then right-click on the text channel and click on the 'Copy Channel ID' option. If you don't see any 'Copy Channel ID' option for your channel, go to "User Settings" > "Advanced" and make sure "Developer Mode" is active. DISCORD_TOKEN: This is the Discord token of your bot. You can find the instructions on how to set up a bot, invite it to your channel, and find its token here. When inviting the bot to your channel, make sure it has at least the following permissions: Read Messages/View Channels, Send Messages, Send Messages in Threads. How to Use the Discord Alerter After you have a DiscordAlerter configured in your stack, you can directly import the discord_alerter_post_step and discord_alerter_ask_step steps and use them in your pipelines. Since these steps expect a string message as input (which needs to be the output of another step), you typically also need to define a dedicated formatter step that takes whatever data you want to communicate and generates the string message that the alerter should post. As an example, adding discord_alerter_ask_step() to your pipeline could look like this (a hedged sketch of the truncated pipeline body follows the snippet):

from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step
from zenml import step, pipeline

@step
def my_formatter_step(artifact_to_be_communicated) -> str:
    return f"Here is my artifact {artifact_to_be_communicated}!"

@pipeline
def my_pipeline(...):
    ...
    artifact_to_be_communicated = ...
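The chunk cuts off mid-pipeline; based on the formatter-plus-ask-step pattern just described, the body typically continues along these lines (an assumption, not the verbatim original):

    # Hypothetical continuation: format the artifact and ask for approval
    message = my_formatter_step(artifact_to_be_communicated)
    approved = discord_alerter_ask_step(message)
    ...  # e.g. only deploy if `approved` is True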
stack-components
https://docs.zenml.io/v/docs/stack-components/alerters/discord
461
e your local Docker client to the remote registry:

zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry --resource-id <CONTAINER_REGISTRY_URI>

Example Command Output

$ zenml service-connector login azure-demo --resource-type docker-registry --resource-id demozenmlcontainerregistry.azurecr.io
β Ή Attempting to configure local client using service connector 'azure-demo'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

The 'azure-demo' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.

For more information and a full list of configurable attributes of the Azure container registry, check out the SDK Docs. PreviousGoogle Cloud Container Registry NextGitHub Container Registry Last updated 19 days ago
stack-components
https://docs.zenml.io/v/docs/stack-components/container-registries/azure
193
default ┃┠──────────────────┼─────────────────────────────────────────────────┨ ┃ WORKSPACE β”‚ default ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ SHARED β”‚ βž– ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ CREATED_AT β”‚ 2023-06-19 18:13:34.146659 ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ UPDATED_AT β”‚ 2023-06-19 18:13:34.146664 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠───────────────────────┼───────────┨ ┃ region β”‚ us-east-1 ┃ ┠───────────────────────┼───────────┨ ┃ aws_access_key_id β”‚ [HIDDEN] ┃ ┠───────────────────────┼───────────┨ ┃ aws_secret_access_key β”‚ [HIDDEN] ┃ ┠───────────────────────┼───────────┨ ┃ aws_session_token β”‚ [HIDDEN] ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┛ zenml service-connector describe aws-implicit --resource-type s3-bucket --resource-id s3://sagemaker-studio-d8a14tvjsmb --client Example Command Output INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials Service connector 'aws-implicit (s3-bucket | s3://sagemaker-studio-d8a14tvjsmb client)' of type 'aws' with id 'e3853748-34a0-4d78-8006-00422ad32884' is owned by user 'default' and is 'private'. 'aws-implicit (s3-bucket | s3://sagemaker-studio-d8a14tvjsmb client)' aws Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────┨
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
612
steps_prefix=search_steps_prefix, after=after ...

The main challenge of this implementation is that it is currently not possible to pass a variable number of artifacts into a step programmatically, so the select_model_step needs to query all artifacts produced by the previous steps via the ZenML Client instead:

from zenml import step, get_step_context
from zenml.client import Client

@step
def select_model_step():
    run_name = get_step_context().pipeline_run.name
    run = Client().get_pipeline_run(run_name)

    # Fetch all models trained by a 'train_step' before
    trained_models_by_lr = {}
    for step_name, step_info in run.steps.items():  # renamed to avoid shadowing the `step` decorator
        if step_name.startswith("train_step"):
            for output_name, output in step_info.outputs.items():
                if output_name == "<NAME_OF_MODEL_OUTPUT_IN_TRAIN_STEP>":
                    model = output.load()
                    lr = step_info.config.parameters["learning_rate"]
                    trained_models_by_lr[lr] = model

    # Evaluate the models to find the best one
    for lr, model in trained_models_by_lr.items():
        ...

To set up the local environment used below, follow the recommendations from the Project templates. In the steps/hp_tuning folder, you will find two step files, which can be used as a starting point for building your own hyperparameter search tailored specifically to your use case: hp_tuning_single_search(...) performs a randomized search for the best model hyperparameters in a configured space. hp_tuning_select_best_model(...) searches for the best hyperparameters, looping over the results of previous random searches to find the best model according to a defined metric. PreviousUse failure/success hooks NextVersion pipelines Last updated 19 days ago
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/hyper-parameter-tuning
340
more information. Get the last run of a pipeline To access the most recent run of a pipeline, you can either use the last_run property or access it through the runs list:

last_run = pipeline_model.last_run  # OR: pipeline_model.runs[0]

If your most recent runs have failed, and you want to find the last run that has succeeded, you can use the last_successful_run property instead. Get the latest run from a pipeline Calling a pipeline executes it and then returns the response of the freshly executed run:

run = training_pipeline()

The run that you get back is the model stored in the ZenML database at the point of the method call. This means the pipeline run is still initializing and no steps have been run. To get the latest state, you can fetch a refreshed version from the client:

from zenml.client import Client

run = Client().get_pipeline_run(run.id)

Get a run via the client If you already know the exact run that you want to fetch (e.g., from looking at the dashboard), you can use the Client.get_pipeline_run() method to fetch the run directly without having to query the pipeline first:

from zenml.client import Client

pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466")

Similar to pipelines, you can query runs by either ID, name, or name prefix, and you can also discover runs through the Client or CLI via the Client.list_pipeline_runs() or zenml pipeline runs list commands (a sketch follows below). Run information Each run has a collection of useful information which can help you reproduce your runs. In the following, you can find a list of some of the most useful pipeline run information, but there is much more available. See the PipelineRunResponse definition for a comprehensive list. Status The status of a pipeline run. There are five possible states: initialized, failed, completed, running, and cached.

status = run.status

Configuration
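For completeness, a hedged sketch of listing runs programmatically (the exact filter arguments accepted by list_pipeline_runs should be checked against the Client SDK docs):

from zenml.client import Client

# Iterate over the most recent pipeline runs and print basic info
for pipeline_run in Client().list_pipeline_runs():
    print(pipeline_run.name, pipeline_run.status)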
how-to
https://docs.zenml.io/how-to/build-pipelines/fetching-pipelines
405
s/kube-system/services/https:metrics-server:/proxy

A similar process is possible with GCR container registries:

zenml service-connector verify gcp-user-account --resource-type docker-registry

Example Command Output

Service connector 'gcp-user-account' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE β”‚ RESOURCE NAMES ┃
┠────────────────────┼───────────────────┨
┃ 🐳 docker-registry β”‚ gcr.io/zenml-core ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━┛

zenml service-connector login gcp-user-account --resource-type docker-registry

Example Command Output

β ¦ Attempting to configure local client using service connector 'gcp-user-account'...
WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

The 'gcp-user-account' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.

To verify that the local Docker container registry client is correctly configured, the following command can be used:

docker push gcr.io/zenml-core/zenml-server:connectors

Example Command Output

The push refers to repository [gcr.io/zenml-core/zenml-server]
d4aef4f5ed86: Pushed
2d69a4ce1784: Pushed
204066eca765: Pushed
2da74ab7b0c1: Pushed
75c35abda1d1: Layer already exists
415ff8f0f676: Layer already exists
c14cb5b1ec91: Layer already exists
a1d005f5264e: Layer already exists
3a3fd880aca3: Layer already exists
149a9c50e18e: Layer already exists
1f6d3424b922: Layer already exists
8402c959ae6f: Layer already exists
419599cb5288: Layer already exists
8553b91047da: Layer already exists
connectors: digest: sha256:a4cfb18a5cef5b2201759a42dd9fe8eb2f833b788e9d8a6ebde194765b42fe46 size: 3256
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
557
model. How do you use it? Deploy a logged model Following MLflow's documentation, if we want to deploy a model as a local inference server, we need the model to be logged in the MLflow experiment tracker first. Once the model is logged, we can use the model URI either from the artifact path saved with the MLflow run or using the model name and version if the model is registered in the MLflow model registry. In the following examples, we will show how to deploy a model using the MLflow Model Deployer, in two different scenarios: We already know the logged model URI and we want to deploy it as a local inference server (imports below reconstruct the ones the snippet needs; verify the exact paths against the MLflow integration SDK docs):

from typing import Optional

from zenml import pipeline, step, get_step_context
from zenml.client import Client
from zenml.constants import DEFAULT_SERVICE_START_STOP_TIMEOUT
from zenml.integrations.mlflow.services import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT,
    )
    service = model_deployer.deploy_model(mlflow_deployment_config)
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service

We don't know the logged model URI, since the model was logged in a previous step. We want to deploy the model as a local inference server. ZenML provides a set of functionalities that make it easier to get the model URI from the current run and deploy it.

from zenml import pipeline, step, get_step_context
from zenml.client import Client
from mlflow.tracking import MlflowClient, artifact_utils

@step
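A minimal sketch of wiring the first step above into a pipeline (the pipeline name is an illustrative assumption):

@pipeline
def deployment_pipeline():
    # Runs the deployment step on the active stack's model deployer
    deploy_model()

if __name__ == "__main__":
    deployment_pipeline()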
stack-components
https://docs.zenml.io/v/docs/stack-components/model-deployers/mlflow
447
ender the images:

from zenml.client import Client
from IPython.display import display, Image

annotator = Client().active_stack.annotator

annotations = annotator.launch(
    data=[
        '/path/to/image1.png',
        '/path/to/image2.png'
    ],
    options=[
        'cat',
        'dog'
    ],
    display_fn=lambda filename: display(Image(filename))
)

The launch method returns the annotations as a list of tuples, where each tuple contains the data item and its corresponding label. You can also use the zenml annotator dataset commands to manage your datasets: zenml annotator dataset list - List all available datasets zenml annotator dataset delete <dataset_name> - Delete a specific dataset zenml annotator dataset stats <dataset_name> - Get statistics for a specific dataset Annotation files are saved as JSON files in the specified output directory. Each annotation file represents a dataset, with the filename serving as the dataset name. Acknowledgements Pigeon was created by Anastasis Germanidis and released as a Python package and Github repository. It is licensed under the Apache License. It has been updated to work with more recent ipywidgets versions and some small UI improvements were added. We are grateful to Anastasis for creating this tool and making it available to the community. PreviousLabel Studio NextProdigy Last updated 15 days ago
stack-components
https://docs.zenml.io/stack-components/annotators/pigeon
271
ncepts covered in this guide to your own projects.By the end of this guide, you'll have a solid understanding of how to leverage LLMs in your MLOps workflows using ZenML, enabling you to build powerful, scalable, and maintainable LLM-powered applications. First up, let's take a look at a super simple implementation of the RAG paradigm to get started. PreviousAn end-to-end project NextRAG with ZenML Last updated 19 days ago
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide
98
nfiguration) ...

    context.save_expectation_suite(
        expectation_suite=suite,
        expectation_suite_name=expectation_suite_name,
    )
    context.build_data_docs()
    return suite

The same approach must be used if you are using a Great Expectations configuration managed by ZenML and are using the Jupyter notebooks generated by the Great Expectations CLI. Visualizing Great Expectations Suites and Results You can view visualizations of the suites and results generated by your pipeline steps directly in the ZenML dashboard by clicking on the respective artifact in the pipeline run DAG. Alternatively, if you are running inside a Jupyter notebook, you can load and render the suites and results using the artifact.visualize() method, e.g.:

from zenml.client import Client

def visualize_results(pipeline_name: str, step_name: str) -> None:
    pipeline = Client().get_pipeline(pipeline_name)
    last_run = pipeline.last_run
    validation_step = last_run.steps[step_name]
    validation_step.visualize()

if __name__ == "__main__":
    visualize_results("validation_pipeline", "profiler")
    visualize_results("validation_pipeline", "train_validator")
    visualize_results("validation_pipeline", "test_validator")

PreviousData Validators NextDeepchecks Last updated 18 days ago
stack-components
https://docs.zenml.io/v/docs/stack-components/data-validators/great-expectations
255
you can also add your own custom validators here.The CustomModelDeployer only comes into play when the component is ultimately in use. The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomModelDeployerFlavor and the CustomModelDeployerConfig are implemented in a different module/path than the actual CustomModelDeployer). PreviousHugging Face NextStep Operators Last updated 14 days ago
stack-components
https://docs.zenml.io/stack-components/model-deployers/custom
112
♻️Migration guide How to migrate your ZenML code to the newest version. Migrations are necessary for ZenML releases that include breaking changes, which are currently all releases that increment the minor version of the release, e.g., 0.X -> 0.Y. Furthermore, all releases that increment the first non-zero digit of the version contain major breaking changes or paradigm shifts that are explained in separate migration guides below. Release Type Examples: 0.40.2 to 0.40.3 contains no breaking changes and requires no migration whatsoever; 0.40.3 to 0.41.0 contains minor breaking changes that need to be taken into account when upgrading ZenML; 0.39.1 to 0.40.0 contains major breaking changes that introduce major shifts in how ZenML code is written or used. Major Migration Guides The following guides contain detailed instructions on how to migrate between ZenML versions that introduced major breaking changes or paradigm shifts. The migration guides are sequential, meaning if there is more than one migration guide between your current version and the latest release, follow each guide in order. Migration guide 0.13.2 β†’ 0.20.0 Migration guide 0.23.0 β†’ 0.30.0 Migration guide 0.39.1 β†’ 0.41.0 Release Notes For releases with minor breaking changes, e.g., 0.40.3 to 0.41.0, check out the official ZenML Release Notes to see which breaking changes were introduced. PreviousHow do I...? NextMigration guide 0.13.2 β†’ 0.20.0 Last updated 10 months ago
reference
https://docs.zenml.io/v/docs/reference/migration-guide
347
local --set
```

Example Command Output

```text
Connected to the ZenML server: 'https://stefan.develaws.zenml.io'
Running with active workspace: 'default' (repository)
Stack 'aws-demo' successfully registered!
Active repository stack set to: 'aws-demo'
```

Finally, run a simple pipeline to prove that everything works as expected. We'll use the simplest pipeline possible for this example:

from zenml import pipeline, step

@step
def step_1() -> str:
    """Returns the `world` string."""
    return "world"

@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> None:
    """Combines the two strings at its input and prints them."""
    combined_str = f"{input_one} {input_two}"
    print(combined_str)

@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

if __name__ == "__main__":
    my_pipeline()

Saving that to a run.py file and running it gives us:

Example Command Output

```text
$ python run.py
Reusing registered pipeline simple_pipeline (version: 1).
Building Docker image(s) for pipeline simple_pipeline.
Building Docker image 715803424590.dkr.ecr.us-east-1.amazonaws.com/zenml:simple_pipeline-orchestrator.
Including user-defined requirements: boto3==1.26.76
Including integration requirements: boto3, kubernetes==18.20.0, s3fs>2022.3.0,<=2023.4.0, sagemaker==2.117.0
No .dockerignore found, including all files inside build context.
Step 1/10 : FROM zenmldocker/zenml:0.39.1-py3.8
Step 2/10 : WORKDIR /app
Step 3/10 : COPY .zenml_user_requirements .
Step 4/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_user_requirements
Step 5/10 : COPY .zenml_integration_requirements .
Step 6/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_integration_requirements
Step 7/10 : ENV ZENML_ENABLE_REPO_INIT_WARNINGS=False
Step 8/10 : ENV ZENML_CONFIG_PATH=/app/.zenconfig
Step 9/10 : COPY . .
Step 10/10 : RUN chmod -R a+rw .
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
545
y settings. Enabling CUDA for GPU-backed hardware Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. Important Note for Multi-Tenancy Deployments Kubeflow has a notion of multi-tenancy built into its deployment. Kubeflow's multi-user isolation simplifies user operations because each user only views and edits the Kubeflow components and model artifacts defined in their configuration. Using the ZenML Kubeflow orchestrator on a multi-tenant deployment without any settings will result in the following error:

HTTP response body: {"error":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.","code":3,"message":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid resource references for experiment. ListExperiment requires filtering by namespace.","error_details":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace."}]}

In order to get it to work, we need to leverage the KubeflowOrchestratorSettings referenced above. By setting the namespace option, and by passing in the right authentication credentials to the Kubeflow Pipelines Client, we can make it work. First, when registering your Kubeflow orchestrator, please make sure to include the kubeflow_hostname parameter. The kubeflow_hostname must end with the /pipeline suffix:

zenml orchestrator register <NAME> \
    --flavor=kubeflow \
    --kubeflow_hostname=<KUBEFLOW_HOSTNAME> # e.g. https://mykubeflow.example.com/pipeline

Then, ensure that you pass the right settings before triggering a pipeline run. The following snippet will prove useful:

import requests
stack-components
https://docs.zenml.io/stack-components/orchestrators/kubeflow
410
pe-specific metadata and visualizations. Metadata All output artifacts saved through ZenML will automatically have certain datatype-specific metadata saved with them. NumPy arrays, for instance, always have their storage size, shape, dtype, and some statistical properties saved with them. You can access such metadata via the run_metadata attribute of an output, e.g.:

output_metadata = output.run_metadata
storage_size_in_bytes = output_metadata["storage_size"].value

We will talk more about metadata in the next section. Visualizations ZenML automatically saves visualizations for many common data types. Using the visualize() method you can programmatically show these visualizations in Jupyter notebooks:

output.visualize()

If you're not in a Jupyter notebook, you can simply view the visualizations in the ZenML dashboard by running zenml up and clicking on the respective artifact in the pipeline run DAG instead. Check out the artifact visualization page to learn more about how to build and view artifact visualizations in ZenML! Fetching information during run execution While most of this document has focused on fetching objects after a pipeline run has been completed, the same logic can also be used within the context of a running pipeline. This is often desirable in cases where a pipeline is running continuously over time and decisions have to be made according to older runs. For example, this is how we can fetch the last pipeline run of the same pipeline from within a ZenML step:

from zenml import get_step_context, step
from zenml.client import Client

@step
def my_step():
    # Get the name of the current pipeline run
    current_run_name = get_step_context().pipeline_run.name

    # Fetch the current pipeline run
    current_run = Client().get_pipeline_run(current_run_name)

    # Fetch the previous run of the same pipeline
    previous_run = current_run.pipeline.runs[1]  # index 0 is the current run
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/fetching-pipelines
380
ettings to specify AzureML step operator settings. Difference between stack component settings at registration-time vs real-time For stack-component-specific settings, you might be wondering what the difference is between these and the configuration passed in while doing zenml stack-component register <NAME> --config1=configvalue --config2=configvalue, etc. The answer is that the configuration passed in at registration time is static and fixed throughout all pipeline runs, while the settings can change. A good example of this is the MLflow Experiment Tracker, where configuration which remains static, such as the tracking_url, is sent through at registration time, while runtime configuration, such as the experiment_name (which might change every pipeline run), is sent through as runtime settings. Even though settings can be overridden at runtime, you can also specify default values for settings while configuring a stack component. For example, you could set a default value for the nested setting of your MLflow experiment tracker:

zenml experiment-tracker register <NAME> --flavor=mlflow --nested=True

This means that all pipelines that run using this experiment tracker use nested MLflow runs unless overridden by specifying settings for the pipeline at runtime. Using the right key for Stack-component-specific settings When specifying stack-component-specific settings, a key needs to be passed. This key should always correspond to the pattern: <COMPONENT_CATEGORY>.<COMPONENT_FLAVOR> For example, the SagemakerStepOperator supports passing in estimator_args. The way to specify this would be to use the key step_operator.sagemaker:

@step(step_operator="nameofstepoperator", settings={"step_operator.sagemaker": {"estimator_args": {"instance_type": "m7g.medium"}}})
def my_step():
    ...

# Using the class
@step(step_operator="nameofstepoperator", settings={"step_operator.sagemaker": SagemakerStepOperatorSettings(instance_type="m7g.medium")})
def my_step():
    ...

or in YAML:

steps:
  my_step:
how-to
https://docs.zenml.io/v/docs/how-to/use-configuration-files/runtime-configuration
399
Google Cloud VertexAI Orchestrator Orchestrating your pipelines to run on Vertex AI. Vertex AI Pipelines is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute. This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior! When to use it You should use the Vertex orchestrator if: you're already using GCP. you're looking for a proven production-grade orchestrator. you're looking for a UI in which you can track your pipeline runs. you're looking for a managed solution for running your pipelines. you're looking for a serverless solution for running your pipelines. How to deploy it In order to use a Vertex AI orchestrator, you need to first deploy ZenML to the cloud. It is recommended to deploy ZenML in the same Google Cloud project as the one where the Vertex infrastructure is deployed, but it is not necessary to do so. You must ensure that you are connected to the remote ZenML server before using this stack component. The only other thing necessary to use the ZenML Vertex orchestrator is enabling the Vertex-relevant APIs on the Google Cloud project. In order to quickly enable APIs and create other resources necessary for using this integration, you can also consider using mlstacks, which helps you set up the infrastructure with one click. How to use it The Vertex Orchestrator (and the GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration. To use the Vertex orchestrator, we need: The ZenML gcp integration installed. If you haven't done so, run:

zenml integration install gcp
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/vertex
410
nswer is satisfactory, or 'NO' if it is not.",
            },
            {
                "role": "user",
                "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?",
            },
        ],
        model="gpt-3.5-turbo",
    )
    judgment = chat_completion.choices[0].message.content.strip().lower()
    return judgment == "yes"

retrieval_scores = []
generation_scores = []

for item in eval_data:
    retrieval_score = evaluate_retrieval(
        item["question"], item["expected_answer"], corpus
    )
    retrieval_scores.append(retrieval_score)

    generated_answer = answer_question(item["question"], corpus)
    generation_score = evaluate_generation(
        item["question"], item["expected_answer"], generated_answer
    )
    generation_scores.append(generation_score)

retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores)
generation_accuracy = sum(generation_scores) / len(generation_scores)

print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}")
print(f"Generation Accuracy: {generation_accuracy:.2f}")

As you can see, we've added two evaluation functions: evaluate_retrieval and evaluate_generation. The evaluate_retrieval function checks if the retrieved chunks contain any words from the expected answer. The evaluate_generation function uses OpenAI's chat completion LLM to evaluate the quality of the generated answer. We then loop through the evaluation data, which contains questions and expected answers, and evaluate the retrieval and generation components of our RAG pipeline. Finally, we calculate the accuracy of both components and print the results. As you can see, we get 100% accuracy for both retrieval and generation in this example. Not bad! The sections that follow will provide a more detailed and sophisticated implementation of RAG evaluation, but this example shows how you can think about it at a high level! PreviousEvaluation and metrics NextRetrieval evaluation Last updated 15 days ago
user-guide
https://docs.zenml.io/user-guide/llmops-guide/evaluation/evaluation-in-65-loc
411
Displaying visualizations in the dashboard Displaying visualizations in the dashboard. In order for the visualizations to show up on the dashboard, the following must be true: Configuring a Service Connector Visualizations are usually stored alongside the artifact, in the artifact store. Therefore, if a user would like to see the visualization displayed on the ZenML dashboard, they must give the server access to connect to the artifact store. The service connector documentation goes deeper into the concept of service connectors and how they can be configured to give the server permission to access the artifact store. For a concrete example, see the AWS S3 artifact store documentation. When using the default/local artifact store with a deployed ZenML, the server naturally does not have access to your local files. In this case, the visualizations are also not displayed on the dashboard. Please use a remote artifact store with a configured service connector alongside a deployed ZenML server to view visualizations. Configuring Artifact Stores If all visualizations of a certain pipeline run are not showing up in the dashboard, it might be that your ZenML server does not have the required dependencies or permissions to access that artifact store. See the custom artifact store docs page for more information. PreviousCreating custom visualizations NextDisabling visualizations Last updated 19 days ago
how-to
https://docs.zenml.io/v/docs/how-to/visualize-artifacts/visualizations-in-dashboard
263
be32c108819e8a860a429b613e470ad58531f0730afff64545 Important: If you configure encryption for your SQL database secrets store, you should keep the encryptionKey value somewhere safe and secure, as it will always be required by the ZenML Server to decrypt the secrets in the database. If you lose the encryption key, you will not be able to decrypt the secrets anymore and will have to reset them. Using the AWS Secrets Manager as a secrets store backend The AWS Secrets Store uses the ZenML AWS Service Connector under the hood to authenticate with the AWS Secrets Manager API. This means that you can use any of the authentication methods supported by the AWS Service Connector to authenticate with the AWS Secrets Manager API. The credentials used must grant at least the following permissions on the secrets that ZenML manages:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ZenMLSecretsStore",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:CreateSecret",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:PutSecretValue",
        "secretsmanager:TagResource",
        "secretsmanager:DeleteSecret"
      ],
      "Resource": "arn:aws:secretsmanager:<AWS-region>:<AWS-account-id>:secret:zenml/*"
    }
  ]
}

Example configuration for the AWS Secrets Store:

zenml:
  # ...
  # Secrets store settings. This is used to store centralized secrets.
  secretsStore:
    # Set to false to disable the secrets store.
    enabled: true
    # The type of the secrets store
    type: aws
    # Configuration for the AWS Secrets Manager secrets store
    aws:
      # The AWS Service Connector authentication method to use.
      authMethod: secret-key
      # The AWS Service Connector configuration.
      authConfig:
        # The AWS region to use. This must be set to the region where the AWS
        # Secrets Manager service that you want to use is located.
        region: us-east-1
        # The AWS credentials to use to authenticate with the AWS Secrets
        # Manager API.
        aws_access_key_id: <your AWS access key ID>
        aws_secret_access_key: <your AWS secret access key>

Using the GCP Secrets Manager as a secrets store backend
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm
443
Neptune Logging and visualizing experiments with neptune.ai The Neptune Experiment Tracker is an Experiment Tracker flavor provided with the Neptune-ZenML integration that uses neptune.ai to log and visualize information from your pipeline steps (e.g. models, parameters, metrics). When would you want to use it? Neptune is a popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results or as a model registry for your production-ready models. Neptune can also track and visualize the results produced by your automated pipeline runs, as you make the transition towards a more production-oriented workflow. You should use the Neptune Experiment Tracker: if you have already been using neptune.ai to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML. if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets) if you would like to connect ZenML to neptune.ai to share the artifacts and metrics logged by your pipelines with your team, organization, or external stakeholders You should consider one of the other Experiment Tracker flavors if you have never worked with neptune.ai before and would rather use another experiment tracking tool that you are more familiar with. How do you deploy it? The Neptune Experiment Tracker flavor is provided by the Neptune-ZenML integration. You need to install it on your local machine to be able to register the Neptune Experiment Tracker and add it to your stack: zenml integration install neptune -y The Neptune Experiment Tracker needs to be configured with the credentials required to connect to Neptune using an API token. Authentication Methods You need to configure the following credentials for authentication to Neptune:
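This chunk cuts off before the credential list; as a sketch, registering the tracker and adding it to a stack usually takes this shape (the project and api_token flags are assumptions based on Neptune's standard credentials, so check the current flavor docs for the exact names):

```sh
zenml experiment-tracker register neptune_tracker \
    --flavor=neptune \
    --project=<YOUR_NEPTUNE_PROJECT> \
    --api_token=<YOUR_NEPTUNE_API_TOKEN>

zenml stack register neptune_stack -e neptune_tracker ... --set
```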
stack-components
https://docs.zenml.io/stack-components/experiment-trackers/neptune
361
targeted improvements to the retrieval component.To wrap up, the retrieval evaluation process we've walked through - from manual spot-checking with carefully crafted queries to automated testing with synthetic question-document pairs - has provided a solid baseline understanding of our retrieval component's performance. The failure rates of 20% on our handpicked test cases and 16% on a larger sample of generated queries highlight clear room for improvement, but also validate that our semantic search is generally pointing in the right direction. Going forward, we have a rich set of options to refine and upgrade our evaluation approach. Generating a more diverse array of test questions, leveraging semantic similarity metrics for a nuanced view beyond binary success/failure, performing comparative evaluations of different retrieval techniques, and conducting deep error analysis on failure cases - all of these avenues promise to yield valuable insights. As our RAG pipeline grows to handle more complex and wide-ranging queries, continued investment in comprehensive retrieval evaluation will be essential to ensure we're always surfacing the most relevant information. Before we start working to improve or tweak our retrieval based on these evaluation results, let's shift gears and look at how we can evaluate the generation component of our RAG pipeline. Assessing the quality of the final answers produced by the system is equally crucial to gauging the effectiveness of our retrieval.
user-guide
https://docs.zenml.io/v/docs/user-guide/llmops-guide/evaluation/retrieval
260
onfiguration, if specified overrides for this step enable_artifact_metadata: True enable_artifact_visualization: True enable_cache: False enable_step_logs: True # Same as pipeline level configuration, if specified overrides for this step extra: {} # Same as pipeline level configuration, if specified overrides for this step model: {} # Same as pipeline level configuration, if specified overrides for this step settings: docker: {} resources: {} # Stack component specific settings step_operator.sagemaker: estimator_args: instance_type: m7g.medium Deep-dive enable_XXX parameters These are boolean flags for various configurations: enable_artifact_metadata: Whether to associate metadata with artifacts or not. enable_artifact_visualization: Whether to attach visualizations of artifacts. enable_cache: Whether to utilize caching or not. enable_step_logs: Whether to enable tracking of step logs. enable_artifact_metadata: True enable_artifact_visualization: True enable_cache: True enable_step_logs: True build ID The UUID of the build to use for this pipeline. If specified, Docker image building is skipped for remote orchestrators, and the Docker image specified in this build is used. build: <INSERT-BUILD-ID-HERE> Configuring the model Specifies the ZenML Model to use for this pipeline. model: name: "ModelName" version: "production" description: An example model tags: ["classifier"] Pipeline and step parameters A dictionary of JSON-serializable parameters specified at the pipeline or step level. For example: parameters: gamma: 0.01 steps: trainer: parameters: gamma: 0.001 Corresponds to: from zenml import step, pipeline @step def trainer(gamma: float): # Use gamma as normal print(gamma) @pipeline def my_pipeline(gamma: float): # Use gamma or pass it into the step print(gamma) trainer(gamma=gamma)
how-to
https://docs.zenml.io/how-to/use-configuration-files/what-can-be-configured
405
to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() PreviousStarter guide NextCache previous executions Last updated 15 days ago
user-guide
https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline
97
aws s3api create-bucket --bucket your-bucket-name Once this is done, you can create the ZenML stack component as follows: Register an S3 Artifact Store with the connector zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name --connector aws_connector More details here. Orchestrator (SageMaker Pipelines) An orchestrator is the compute backend that runs your pipelines. Before you run anything within the ZenML CLI, head over to AWS and create a SageMaker domain (skip this if you already have one). The instructions for creating a domain can be found in the AWS core documentation. A SageMaker domain is a central management unit for all SageMaker users and resources within a region. It provides a single sign-on (SSO) experience and enables users to create and manage SageMaker resources, such as notebooks, training jobs, and endpoints, within a collaborative environment. When you create a SageMaker domain, you specify the configuration settings, such as the domain name, user profiles, and security settings. Each user within a domain gets their own isolated workspace, which includes a JupyterLab interface, a set of compute resources, and persistent storage. The SageMaker orchestrator in ZenML requires a SageMaker domain to run pipelines because it leverages the SageMaker Pipelines service, which is part of the SageMaker ecosystem. SageMaker Pipelines allows you to define, execute, and manage end-to-end machine learning workflows using a declarative approach. By creating a SageMaker domain, you establish the necessary environment and permissions for the SageMaker orchestrator to interact with SageMaker Pipelines and other SageMaker resources seamlessly. The domain acts as a prerequisite for using the SageMaker orchestrator in ZenML. Once this is done, you can create the ZenML stack component as follows: Register a SageMaker Pipelines orchestrator stack component (see the sketch below):
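The registration command itself is cut off in this chunk; it typically looks like the following sketch (the region and role ARN are placeholders, and the flag names should be checked against the current SageMaker orchestrator docs):

```sh
zenml orchestrator register sagemaker-orchestrator \
    --flavor=sagemaker \
    --region=<YOUR_REGION> \
    --execution_role=<SAGEMAKER_EXECUTION_ROLE_ARN>
```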
how-to
https://docs.zenml.io/how-to/popular-integrations/aws-guide
389
Local Artifact Store Storing artifacts on your local filesystem. The local Artifact Store is a built-in ZenML Artifact Store flavor that uses a folder on your local filesystem to store artifacts. When would you want to use it? The local Artifact Store is a great way to get started with ZenML, as it doesn't require you to provision additional local resources or to interact with managed object-store services like Amazon S3 and Google Cloud Storage. All you need is the local filesystem. You should use the local Artifact Store if you're just evaluating or getting started with ZenML, or if you are still in the experimental phase and don't need to share your pipeline artifacts (datasets, models, etc.) with others. The local Artifact Store is not meant to be used in production. The local filesystem cannot be shared across your team, and the artifacts stored in it cannot be accessed from other machines. This also means that artifact visualizations will not be available when using a local Artifact Store through a ZenML instance deployed in the cloud. Furthermore, the local Artifact Store doesn't provide features like high availability, scalability, backup and restore, and others that are expected from a production-grade MLOps system. The fact that it stores artifacts on your local filesystem also means that not all stack components can be used in the same stack as a local Artifact Store: only Orchestrators running on the local machine, such as the local Orchestrator, a local Kubeflow Orchestrator, or a local Kubernetes Orchestrator, can be combined with a local Artifact Store only Model Deployers that are running locally, such as the MLflow Model Deployer, can be used in combination with a local Artifact Store Step Operators: none of the Step Operators can be used in the same stack as a local Artifact Store, given that their very purpose is to run ZenML steps in remote specialized environments
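For completeness, getting started with a local Artifact Store is a one-liner; a minimal sketch (the component and stack names are arbitrary, and an optional --path flag can point the store at a custom folder):

```sh
zenml artifact-store register local_store --flavor=local
zenml stack register local_stack -o default -a local_store --set
```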
stack-components
https://docs.zenml.io/stack-components/artifact-stores/local
378
Define where an image is built Defining the image builder. ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote orchestrators or step operators, ZenML builds Docker images to run your pipeline in an isolated, well-defined environment. By default, execution environments are created locally in the client environment using the local Docker client. However, this requires Docker to be installed on the client machine, along with the permissions to use it. To remove this requirement, ZenML offers image builders, a special stack component that allows users to build and push Docker images in a different, specialized image builder environment. Note that even if you don't configure an image builder in your stack, ZenML still uses the local image builder to retain consistency across all builds. In this case, the image builder environment is the same as the client environment. You don't need to directly interact with any image builder in your code. As long as the image builder that you want to use is part of your active ZenML stack, it will be used automatically by any component that needs to build container images. PreviousBuild the pipeline without running NextTrain with GPUs Last updated 19 days ago
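Although the local image builder is used implicitly, you can also register one explicitly and attach it to a stack; a minimal sketch (the component and stack names are placeholders):

```sh
zenml image-builder register local_builder --flavor=local
zenml stack update <STACK_NAME> -i local_builder
```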
how-to
https://docs.zenml.io/v/docs/how-to/customize-docker-builds/define-where-an-image-is-built
223
ipeline_run.run_metadata["orchestrator_url"].value If you cannot see the Airflow UI credentials in the console, you can find the password in <GLOBAL_CONFIG_DIR>/airflow/<ORCHESTRATOR_UUID>/standalone_admin_password.txt. GLOBAL_CONFIG_DIR depends on your OS. Run python -c "from zenml.config.global_config import GlobalConfiguration; print(GlobalConfiguration().config_directory)" to get the path for your machine. ORCHESTRATOR_UUID is the unique ID of the Airflow orchestrator; there should be only one folder here, so you can just navigate into it. The username will always be admin. Additional configuration For additional configuration of the Airflow orchestrator, you can pass AirflowOrchestratorSettings when defining or running your pipeline (see the sketch below). Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings. Enabling CUDA for GPU-backed hardware Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to deliver its full acceleration. Using different Airflow operators Airflow operators specify how a step in your pipeline gets executed. As ZenML relies on Docker images to run pipeline steps, only operators that support executing a Docker image work in combination with ZenML. Airflow comes with two operators that support this: the DockerOperator runs the Docker images for executing your pipeline steps on the same machine that your Airflow server is running on. For this to work, the server environment needs to have the apache-airflow-providers-docker package installed. the KubernetesPodOperator runs the Docker image on a pod in the Kubernetes cluster that the Airflow server is deployed to. For this to work, the server environment needs to have the apache-airflow-providers-cncf-kubernetes package installed.
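To make the operator choice concrete, here is a sketch of passing AirflowOrchestratorSettings to a pipeline (the import path and the "docker"/"kubernetes_pod" operator values reflect the Airflow integration at the time of writing and may differ in your version):

```python
from zenml import pipeline
from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import (
    AirflowOrchestratorSettings,
)

# Run each step with the KubernetesPodOperator instead of the default DockerOperator
airflow_settings = AirflowOrchestratorSettings(
    operator="kubernetes_pod",
    operator_args={},
)

@pipeline(settings={"orchestrator.airflow": airflow_settings})
def my_pipeline():
    ...
```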
stack-components
https://docs.zenml.io/stack-components/orchestrators/airflow
399
Storing embeddings in a vector database Store embeddings in a vector database for efficient retrieval. The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query. For the purposes of this guide, we'll use PostgreSQL as our vector database. This is a popular choice for storing embeddings, as it provides a scalable and efficient way to store and retrieve high-dimensional vectors. However, you can use any vector database that supports high-dimensional vectors. If you want to explore a list of possible options, this is a good website to compare different options. For more information on how to set up a PostgreSQL database to follow along with this guide, please see the instructions in the repository which show how to set up a PostgreSQL database using Supabase. Since PostgreSQL is a well-known and battle-tested database, we can use known and minimal packages to connect and to interact with it. We can use the psycopg2 package to connect and then raw SQL statements to interact with the database. The code for the step is fairly simple: from zenml import step @step def index_generator( documents: List[Document], ) -> None: try: conn = get_db_conn() with conn.cursor() as cur: # Install pgvector if not already installed cur.execute("CREATE EXTENSION IF NOT EXISTS vector") conn.commit() # Create the embeddings table if it doesn't exist table_create_command = f""" CREATE TABLE IF NOT EXISTS embeddings ( id SERIAL PRIMARY KEY, content TEXT, token_count INTEGER, embedding VECTOR({EMBEDDING_DIMENSIONALITY}), filename TEXT, parent_section TEXT, url TEXT ); """ cur.execute(table_create_command) conn.commit() register_vector(conn)
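Once the embeddings are stored, retrieval is a single similarity query. Here is a rough sketch using pgvector's cosine-distance operator (get_db_conn and the embeddings table come from the step above; the <=> operator assumes the pgvector extension is installed as shown):

```python
import numpy as np
from pgvector.psycopg2 import register_vector

def get_topn_similar_docs(query_embedding: np.ndarray, n: int = 5):
    conn = get_db_conn()
    register_vector(conn)
    with conn.cursor() as cur:
        # Order rows by cosine distance between the stored and query embeddings
        cur.execute(
            "SELECT content, url FROM embeddings ORDER BY embedding <=> %s LIMIT %s",
            (query_embedding, n),
        )
        return cur.fetchall()
```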
user-guide
https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database
398
rom the host). Secret store environment variables Unless explicitly disabled or configured otherwise, the ZenML server will use the SQL database as a secrets store backend where secret values are stored. If you want to use an external secrets management service like the AWS Secrets Manager, GCP Secrets Manager, Azure Key Vault, HashiCorp Vault or even your custom Secrets Store back-end implementation instead, you need to configure it explicitly using Docker environment variables. Depending on where you deploy your ZenML server and how your Kubernetes cluster is configured, you will also need to provide the credentials needed to access the secrets management service API. Important: If you are updating the configuration of your ZenML Server container to use a different secrets store back-end or location, you should follow the documented secrets migration strategy to minimize downtime and to ensure that existing secrets are also properly migrated. The SQL database is used as the default secrets store location. You only need to configure these options if you want to change the default behavior. It is particularly recommended to enable encryption at rest for the SQL database if you plan on using it as a secrets store backend. You'll have to configure the secret key used to encrypt the secret values. If not set, encryption will not be used and passwords will be stored unencrypted in the database. ZENML_SECRETS_STORE_TYPE: Set this to sql in order to explicitly set this type of secret store. ZENML_SECRETS_STORE_ENCRYPTION_KEY: the secret key used to encrypt all secrets stored in the SQL secrets store. It is recommended to set this to a random string with a length of at least 32 characters, generated e.g. with: python -c "from secrets import token_hex; print(token_hex(32))" or: openssl rand -hex 32
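As a sketch, wiring these variables into a Docker-based deployment might look like the following (the port mapping and the zenmldocker/zenml-server image follow the standard Docker deployment docs; generate your own key as shown above):

```sh
docker run -d -p 8080:8080 \
    -e ZENML_SECRETS_STORE_TYPE=sql \
    -e ZENML_SECRETS_STORE_ENCRYPTION_KEY=<YOUR_32_CHAR_HEX_KEY> \
    zenmldocker/zenml-server
```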
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker
350
ect URL' (see above). Extra configuration options By default, the ZenML application will be configured to use an SQLite non-persistent database. If you want to use a persistent database, you can configure this by amending the Dockerfile in your Space's root directory. For full details on the various parameters you can change, see our reference documentation on configuring ZenML when deployed with Docker. If you are using the Space just for testing and experimentation, you don't need to make any changes to the configuration. Everything will work out of the box. You can also use an external secrets backend together with your HuggingFace Spaces as described in our documentation. Be sure to use HuggingFace's inbuilt 'Repository secrets' functionality to configure any secrets you need to use in your Dockerfile configuration. See the documentation for more details on how to set this up. If you wish to use a cloud secrets backend together with ZenML for secrets management, you must update your password on your ZenML Server via the Dashboard. This is because the default user created by the HuggingFace Spaces deployment process has no password assigned to it, and since the Space is publicly accessible, potentially anyone could access your secrets without this extra step. To change your password, navigate to the Settings page by clicking the button in the upper right-hand corner of the Dashboard and then click 'Update Password'. Troubleshooting If you are having trouble with your ZenML server on HuggingFace Spaces, you can view the logs by clicking on the "Open Logs" button at the top of the Space. This will give you more context on what's happening with your server. If you have any other issues, please feel free to reach out to us on our Slack channel for more support. Upgrading your ZenML Server on HF Spaces
getting-started
https://docs.zenml.io/getting-started/deploying-zenml/deploy-using-huggingface-spaces
370
βš’οΈManage stacks Deploying your stack components directly from the ZenML CLI The first step in running your pipelines on remote infrastructure is to deploy all the components that you would need, like an MLflow tracking server, a Seldon Core model deployer, and more to your cloud. This can bring plenty of benefits like scalability, reliability, and collaboration. ZenML eases the path to production by providing a seamless way for all tools to interact with others through the use of abstractions. However, one of the most painful parts of this process, from what we see on our Slack and in general, is the deployment of these stack components. Deploying and managing MLOps tools is tricky πŸ˜­πŸ˜΅β€πŸ’« It is not trivial to set up all the different tools that you might need for your pipeline. 🌈 Each tool comes with a certain set of requirements. For example, a Kubeflow installation will require you to have a Kubernetes cluster, and so would a Seldon Core deployment. πŸ€” Figuring out the defaults for infra parameters is not easy. Even if you have identified the backing infra that you need for a stack component, setting up reasonable defaults for parameters like instance size, CPU, memory, etc., needs a lot of experimentation to figure out. 🚧 Many times, standard tool installations don't work out of the box. For example, to run a custom pipeline in Vertex AI, it is not enough to just run an imported pipeline. You might also need a custom service account that is configured to perform tasks like reading secrets from your secret store or talking to other GCP services that your pipeline might need. πŸ” Some tools need an additional layer of installations to enable a more secure, production-grade setup. For example, a standard MLflow tracking server deployment comes without an authentication frontend which might expose all of your tracking data to the world if deployed as-is.
how-to
https://docs.zenml.io/v/docs/how-to/stack-deployment
392
┃┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ OWNER β”‚ default ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ WORKSPACE β”‚ default ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ SHARED β”‚ βž– ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ CREATED_AT β”‚ 2023-06-19 19:23:39.982950 ┃ ┠──────────────────┼─────────────────────────────────────────────────────────────────────────┨ ┃ UPDATED_AT β”‚ 2023-06-19 19:23:39.982952 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠───────────────────────┼───────────┨ ┃ region β”‚ us-east-1 ┃ ┠───────────────────────┼───────────┨ ┃ aws_access_key_id β”‚ [HIDDEN] ┃ ┠───────────────────────┼───────────┨ ┃ aws_secret_access_key β”‚ [HIDDEN] ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┛ AWS STS Token Uses temporary STS tokens explicitly configured by the user or auto-configured from a local environment. This method has the major limitation that the user must regularly generate new tokens and update the connector configuration as STS tokens expire. On the other hand, this method is ideal in cases where the connector only needs to be used for a short period of time, such as sharing access temporarily with someone else in your team.
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
478
ss', cat_features=['country', 'state']), ... ) check_kwargs: Additional keyword arguments to be passed to the Deepchecks check object constructors. Arguments are grouped for each check and indexed using the full check class name or check enum value as dictionary keys, e.g.: deepchecks_data_integrity_check_step( check_list=[ DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION, DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS, DeepchecksDataIntegrityCheck.TABULAR_STRING_MISMATCH, ], check_kwargs={ DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION: dict( nearest_neighbors_percent=0.01, extent_parameter=3, ), DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS: dict( num_percentiles=1000, min_unique_values=3, ), }, ... ) run_kwargs: Additional keyword arguments to be passed to the Deepchecks Suite run method. The check_kwargs attribute can also be used to customize the conditions configured for each Deepchecks test. ZenML attaches a special meaning to all check arguments that start with condition_ and have a dictionary as value. This is required because there is no declarative way to specify conditions for Deepchecks checks. For example, the following step configuration: deepchecks_data_integrity_check_step( check_list=[ DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION, DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS, ], dataset_kwargs=dict(label='class', cat_features=['country', 'state']), check_kwargs={ DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION: dict( nearest_neighbors_percent=0.01, extent_parameter=3, condition_outlier_ratio_less_or_equal=dict( max_outliers_ratio=0.007, outlier_score_threshold=0.5, ), condition_no_outliers=dict( outlier_score_threshold=0.6, ), ), DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS: dict( num_percentiles=1000,
stack-components
https://docs.zenml.io/stack-components/data-validators/deepchecks
444
Set logging verbosity How to set the logging verbosity in ZenML. By default, ZenML sets the logging verbosity to INFO. If you wish to change this, you can do so by setting the following environment variable: export ZENML_LOGGING_VERBOSITY=INFO Choose from INFO, WARN, ERROR, CRITICAL, DEBUG. This will set the logs to whichever level you specify. Note that setting this on the client environment (e.g. your local machine which runs the pipeline) will not automatically set the same logging verbosity for remote pipeline runs. That means setting this variable locally will only affect pipelines that run locally. If you wish to control the logging verbosity for remote pipeline runs, you can set the ZENML_LOGGING_VERBOSITY environment variable in your pipeline runs' environment as follows: docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) # Either add it to the decorator @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Or configure the pipeline's options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) PreviousEnable or disable logs storage NextDisable rich traceback output Last updated 15 days ago
how-to
https://docs.zenml.io/how-to/control-logging/set-logging-verbosity
250
"zenml/rag_qa_embedding_questions", split="train")# Shuffle the dataset and select a random sample sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) total_tests = len(sampled_dataset) total_toxicity = 0 total_faithfulness = 0 total_helpfulness = 0 total_relevance = 0 for item in sampled_dataset: question = item["generated_questions"][0] context = item["page_content"] try: result = test_function(question, context) except json.JSONDecodeError as e: logging.error(f"Failed for question: {question}. Error: {e}") total_tests -= 1 continue total_toxicity += result.toxicity total_faithfulness += result.faithfulness total_helpfulness += result.helpfulness total_relevance += result.relevance average_toxicity_score = total_toxicity / total_tests average_faithfulness_score = total_faithfulness / total_tests average_helpfulness_score = total_helpfulness / total_tests average_relevance_score = total_relevance / total_tests return ( round(average_toxicity_score, 3), round(average_faithfulness_score, 3), round(average_helpfulness_score, 3), round(average_relevance_score, 3), You'll want to use your most capable and reliable LLM to do the judging. In our case, we used the new GPT-4 Turbo. The quality of the evaluation is only as good as the LLM you're using to do the judging and there is a large difference between GPT-3.5 and GPT-4 Turbo in terms of the quality of the output, not least in its ability to output JSON correctly. Here was the output following an evaluation for 50 randomly sampled datapoints: Step e2e_evaluation_llm_judged has started. Average toxicity: 1.0 Average faithfulness: 4.787 Average helpfulness: 4.595 Average relevance: 4.87 Step e2e_evaluation_llm_judged has finished in 8m51s. Pipeline run has finished in 8m52s. This took around 9 minutes to run using GPT-4 Turbo as the evaluator and the default GPT-3.5 as the LLM being evaluated. To take this further, there are a number of ways it might be improved:
user-guide
https://docs.zenml.io/user-guide/llmops-guide/evaluation/generation
503
g_suite: bool = True, ) -> ExpectationSuite: ... You can view the complete list of configuration parameters in the SDK docs. The Great Expectations data validator step The standard Great Expectations data validator step validates an input pandas.DataFrame dataset by running an existing Expectation Suite on it. The validation results are saved in the Great Expectations Validation Store, but also returned as a CheckpointResult artifact that is versioned and saved in the ZenML Artifact Store. The step automatically rebuilds the Data Docs. At a minimum, the step configuration expects the name of the Expectation Suite to be used for the validation: from zenml.integrations.great_expectations.steps import ( great_expectations_validator_step, ) ge_validator_step = great_expectations_validator_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) The step can then be inserted into your pipeline, where it takes in a pandas DataFrame and a bool flag used solely for order reinforcement purposes, e.g.: docker_settings = DockerSettings(required_integrations=[SKLEARN, GREAT_EXPECTATIONS]) @pipeline(settings={"docker": docker_settings}) def validation_pipeline(): """Data validation pipeline for Great Expectations. The pipeline imports test data from a source, then uses the built-in Great Expectations data validation step to validate the dataset against the expectation suite generated in the profiling pipeline. Args: importer: test data importer step validator: dataset validation step checker: checks the validation results """ dataset, condition = importer() results = ge_validator_step(dataset, condition) message = checker(results) validation_pipeline()
stack-components
https://docs.zenml.io/v/docs/stack-components/data-validators/great-expectations
338
ner(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": first_pipeline() python run.py ... Registered pipeline first_pipeline (version 2). ... This will now create a single run for version 2 of the pipeline called first_pipeline. PreviousHyperparameter tuning NextAccess secrets in a step Last updated 19 days ago
how-to
https://docs.zenml.io/v/docs/how-to/build-pipelines/version-pipelines
79
he Post-execution workflow has changed as follows: The get_pipelines and get_pipeline methods have been moved out of the Repository (i.e. the new Client) class and now live directly in the post_execution module. To use them, you now have to do: from zenml.post_execution import get_pipelines, get_pipeline New methods have been introduced to directly get runs: get_run to fetch a specific run, and get_unlisted_runs to get unlisted runs. Usage remains largely similar. Please read the new docs for post-execution to inform yourself of what else has changed; a rough before/after sketch follows below. How to migrate: Replace all post-execution workflows from the paradigm of Repository.get_pipelines or Repository.get_pipeline_run to the corresponding post_execution methods. πŸ“‘Future Changes While this rehaul is big and will break previous releases, we do have some more work left to do. However, we also expect this to be the last big rehaul of ZenML before our 1.0.0 release; no other release will be as hard-breaking as this one. Currently planned future breaking changes are: Following the metadata store, the secrets manager stack component might move out of the stack. ZenML StepContext might be deprecated. 🐞 Reporting Bugs While we have tried our best to document everything that has changed, we realize that mistakes can be made and smaller changes overlooked. If this is the case, or you encounter a bug at any time, the ZenML core team and community are available around the clock on the growing Slack community. For bug reports, please also consider submitting a GitHub Issue. Lastly, if the new changes have left you desiring a feature, then consider adding it to our public feature voting board. Before doing so, do check what is already on there and consider upvoting the features you desire the most. PreviousMigration guide NextMigration guide 0.23.0 β†’ 0.30.0 Last updated 12 days ago
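As a rough before/after sketch of the migration (the pipeline name is a placeholder, and the commented-out lines show the pre-0.20.0 Repository API):

```python
# Before (zenml < 0.20.0):
# from zenml.repository import Repository
# pipeline = Repository().get_pipeline("my_pipeline")

# After:
from zenml.post_execution import get_pipeline

pipeline = get_pipeline("my_pipeline")
latest_run = pipeline.runs[-1]  # inspect the most recent run
```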
reference
https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty
394
L Server instance connected to that same database. If you deployed a kubernetes Metadata Store flavor (i.e. a MySQL database service deployed in Kubernetes), you can deploy a ZenML Server in the same Kubernetes cluster and connect it to that same database. However, ZenML will no longer provide the kubernetes Metadata Store flavor and you'll have to manage the Kubernetes MySQL database service deployment yourself going forward. The ZenML Server inherits the same limitations that the Metadata Store had prior to ZenML 0.20.0: it is not possible to use a local ZenML Server to track pipelines and pipeline runs that are running remotely in the cloud, unless the ZenML server is explicitly configured to be reachable from the cloud (e.g. by using a public IP address or a VPN connection). using a remote ZenML Server to track pipelines and pipeline runs that are running locally is possible, but can have significant performance issues due to the network latency. It is therefore recommended that you always use a ZenML deployment that is located as close as possible to and reachable from where your pipelines and step operators are running. This will ensure the best possible performance and usability. πŸ‘£ How to migrate pipeline runs from your old metadata stores The zenml pipeline runs migrate CLI command is only available under ZenML versions [0.21.0, 0.21.1, 0.22.0]. If you want to migrate your existing ZenML runs from zenml<0.20.0 to zenml>0.22.0, please first upgrade to zenml==0.22.0 and migrate your runs as shown below, then upgrade to the newer version. To migrate the pipeline run information already stored in an existing metadata store to the new ZenML paradigm, you can use the zenml pipeline runs migrate CLI command. Before upgrading ZenML, make a backup of all metadata stores you want to migrate, then upgrade ZenML.
reference
https://docs.zenml.io/reference/migration-guide/migration-zero-twenty
390
Spark Executing individual steps on Spark The spark integration brings two different step operators: Step Operator: The SparkStepOperator serves as the base class for all the Spark-related step operators. Step Operator: The KubernetesSparkStepOperator is responsible for launching ZenML steps as Spark applications with Kubernetes as a cluster manager. Step Operators: SparkStepOperator The implementation can be summarized in two parts. First, the configuration: from typing import Optional, Dict, Any from zenml.step_operators import BaseStepOperatorConfig class SparkStepOperatorConfig(BaseStepOperatorConfig): """Spark step operator config. Attributes: master: is the master URL for the cluster. You might see different schemes for different cluster managers which are supported by Spark like Mesos, YARN, or Kubernetes. Within the context of this PR, the implementation supports Kubernetes as a cluster manager. deploy_mode: can either be 'cluster' (default) or 'client' and it decides where the driver node of the application will run. submit_kwargs: is the JSON string of a dict, which will be used to define additional params if required (Spark has quite a lot of different parameters, so including them, all in the step operator was not implemented). """ master: str deploy_mode: str = "cluster" submit_kwargs: Optional[Dict[str, Any]] = None and then the implementation: from typing import List from pyspark.conf import SparkConf from zenml.step_operators import BaseStepOperator class SparkStepOperator(BaseStepOperator): """Base class for all Spark-related step operators.""" def _resource_configuration( self, spark_config: SparkConf, resource_configuration: "ResourceSettings", ) -> None: """Configures Spark to handle the resource configuration.""" def _backend_configuration( self, spark_config: SparkConf, step_config: "StepConfiguration", ) -> None:
stack-components
https://docs.zenml.io/v/docs/stack-components/step-operators/spark-kubernetes
388
lly registered orchestrator `<ORCHESTRATOR_NAME>`. $ zenml service-connector list-resources --resource-type kubernetes-cluster -e The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨ ┃ e33c9fac-5daa-48b2-87bb-0187d3782cde β”‚ aws-iam-multi-eu β”‚ πŸ”Ά aws β”‚ πŸŒ€ kubernetes-cluster β”‚ kubeflowmultitenant ┃ ┃ β”‚ β”‚ β”‚ β”‚ zenbox ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨ ┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 β”‚ aws-iam-multi-us β”‚ πŸ”Ά aws β”‚ πŸŒ€ kubernetes-cluster β”‚ zenhacks-cluster ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨ ┃ 1c54b32a-4889-4417-abbd-42d3ace3d03a β”‚ gcp-sa-multi β”‚ πŸ”΅ gcp β”‚ πŸŒ€ kubernetes-cluster β”‚ zenml-test-cluster ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛
stack-components
https://docs.zenml.io/v/docs/stack-components/orchestrators/kubeflow
508
run.pipeline.runs[1] # index 0 is the current run As shown in the example, we can get additional information about the current run using the StepContext, which is explained in more detail in the advanced docs. Code example The following example combines all the code from this section into one simple script that you can use to see the concepts discussed above. Putting it all together, this is how we can load the model trained by the svc_trainer step of our example pipeline from the previous sections: from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: """Load the iris dataset as a tuple of Pandas DataFrames / Series.""" iris = load_iris(as_frame=True) X_train, X_test, y_train, y_test = train_test_split( iris.data, iris.target, test_size=0.2, shuffle=True, random_state=42 ) return X_train, X_test, y_train, y_test @step def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": # You can run the pipeline and get the run object directly last_run = training_pipeline() print(last_run.id)
how-to
https://docs.zenml.io/how-to/build-pipelines/fetching-pipelines
494
y service connectors configured in your workspace:┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────┼─────────────────────────────────────────────────┨ ┃ eeeabc13-9203-463b-aa52-216e629e903c β”‚ gcp-demo-multi β”‚ πŸ”΅ gcp β”‚ πŸ“¦ gcs-bucket β”‚ gs://zenml-bucket-sl ┃ ┃ β”‚ β”‚ β”‚ β”‚ gs://zenml-core.appspot.com ┃ ┃ β”‚ β”‚ β”‚ β”‚ gs://zenml-core_cloudbuild ┃ ┃ β”‚ β”‚ β”‚ β”‚ gs://zenml-datasets ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` ```sh zenml service-connector list-resources --resource-type kubernetes-cluster ``` Example Command Output ```text The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┓ ┃ CONNECTOR ID β”‚ CONNECTOR NAME β”‚ CONNECTOR TYPE β”‚ RESOURCE TYPE β”‚ RESOURCE NAMES ┃ ┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────────────┼────────────────────┨ ┃ eeeabc13-9203-463b-aa52-216e629e903c β”‚ gcp-demo-multi β”‚ πŸ”΅ gcp β”‚ πŸŒ€ kubernetes-cluster β”‚ zenml-test-cluster ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┛ ``` ```sh
how-to
https://docs.zenml.io/how-to/auth-management/gcp-service-connector
641
troller, you can use a command like the following: kubectl -n nginx-ingress get svc nginx-ingress-ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}' You can deploy the ZenML server with the following Helm values: zenml: ingress: enabled: true annotations: cert-manager.io/cluster-issuer: "letsencrypt-staging" host: zenml.<nginx ingress IP address>.nip.io tls: enabled: true generateCerts: false Note This method does not work if your Ingress controller is behind a load balancer that uses a hostname mapped to several IP addresses instead of an IP address. Use a dedicated Ingress URL path for ZenML If you cannot use a dedicated Ingress hostname for ZenML, you can use a dedicated Ingress URL path instead. For example, you can expose ZenML at the URL path https://<your ingress hostname>/zenml. To deploy the ZenML server with a dedicated Ingress URL path, you can use the following Helm values: zenml: ingress: enabled: true annotations: cert-manager.io/cluster-issuer: "letsencrypt-staging" nginx.ingress.kubernetes.io/rewrite-target: /$1 path: /zenml/?(.*) tls: enabled: true generateCerts: false Note This method has one current limitation: the ZenML UI does not support URL rewriting and will not work properly if you use a dedicated Ingress URL path. You can still connect your client to the ZenML server and use it to run pipelines as usual, but you will not be able to use the ZenML UI. Use a DNS service to map a different hostname to the Ingress controller This method requires you to configure a DNS service like AWS Route 53 or Google Cloud DNS to map a different hostname to the Ingress controller. For example, you can map the hostname zenml.<subdomain> to the Ingress controller's IP address or hostname. Then, simply use the new hostname to expose ZenML at the root URL path. Secret Store configuration
getting-started
https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm
437
⭐Introduction Welcome to ZenML! ZenML is an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines. By decoupling infrastructure from code, ZenML enables developers across your organization to collaborate more effectively as they develop to production. ZenML enables MLOps infrastructure experts to define, deploy, and manage sophisticated production environments that are easy to share with colleagues. ZenML Pro: ZenML Pro provides a control plane that allows you to deploy a managed ZenML instance and get access to exciting new features such as CI/CD, Model Control Plane, and RBAC. Self-hosted deployment: ZenML can be deployed on any cloud provider and provides many Terraform-based utility functions to deploy other MLOps tools or even entire MLOps stacks: # Deploy ZenML to any cloud zenml deploy --provider aws # Deploy MLOps tools and infrastructure to any cloud zenml orchestrator deploy kfp --flavor kubeflow --provider gcp # Deploy entire MLOps stacks at once zenml stack deploy gcp-vertexai --provider gcp -o kubeflow ... Standardization: With ZenML, you can standardize MLOps infrastructure and tooling across your organization. Simply register your staging and production environments as ZenML stacks and invite your colleagues to run ML workflows on them. # Register MLOps tools and infrastructure zenml orchestrator register kfp_orchestrator -f kubeflow # Register your production environment zenml stack register production --orchestrator kubeflow ... # Make it available to your colleagues zenml stack share production Registering your environments as ZenML stacks also enables you to browse and explore them in a convenient user interface. Try it out at https://www.zenml.io/live-demo!
null
https://docs.zenml.io
380
-registry β”‚ iam-role β”‚ β”‚ ┃┃ β”‚ β”‚ β”‚ session-token β”‚ β”‚ ┃ ┃ β”‚ β”‚ β”‚ federation-token β”‚ β”‚ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ This service connector will not be able to work if Multi-Factor Authentication (MFA) is enabled on the role used by the AWS CLI. When MFA is enabled, the AWS CLI generates temporary credentials that are valid for a limited time. These temporary credentials cannot be used by the ZenML AWS Service Connector, as it requires long-lived credentials to authenticate and access AWS resources. To use the AWS Service Connector with ZenML, you will need to use a different AWS CLI profile that does not have MFA enabled. You can do this by setting the AWS_PROFILE environment variable to the name of the profile you want to use before running the ZenML CLI commands. Prerequisites The AWS Service Connector is part of the AWS ZenML integration. You can either install the entire integration or use a PyPI extra to install it independently of the integration: pip install "zenml[connectors-aws]" installs only prerequisites for the AWS Service Connector Type zenml integration install aws installs the entire AWS ZenML integration It is not required to install and set up the AWS CLI on your local machine to use the AWS Service Connector to link Stack Components to AWS resources and services. However, it is recommended to do so if you are looking for a quick setup that includes using the auto-configuration Service Connector features. The auto-configuration examples in this page rely on the AWS CLI being installed and already configured with valid credentials of one type or another. If you want to avoid installing the AWS CLI, we recommend using the interactive mode of the ZenML CLI to register Service Connectors: zenml service-connector register -i --type aws Resource Types Generic AWS resource
how-to
https://docs.zenml.io/how-to/auth-management/aws-service-connector
443
Amazon SageMaker Executing individual steps in SageMaker. SageMaker offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on SageMaker compute instances. When to use it You should use the SageMaker step operator if: one or more steps of your pipeline require computing resources (CPU, GPU, memory) that are not provided by your orchestrator. you have access to SageMaker. If you're using a different cloud provider, take a look at the Vertex or AzureML step operators. How to deploy it Create a role in the IAM console that you want the jobs running in SageMaker to assume. This role should at least have the AmazonS3FullAccess and AmazonSageMakerFullAccess policies applied. Check here for a guide on how to set up this role. Infrastructure Deployment A SageMaker step operator can be deployed directly from the ZenML CLI: zenml step-operator deploy sagemaker_step_operator --flavor=sagemaker --provider=aws ... You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section. How to use it To use the SageMaker step operator, we need: The ZenML aws integration installed. If you haven't done so, run: zenml integration install aws Docker installed and running. An IAM role with the correct permissions. See the deployment section for detailed instructions. An AWS container registry as part of our stack. Take a look here for a guide on how to set that up. A registration sketch follows below.
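With those prerequisites in place, registering the step operator and attaching it to the active stack typically looks like this sketch (the component name is arbitrary and the role flag is an assumption based on the flavor's role setting, so verify the flag names against the current docs):

```sh
zenml step-operator register sagemaker_step_op \
    --flavor=sagemaker \
    --role=<SAGEMAKER_ROLE_ARN>

zenml stack update -s sagemaker_step_op
```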
stack-components
https://docs.zenml.io/v/docs/stack-components/step-operators/sagemaker
364
# This will build the Docker image the first time python run.py --training-pipeline # This will skip Docker building python run.py --training-pipeline You can read more about the ZenML Git Integration here. PreviousConfigure your pipeline to add compute NextSet up CI/CD Last updated 15 days ago
user-guide
https://docs.zenml.io/user-guide/production-guide/connect-code-repository
67
┃┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ 373a73c2-8295-45d4-a768-45f5a0f744ea β”‚ aws-multi-type β”‚ πŸ”Ά aws β”‚ πŸ”Ά aws-generic β”‚ us-east-1 ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨ ┃ β”‚ β”‚ β”‚ πŸ“¦ s3-bucket β”‚ s3://aws-ia-mwaa-715803424590 ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenfiles ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenml-demos ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenml-generative-chat ┃ ┃ β”‚ β”‚ β”‚ β”‚ s3://zenml-public-datasets ┃ ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────┨
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide
309
─────────────────────────────────────────────────┨┃ RESOURCE TYPES β”‚ πŸ”΅ gcp-generic, πŸ“¦ gcs-bucket, πŸŒ€ kubernetes-cluster, 🐳 docker-registry ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ RESOURCE NAME β”‚ <multiple> ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ SECRET ID β”‚ 4694de65-997b-4929-8831-b49d5e067b97 ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ SESSION DURATION β”‚ N/A ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ EXPIRES IN β”‚ 59m46s ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ OWNER β”‚ default ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ WORKSPACE β”‚ default ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ SHARED β”‚ βž– ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ CREATED_AT β”‚ 2023-05-19 09:04:33.557126 ┃ ┠──────────────────┼──────────────────────────────────────────────────────────────────────────┨ ┃ UPDATED_AT β”‚ 2023-05-19 09:04:33.557127 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ Configuration ┏━━━━━━━━━━━━┯━━━━━━━━━━━━┓
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector
456
rce-type s3-bucket --resource-id zenfiles --clientExample Command Output INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials Service connector 'aws-implicit (s3-bucket | s3://zenfiles client)' of type 'aws' with id 'e3853748-34a0-4d78-8006-00422ad32884' is owned by user 'default' and is 'private'. 'aws-implicit (s3-bucket | s3://zenfiles client)' aws Service Connector Details ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY β”‚ VALUE ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ ID β”‚ 9a810521-ef41-4e45-bb48-8569c5943dc6 ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ NAME β”‚ aws-implicit (s3-bucket | s3://zenfiles client) ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ TYPE β”‚ πŸ”Ά aws ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ AUTH METHOD β”‚ sts-token ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ RESOURCE TYPES β”‚ πŸ“¦ s3-bucket ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ RESOURCE NAME β”‚ s3://zenfiles ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ SECRET ID β”‚ ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ SESSION DURATION β”‚ N/A ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ EXPIRES IN β”‚ 59m57s ┃ ┠──────────────────┼─────────────────────────────────────────────────┨ ┃ OWNER β”‚ default ┃
how-to
https://docs.zenml.io/v/docs/how-to/auth-management/aws-service-connector
502
ons. Try it out at https://www.zenml.io/live-demo! Automated Deployments: With ZenML, you no longer need to upload custom Docker images to the cloud whenever you want to deploy a new model to production. Simply define your ML workflow as a ZenML pipeline, let ZenML handle the containerization, and have your model automatically deployed to a highly scalable Kubernetes deployment service like Seldon. from zenml.integrations.seldon.steps import seldon_model_deployer_step from my_organization.steps import data_loader_step, model_trainer_step @pipeline def my_pipeline(): data = data_loader_step() model = model_trainer_step(data) seldon_model_deployer_step(model) πŸš€ Learn More Ready to manage your ML lifecycles end-to-end with ZenML? Here is a collection of pages you can take a look at next: Get started with ZenML and learn how to build your first pipeline and stack. Discover advanced ZenML features like config management and containerization. Explore ZenML through practical use-case examples. NextInstallation Last updated 14 days ago
null
https://docs.zenml.io/
228