Upload 675 files
This view is limited to 50 files because it contains too many changes.
- -_Architecture_Center_home.txt +5 -0
- AI-ML_image_processing_on_Cloud_Functions.txt +5 -0
- AI_and_ML.txt +5 -0
- AI_and_Machine_Learning.txt +5 -0
- AI_for_Data_Analytics.txt +5 -0
- API_Gateway.txt +5 -0
- APIs_and_Applications.txt +5 -0
- Active_Assist(1).txt +5 -0
- Active_Assist.txt +5 -0
- Active_Directory_single_sign-on.txt +5 -0
- Active_Directory_user_account_provisioning.txt +5 -0
- Align_spending_with_business_value.txt +5 -0
- AlloyDB_for_PostgreSQL.txt +5 -0
- Analyst_reports.txt +5 -0
- Analytics_Hub.txt +5 -0
- Analytics_hybrid_and_multicloud_patterns.txt +5 -0
- Analytics_lakehouse.txt +5 -0
- Analyze_FHIR_data_in_BigQuery.txt +5 -0
- Anti_Money_Laundering_AI.txt +5 -0
- Apigee_API_Management(1).txt +5 -0
- Apigee_API_Management(2).txt +5 -0
- Apigee_API_Management.txt +5 -0
- AppSheet_Automation.txt +5 -0
- App_Engine(1).txt +5 -0
- App_Engine.txt +5 -0
- Application_Integration.txt +5 -0
- Application_Migration(1).txt +5 -0
- Application_Migration.txt +5 -0
- Application_Modernization.txt +5 -0
- Architect_for_Multicloud.txt +5 -0
- Architect_your_workloads.txt +5 -0
- Architecting_for_cloud_infrastructure_outages.txt +0 -0
- Architecting_for_locality-restricted_workloads.txt +5 -0
- Architectural_approaches.txt +5 -0
- Architectural_approaches_to_adopt_a_hybrid_or_multicloud_architecture.txt +5 -0
- Architecture.txt +5 -0
- Architecture_and_functions_in_a_data_mesh.txt +5 -0
- Architecture_decision_records_overview.txt +5 -0
- Architecture_for_connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt +5 -0
- Architecture_patterns.txt +5 -0
- Architecture_using_Cloud_Functions.txt +5 -0
- Architecture_using_Cloud_Run.txt +5 -0
- Artifact_Registry(1).txt +5 -0
- Artifact_Registry.txt +5 -0
- Artificial_Intelligence.txt +5 -0
- Assess_and_discover_your_workloads.txt +5 -0
- Assess_existing_user_accounts.txt +5 -0
- Assess_onboarding_plans.txt +5 -0
- Assess_reliability_requirements.txt +5 -0
- Assess_the_impact_of_user_account_consolidation_on_federation.txt +5 -0
-_Architecture_Center_home.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://cloud.google.com/architecture
+Date Scraped: 2025-02-23T11:42:26.739Z
+
+Content:
Home Stay organized with collections Save and categorize content based on your preferences. Cloud Architecture Center Discover reference architectures, design guidance, and best practices for building, migrating, and managing your cloud workloads. See what's new! construction Operational excellence security Security, privacy, compliance restore Reliability payment Cost optimization speed Performance optimization Explore the Architecture Framework Deployment archetypes Learn about the basic archetypes for building cloud architectures, and the use cases and design considerations for each archetype: zonal, regional, multi-regional, global, hybrid, and multicloud. Infrastructure reliability guide Design and build infrastructure to run your workloads reliably in the cloud. Landing zone design Design and build a landing zone that includes identity onboarding, resource hierarchy, network design, and security controls. Enterprise foundations blueprint Design and build a foundation that enables consistent governance, security controls, scale, visibility, and access. forward_circle Jump Start Solution guides 14 Learn and experiment with pre-built solution templates book Design guides 97 Build architectures using recommended patterns and practices account_tree Reference architectures 83 Deploy or adapt cloud topologies to meet your specific needs AI and machine learning 26 account_tree Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search book Build and deploy generative AI and machine learning models in an enterprise forward_circle Generative AI RAG with Cloud SQL More arrow_forward Application development 72 book Microservices overview account_tree Patterns for scalable and resilient apps More arrow_forward Databases 13 book Multi-cloud database management More arrow_forward Hybrid and multicloud 33 book Hybrid and multicloud overview account_tree Patterns for connecting other cloud service providers with Google Cloud account_tree Authenticate workforce users in a hybrid environment More arrow_forward Migration 26 book Migrate to Google Cloud book Database migration More arrow_forward Monitoring and logging 14 account_tree Stream logs from Google Cloud to Splunk More arrow_forward Networking 32 book Best practices and reference architectures for VPC design account_tree Hub-and-spoke network architecture account_tree Build internet connectivity for private VMs More arrow_forward Reliability and DR 11 book Google Cloud infrastructure reliability guide book Disaster recovery planning guide forward_circle Load balanced managed VMs More arrow_forward Security and IAM 42 book Identity and access management overview book Enterprise foundations blueprint account_tree Automate malware scanning for files uploaded to Cloud Storage More arrow_forward Storage 14 book Design an optimal storage strategy for your cloud workload More arrow_forward stars Google Cloud certification Demonstrate your expertise and validate your ability to transform businesses with Google Cloud technology. verified_user Google Cloud security best practices center Explore these best practices for meeting your security and compliance objectives as you deploy workloads on Google Cloud. Google Cloud Migration Center Accelerate your end-to-end migration journey from your current on-premises environment to Google Cloud. Google Cloud partner advantage Connect with a Google Cloud partner who can help you with your architecture needs. Send feedback
AI-ML_image_processing_on_Cloud_Functions.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://cloud.google.com/architecture/ai-ml/image-processing-cloud-functions
+Date Scraped: 2025-02-23T11:46:27.561Z
+
+Content:
Home Docs Cloud Architecture Center Send feedback Jump Start Solution: AI/ML image processing on Cloud Functions Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-29 UTC This guide helps you understand, deploy, and use the AI/ML image processing on Cloud Functions Jump Start Solution. This solution uses pre-trained machine learning models to analyze images provided by users and generate image annotations. Deploying this solution creates an image processing service that can help you do the following, and more: Handle unsafe or harmful user-generated content. Digitize text from physical documents. Detect and classify objects in images. This document is intended for developers who have some familiarity with backend service development, the capabilities of AI/ML, and basic cloud computing concepts. Though not required, Terraform experience is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives Learn how a serverless architecture is used to create a scalable image processing service. Understand how the image processing service uses pre-trained machine learning models for image analysis. Deploy the image processing service and invoke it through REST API calls or in response to image upload events. Review configuration and security settings to understand how to adapt the image processing service to different needs. Products used The solution uses the following Google Cloud products: Cloud Vision API: An API offering powerful pre-trained machine learning models for image annotation. The solution uses the Cloud Vision API to analyze images and obtain image annotation data. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. The solution uses Cloud Storage to store input images and resulting image annotation data. Cloud Run functions: A lightweight serverless compute service that lets you create single-purpose, standalone functions that can respond to Google Cloud events without the need to manage a server or runtime environment. The solution uses Cloud Run functions to host the image processing service's endpoints. For information about how these products are configured and how they interact, see the next section. Architecture The solution consists of an example image processing service that analyzes input images and generates annotations for the images using pre-trained machine learning models. The following diagram shows the architecture of the Google Cloud resources used in the solution. The service can be invoked in two ways: directly through REST API calls or indirectly in response to image uploads. Request flow The request processing flow of the image processing service depends on how users invoke the service. The following steps are numbered as shown in the preceding architecture diagram. When the user invokes the image processing service directly through a REST API call: The user makes a request to the image processing service's REST API endpoint, deployed as a Cloud Run function. The request specifies an image as a URI or a base64 encoded stream. 
The Cloud Run function makes a call to the Cloud Vision API to generate annotations for the specified image. The image annotation data is returned in JSON format in the function's response to the user. When the user invokes the image processing service indirectly in response to image uploads: The user uploads images to a Cloud Storage bucket for input. Each image upload generates a Cloud Storage event that triggers a Cloud Run function to process the uploaded image. The Cloud Run function makes a call to the Cloud Vision API to generate annotations for the specified image. The Cloud Run function writes the image annotation data as a JSON file in another Cloud Storage bucket for output. Cost For an estimate of the cost of the Google Cloud resources that the AI/ML image processing on Cloud Functions solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The amount of data stored in Cloud Storage. The number of times the image processing service is invoked. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). 
This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/serviceusage.serviceUsageAdmin roles/iam.serviceAccountAdmin roles/resourcemanager.projectIamAdmin roles/cloudfunctions.admin roles/run.admin roles/storage.admin roles/pubsublite.admin roles/iam.securityAdmin roles/logging.admin roles/artifactregistry.reader roles/cloudbuild.builds.editor roles/compute.admin roles/iam.serviceAccountUser Deploy the solution This section guides you through the process of deploying the solution. To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the AI/ML image processing on Cloud Functions solution. Go to the AI/ML image processing on Cloud Functions solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. 
To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Next, to try the solution out yourself, see Explore the solution. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. 
# ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! The Terraform output also includes the image processing service's entry point URL, the name of the input Cloud Storage bucket for uploading images, and the name of the output Cloud Storage bucket that contains image annotation data, as shown in the following example output: vision_annotations_gcs = "gs://vision-annotations-1234567890" vision_input_gcs = "gs://vision-input-1234567890" vision_prediction_url = [ "https://annotate-http-abcde1wxyz-wn.a.run.app", "ingressIndex:0", "ingressValue:ALLOW_ALL", "isAuthenticated:false", ] To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Next, you can explore the solution and see how it works. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Explore the solution In this section, you can try using the solution to see it in action. The image processing service can be invoked in two ways: by calling its REST API directly or by uploading images to the input Cloud Storage bucket. Note: The image processing service might respond with errors if you're making a very large volume of requests, due to product usage quotas and limits. 
See Cloud Run functions quotas and Cloud Vision API quotas and limits for details. Invoke the service through the REST API In scenarios where you want to process images synchronously in a request-response flow, use the image processing service's REST API. The annotate-http function deployed by the solution is the entry point to the image processing service's REST API. You can find the URL of this function in the console, or if you deployed by using the Terraform CLI, in the output variable vision_prediction_url. This entry point URL exposes an endpoint named /annotate for making image processing requests. The /annotate endpoint supports GET and POST requests with the following parameters: Parameter Description image (POST requests only) Image content, uploaded in binary format or specified as base64-encoded image data. image_uri A URI pointing to an image. features (Optional) A comma-separated list of Vision API annotation features to request. Possible feature values are: CROP_HINTS DOCUMENT_TEXT_DETECTION FACE_DETECTION IMAGE_PROPERTIES LABEL_DETECTION LANDMARK_DETECTION LOGO_DETECTION OBJECT_LOCALIZATION PRODUCT_SEARCH SAFE_SEARCH_DETECTION TEXT_DETECTION WEB_DETECTION To specify the image to be analyzed, only include one of the image or image_uri parameters. If you specify both, image_uri is used. For example, to perform object detection on an image with an internet URI, you can send a GET request such as the following using curl: curl "YOUR_ENTRYPOINT_URL/annotate?features=OBJECT_LOCALIZATION&image_uri=YOUR_IMAGE_URI" Alternatively, to specify image content directly using a local image file, you can use a POST request such as the following: curl -X POST -F image=@YOUR_IMAGE_FILENAME -F features=OBJECT_LOCALIZATION "YOUR_ENTRYPOINT_URL/annotate" The response contains the image annotations from the Vision API in JSON format. Invoke the service by uploading images to Cloud Storage In scenarios where you want to process images asynchronously or by batch upload, use the image processing service's Cloud Storage trigger, which automatically invokes the service in response to image uploads. Follow the steps to analyze images using the Cloud Storage trigger: In the console, go to the Cloud Storage Buckets page. Go to Cloud Storage Click the name of your input bucket (vision-input-ID) to go to its Bucket details page. In the Objects tab, click Upload files. Select the image file or files you want to analyze. After the upload is complete, go back to the Cloud Storage Buckets page. Go to Cloud Storage Click the name of your annotation output bucket (vision-annotations-ID) to go to its Bucket details page. The Objects tab contains a separate JSON file for each image you uploaded. The JSON files contain the annotation data for each image. Note: If a corresponding JSON file doesn't appear in the annotation output bucket, wait a moment for image processing to complete and refresh the page. Customize the solution This section provides information that Terraform developers can use to modify the AI/ML image processing on Cloud Functions solution in order to meet their own technical and business requirements. The guidance in this section is relevant only if you deploy the solution by using the Terraform CLI. Note: Changing the Terraform code for this solution requires familiarity with the Terraform configuration language. If you modify the Google-provided Terraform configuration, and then experience errors, create issues in GitHub. 
GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. The Terraform configuration for this solution provides the following variables you can use to customize the image processing service: Variable Description Default value region The Google Cloud region in which to deploy the Cloud Run functions and other solution resources. See Cloud Run functions Locations for more information. us-west4 gcf_max_instance_count The maximum number of Cloud Run functions instances for the service. This helps control the service's scaling behavior. See Using maximum instances for more information. 10 gcf_timeout_seconds The timeout for requests to the service, in seconds. This controls how long the service can take to respond. See Function timeout for more information. 120 gcf_http_ingress_type_index Controls whether the service can be invoked by resources outside of your Google Cloud project. See Ingress settings for more information. Possible values are: 0 (Allow all) 1 (Allow internal only) 2 (Allow internal and Cloud Load Balancing) 0 (Allow all) gcf_require_http_authentication Controls whether authentication is required to make a request to the service. See Authenticating for invocation for more information. false gcf_annotation_features A comma-separated list of Vision API annotation features for the service to include by default. This can be overridden for individual requests. Possible feature values are: CROP_HINTS DOCUMENT_TEXT_DETECTION FACE_DETECTION IMAGE_PROPERTIES LABEL_DETECTION LANDMARK_DETECTION LOGO_DETECTION OBJECT_LOCALIZATION PRODUCT_SEARCH SAFE_SEARCH_DETECTION TEXT_DETECTION WEB_DETECTION FACE_DETECTION,PRODUCT_SEARCH,SAFE_SEARCH_DETECTION To customize the solution, complete the following steps in Cloud Shell: Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Open your terraform.tfvars file and make the required changes, specifying appropriate values for the variables listed in the previous table. Note: For guidance about the effects of such customization on reliability, security, performance, cost, and operations, see Design recommendations. Validate and review the Terraform configuration. Provision the resources. Design recommendations As you make changes to the solution either by changing the values of the provided Terraform variables or modifying the Terraform configuration itself, refer to the resources in this section to help you develop an architecture that meets your requirements for security, reliability, cost, and performance. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Security By default, the image processing service allows requests from the internet and does not require authentication for requests. 
In a production environment, you might want to restrict access to the service. You can control where requests to your service are allowed to originate by modifying the gcf_http_ingress_type_index Terraform variable. Take caution against unintentionally making the solution's service endpoints publicly accessible on the internet. See Configuring network settings in the Cloud Run functions documentation for more information. You can require authentication for requests to the image processing service's REST API by modifying the gcf_require_http_authentication Terraform variable. This helps to control individual access to the service. If you require authentication, then callers of the service must provide credentials to make a request. See Authenticating for invocation in the Cloud Run functions documentation for more information. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework. Reliability When users upload images to the input Cloud Storage bucket, they might experience varying levels of latency in the resulting annotation output. By default, users must poll the output bucket to determine when the annotations are available. To make your application reliably take action as soon as image processing is complete, you can subscribe to Cloud Storage events in the output bucket. For example, you might deploy another Cloud Run function to process the annotation data - see Cloud Storage triggers in the Cloud Run functions documentation for more information. For more recommendations, refer to the following guides to help optimize the reliability of the products used in this solution: Cloud Run functions tips and tricks Best practices for Cloud Storage For reliability principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Reliability in the Architecture Framework. Performance The throughput of the image processing service is directly affected by the Cloud Run functions scaling ability. Cloud Run functions scales automatically by creating function instances to handle the incoming traffic load, up to a configurable instance limit. You can control the scaling of the functions, and in turn the image processing service's throughput, by changing the maximum instance limit or removing the limit altogether. Use the gcf_max_instance_count Terraform variable to change the limit. See Using maximum instances and Auto-scaling behavior in the Cloud Run functions documentation for more information. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. Cost For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. 
Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. 
Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. If an API not enabled error persists, follow the link in the error message to enable the API. Wait a few moments for the API to become enabled and then run the terraform apply command again. Cannot assign requested address When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. 
This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! 
Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Learn more about serverless computing on Google Cloud. Learn more about machine learning for image analysis on Google Cloud. Learn more about event-driven architectures. Understand the capabilities and limits of products used in this solution: Cloud Vision API documentation Cloud Storage documentation Cloud Run functions documentation For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Send feedback
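Illustrative example (not part of the scraped guide above): the /annotate endpoint described in the image processing guide can also be called programmatically instead of with curl. The following Python sketch assumes the requests library is installed and that the service was deployed with unauthenticated access (the default gcf_require_http_authentication = false); the entry point URL, image URI, and local file name are hypothetical placeholders to replace with your own values, such as the vision_prediction_url Terraform output.

import requests

ENTRYPOINT_URL = "https://annotate-http-abcde1wxyz-wn.a.run.app"  # placeholder; use your vision_prediction_url

# GET: annotate an image that is reachable through a URI.
resp = requests.get(
    ENTRYPOINT_URL + "/annotate",
    params={
        "features": "OBJECT_LOCALIZATION,LABEL_DETECTION",
        "image_uri": "https://example.com/sample.jpg",  # placeholder image URI
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # image annotations in JSON format

# POST: annotate a local image file, mirroring the guide's curl -F image=@FILE example.
with open("local-image.jpg", "rb") as image_file:  # placeholder file name
    resp = requests.post(
        ENTRYPOINT_URL + "/annotate",
        files={"image": image_file},
        data={"features": "SAFE_SEARCH_DETECTION"},
        timeout=120,
    )
resp.raise_for_status()
print(resp.json())

The GET call mirrors the guide's query-string example, and the POST call mirrors its multipart curl example.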
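Illustrative example for the Cloud Storage invocation path described in the guide: a small Python sketch that uploads an image to the input bucket and waits for the annotation JSON to appear in the output bucket. It assumes the google-cloud-storage client library and application default credentials; the bucket names are placeholders modeled on the example Terraform output, and the output object name IMAGE_NAME.json is an assumption, so check the actual object names in your annotation bucket.

import json
import time

from google.cloud import storage  # pip install google-cloud-storage

INPUT_BUCKET = "vision-input-1234567890"         # placeholder (vision_input_gcs)
OUTPUT_BUCKET = "vision-annotations-1234567890"  # placeholder (vision_annotations_gcs)
IMAGE_NAME = "sample.jpg"                        # placeholder local image

client = storage.Client()

# Upload the image; the Cloud Storage event triggers the annotation function.
client.bucket(INPUT_BUCKET).blob(IMAGE_NAME).upload_from_filename(IMAGE_NAME)

# Poll the output bucket until the annotation JSON appears.
# Assumption: the output object is named <IMAGE_NAME>.json.
result = client.bucket(OUTPUT_BUCKET).blob(IMAGE_NAME + ".json")
while not result.exists():
    time.sleep(5)

annotations = json.loads(result.download_as_text())
print(annotations)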
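Illustrative sketch of the guide's Reliability recommendation (subscribe to Cloud Storage events in the output bucket instead of polling): a hypothetical second Cloud Run function, written with the Functions Framework for Python, that runs when an annotation JSON object is finalized in the output bucket. This function is not part of the solution's Terraform configuration; how you deploy it and wire the object-finalized trigger is up to you.

import json

import functions_framework
from cloudevents.http import CloudEvent
from google.cloud import storage


@functions_framework.cloud_event
def on_annotation_written(event: CloudEvent) -> None:
    # Cloud Storage object-finalized events carry the bucket and object name.
    bucket_name = event.data["bucket"]
    object_name = event.data["name"]

    blob = storage.Client().bucket(bucket_name).blob(object_name)
    annotations = json.loads(blob.download_as_text())

    # Placeholder downstream action: log which annotation features were returned.
    print("Annotations ready for %s: %s" % (object_name, sorted(annotations)))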
AI_and_ML.txt
ADDED
@@ -0,0 +1,5 @@
+URL: https://cloud.google.com/ai
+Date Scraped: 2025-02-23T11:57:10.775Z
+
+Content:
Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceGenerative AI on Google CloudBring generative AI to real-world experiences quickly, efficiently, and responsibly, powered by Google’s most advanced technology and models including Gemini. Plus, new customers can start their AI journey today with $300 in free credits.Try it Contact salesProducts and servicesExplore tools from Google Cloud that make it easier for developers to build with generative AI and new AI-powered experiences across our cloud portfolio. For more information, view all our AI products.Build applications and experiences powered by generative AIWith Vertex AI, you can interact with, customize, and embed foundation models into your applications. Access foundation models on Model Garden, tune models via a simple UI on Vertex AI Studio, or use models directly in a data science notebook.Plus, with Vertex AI Agent Builder developers can build and deploy AI agents grounded in their data.Your ultimate guide to the latest in gen AI on Vertex AIRead the blogGoogle named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Read the reportCustomize and deploy Gemini models to production in Vertex AI Gemini, a multimodal model from Google DeepMind, is capable of understanding virtually any input, combining different types of information, and generating almost any output. Prompt and test Gemini in Vertex AI using text, images, video, or code. With Gemini’s advanced reasoning and generation capabilities, developers can try sample prompts for extracting text from images, converting image text to JSON, and even generate answers about uploaded images.Vertex AI Gemini API QuickstartDocumentationKickstart your AI journey with our 10-step planDownload the guideNew generation of AI assistants for developers, Google Cloud services, and applicationsGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity, quality, and security in popular code editors like VS Code and JetBrains, and on developer platforms like Firebase. Built with robust enterprise features, it enables organizations to adopt AI assistance at scale while meeting security, privacy, and compliance requirements. Additional Gemini for Google Cloud offerings assist users in working and coding more effectively, gaining deeper data insights, navigating security challenges, and more. Get to know Gemini Code Assist EnterpriseWatch nowPowering Google Cloud with GeminiRead the blogExplore Google Cloud's open and innovative AI partner ecosystemThe emergence of generative AI has the potential to transform entire businesses and entire industries. At Google Cloud, we believe the future of AI will be open. Our broad ecosystem of partners provides you choice while maximizing opportunities for innovation.Explore our partner ecosystemFind a partnerHelping businesses with generative AIRead the blogTurn your gen AI ideas into reality with the Google for Startups Cloud ProgramUnlock up to $200,000 USD (up to $350,000 USD for AI startups) in Google Cloud and Firebase credits and supercharge your innovation with cutting-edge cloud infrastructure and generative AI tools. 
Get access to expert guidance, relevant resources, and powerful technologies like Vertex AI to accelerate your development journey.Apply now and start building with the Google for Startups Cloud Program today.Google Cloud Startup SummitRegister to watch on demandUnlock your startup’s gen AI potential with the Startup Learning CenterGet started Vertex AIBuild applications and experiences powered by generative AIWith Vertex AI, you can interact with, customize, and embed foundation models into your applications. Access foundation models on Model Garden, tune models via a simple UI on Vertex AI Studio, or use models directly in a data science notebook.Plus, with Vertex AI Agent Builder developers can build and deploy AI agents grounded in their data.Your ultimate guide to the latest in gen AI on Vertex AIRead the blogGoogle named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Read the reportGemini modelsCustomize and deploy Gemini models to production in Vertex AI Gemini, a multimodal model from Google DeepMind, is capable of understanding virtually any input, combining different types of information, and generating almost any output. Prompt and test Gemini in Vertex AI using text, images, video, or code. With Gemini’s advanced reasoning and generation capabilities, developers can try sample prompts for extracting text from images, converting image text to JSON, and even generate answers about uploaded images.Vertex AI Gemini API QuickstartDocumentationKickstart your AI journey with our 10-step planDownload the guideGemini for Google Cloud New generation of AI assistants for developers, Google Cloud services, and applicationsGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity, quality, and security in popular code editors like VS Code and JetBrains, and on developer platforms like Firebase. Built with robust enterprise features, it enables organizations to adopt AI assistance at scale while meeting security, privacy, and compliance requirements. Additional Gemini for Google Cloud offerings assist users in working and coding more effectively, gaining deeper data insights, navigating security challenges, and more. Get to know Gemini Code Assist EnterpriseWatch nowPowering Google Cloud with GeminiRead the blogGenerative AI partnersExplore Google Cloud's open and innovative AI partner ecosystemThe emergence of generative AI has the potential to transform entire businesses and entire industries. At Google Cloud, we believe the future of AI will be open. Our broad ecosystem of partners provides you choice while maximizing opportunities for innovation.Explore our partner ecosystemFind a partnerHelping businesses with generative AIRead the blogAI for Startups Turn your gen AI ideas into reality with the Google for Startups Cloud ProgramUnlock up to $200,000 USD (up to $350,000 USD for AI startups) in Google Cloud and Firebase credits and supercharge your innovation with cutting-edge cloud infrastructure and generative AI tools. 
Get access to expert guidance, relevant resources, and powerful technologies like Vertex AI to accelerate your development journey.Apply now and start building with the Google for Startups Cloud Program today.Google Cloud Startup SummitRegister to watch on demandUnlock your startup’s gen AI potential with the Startup Learning CenterGet startedsparkLooking to build a solution?I want to build a solution where users can upload documents on a case and the system generates a summary of the caseI want to support customer service agents with an AI that can get answers from external and internal documentsI want to automatically generate social media tags and metadata for marketing videosMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionSummarize documentsprompt_suggestionSupport agents with AIprompt_suggestionGenerate metadata and tagsCustomer success storiesFOX Sports uses Vertex AI to store, organize, and surface video highlights for sports broadcastsWatch the video 0:30Wendy’s reimagined drive-thru takes and displays custom orders with help from generative AIWatch the video (1:29)GE Appliances uses Google Cloud AI to craft recipes from what’s already inside the fridgeWatch the video (1:34)UKG and Google Cloud Announce Partnership to Transform Employee Experiences with Generative AIUKG is bringing generative AI to their HCM apps using Vertex AI and proprietary data.3-min readGitLab and Google Cloud Partner to Expand AI-Assisted Capabilities with Customizable Gen AI Foundation ModelsGitLab employs Vertex AI to power new vulnerability detection feature.3-min readMidjourney Selects Google Cloud to Power AI-Generated Creative PlatformMidjourney provides users with a seamless creative experience with Google Cloud's TPUs and GPUs. 3-min readSnorkel AI Teams with Google Cloud to speed AI deployment with Vertex AISnorkel AI and Vertex AI are equipping enterprises to solve their most critical challenges.4-min readAnthropic Forges Partnership With Google Cloud to Help Deliver Reliable and Responsible AIAnthropic uses Google Cloud infrastructure to train their LLMs quickly and sustainably.3-min readView MoreBusiness use cases Learn how generative AI can transform customer service, enhance employee productivity, automate business processes, and more. 
Build a single knowledge base from disparate datasets Video (3:58)Transform chatbots to full-on customer service assistants Video (2:48)Find and summarize complex information in moments Video (2:37)Generate creative content for multi-channel marketing Video (2:40)Simplify how products are onboarded, categorized, labeled for search, and marketed Video (2:12)Automated data collection and documentation processes Video (3:45)Machine-generated events monitoring to predict upcoming maintenance Video (3:08)Accelerate product innovation through data-led insightsVideo (3:36)Build virtual stylists that help consumers find what they need Video (3:29)Get care-related information in real time to deliver a better patient experience Video (3:52)Control operating costs and improve content performance Video (2:47)Summarize, transcribe, and index recordings to include more voices in the discussionVideo (2:29)Improve carbon performance with advanced data analysis and communication toolsVideo (2:52)Detect and resolve concerns faster for improved customer experience Video (3:39)Consistent voice experience across devices Video (3:31)Deliver personalized content recommendations including music, video, and blogsWatch how media companies can provide personalized content discovery Video (1:47)Improve developer efficiency with code assistance Video (2:30)Accelerate generative AI-driven transformation with databasesView MoreDeveloper resourcesIntroduction to generative AI No cost introductory learning course 45 minutesTips to becoming a world-class Prompt EngineerVideo (1:53)Learn how generative AI fits into the entire software development lifecycle5-min readFind out how to build a serverless application that uses generative AIVideo (8:53)Domain-specific AI apps: A three-step design pattern for specializing LLMs5-min readLearn how to enrich product data with generative AI using Vertex AI5-min readCode samples to get started building generative AI apps on Google Cloud3-min readContext-aware code generation: Retrieval augmentation and Vertex AI Codey APIs6-min readHow to build a gen AI application: Design principles and design patterns5-min readUnlock gen AI’s full potential with operational databasesView MoreExecutive resourcesLearn how nonprofits are tackling climate action, education, and crisis response with AIDownload the guideJoin Google experts for a deep dive into how companies are putting AI to workListen to the podcastUnlocking gen AI success: Five pitfalls every executive should knowRead the blogView MoreConsulting servicesCreate with Generative AITransform your creative process. Boost productivity by automatically generating writing and art. Learn moreDiscover with Generative AIBuild AI-enhanced search engines or assistive experiences to enhance your customer experience.Learn moreSummarize with Generative AITake long-form chats, emails, or reports and distill them to their core for quick comprehension.Learn moreAutomate with Generative AITransform from time-consuming, expensive processes to efficient ones.Learn moreWe believe GenAI can be a tremendously powerful tool that changes how people go about analyzing information and insights at work. Our collaboration with Google Cloud will help employees and leaders make better decisions, have more productive conversations, and anticipate how today's choices can impact tomorrow's operations and workplace culture overall.Hugo Sarrazin, Chief Product and Technology Officer at UKGStart your AI journey today Try Google Cloud AI and machine learning products. 
New customers can get started with up to $300 in free credits.Try it in consoleContact salesJoin our technical community to build your AI skillsGoogle Cloud Innovators Read our latest AI announcementsLearn moreGet updates with the Google Cloud newsletterSubscribe
AI_and_Machine_Learning.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/products/ai
2 + Date Scraped: 2025-02-23T12:01:56.855Z
3 +
4 + Content:
5 +
AI and machine learning productsTry Gemini 2.0 models, the latest and most advanced multimodal models in Vertex AI. See what you can build with up to a 2M token context window, starting as low as $0.0001.Try it in consoleContact salesSummarize large documents with generative AIDeploy a preconfigured solution that uses generative AI to quickly extract text and summarize large documents.Deploy an AI/ML image processing pipelineLaunch a preconfigured, interactive solution that uses pre-trained machine learning models to analyze images and generate image annotations.Create a chat app using retrieval-augmented generation (RAG)Deploy a preconfigured solution with a chat-based experience that provides questions and answers based on embeddings stored as vectors.sparkLooking to build a solution?I want to support customer service agents with an AI that can get answers from external and internal documentsI want to build a solution where users can upload documents on a case and it gives them a summary of their case, and makes arguments based on the data.I want to automatically flag videos with inappropriate content My use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionSupport agents with AIprompt_suggestionSummarize documentsprompt_suggestionAnalyze videosProducts, solutions, and servicesCategoryProducts and solutions Good forGenerative AIVertex AI StudioA Vertex AI tool for rapidly prototyping and testing generative AI models. Test sample prompts, design your own prompts, and customize foundation models and LLMs to handle tasks that meet your application's needs.Prompt design and tuning with an easy-to-use interface Code completion and generation with CodeyGenerating and customizing images with ImagenUniversal speech modelsVertex AI Agent BuilderCreate a range of generative AI agents and applications grounded in your organization’s data. Vertex AI Agent Builder provides the convenience of a no code agent building console alongside powerful grounding, orchestration and customization capabilities.Building multimodal conversational AI agents Building a Google-quality search experience on your own dataEnjoy powerful orchestration, grounding and customization tools Generative AI Document SummarizationThe one-click solution establishes a pipeline that extracts text from PDFs, creates a summary from the extracted text with Vertex AI Generative AI Studio, and stores the searchable summary in a BigQuery database.Process and summarize large documents using Vertex AI LLMsDeploy an application that orchestrates the documentation summarization processTrigger the pipeline with a PDF upload and view a generated summaryMachine learning and MLOPsVertex AI Platform A single platform for data scientists and engineers to create, train, test, monitor, tune, and deploy ML and AI models. Choose from over 150 models in Vertex's Model Garden, including Gemini and open source models like Stable Diffusion, BERT, T-5. Custom ML trainingTraining models with minimal ML expertiseTesting, monitoring, and tuning ML models Deploying 150+ models, including multimodal and foundation models like GeminiVertex AI NotebooksChoose from Colab Enterprise or Vertex AI Workbench. Access every capability in Vertex AI Platform to work across the entire data science workflow—from data exploration to prototype to production. 
Data scientist workflowsRapid prototyping and model developmentDeveloping and deploying AI solutions on Vertex AI with minimal transitionAutoMLTrain high-quality custom machine learning models with minimal effort and machine learning expertise.Building custom machine learning models in minutes with minimal expertiseTraining models specific to your business needsSpeech, text, and language APIsNatural Language AI Derive insights from unstructured text using Google machine learning.Applying natural language understanding to apps with the Natural Language APITraining your open ML models to classify, extract, and detect sentimentSpeech-to-TextAccurately convert speech into text using an API powered by Google's AI technologies.Automatic speech recognitionReal-time transcriptionEnhanced phone call models in Google Contact Center AIText-to-SpeechConvert text into natural-sounding speech using a Google AI powered API. Improving customer interactions Voice user interface in devices and applicationsPersonalized communication Translation AIMake your content and apps multilingual with fast, dynamic machine translation.Real-time translationCompelling localization of your contentInternationalizing your productsImage and video APIsVision AIDerive insights from your images in the cloud or at the edge with AutoML Vision or use pre-trained Vision API models to detect objects, understand text, and more.Accurately predicting and understanding images with MLTraining ML models to classify images by custom labels using AutoML VisionVideo AIEnable powerful content discovery and engaging video experiences.Extracting rich metadata at the video, shot, or frame levelCustom entity labels with AutoML Video IntelligenceDocument and data APIsDocument AIDocument AI includes pre-trained models for data extraction, Document AI Workbench to create new custom models or uptrain existing ones, and Document AI Warehouse to search and store documents. Extracting, classifying, and splitting data from documents Reducing manual document processing and minimizing setup costsGaining insights from document dataAI assistance and conversational AIConversational Agents (Dialogflow)Conversational AI platform with both intent-based and generative AI LLM capabilities for building natural, rich conversational experiences into mobile and web applications, smart devices, bots, interactive voice response systems, popular messaging platforms, and more. Features a visual builder to create, build, and manage virtual agents. Natural interactions for complex multi-turn conversationsBuilding and deploying advanced agents quicklyEnterprise-grade scalabilityBuilding a chatbot based on a website or collection of documentsCustomer Engagement Suite with Google AIDelight customers with an end-to-end application that combines our most advanced conversational AI, with multimodal and omnichannel functionality to deliver exceptional customer experiences at every touchpoint.Creating advanced virtual agents in minutes that smoothly switch between topicsReal-time, step-by-step assistance for human agentsMultichannel communications between customers and agentsGemini Code Assist Gemini Code Assist offers code recommendations in real time, suggests full function and code blocks, and identifies vulnerabilities and errors in the code—while suggesting fixes. Assistance can be accessed via a chat interface, Cloud Shell Editor, or Cloud Code IDE extensions for VSCode and JetBrains IDEs. 
Code assistance for Go, Java, JavaScript, Python, and SQLSQL completions, query generation, and summarization using natural language Suggestions to structure, modify, or query your data during database migrationIdentify and troubleshoot errors using natural languageAI InfrastructureTPUs, GPUs, and CPUsHardware for every type of AI workload from our partners, like NVIDIA, Intel, AMD, Arm, and more, we provide customers with the widest range of AI-optimized compute options across TPUs, GPUs, and CPUs for training and serving the most data-intensive models. AI Accelerators for every use case from high performance training to inferenceAccelerating specific workloads on your VMsSpeeding up compute jobs like machine learning and HPCGoogle Kubernetes EngineWith one platform for all workloads, GKE offers a consistent and robust development process. As a foundation platform, it provides unmatched scalability, compatibility with a diverse set of hardware accelerators allowing customers to achieve superior price performance for their training and inference workloads.Building with industry-leading support for 15,000 nodes in a single clusterChoice of diverse hardware accelerators for training and inferenceGKE Autopilot reduces the burden of Day 2 operationsRapid node start-up, image streaming, integration with GCSFuseConsulting serviceAI Readiness ProgramOur AI Readiness Program is a 2-3 week engagement designed to accelerate value realization from your AI efforts. Our experts will work with you to understand your business objectives, benchmark your AI capabilities, and provide tailored recommendations for your needs.AI value benchmarking and capability assessmentReadout and recommendationsAI planning and roadmapping Products, solutions, and servicesGenerative AIVertex AI StudioA Vertex AI tool for rapidly prototyping and testing generative AI models. Test sample prompts, design your own prompts, and customize foundation models and LLMs to handle tasks that meet your application's needs.Prompt design and tuning with an easy-to-use interface Code completion and generation with CodeyGenerating and customizing images with ImagenUniversal speech modelsMachine learning and MLOPsVertex AI Platform A single platform for data scientists and engineers to create, train, test, monitor, tune, and deploy ML and AI models. Choose from over 150 models in Vertex's Model Garden, including Gemini and open source models like Stable Diffusion, BERT, T-5. Custom ML trainingTraining models with minimal ML expertiseTesting, monitoring, and tuning ML models Deploying 150+ models, including multimodal and foundation models like GeminiSpeech, text, and language APIsNatural Language AI Derive insights from unstructured text using Google machine learning.Applying natural language understanding to apps with the Natural Language APITraining your open ML models to classify, extract, and detect sentimentImage and video APIsVision AIDerive insights from your images in the cloud or at the edge with AutoML Vision or use pre-trained Vision API models to detect objects, understand text, and more.Accurately predicting and understanding images with MLTraining ML models to classify images by custom labels using AutoML VisionDocument and data APIsDocument AIDocument AI includes pre-trained models for data extraction, Document AI Workbench to create new custom models or uptrain existing ones, and Document AI Warehouse to search and store documents. 
Extracting, classifying, and splitting data from documents Reducing manual document processing and minimizing setup costsGaining insights from document dataAI assistance and conversational AIConversational Agents (Dialogflow)Conversational AI platform with both intent-based and generative AI LLM capabilities for building natural, rich conversational experiences into mobile and web applications, smart devices, bots, interactive voice response systems, popular messaging platforms, and more. Features a visual builder to create, build, and manage virtual agents. Natural interactions for complex multi-turn conversationsBuilding and deploying advanced agents quicklyEnterprise-grade scalabilityBuilding a chatbot based on a website or collection of documentsAI InfrastructureTPUs, GPUs, and CPUsHardware for every type of AI workload from our partners, like NVIDIA, Intel, AMD, Arm, and more, we provide customers with the widest range of AI-optimized compute options across TPUs, GPUs, and CPUs for training and serving the most data-intensive models. AI Accelerators for every use case from high performance training to inferenceAccelerating specific workloads on your VMsSpeeding up compute jobs like machine learning and HPCConsulting serviceAI Readiness ProgramOur AI Readiness Program is a 2-3 week engagement designed to accelerate value realization from your AI efforts. Our experts will work with you to understand your business objectives, benchmark your AI capabilities, and provide tailored recommendations for your needs.AI value benchmarking and capability assessmentReadout and recommendationsAI planning and roadmapping Ready to start building with AI?Try Google Cloud's AI products and services designed for businesses and professional developers.Get startedExplore our ecosystem of Gemini products to help you get the most out of Google AI.Learn more about GeminiLearn from our customersSee how developers and data scientists are using our tools to leverage the power of AINewsPriceline rolls out new gen AI powered tools to enhance trip planning and improve employee productivity5-min readBlog postOrange utilizes AI to tackle a range of projects from retail recommendations to complex wiring jobs5-min readCase StudyChristus Muguerza developed a model that can predict 77% of acute pain in patients undergoing surgery5-min readVideoWisconsin Department of Workforce Development cleared a backlog of 777,000 claims with the help of Doc AIVideo (3:05)See all customersCloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.Start your AI journey todayTry Google Cloud AI and machine learning products in the console.Go to my console Have a large project?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorials
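The product table above is console-oriented; as a minimal sketch (not taken from the page) of how the same Vertex AI models are reached from code, the Python snippet below sends a summarization prompt to a Gemini model with the Vertex AI SDK. The project ID, region, model name, and document text are placeholder assumptions, and the google-cloud-aiplatform package plus Application Default Credentials are assumed to be set up.

# Minimal sketch: summarize text with a Gemini model on Vertex AI.
# Assumes `pip install google-cloud-aiplatform` and Application Default Credentials.
# PROJECT_ID, LOCATION, the model name, and the document text are illustrative placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT_ID = "your-project-id"   # placeholder
LOCATION = "us-central1"         # placeholder region

vertexai.init(project=PROJECT_ID, location=LOCATION)

model = GenerativeModel("gemini-1.5-flash-002")  # example model name; check Model Garden for current versions
document_text = "..."            # text extracted from an uploaded document

response = model.generate_content(
    f"Summarize the following document in three bullet points:\n\n{document_text}"
)
print(response.text)

The same pattern applies to the document summarization solution described above: extract the text first (for example with Document AI), then pass it to the model as part of the prompt.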
AI_for_Data_Analytics.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/use-cases/ai-data-analytics
2 + Date Scraped: 2025-02-23T11:59:12.320Z
3 +
4 + Content:
5 +
Store data and run query analyses with free usage of BigQuery, up to monthly limitsAI data analyticsWrite SQL, build predictive models, and visualize data with AI data analyticsUse foundational models and chat assistance for predictive analytics, sentiment analysis, and AI-enhanced business intelligence.Go to consoleRequest a demoPage highlightsWhat is AI data analytics?Write queries, SQL, and code with AI chat assistanceBigQuery Studio explainedBigQuery in a minute01:26OverviewWhat is AI data analytics?AI data analytics refers to the practice of using artificial intelligence (AI) to analyze large data sets, simplify and scale trends, and uncover insights for data analysts.How can AI be used with data analytics?AI data analytics is designed to support, automate, and simplify each stage of the data analysis journey. AI tools can help with data collection (ingesting from multiple sources) and preparation (cleaning and organizing for analysis). Machine learning (ML) models can be trained and applied on prepared data to extract insights and patterns. Finally, AI can help analysts interpret trends and insights for more informed decision-making.VIDEOAI tools for data professionals7:30How can data analysts use AI data analytics?Data analysts across industries can use AI data analytics to enhance their work. From real-time credit card fraud detection and assisting in disease diagnosis to demand forecasting in retail and propensity modeling for gaming apps, AI data analytics can assist with all types of industry-specific use cases.Can AI do the work of a data analyst?AI data analytics is designed to enhance the core contributions and skill sets of data analysts. Given their subject matter expertise, critical thinking abilities, and the capacity to pose insightful queries, data analysts are critical to the success of any AI-assisted data analysis.View moreHow It WorksBigQuery Studio provides a single, unified interface for all data practitioners to simplify analytics workflows from data preparation and visualization to ML model creation and training. Using simple SQL, access Vertex AI foundational models and chat assist directly in BigQuery for a variety of data analytics use cases.Get startedLearn how to use AI code assistance in BiqQuery StudioCommon UsesAI-powered predictive analytics and forecastingBuild predictive and forecasting models using SQL and AILeverage your existing SQL skills to build, train, and deploy batch predictive models directly within BigQuery or your chosen data warehouse with BigQuery ML. Plus, BigQuery ML integrates with Vertex AI, our end-to-end platform for AI and ML, broadening your access to powerful models that generate real-time, low-latency online predictions, including identifying new audiences based on current customer lifetime value, recommending personalized investment products, and forecasting demand.Try BigQuery ML3:38How to simplify AI models with Vertex AI and BigQuery MLManage BigQuery ML models in Vertex AI documentationFull list of supported AI resources for BigQueryML-compatible remote modelsPredictive forecasting data analytics design patterns How-tosBuild predictive and forecasting models using SQL and AILeverage your existing SQL skills to build, train, and deploy batch predictive models directly within BigQuery or your chosen data warehouse with BigQuery ML. 
Plus, BigQuery ML integrates with Vertex AI, our end-to-end platform for AI and ML, broadening your access to powerful models that generate real-time, low-latency online predictions, including identifying new audiences based on current customer lifetime value, recommending personalized investment products, and forecasting demand.Try BigQuery ML3:38How to simplify AI models with Vertex AI and BigQuery MLManage BigQuery ML models in Vertex AI documentationFull list of supported AI resources for BigQueryML-compatible remote modelsPredictive forecasting data analytics design patterns Sentiment analysisRun sentiment analysis on datasets using BigQuery MLFrom understanding customer feedback on social media or product reviews to developing market research through competitor analysis and campaign effectiveness, data analysts use sentiment analysis to parse positive, negative, and neutral scores on their datasets. With BigQuery ML, you use SQL to train models to automatically run sentiment analysis and predictions for stronger insights, including customer pain points, product feature enhancements, and more. Try BigQuery MLSentiment analysis with BigQuery ML guide Sentiment analysis tutorial using Natural Language APIBigQuery interactive tutorialHow-tosRun sentiment analysis on datasets using BigQuery MLFrom understanding customer feedback on social media or product reviews to developing market research through competitor analysis and campaign effectiveness, data analysts use sentiment analysis to parse positive, negative, and neutral scores on their datasets. With BigQuery ML, you use SQL to train models to automatically run sentiment analysis and predictions for stronger insights, including customer pain points, product feature enhancements, and more. Try BigQuery MLSentiment analysis with BigQuery ML guide Sentiment analysis tutorial using Natural Language APIBigQuery interactive tutorialImage and video analysisAnalyze unstructured data images and videos with AIEffortlessly analyze images and videos to extract valuable information, streamline processes, and enhance decision-making with Google Cloud AI. To analyze unstructured data in images, use remote functions in BigQuery like Vertex AI Vision or perform inference on unstructured image data with BigQuery ML. For video analysis, Video Description on Vertex AI summarizes short video clip content, providing detailed metadata about videos for storing and searching.Get started6:01Analyzing images, video, and other unstructured data in BigQuery with Vertex AI Analyze an object table using Vertex AI Vision tutorialRun inference on image object tables tutorialSummarize short video clips with Video Description on Vertex AIHow-tosAnalyze unstructured data images and videos with AIEffortlessly analyze images and videos to extract valuable information, streamline processes, and enhance decision-making with Google Cloud AI. To analyze unstructured data in images, use remote functions in BigQuery like Vertex AI Vision or perform inference on unstructured image data with BigQuery ML. 
For video analysis, Video Description on Vertex AI summarizes short video clip content, providing detailed metadata about videos for storing and searching.Get started6:01Analyzing images, video, and other unstructured data in BigQuery with Vertex AI Analyze an object table using Vertex AI Vision tutorialRun inference on image object tables tutorialSummarize short video clips with Video Description on Vertex AIAI assistance for SQL generation and completion Write queries, SQL, and code with Gemini in BigQueryGemini in BigQuery provides AI-powered assistive and collaboration features including help with writing and editing SQL or Python code, visual data preparation, and intelligent recommendations for enhancing productivity and optimizing costs. You can leverage BigQuery’s in-console chat interface to explore tutorials, documentation, and best practices for specific tasks using simple prompts such as: “How can I use BigQuery materialized views?” “How do I ingest JSON data?” and “How can I improve query performance?”Request access to Gemini in BigQuery preview3:42Introduction to Gemini in BigQueryWrite BigQuery queries with Gemini setup guideGenerate data insights in BigQuery with GeminiWrite code in a Colab Enterprise notebook with Gemini setup guideHow-tosWrite queries, SQL, and code with Gemini in BigQueryGemini in BigQuery provides AI-powered assistive and collaboration features including help with writing and editing SQL or Python code, visual data preparation, and intelligent recommendations for enhancing productivity and optimizing costs. You can leverage BigQuery’s in-console chat interface to explore tutorials, documentation, and best practices for specific tasks using simple prompts such as: “How can I use BigQuery materialized views?” “How do I ingest JSON data?” and “How can I improve query performance?”Request access to Gemini in BigQuery preview3:42Introduction to Gemini in BigQueryWrite BigQuery queries with Gemini setup guideGenerate data insights in BigQuery with GeminiWrite code in a Colab Enterprise notebook with Gemini setup guideAI-enhanced data visualizationUse AI chat to derive data insights, generate reports, and visualize trendsGain insights and build data-powered applications with AI-powered business intelligence from Looker. Using Gemini in Looker, chat directly with your data to uncover business opportunities, create entire reports or advanced visualizations, and build formulas for calculated fields—all with only a few sentences of conversational instruction.Get started2:54Learn how to use AI-powered business intelligence with LookerLooker Studio quickstart guideLooker documentationLooker best practices guideHow-tosUse AI chat to derive data insights, generate reports, and visualize trendsGain insights and build data-powered applications with AI-powered business intelligence from Looker. Using Gemini in Looker, chat directly with your data to uncover business opportunities, create entire reports or advanced visualizations, and build formulas for calculated fields—all with only a few sentences of conversational instruction.Get started2:54Learn how to use AI-powered business intelligence with LookerLooker Studio quickstart guideLooker documentationLooker best practices guideNatural language-driven analysis Discover, transform, query, and visualize data using natural languageReimagine your data analysis experience with the AI-powered BigQuery data canvas. This natural language centric tool simplifies the process of finding, querying, and visualizing your data. 
Its intuitive features help you discover data assets quickly, generate SQL queries, automatically visualize results, and seamlessly collaborate with others—all within a unified interface.Request access to Gemini in BigQuery preview6:03Example prompts of a typical BigQuery data canvas workflowSet up Gemini in BigQueryStart your proof of conceptStore data and run query analyses with free usage of BigQuery, up to monthly limitsGet startedLearn more about BigQueryView BigQueryData analytics design patternsView sample codeQuery data—without a credit card—with BigQuery sandboxRun sample queryData analytics technical guidesView docs
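As a rough sketch of the BigQuery ML workflow described above (train a model with SQL, then run batch predictions), the snippet below submits hedged example statements through the BigQuery Python client. The project, dataset, table, and column names are placeholders, and the google-cloud-bigquery package with Application Default Credentials is assumed.

# Minimal sketch: train and use a BigQuery ML model from Python.
# Assumes `pip install google-cloud-bigquery` and Application Default Credentials.
# Project, dataset, table, and column names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")  # placeholder project

# 1. Train a simple classification model with SQL (BigQuery ML).
client.query("""
    CREATE OR REPLACE MODEL `your_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT churned, tenure_months, monthly_spend
    FROM `your_dataset.customers`
""").result()  # wait for the training job to finish

# 2. Run batch predictions with ML.PREDICT; the output adds a predicted_<label> column.
rows = client.query("""
    SELECT customer_id, predicted_churned
    FROM ML.PREDICT(MODEL `your_dataset.churn_model`,
                    (SELECT customer_id, tenure_months, monthly_spend
                     FROM `your_dataset.new_customers`))
""").result()

for row in rows:
    print(row.customer_id, row.predicted_churned)

Because the model lives in BigQuery, the same statements can also be run directly in BigQuery Studio without any client code.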
API_Gateway.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/api-gateway/docs
2 + Date Scraped: 2025-02-23T12:09:45.221Z
3 +
4 + Content:
5 +
Home API Gateway Documentation Stay organized with collections Save and categorize content based on your preferences. API Gateway documentation View all product documentation API Gateway enables you to provide secure access to your backend services through a well-defined REST API that is consistent across all of your services, regardless of the service implementation. Clients consume your REST APIs to implement standalone apps for a mobile device or tablet, through apps running in a browser, or through any other type of app that can make a request to an HTTP endpoint. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart: Secure traffic to a service with the gcloud CLI Quickstart: Secure traffic to a service with the Google Cloud console Choosing an Authentication Method Creating an API Creating an API config Authentication between services Deploying an API to a gateway Configuring the development environment About quotas find_in_page Reference REST API info Resources Pricing Quotas and limits Release notes Getting support Billing questions Related videos
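The quickstarts listed above walk through creating an API, an API config, and a gateway. As a small illustrative sketch (not from this page), the snippet below calls a deployed gateway endpoint from Python. The gateway hostname, route, and the API-key query parameter are assumptions: the hostname depends on your gateway, and whether a key is needed, and under what parameter name, depends on the security definition in your OpenAPI config.

# Minimal sketch: call a REST endpoint exposed through API Gateway.
# Assumes `pip install requests`. The hostname, route, and API-key handling below are
# placeholders that must match your own gateway and the apiKey security definition
# (if any) in your OpenAPI config.
import requests

GATEWAY_HOST = "https://my-gateway-abc123.uc.gateway.dev"  # placeholder gateway hostname
API_KEY = "YOUR_API_KEY"                                   # placeholder

response = requests.get(
    f"{GATEWAY_HOST}/hello",       # placeholder route defined in the API config
    params={"key": API_KEY},       # only if the config requires an API key named "key"
    timeout=10,
)
response.raise_for_status()
print(response.json())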
APIs_and_Applications.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/solutions/apis-and-applications
2 + Date Scraped: 2025-02-23T11:58:50.821Z
3 +
4 + Content:
5 +
Majority of CIOs cite “increasing operational efficiency” as their primary goal for innovation. Learn more.APIs and applicationsAccelerate digital innovation by securely automating processes and easily creating applications without coding by extending your existing data with APIs.Contact usGoogle Cloud APIs and applications solutionsSolutionUnlock legacy applications using APIsExtend and modernize legacy applications alongside new cloud services.Speed up time to marketDeliver dynamic customer experiencesEmpower developers and partnersOpen new business channels using APIsAttract and empower an ecosystem of developers and partners.Drive more adoption and consumption of your APIsGenerate new revenue sourcesMonitor and manage APIs to measure successOpen banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Accelerate open banking complianceGrow an ecosystem of partners and customersPromote internal and external innovationHealthAPIxEasily connect healthcare providers and app developers to build FHIR API-based digital services.Reduce risks during care transitionsDeliver patient-centric digital servicesImprove chronic condition managementSolutionUnlock legacy applications using APIsExtend and modernize legacy applications alongside new cloud services.Speed up time to marketDeliver dynamic customer experiencesEmpower developers and partnersOpen new business channels using APIsAttract and empower an ecosystem of developers and partners.Drive more adoption and consumption of your APIsGenerate new revenue sourcesMonitor and manage APIs to measure successOpen banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Accelerate open banking complianceGrow an ecosystem of partners and customersPromote internal and external innovationHealthAPIxEasily connect healthcare providers and app developers to build FHIR API-based digital services.Reduce risks during care transitionsDeliver patient-centric digital servicesImprove chronic condition managementWant to learn more? Find out how our APIs and applications solutions can help you accelerate digital innovation.Contact usNext OnAir: Powering business applications with APIs, microservices, AI, and no-code app development.Watch video Learn from our customersCase StudySee how Pitney Bowes reduced the time to get products to market from 18 months to five.5-min readCase StudyCitrix connects people with security and speed by proactively monitoring APIs.5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials
Active_Assist(1).txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/solutions/active-assist
2 + Date Scraped: 2025-02-23T12:05:55.860Z
3 +
4 + Content:
5 +
Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Active AssistActive Assist is a portfolio of intelligent tools that helps you optimize your cloud operations with recommendations to reduce costs, increase performance, improve security, and even help you make more sustainable decisions.View your recommendationsRead documentationThe Total Economic Impact of Google Active AssistFrom fewer security breaches to faster troubleshooting, learn how Active Assist can benefit your organization.Modernize with AIOps to Maximize your ImpactLeverage AIOps to increase efficiency and productivity across your day-to-day operations. Google Active Assist Sustainability SpotlightLearn how Active Assist can help your organization reduce CO2 emissions associated with your cloud applications.Active Assist value categories and benefitsExplore your recommendations and insights in-context on individual product pages, together in your Recommendation Hub, or through our Recommender API. Value Categoryhow this benefits your cloudSample Solutions to get startedCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderActive Assist value categories and benefitsCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security 
features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderFeeling inspired? Let’s solve your challenges together.Explore how Active Assist can help you proactively reduce costs, tighten security, and optimize resources. Interactive demoCheck out our latest blogs to see what's new with Active Assist.Blog postsSign up for our Active Assist Trusted Tester Group to get early access to new features as they're developed.How customers are maximizing performance and security while reducing toilBlog postKPMG makes sure VMs run optimally with automated rightsizing recommendations.5-min readVideoFlowmon uses clear insights to optimize firewall rules quickly, easily.4:20Blog postRandstad reduces networking troubleshooting effort, saves significant time.5-min readBlog postQuickly discovered and turned off 200+ idle VMs using proactive recommendations.5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.View recommendationsSee the full listList of RecommendersRead all about itRead documentationGet tips & best practicesSee tutorials
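The recommendations described above can also be read programmatically through the Recommender API mentioned on this page. As a minimal sketch (not from the page), the snippet below lists recommendations for one recommender with the Python client. The project, location, and recommender ID are placeholders to verify against the published list of recommenders, and the google-cloud-recommender package with Application Default Credentials is assumed.

# Minimal sketch: list Active Assist recommendations via the Recommender API.
# Assumes `pip install google-cloud-recommender` and Application Default Credentials.
# The project, location, and recommender ID are illustrative placeholders; for the
# VM machine type recommender the location is typically a Compute Engine zone.
from google.cloud import recommender_v1

client = recommender_v1.RecommenderClient()

parent = (
    "projects/your-project-id"
    "/locations/us-central1-a"
    "/recommenders/google.compute.instance.MachineTypeRecommender"  # example recommender ID
)

for recommendation in client.list_recommendations(parent=parent):
    print(recommendation.name)
    print("  description:", recommendation.description)
    print("  impact:", recommendation.primary_impact.category.name)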
Active_Assist.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/solutions/active-assist
2 + Date Scraped: 2025-02-23T11:59:43.866Z
3 +
4 + Content:
5 +
Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Active AssistActive Assist is a portfolio of intelligent tools that helps you optimize your cloud operations with recommendations to reduce costs, increase performance, improve security, and even help you make more sustainable decisions.View your recommendationsRead documentationThe Total Economic Impact of Google Active AssistFrom fewer security breaches to faster troubleshooting, learn how Active Assist can benefit your organization.Modernize with AIOps to Maximize your ImpactLeverage AIOps to increase efficiency and productivity across your day-to-day operations. Google Active Assist Sustainability SpotlightLearn how Active Assist can help your organization reduce CO2 emissions associated with your cloud applications.Active Assist value categories and benefitsExplore your recommendations and insights in-context on individual product pages, together in your Recommendation Hub, or through our Recommender API. Value Categoryhow this benefits your cloudSample Solutions to get startedCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderActive Assist value categories and benefitsCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security 
features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderFeeling inspired? Let’s solve your challenges together.Explore how Active Assist can help you proactively reduce costs, tighten security, and optimize resources. Interactive demoCheck out our latest blogs to see what's new with Active Assist.Blog postsSign up for our Active Assist Trusted Tester Group to get early access to new features as they're developed.How customers are maximizing performance and security while reducing toilBlog postKPMG makes sure VMs run optimally with automated rightsizing recommendations.5-min readVideoFlowmon uses clear insights to optimize firewall rules quickly, easily.4:20Blog postRandstad reduces networking troubleshooting effort, saves significant time.5-min readBlog postQuickly discovered and turned off 200+ idle VMs using proactive recommendations.5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.View recommendationsSee the full listList of RecommendersRead all about itRead documentationGet tips & best practicesSee tutorials
Active_Directory_single_sign-on.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-configuring-single-sign-on
2 + Date Scraped: 2025-02-23T11:55:42.550Z
3 +
4 + Content:
5 +
Home Docs Cloud Architecture Center Send feedback Active Directory single sign-on Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This article shows you how to set up single sign-on between your Active Directory environment and your Cloud Identity or Google Workspace account by using Microsoft Active Directory Federation Services (AD FS) and SAML Federation. The article assumes that you understand how Active Directory identity management can be extended to Google Cloud and have already configured user provisioning. The article also assumes that you have a working AD FS 4.0 server that is running on Windows Server 2016 or a later version of Windows Server. To follow this guide, knowledge of Active Directory Domain Services and AD FS is required. You also need a user in Cloud Identity or Google Workspace that has super-admin privileges and a user in Active Directory that has administrative access to your AD FS server. Objectives Configure your AD FS server so that Cloud Identity or Google Workspace can use it as an identity provider. Create a claims issuance policy that matches identities between Active Directory and Cloud Identity or Google Workspace. Configure your Cloud Identity or Google Workspace account so that it delegates authentication to AD FS. Costs If you're using the free edition of Cloud Identity, following this article will not use any billable components of Google Cloud. Before you begin Verify that your AD FS server runs Windows Server 2016 or later. While you can also configure single sign-on by using previous versions of Windows Server and AD FS, the necessary configuration steps might be different from what this article describes. Make sure you understand how Active Directory identity management can be extended to Google Cloud. Configure user provisioning between Active Directory and Cloud Identity or Google Workspace. Consider setting up AD FS in a server farm configuration in order to avoid it becoming a single point of failure. After you've enabled single sign-on, the availability of AD FS determines whether users can log in to the Google Cloud console. Understanding single sign-on By using Google Cloud Directory Sync, you've already automated the creation and maintenance of users and tied their lifecycle to the users in Active Directory. Although GCDS provisions user account details, it doesn't synchronize passwords. Whenever a user needs to authenticate in Google Cloud, the authentication must be delegated back to Active Directory, which is done by using AD FS and the Security Assertion Markup Language (SAML) protocol. This setup ensures that only Active Directory has access to user credentials and is enforcing any existing policies or multi-factor authentication (MFA) mechanisms. Moreover, it establishes a single sign-on experience between your on-premises environment and Google. For more details on single sign-on, see Single sign-on Create a SAML profile To configure single sign-on with AD FS, you first create a SAML profile in your Cloud Identity or Google Workspace account. The SAML profile contains the settings related to your AD FS instance, including its URL and signing certificate. You later assign the SAML profile to certain groups or organizational units. To create a new SAML profile in your Cloud Identity or Google Workspace account, do the following: In the Admin Console, go to SSO with third-party IdP. Go to SSO with third-party IdP Click Third-party SSO profiles > Add SAML profile. 
On the SAML SSO profile page, enter the following settings: Name: AD FS IDP entity ID: https://ADFS/adfs/services/trust Sign-in page URL: https://ADFS/adfs/ls/ Sign-out page URL: https://ADFS/adfs/ls/?wa=wsignout1.0 Change password URL: https://ADFS/adfs/portal/updatepassword/ In all URLs, replace ADFS with the fully qualified domain name of your AD FS server. Don't upload a verification certificate yet. Click Save. The SAML SSO profile page that appears contains two URLs: Entity ID ACS URL You need these URLs in the next section when you configure AD FS. Configure AD FS You configure your AD FS server by creating a relying party trust. Creating the relying party trust Create a new relying party trust: Connect to your AD FS server and open the AD FS Management MMC snap-in. Select AD FS > Relying Party Trusts. On the Actions pane, click Add relying party trust. On the Welcome page of the wizard, select Claims aware, and click Start. On the Select data source page, select Enter data about the relying party manually, and click Next. On the Specify display name page, enter a name such as Google Cloud and click Next. On the Configure certificate page, click Next. On the Configure URL page, select Enable support for the SAML 2.0 WebSSO protocol, and enter the ACS URL from your SAML profile. Then click Next. On the Configure identifiers page, add the Entity ID from your SAML profile. Then click Next. On the Choose access control policy page, choose an appropriate access policy and click Next. On the Ready to Add Trust page, review your settings, and then click Next. On the final page, clear the Configure claims issuance policy checkbox and close the wizard. In the list of relying party trusts, you see a new entry. Configuring the logout URL When you're enabling users to use single sign-on across multiple applications, it's important to allow them to sign out across multiple applications: Open the relying party trust that you just created. Select the Endpoints tab. Click Add SAML and configure the following settings: Endpoint type: SAML Logout Binding: POST Trusted URL: https://ADFS/adfs/ls/?wa=wsignout1.0 Replace ADFS with the fully qualified domain name of your AD FS server. Click OK. Click OK to close the dialog. Configuring the claims mapping After AD FS has authenticated a user, it issues a SAML assertion. This assertion serves as proof that authentication has successfully taken place. The assertion must identify who has been authenticated, which is the purpose of the NameID claim. To enable Google Sign-In to associate the NameID with a user, the NameID must contain the primary email address of that user. Depending on how you are mapping users between Active Directory and Cloud Identity or Google Workspace, the NameID must contain the UPN or the email address from the Active Directory user, with domain substitutions applied as necessary. UPN In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: UPN Outgoing claim type: Name ID Outgoing name ID format: Email Select Pass through all claim values and click Finish. Click OK to close the claim issuance policy dialog. 
UPN: domain substitution In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: UPN Outgoing claim type: Name ID Outgoing name ID format: Email Select Replace incoming claim e-mail suffix with a new e-mail suffix and configure the following setting: New e-mail suffix: A domain name used by your Cloud Identity or Google Workspace account. Click Finish, and then click OK. Email In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Add a rule to lookup the email address: In the dialog, click Add Rule. Select Send LDAP Attributes as Claims, and click Next. On the next page, apply the following settings: Claim rule name: Email address Attribute Store: Active Directory Add a row to the list of LDAP attribute mappings: LDAP Attribute: E-Mail-Addresses Outgoing Claim Type: E-Mail-Address Click Finish. Add another rule to set the NameID: Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: E-Mail-Address Outgoing claim type: Name ID Outgoing name ID format: Email Select Pass through all claim values and click Finish. Click OK to close the claim issuance policy dialog. Email: domain substitution In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Add a rule to lookup the email address: In the dialog, click Add Rule. Select Send LDAP Attributes as Claims, and click Next. On the next page, apply the following settings: Claim rule name: Email address Attribute Store: Active Directory Add a row to the list of LDAP attribute mappings: LDAP Attribute: E-Mail-Addresses Outgoing Claim Type: E-Mail-Address Click Finish. Add another rule to set the NameID value: Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: E-Mail-Address Outgoing claim type: Name ID Outgoing name ID format: Email Select Replace incoming claim e-mail suffix with a new e-mail suffix and configure the following setting: New e-mail suffix: A domain name used by your Cloud Identity or Google Workspace account. Click Finish, and then click OK. Exporting the AD FS token-signing certificate After AD FS authenticates a user, it passes a SAML assertion to Cloud Identity or Google Workspace. To enable Cloud Identity and Google Workspace to verify the integrity and authenticity of that assertion, AD FS signs the assertion with a special token-signing key and provides a certificate that enables Cloud Identity or Google Workspace to check the signature. Export the signing certificate from AD FS by doing the following: In the AD FS Management console, click Service > Certificates. Right-click the certificate that is listed under Token-signing, and click View Certificate. Select the Details tab. Click Copy to File to open the Certificate Export Wizard.
On the Welcome to the certificate export wizard, click Next. On the Export private key page, select No, do not export the private key. On the Export file format page, select Base-64 encoded X.509 (.CER) and click Next. On the File to export page, provide a local filename, and click Next. Click Finish to close the dialog. Copy the exported certificate to your local computer. Complete the SAML profile You use the signing certificate to complete the configuration of your SAML profile: Return to the Admin Console and go to Security > Authentication > SSO with third-party IdP. Go to SSO with third-party IdP Open the AD FS SAML profile that you created earlier. Click the IDP details section to edit the settings. Click Upload certificate and pick the token signing certificate that you exported from AD FS. Click Save. Note: The token-signing certificate is valid for a limited period of time. Depending on your configuration, AD FS either renews the certificate automatically before it expires, or it requires you to provide a new certificate before the current certificate expires. In both cases, you must update your configuration to use the new certificate. Your SAML profile is complete, but you still need to assign it. Assign the SAML profile Select the users for which the new SAML profile should apply: In the Admin Console, on the SSO with third-party IDPs page, click Manage SSO profile assignments > Manage. Go to Manage SSO profile assignments In the left pane, select the group or organizational unit for which you want to apply the SSO profile. To apply the profile to all users, select the root organizational unit. In the right pane, select Another SSO profile. In the menu, select the AD FS - SAML SSO profile that you created earlier. Click Save. Repeat the steps to assign the SAML profile to another group or organizational unit. Test single sign-on You've completed the single sign-on configuration. You can check whether SSO works as intended. Choose an Active Directory user that satisfies the following criteria: The user has been provisioned to Cloud Identity or Google Workspace. The Cloud Identity user does not have super-admin privileges. User accounts that have super-admin privileges must always sign in by using Google credentials, so they aren't suitable for testing single sign-on. Open a new browser window and go to https://console.cloud.google.com/. On the Google Sign-In page that appears, enter the email address of the user, and click Next. If you use domain substitution, you must apply the substitution to the email address. You are redirected to AD FS. If you configured AD FS to use forms-based authentication, you see the sign-in page. Enter your UPN and password for the Active Directory user, and click Sign in. After successful authentication, AD FS redirects you back to the Google Cloud console. Because this is the first login for this user, you're asked to accept the Google terms of service and privacy policy. If you agree to the terms, click Accept. You are redirected to the Google Cloud console, which asks you to confirm preferences and accept the Google Cloud terms of service. If you agree to the terms, click Yes, and then click Agree and Continue. At the upper left, click the avatar icon, and click Sign out. You are redirected to an AD FS page confirming that you've been successfully signed out. If you have trouble signing in, you might find additional information in the AD FS admin log. 
Keep in mind that users that have super-admin privileges are exempted from single sign-on, so you can still use the Admin console to verify or change settings. Optional: Configure redirects for domain-specific service URLs When you link to the Google Cloud console from internal portals or documents, you can improve the user experience by using domain-specific service URLs. Unlike regular service URLs such as https://console.cloud.google.com/, domain specific-service URLs include the name of your primary domain. Unauthenticated users that click a link to a domain specific-service URL are immediately redirected to AD FS instead of being shown a Google sign-in page first. Examples for domain-specific service URLs include the following: Google service URL Logo Google Cloud console https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com Google Docs https://docs.google.com/a/DOMAIN Google Sheets https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sheets.google.com Google Sites https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://slides.google.com Google Drive https://drive.google.com/a/DOMAIN Gmail https://mail.google.com/a/DOMAIN Google Groups https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://groups.google.com Google Keep https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://keep.google.com Looker Studio https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://lookerstudio.google.com YouTube https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://www.youtube.com/ To configure domain-specific service URLs so that they redirect to AD FS, do the following: In the Admin Console, on the SSO with third-party IDPs page, click Domain-specific service URLs > Edit. Go to domain-specific service URLs Set Automatically redirect users to the third-party IdP in the following SSO profile to enabled. Set SSO profile to AD FS. Click Save. Optional: Configure login challenges Google sign-in might ask users for additional verification when they sign in from unknown devices or when their sign-in attempt looks suspicious for other reasons. These login challenges help to improve security, and we recommend that you leave login challenges enabled. If you find that login challenges cause too much inconvenience, you can disable login challenges by doing the following: In the Admin Console, go to Security > Authentication > Login challenges. In the left pane, select an organizational unit for which you want to disable login challenges. To disable login challenges for all users, select the root organizational unit. Under Settings for users signing in using other SSO profiles, select Don't ask users for additional verifications from Google. Click Save. Clean up If you don't intend to keep single sign-on enabled for your organization, follow these steps to disable single sign-on in Cloud Identity or Google Workspace: In the Admin Console and go to Manage SSO profile assignments. Go to Manage SSO profile assignments For each profile assignment, do the following: Open the profile. If you see an Inherit button, click Inherit. If you don't see an Inherit button, select None and click Save. Return to the SSO with third-party IDPs page and open the AD FS SAML profile. Click Delete. To clean up configuration in AD FS, follow these steps: Connect to your AD FS server and open the AD FS MMC snap-in. In the menu at left, right-click the Relying Party Trusts folder. 
In the list of relying party trusts, right-click the relying party trust that you created, and click Delete. Confirm the deletion by clicking Yes. What's next Learn more about federating Google Cloud with Active Directory. Learn about Azure Active Directory B2B user provisioning and single sign-on. Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. Acquaint yourself with our best practices for managing super-admin users. Send feedback
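If you prefer PowerShell over the MMC snap-in, the relying party trust cleanup described above can also be scripted. This is a hedged sketch: it assumes that the AD FS PowerShell module is available on the server, and "Google Cloud" is only a placeholder for the display name that you gave the relying party trust when you created it.
# List relying party trusts and note the display name of the one you created.
Get-AdfsRelyingPartyTrust | Select-Object Name, Identifier
# Remove the trust; replace the placeholder with the name that you used.
Remove-AdfsRelyingPartyTrust -TargetName "Google Cloud"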
Active_Directory_user_account_provisioning.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-synchronizing-user-accounts
2 |
+
Date Scraped: 2025-02-23T11:55:39.357Z
3 |
+
4 |
+
Content:
5 |
+
Home Docs Cloud Architecture Center Send feedback Active Directory user account provisioning Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document shows you how to set up user and group provisioning between Active Directory and your Cloud Identity or Google Workspace account by using Google Cloud Directory Sync (GCDS). To follow this guide, you must have an Active Directory user that is allowed to manage users and groups in Active Directory. Also, if you don't yet have a Cloud Identity or Google Workspace account, you'll need administrative access to your DNS zone in order to verify domains. If you already have a Cloud Identity or Google Workspace account, make sure that your user has super admin privileges. Objectives Install GCDS and connect it to Active Directory and Cloud Identity or Google Workspace. Configure GCDS to provision users and, optionally, groups to Google Cloud. Set up a scheduled task for continuous provisioning. Costs If you're using the free edition of Cloud Identity, following this guide won't use any billable Google Cloud components. Before you begin Make sure you understand how Active Directory identity management can be extended to Google Cloud. Decide how you want to map identities, groups, and domains. Specifically, make sure that you've answered the following questions: Which DNS domain do you plan to use as the primary domain for Cloud Identity or Google Workspace? Which additional DNS domains do you plan to use as secondary domains? Do you need to use domain substitution? Do you plan to use the email address (mail) or User Principal Name (userPrincipalName) as common identifiers for users? Do you plan to provision groups, and if so, do you intend to use the common name (cn) or email address (mail) as common identifiers for groups? For guidance on making these decisions, refer to the overview document on extending Active Directory identity and access management to Google Cloud. Before connecting your production Active Directory to Google Cloud, consider using an Active Directory test environment for setting up and testing user provisioning. Sign up for Cloud Identity if you don't have an account already, and add additional DNS domains if necessary. If you're using the free edition of Cloud Identity and intend to provision more than 50 users, request an increase of the total number of free Cloud Identity users through your support contact. If you suspect that any of the domains you plan to use for Cloud Identity could have been used by employees to register consumer accounts, consider migrating these user accounts first. For more details, see Assessing existing user accounts. Plan the GCDS deployment The following sections describe how to plan your GCDS deployment. Decide where to deploy GCDS GCDS can provision users and groups from an LDAP directory to Cloud Identity or Google Workspace. Acting as a go-between for the LDAP server and Cloud Identity or Google Workspace, GCDS queries the LDAP directory to retrieve the necessary information from the directory and uses the Directory API to add, modify, or delete users in your Cloud Identity or Google Workspace account. Because Active Directory Domain Services is based on LDAP, GCDS is well suited to implement user provisioning between Active Directory and Cloud Identity or Google Workspace. 
When connecting an on-premises Active Directory infrastructure to Google Cloud, you can run GCDS either on-premises or on a Compute Engine virtual machine in Google Cloud. In most cases, it's best to run GCDS on-premises: Because the information that Active Directory manages includes personally identifiable information and is usually considered sensitive, you might not want Active Directory to be accessed from outside the local network. By default, Active Directory uses unencrypted LDAP. If you access Active Directory remotely from within Google Cloud, you should use encrypted communication. You can encrypt the connection by using LDAPS (LDAP+SSL) or Cloud VPN, but running GCDS on-premises avoids the need for remote LDAP access in the first place. Communication from GCDS to Cloud Identity or Google Workspace is conducted through HTTPS and requires little or no change to your firewall configuration. You can run GCDS on either Windows or Linux. Although it's possible to deploy GCDS on the domain controller, it's best to run GCDS on a separate machine. This machine must satisfy the system requirements and have LDAP access to Active Directory. Although it's not a prerequisite for the machine to be domain joined or to run Windows, this guide assumes that Cloud Directory Sync runs on a domain-joined Windows machine. To aid with setting up provisioning, GCDS includes a graphical user interface (GUI) called Configuration Manager. If the server on which you intend to run GCDS has a desktop experience, you can run Configuration Manager on the server itself. Otherwise, you must run Configuration Manager locally and then copy the resulting configuration file to the server, where you can use it to run GCDS. This guide assumes that you run Configuration Manager on a server with a GUI. Decide where to retrieve data GCDS uses LDAP to interact with Active Directory and to retrieve information about users and groups. To make this interaction possible, GCDS requires you to provide a hostname and port in the configuration. In a small Active Directory environment that runs only a single global catalog (GC) server, providing a hostname and port is not a problem because you can point GCDS directly to the global catalog server. In a more complex environment that runs redundant GC servers, pointing GCDS to a single server does not make use of the redundancy and is therefore not ideal. Although it's possible to set up a load balancer that distributes LDAP queries across multiple GC servers and keeps track of servers that might be temporarily unavailable, it's preferable to use the DC Locator mechanism to locate servers dynamically. By default, GCDS requires you to explicitly specify the endpoint of an LDAP server and does not support using the DC Locator mechanism. In this guide, you complement GCDS with a small PowerShell script that engages the DC Locator mechanism so that you don't have to statically configure endpoints of global catalog servers. Prepare your Cloud Identity or Google Workspace account This section describes how to create a user for GCDS. To enable GCDS to interact with the Directory API and Domain Shared Contacts API of Cloud Identity and Google Workspace, the application needs a user account that has administrative privileges. When signing up for Cloud Identity or Google Workspace, you already created one super admin user. 
Although you could use this user for GCDS, it's preferable to create a separate user that is exclusively used by Cloud Directory Sync: Open the Admin console and sign in by using the super admin user that you created when signing up for Cloud Identity or Google Workspace. In the menu, click Directory > Users, and then click Add new user to create a user. Provide an appropriate name and email address, such as: First Name: Google Cloud Last Name: Directory Sync Primary email: cloud-directory-sync Retain the primary domain in the email address, even if the domain does not correspond to the forest that you're provisioning from. Ensure that Automatically generate a new password is set to Disabled, and enter a password. Ensure that Ask for a password change at the next sign-in is set to Disabled. Click Add New User. Click Done. To enable GCDS to create, list, and delete user accounts and groups, the user needs additional privileges. Additionally, it's a good idea to exempt the user from single sign-on—otherwise, you might not be able to re-authorize GCDS when experiencing single sign-on problems. Both can be accomplished by making the user a super admin: Locate the newly created user in the list and open it. Under Admin roles and privileges, click Assign Roles. Enable the Super Admin role. Click Save. Warning: The super admin role grants the user full access to Cloud Identity, Google Workspace, and Google Cloud resources. To protect the user against credential theft and malicious use, we recommend that you enable 2-step verification for the user. For more details on how to protect super admin users, see Security best practices for administrator accounts. Configure user provisioning The following sections describe how to configure user provisioning. Create an Active Directory user for GCDS To enable GCDS to retrieve information about users and groups from Active Directory, GCDS also requires a domain user with sufficient access. Rather than reusing an existing Windows user for this purpose, create a dedicated user for GCDS: Graphical Interface Open the Active Directory Users and Computers MMC snap-in from the Start menu. Navigate to the domain and organizational unit where you want to create the user. If there are multiple domains in your forest, create the user in the same domain as the GCDS machine. Right-click on the right window pane and choose New > User. Provide an appropriate name and email address, such as: First Name: Google Cloud Last Name: Directory Sync User logon name: gcds User logon name (pre-Windows 2000): gcds Click Next. Provide a password that satisfies your password policy. Clear User must change password at next logon. Select Password never expires. Click Next, and then click Finish. PowerShell Open a PowerShell console as Administrator. Create a user by running the following command: New-ADUser -Name "Google Cloud Directory Sync" ` -GivenName "Google Cloud" ` -Surname "Directory Sync" ` -SamAccountName "gcds" ` -UserPrincipalName (-Join("gcds@",(Get-ADDomain).DNSRoot)) ` -AccountPassword(Read-Host -AsSecureString "Type password for User") ` -Enabled $True Note: You can use the "Path" argument to create a user under a specific organizational unit (OU). For example: -Path "OU=dest,OU=root,DC=domain,DC=com". You now have the prerequisites in place for installing GCDS. Install GCDS On the machine on which you will run GCDS, download and run the GCDS installer. 
Rather than using a browser to perform the download, you can use the following PowerShell command to download the installer: (New-Object net.webclient).DownloadFile("https://dl.google.com/dirsync/dirsync-win64.exe", "$(pwd)\dirsync-win64.exe") After the download has completed, you can launch the installation wizard by running the following command: .\dirsync-win64.exe If you already have GCDS installed, update it to ensure that you're using the latest version. Create a folder for the GCDS configuration GCDS stores its configuration in an XML file. Because this configuration includes an OAuth refresh token that GCDS uses to authenticate with Google, make sure that you properly secure the folder used for configuration. In addition, because GCDS doesn't require access to local resources other than this folder, you can configure GCDS to run as a limited user, LocalService: On the machine where you installed GCDS, log on as a local administrator. Open a PowerShell console that has administrative privileges. Run the following commands to create a folder that is named $Env:ProgramData\gcds to store the configuration, and to apply an access control list (ACL) so that only GCDS and administrators have access: $gcdsDataFolder = "$Env:ProgramData\gcds" New-Item -ItemType directory -Path $gcdsDataFolder &icacls "$gcdsDataFolder" /inheritance:r &icacls "$gcdsDataFolder" /grant:r "CREATOR OWNER:(OI)(CI)F" /T &icacls "$gcdsDataFolder" /grant "BUILTIN\Administrators:(OI)(CI)F" /T &icacls "$gcdsDataFolder" /grant "Domain Admins:(OI)(CI)F" /T &icacls "$gcdsDataFolder" /grant "LOCAL SERVICE:(OI)(CI)F" /T To determine the location of the ProgramData folder, run the command Write-Host $Env:ProgramData. On English versions of Windows, this path will usually be c:\ProgramData. You need this path later. Connect to Google You will now use Configuration Manager to prepare the GCDS configuration. These steps assume that you run Configuration Manager on the same server where you plan to run GCDS. If you use a different machine to run Configuration Manager, make sure to copy the configuration file to the GCDS server afterward. Also, be aware that testing the configuration on a different machine might not be possible. Launch Configuration Manager. You can find Configuration Manager in the Windows Start menu under Google Cloud Directory Sync > Configuration Manager. Click Google Domain Configuration > Connection Settings. Authorize GCDS and configure domain settings. In the menu, click File > Save as. In the file dialog, enter PROGRAM_DATA\gcds\config.xml as the filename. Replace PROGRAM_DATA with the path to the ProgramData folder that the PowerShell command returned when you ran it earlier. Click Save, and then click OK. Connect to Active Directory The next step is to configure GCDS to connect to Active Directory: In Configuration Manager, click LDAP Configuration > Connection Settings. Configure the LDAP connection settings: Server Type: Select MS Active Directory. Connection Type: Select either Standard LDAP or LDAP+SSL. Host Name: Enter the name of a GC server. This setting is used only for testing. Later, you will automate the discovery of the GC server. Port: 3268 (GC) or 3269 (GC over SSL). Using a GC server instead of a domain controller helps ensure that you can provision users from all domains of your Active Directory forest. Also, verify that authentication still works after the Microsoft ADV190023 update, which relates to LDAP signing and LDAP channel binding. Authentication Type: Simple. 
Authorized User: Enter the User Principal Name (UPN) of the domain user that you created earlier: gcds@UPN_SUFFIX_DOMAIN. Replace UPN_SUFFIX_DOMAIN with the appropriate UPN suffix domain for the user. Alternatively, you can specify the user by using the NETBIOS_DOMAIN_NAME\gcds syntax. Base DN: Leave this field empty to ensure that searches are performed across all domains in the forest. To verify the settings, click Test connection. If the connection fails, double-check that you've specified the hostname of a GC server and that the username and password are correct. Click Close. Decide what to provision Now that you've successfully connected GCDS, you can decide which items to provision: In Configuration Manager, click General Settings. Ensure that User Accounts is selected. If you intend to provision groups, ensure that Groups is selected; otherwise, clear the checkbox. Synchronizing organizational units is beyond the scope of this guide, so leave Organizational Units unselected. Leave User Profiles and Custom Schemas unselected. For more details, see Decide what to provision. Provision users To provision users, you configure how to map users between Active Directory and Cloud Identity or Google Workspace: In Configuration Manager, click User Accounts > Additional User Attributes. Click Use defaults to automatically populate the attributes for Given Name and Family Name with givenName and sn, respectively. The remaining settings depend on whether you intend to use the UPN or email address to map Active Directory to users in Cloud Identity or Google Workspace, and whether you need to apply domain name substitutions. If you're unsure which option is best for you, see the article on how Active Directory identity management can be extended to Google Cloud. UPN In Configuration Manager, click User Accounts > User Attributes. Click Use defaults. Change Email Address Attribute to userPrincipalName. Click proxyAddresses > Remove if you don't want to sync alias addresses. Click the Search Rules tab, and then click Add Search Rule. Enter the following settings: Scope: Sub-tree Rule: (&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2))(!(userPrincipalName=gcds@*))) This rule matches all non-disabled users but ignores computer and managed service accounts, as well as the gcds user account. Base DN: Leave blank to search all domains in the forest. Click OK to create the rule. UPN: domain substitution In Configuration Manager, click the User Accounts > User Attributes tab. Click Use defaults. Change Email Address Attribute to userPrincipalName. Click proxyAddresses > Remove if you don't want to sync alias addresses. Click the Search Rules tab, and then click Add Search Rule. Enter the following settings: Scope: Sub-tree Rule: (&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2))(!(userPrincipalName=gcds@*))) This rule matches all non-disabled users but ignores computer and managed service accounts, as well as the gcds user account. Base DN: Leave blank to search all domains in the forest. Click OK to create the rule. Click Google Domain Configuration > Connection Settings, and choose Replace domain names in LDAP email addresses with this domain name. Email In Configuration Manager, click User Accounts > User Attributes. Click Use defaults. Click the Search Rules tab, and then click Add Search Rule. 
Enter the following settings: Scope: Sub-tree Rule: (&(objectCategory=person)(objectClass=user)(mail=*)(!(userAccountControl:1.2.840.113556.1.4.803:=2))) This rule matches all non-disabled users with a non-empty email address but ignores computer and managed service accounts. Base DN: Leave blank to search all domains in the forest. Click OK to create the rule. Email: domain substitution In Configuration Manager, click User Accounts > User Attributes. Click Use defaults. Click proxyAddresses > Remove if you don't want to sync alias addresses. Click the Search Rules tab, and then click Use defaults. Click Google Domain Configuration > Connection Settings, and choose Replace domain names in LDAP email addresses with this domain name. For further details on mapping user attributes, see Set up your sync with Configuration Manager. Deletion policy So far, the configuration has focused on adding and updating users in Cloud Identity or Google Workspace. However, it's also important that users that are disabled or deleted in Active Directory be suspended or deleted in Cloud Identity or Google Workspace. As part of the provisioning process, GCDS generates a list of users in Cloud Identity or Google Workspace that don't have corresponding matches in the Active Directory LDAP query results. Because the LDAP query incorporates the clause (!(userAccountControl:1.2.840.113556.1.4.803:=2)), any users that have been disabled or deleted in Active Directory since the last provisioning was performed will be included in this list. The default behavior of GCDS is to delete these users in Cloud Identity or Google Workspace, but you can customize this behavior: In Configuration Manager, click User Accounts > User Attributes. Under Google Domain Users Deletion/Suspension Policy, ensure that Don't suspend or delete Google domain admins not found in LDAP is checked. This setting ensures that GCDS won't suspend or delete the super admin user that you used to configure your Cloud Identity or Google Workspace account. Optionally, change the deletion policy for non-administrator users. If you use multiple separate instances of GCDS to provision different domains or forests to a single Cloud Identity or Google Workspace account, make sure that the different GCDS instances don't interfere with one another. By default, users in Cloud Identity or Google Workspace that have been provisioned from a different source will wrongly be identified as having been deleted in Active Directory. To avoid this situation, you can move all users that are beyond the scope of the domain or forest that you're provisioning from to a single OU and then exclude that OU. In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Organization Complete Path Match Type: Exact Match Exclusion Rule: Enter the OU path and its name. For example: ROOT OU/EXCLUDED OU Replace ROOT OU/EXCLUDED OU with your OU path and the excluded OU's name. Click OK to create the rule. Alternatively, if excluding a single OU doesn't fit your business needs, you can exclude a domain or forest based on users' email addresses. UPN In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. 
Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!UPN_SUFFIX_DOMAIN).*)$ Replace UPN_SUFFIX_DOMAIN with your UPN suffix domain, as in this example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. UPN: domain substitution In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the UPN suffix domain, as in this example: .*@((?!corp.example.com).*)$ Click OK to create the rule. Email In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!MX_DOMAIN).*)$ Replace MX_DOMAIN with the domain name that you use in email addresses, as in this example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. Email: domain substitution In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the email domain, as in this example: .*@((?!corp.example.com).*)$ Click OK to create the rule. For further details on deletion and suspension settings, see Learn more about Configuration Manager options. Provision groups The next step is to configure how to map groups between Active Directory and Cloud Identity or Google Workspace. This process differs based on whether you plan to map groups by common name or by email address. Configure group mappings by common name First, you need to identify the types of security groups that you intend to provision, and then formulate an appropriate LDAP query. The following table contains common queries that you can use. Type LDAP query Domain local groups (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483652)) Global groups (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483650)) Universal groups (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483656)) Global and universal groups (&(objectCategory=group)(|(groupType:1.2.840.113556.1.4.803:=2147483650)(groupType:1.2.840.113556.1.4.803:=2147483656))) All groups (objectCategory=group) The query for global groups also covers Active Directory–defined groups such as domain controllers. You can filter these groups by restricting the search by organizational unit (ou). The remaining settings depend on whether you intend to use UPN or email address to map Active Directory to users in Cloud Identity or Google Workspace. UPN In Configuration Manager, click Groups > Search Rules. 
Click Use Defaults to add two default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. In the Groups box, enter the following settings: Group Email Address Attribute: cn User Email Address Attribute: userPrincipalName Click the Prefix-Suffix tab. In the Group Email Address box, enter the following settings: Suffix: @PRIMARY_DOMAIN, where you replace @PRIMARY_DOMAIN with the primary domain of your Cloud Identity or Google Workspace account. Although the setting seems redundant because GCDS appends the domain automatically, you must specify the setting explicitly to prevent multiple GCDS instances from erasing group members that they had not added. Example: @example.com Click OK. Click the second rule cross icon to delete that rule. Email In Configuration Manager, click Groups > Search Rules. Click Use Defaults to add a couple of default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. In the Groups box, edit Group Email Address Attribute to enter the setting cn. Click OK. The same settings also apply if you used domain substitution when mapping users. Configure group mappings by email address First, you need to identify the types of security groups that you intend to provision, and then formulate an appropriate LDAP query. The following table contains common queries that you can use. Type LDAP query Domain local groups with email address (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483652)(mail=*)) Global groups with email address (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483650)(mail=*)) Universal groups with email address (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483656)(mail=*)) Global and universal groups with email address (&(objectCategory=group)(|(groupType:1.2.840.113556.1.4.803:=2147483650)(groupType:1.2.840.113556.1.4.803:=2147483656))(mail=*)) All groups with email address (&(objectCategory=group)(mail=*)) The remaining settings depend on whether you intend to use UPN or email address to map Active Directory to users in Cloud Identity or Google Workspace. UPN In Configuration Manager, click Groups > Search Rules. Click Use Defaults to add two default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. In the Groups box, edit User Email Name Attribute to enter the setting userPrincipalName. Click OK. Click the second rule cross icon to delete that rule. Email In Configuration Manager, click Groups > Search Rules. Click Use Defaults to add a couple of default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. Click OK. Click the second rule cross icon to remove this rule. If you have enabled Replace domain names in LDAP email addresses with this domain name, it also applies to email addresses of groups and members. Deletion policy GCDS handles the deletion of groups similarly to the deletion of users. If you use multiple separate instances of GCDS to provision different domains or forests to a single Cloud Identity or Google Workspace account, make sure that the different GCDS instances don't interfere with one another. By default, a group member in Cloud Identity or Google Workspace that has been provisioned from a different source will wrongly be identified in Active Directory as having been deleted. To avoid this situation, configure GCDS to ignore all group members that are beyond the scope of the domain or forest that you're provisioning from. UPN Click Google Domain Configuration > Exclusion Rules. 
Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!UPN_SUFFIX_DOMAIN).*)$ Replace UPN_SUFFIX_DOMAIN with your UPN suffix domain, as in the following example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. UPN: domain substitution Click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the UPN suffix domain, as in this example: .*@((?!corp.example.com).*)$ Click OK to create the rule. Email Click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!MX_DOMAIN).*)$ Replace MX_DOMAIN with the domain name that you use in email addresses, as in the following example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. Email: domain substitution Click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the email domain, as in the following example: .*@((?!corp.example.com).*)$ Click OK to create the rule. For more information about group settings, see Learn more about Configuration Manager options. Configure logging and notifications Keeping users in sync requires that you run GCDS on a scheduled basis. To allow you to keep track of GCDS activity and potential problems, you can control how and when GCDS writes its log file: In Configuration Manager, click Logging. Set File name to PROGRAM_DATA\gcds\gcds_sync.#{timestamp}.log. Replace PROGRAM_DATA with the path to the ProgramData folder that the PowerShell command returned when you ran it earlier. Click File > Save to commit the configuration changes to disk, then click OK. Note: You can click File > Save or Save as to commit the configuration changes to disk when you complete each of the preceding steps. If the configuration is ready to be tested, you can click Go to the simulation tab. Otherwise, you can click Skip simulation. In addition to logging, GCDS can send notifications by email. To activate this feature, click Notifications and provide connection information for your mail server. Simulate user provisioning You've completed the GCDS configuration. To verify that the configuration works as intended, first save the configuration to disk and then simulate a user provisioning run. During simulation, GCDS won't make any changes to your Cloud Identity or Google Workspace account, but will instead report which changes it would perform during a regular provisioning run. 
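Before you simulate, you can sanity-check the exclusion-rule regular expressions from the preceding sections locally, because PowerShell's -match operator understands the same lookahead syntax. This is only a rough approximation of how GCDS evaluates a rule; corp.example.com stands in for the domain that should stay in scope.
# Addresses that match the pattern would be excluded by the rule.
$pattern = '.*@((?!corp.example.com).*)$'
'alice@corp.example.com' -match $pattern     # False: stays in scope
'bob@partner.example.org' -match $pattern    # True: excluded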
In Configuration Manager, click Sync. At the bottom of the screen, select Clear cache, and then click Simulate sync. After the process completes, review the Proposed changes section of the log that is shown in the lower half of the dialog and verify that there are no unwanted changes such as deleting or suspending any users or groups. Initial user provisioning You can now trigger the initial user provisioning: Warnings Triggering user provisioning will make permanent changes to users and groups in your Cloud Identity or Google Workspace account. If you have a large number of users to provision, consider temporarily changing the LDAP query to match a subset of these users only. Using this subset of users, you can then test the process and adjust settings if necessary. After you've successfully validated results, change back the LDAP query and provision the remaining users. Avoid repeatedly modifying or deleting a large number of users when testing because such actions might be flagged as abusive behavior. Trigger a provision run as follows: In Configuration Manager, click Sync. At the bottom of the screen, select Clear cache, and then click Sync & apply changes. A dialog appears showing the status. After the process completes, check the log that is shown in the lower half of the dialog: Under Successful user changes, verify that at least one user has been created. Under Failures, verify that no failures occurred. Schedule provisioning To ensure that changes performed in Active Directory are propagated to your Cloud Identity or Google Workspace account, set up a scheduled task that triggers a provisioning run every hour: Open a PowerShell console as Administrator. Check if the Active Directory PowerShell module is available on the system: import-module ActiveDirectory If the command fails, download and install the Remote Server Administration Tools and try again. In Notepad, create a file, copy the following content into it, and save the file to %ProgramData%\gcds\sync.ps1. When you're done, close the file. [CmdletBinding()] Param( [Parameter(Mandatory=$True)] [string]$config, [Parameter(Mandatory=$True)] [string]$gcdsInstallationDir ) import-module ActiveDirectory # Stop on error. $ErrorActionPreference ="stop" # Ensure it's an absolute path. $rawConfigPath = [System.IO.Path]::Combine((pwd).Path, $config) # Discover closest GC in current domain. $dc = Get-ADDomainController -discover -Service "GlobalCatalog" -NextClosestSite Write-Host ("Using GC server {0} of domain {1} as LDAP source" -f [string]$dc.HostName, $dc.Domain) # Load XML and replace the endpoint. $dom = [xml](Get-Content $rawConfigPath) $ldapConfigNode = $dom.SelectSingleNode("//plugin[@class='com.google.usersyncapp.plugin.ldap.LDAPPlugin']/config") # Tweak the endpoint. $ldapConfigNode.hostname = [string]$dc.HostName $ldapConfigNode.ldapCredMachineName = [string]$dc.HostName $ldapConfigNode.port = "3268" # Always use GC port # Tweak the tsv files location $googleConfigNode = $dom.SelectSingleNode("//plugin[@class='com.google.usersyncapp.plugin.google.GooglePlugin']/config") $googleConfigNode.nonAddressPrimaryKeyMapFile = [System.IO.Path]::Combine((pwd).Path, "nonAddressPrimaryKeyFile.tsv") $googleConfigNode.passwordTimestampFile = [System.IO.Path]::Combine((pwd).Path, "passwordTimestampCache.tsv") # Save resulting config. 
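# The modified copy is written next to the original file as config.xml.autodiscover,
# so the configuration that Configuration Manager saved is never overwritten.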
$targetConfigPath = $rawConfigPath + ".autodiscover" $writer = New-Object System.IO.StreamWriter($targetConfigPath, $False, (New-Object System.Text.UTF8Encoding($False))) $dom.Save($writer) $writer.Close() # Start provisioning. Start-Process -FilePath "$gcdsInstallationDir\sync-cmd" ` -Wait -ArgumentList "--apply --config ""$targetConfigPath""" Configuration Manager created a secret key to encrypt the credentials in the config file. To ensure that GCDS can still read the configuration when it's run as a scheduled task, run the following commands to copy that secret key from your own profile to the profile of NT AUTHORITY\LOCAL SERVICE: New-Item -Path Registry::HKEY_USERS\S-1-5-19\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp -Force; Copy-Item -Path Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp\util ` -Destination Microsoft.PowerShell.Core\Registry::HKEY_USERS\S-1-5-19\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp\util If the commands fail, ensure that you started the PowerShell console as Administrator. Create a scheduled task by running the following commands. The scheduled task will be triggered every hour and invokes the sync.ps1 script as NT AUTHORITY\LOCAL SERVICE. Warning: After it starts, the scheduled task will make permanent changes to your Cloud Identity or Google Workspace account. $taskName = "Synchronize to Cloud Identity" $gcdsDir = "$Env:ProgramData\gcds" $action = New-ScheduledTaskAction -Execute 'PowerShell.exe' ` -Argument "-ExecutionPolicy Bypass -NoProfile $gcdsDir\sync.ps1 -config $gcdsDir\config.xml -gcdsInstallationDir '$Env:Programfiles\Google Cloud Directory Sync'" ` -WorkingDirectory $gcdsDir $trigger = New-ScheduledTaskTrigger ` -Once ` -At (Get-Date) ` -RepetitionInterval (New-TimeSpan -Minutes 60) ` -RepetitionDuration (New-TimeSpan -Days (365 * 20)) $principal = New-ScheduledTaskPrincipal -UserID "NT AUTHORITY\LOCAL SERVICE" -LogonType ServiceAccount Register-ScheduledTask -Action $action -Trigger $trigger -Principal $principal -TaskName $taskName $task = Get-ScheduledTask -TaskName "$taskName" $task.Settings.ExecutionTimeLimit = "PT12H" Set-ScheduledTask $task For more information, see Schedule automatic synchronizations. Test user provisioning You've completed the installation and configuration of GCDS, and the scheduled task will trigger a provision run every hour. To trigger a provisioning run manually, switch to the PowerShell console and run the following command: Start-ScheduledTask "Synchronize to Cloud Identity" Clean up To remove GCDS, perform the following steps: Open Windows Control Panel and click Programs > Uninstall a program. Select Google Cloud Directory Sync, and click Uninstall/Change to launch the uninstall wizard. Then follow the instructions in the wizard. Open a PowerShell console and run the following command to remove the scheduled task: $taskName = "Synchronize to Cloud Identity" Unregister-ScheduledTask -TaskName $taskName -Confirm:$False Run the following command to delete the configuration and log files: Remove-Item -Recurse -Force "$Env:ProgramData\gcds" Remove-Item -Recurse -Path Registry::HKEY_USERS\S-1-5-19\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp What's next Configure single sign-on between Active Directory and Google Cloud. Review GCDS best practices and FAQ. Find out how to troubleshoot common GCDS issues. Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. 
Acquaint yourself with best practices for managing super admin users. Send feedback
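After a manual or scheduled provisioning run, you can confirm the outcome without opening the GUI. A minimal sketch, assuming the task name used earlier and the log location that you configured in Configuration Manager:
# Check when the task last ran and the result code that it returned.
Get-ScheduledTaskInfo -TaskName "Synchronize to Cloud Identity" |
    Select-Object LastRunTime, LastTaskResult
# Show the tail of the most recent GCDS log file.
Get-ChildItem "$Env:ProgramData\gcds\*.log" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 40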
Align_spending_with_business_value.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/architecture/framework/cost-optimization/align-cloud-spending-business-value
2 |
+
Date Scraped: 2025-02-23T11:43:49.288Z
3 |
+
4 |
+
Content:
5 |
+
Home Docs Cloud Architecture Center Send feedback Align cloud spending with business value Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-18 UTC This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to align your use of Google Cloud resources with your organization's business goals. Principle overview To effectively manage cloud costs, you need to maximize the business value that the cloud resources provide and minimize the total cost of ownership (TCO). When you evaluate the resource options for your cloud workloads, consider not only the cost of provisioning and using the resources, but also the cost of managing them. For example, virtual machines (VMs) on Compute Engine might be a cost-effective option for hosting applications. However, when you consider the overhead to maintain, patch, and scale the VMs, the TCO can increase. On the other hand, serverless services like Cloud Run can offer greater business value. The lower operational overhead lets your team focus on core activities and helps to increase agility. To ensure that your cloud resources deliver optimal value, evaluate the following factors: Provisioning and usage costs: The expenses incurred when you purchase, provision, or consume resources. Management costs: The recurring expenses for operating and maintaining resources, including tasks like patching, monitoring and scaling. Indirect costs: The costs that you might incur to manage issues like downtime, data loss, or security breaches. Business impact: The potential benefits from the resources, like increased revenue, improved customer satisfaction, and faster time to market. By aligning cloud spending with business value, you get the following benefits: Value-driven decisions: Your teams are encouraged to prioritize solutions that deliver the greatest business value and to consider both short-term and long-term cost implications. Informed resource choice: Your teams have the information and knowledge that they need to assess the business value and TCO of various deployment options, so they choose resources that are cost-effective. Cross-team alignment: Cross-functional collaboration between business, finance, and technical teams ensures that cloud decisions are aligned with the overall objectives of the organization. Recommendations To align cloud spending with business objectives, consider the following recommendations. Prioritize managed services and serverless products Whenever possible, choose managed services and serverless products to reduce operational overhead and maintenance costs. This choice lets your teams concentrate on their core business activities. They can accelerate the delivery of new features and functionalities, and help drive innovation and value. The following are examples of how you can implement this recommendation: To run PostgreSQL, MySQL, or Microsoft SQL Server server databases, use Cloud SQL instead of deploying those databases on VMs. To run and manage Kubernetes clusters, use Google Kubernetes Engine (GKE) Autopilot instead of deploying containers on VMs. For your Apache Hadoop or Apache Spark processing needs, use Dataproc and Dataproc Serverless. Per-second billing can help to achieve significantly lower TCO when compared to on-premises data lakes. Balance cost efficiency with business agility Controlling costs and optimizing resource utilization are important goals. 
However, you must balance these goals with the need for flexible infrastructure that lets you innovate rapidly, respond quickly to changes, and deliver value faster. The following are examples of how you can achieve this balance: Adopt DORA metrics for software delivery performance. Metrics like change failure rate (CFR), time to detect (TTD), and time to restore (TTR) can help to identify and fix bottlenecks in your development and deployment processes. By reducing downtime and accelerating delivery, you can achieve both operational efficiency and business agility. Follow Site Reliability Engineering (SRE) practices to improve operational reliability. SRE's focus on automation, observability, and incident response can lead to reduced downtime, lower recovery time, and higher customer satisfaction. By minimizing downtime and improving operational reliability, you can prevent revenue loss and avoid the need to overprovision resources as a safety net to handle outages. Enable self-service optimization Encourage a culture of experimentation and exploration by providing your teams with self-service cost optimization tools, observability tools, and resource management platforms. Enable them to provision, manage, and optimize their cloud resources autonomously. This approach helps to foster a sense of ownership, accelerate innovation, and ensure that teams can respond quickly to changing needs while being mindful of cost efficiency. Adopt and implement FinOps Adopt FinOps to establish a collaborative environment where everyone is empowered to make informed decisions that balance cost and value. FinOps fosters financial accountability and promotes effective cost optimization in the cloud. Promote a value-driven and TCO-aware mindset Encourage your team members to adopt a holistic attitude toward cloud spending, with an emphasis on TCO and not just upfront costs. Use techniques like value stream mapping to visualize and analyze the flow of value through your software delivery process and to identify areas for improvement. Implement unit costing for your applications and services to gain a granular understanding of cost drivers and discover opportunities for cost optimization. For more information, see Maximize business value with cloud FinOps. Previous arrow_back Overview Next Foster a culture of cost awareness arrow_forward Send feedback
AlloyDB_for_PostgreSQL.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/alloydb
2 |
+
Date Scraped: 2025-02-23T12:03:54.417Z
3 |
+
4 |
+
Content:
5 |
+
The future of PostgreSQL is here, and it's built for you. Download our new ebook to discover how AlloyDB can transform your business.AlloyDB100% PostgreSQL-compatible database that runs anywherePower your most demanding enterprise workloads with superior performance, availability, and scale, and supercharge them with AI.Go to consoleDocumentationGet started with a 30-day AlloyDB free trial instance.In addition, new Google Cloud customers get $300 in free credits.Product highlightsEliminates your dependency on proprietary databasesScalable for applications of all sizes 99.99% availability SLA, inclusive of maintenanceWhat is AlloyDB?3:57FeaturesBetter price-performanceAlloyDB is more than 4x faster for transactional workloads and provides up to 2x better price-performance compared to self-managed PostgreSQL.* It's suitable for the most demanding enterprise workloads, including those that require high transaction throughput, large data sizes, or multiple read replicas. Read connections scale horizontally, backed by low lag, scale-out read replica pools, and support cross-region replicas. Committed use discounts offer additional savings for one to three year commitments.AlloyDB vs. self-managed PostgreSQLA price-performance comparisonRead the blogFully featured vector databaseAlloyDB AI can help you build a wide range of generative AI applications. It uses the Google ScaNN index, the technology that powers services like Google Search and YouTube, to scale to over a billion vectors and deliver up to 4 times faster vector queries than the HNSW index in standard PostgreSQL.* It also helps you generate vector embeddings from within your database, access data stored in open-source gen AI tools such as LangChain, and access AI models in Vertex AI and other platforms.Build gen AI apps with PostgreSQLCombine AI models with real-time operational dataRead the blogRuns anywhereAlloyDB Omni is a downloadable edition of AlloyDB, designed to run anywhere—in your data center, on your laptop, at the edge, and in any cloud. It’s powered by the same engine that underlies the cloud-based AlloyDB service and provides the same functionality. AlloyDB Omni is a fraction of the cost of legacy databases, so it’s an attractive way to modernize to an enterprise-grade version of PostgreSQL with support from a Tier 1 vendor.Quickstart5:34AI-driven development and operationsSimplify all aspects of the database journey with AI-powered assistance, helping your teams focus on what matters most. AlloyDB Studio offers a query editor where you can use Gemini to write SQL and analyze your data using natural language prompts. Database Center, in Preview, provides a comprehensive view of your database fleet with intelligent performance and security recommendations, and you can enable Gemini for an easy-to-use chat interface to ask questions and get optimization recommendations.Manage your data using AlloyDB StudioBuild database applications faster using natural languageRead documentationReal-time business insightsThanks to its built-in columnar engine, AlloyDB is up to 100x faster than standard PostgreSQL for analytical queries*, with zero impact on operational performance when running business intelligence, reporting, and hybrid transactional and analytical workloads (HTAP). 
You can also stream data or use federated queries with BigQuery and call machine learning models in Vertex AI directly within a query or transaction.Under the hood: columnar engineHow the AlloyDB columnar engine accelerates your analytical queriesRead the blogFully managed database serviceAlloyDB is a fully managed database service with cloud-scale architecture and industry-leading performance, availability, and scale. It automates administrative tasks such as backups, replication, patching, and capacity management, and uses adaptive algorithms and machine learning for PostgreSQL vacuum management, storage management, memory management, data tiering, and analytics acceleration, so you can focus on building your applications.Bayer Crop Science unlocks harvest data efficiency2:07PostgreSQL compatibilityPostgreSQL has emerged in recent years as a leading alternative to legacy, proprietary databases because of its rich functionality, ecosystem extensions, and enterprise readiness. AlloyDB is fully compatible with PostgreSQL, providing flexibility and true portability for your workloads. It’s easy to migrate existing workloads and bring your existing PostgreSQL skills over.Is AlloyDB compatible with PostgreSQL?2:37High availabilityAlloyDB offers a 99.99% uptime SLA, inclusive of maintenance. It automatically detects and recovers from most database failures within 60 seconds, independent of database size and load. The architecture supports non-disruptive instance resizing and database maintenance, with planned operations incurring less than 1 second of application downtime. Read pools are updated with zero downtime.ScalabilityScale AlloyDB instances up and down to support almost any workload and deploy up to 20 read replicas for read scalability. You can also create secondary clusters in different regions for disaster recovery, have them continuously replicated from the primary region, and promote them in case of an outage. Regional replicas deliver more than 25x lower replication lag than standard PostgreSQL for high throughput transactional workloads. They also improve read performance by bringing data closer to your users.Character.ai scales its growing gen AI platform2:46Secure access and connectivityAlloyDB encrypts data in transit and at rest. It supports private connectivity with Virtual Private Cloud (VPC) for secure connectivity to your applications, and allows you to access your database instances via private IP without going through the internet or using external IP addresses. You can manage your own encryption keys to encrypt data at rest for your database and your backups, and you can choose between Identity and Access Management (IAM) and PostgreSQL user roles for database authentication.Customer-friendly pricingEliminate your dependency on high-cost, proprietary databases, and take advantage of transparent and predictable pricing with no proprietary licensing or opaque I/O charges. Storage is automatically managed and you're only charged for what you use, with no additional storage costs for read replicas. An ultra-fast cache, automatically provisioned in addition to instance memory, allows you to maximize the price-performance ratio. Committed use discounts offer additional savings for one to three year commitments. 
And AlloyDB Omni offers another simple and affordable deployment option.FLUIDEFI addresses DeFi industry challengesAlloyDB boosts response speed and reduces costsRead the blogContinuous backup and recoveryProtect your business from data loss with point-in-time recovery to any point within your defined retention window. Restore the database to a specific date and time for development, testing, and auditing purposes, or recover your production database from user or application errors. You no longer need expensive hardware and software solutions to achieve enterprise-level data protection.Under the hood: business continuityBuild highly resilient applications with AlloyDBRead the blogEasy migrationsNo matter where your source database is located—whether on-premises, on Compute Engine, or in other clouds—Database Migration Service (DMS) can migrate it to AlloyDB securely and with minimal downtime. DMS leverages native PostgreSQL replication capabilities and automated schema and code conversion, as necessary, to maximize the reliability of your migration. Gemini adds AI-powered assistance for heterogeneous database migrations, helping you review and convert database-resident code and offering full explanability. For an assessment of your migration needs, try the Database Modernization Program.View all featuresCompare PostgreSQL options on Google CloudGoogle Cloud serviceOverviewKey benefitsAlloyDBFull PostgreSQL compatibility, with superior performance and scale and the most comprehensive generative AI supportTry AlloyDB free to enjoy:4x faster performance for transactional workloads*Hybrid transactional and analytical processing (HTAP)Scalability for the most demanding enterprise workloadsCloud SQLFully managed, standard version of open source PostgreSQL in the cloudLearn more about how Cloud SQL provides:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionSpannerCloud-native with unlimited scalability and PostgreSQL interface and toolingLearn more about Spanner if you need:Unlimited scale and global consistency99.999% availability SLASupport for relational and non-relational workloadsAlloyDBOverviewFull PostgreSQL compatibility, with superior performance and scale and the most comprehensive generative AI supportKey benefitsTry AlloyDB free to enjoy:4x faster performance for transactional workloads*Hybrid transactional and analytical processing (HTAP)Scalability for the most demanding enterprise workloadsCloud SQLOverviewFully managed, standard version of open source PostgreSQL in the cloudKey benefitsLearn more about how Cloud SQL provides:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionSpannerOverviewCloud-native with unlimited scalability and PostgreSQL interface and toolingKey benefitsLearn more about Spanner if you need:Unlimited scale and global consistency99.999% availability SLASupport for relational and non-relational workloadsHow It WorksAlloyDB provides full compatibility with open source PostgreSQL, including popular extensions, with higher performance and scalability. Each cluster has a primary instance and an optional read pool with multiple read nodes, and you can replicate to secondary clusters in separate regions. AlloyDB features continuous backup and offers high availability with non-disruptive maintenance for most workloads. 
It uses adaptive algorithms to eliminate many routine administration tasks.View documentationA deep dive into AlloyDB with Andi GutmansCommon UsesTransactional workloadsThe future of PostgreSQL is hereAlloyDB is fully PostgreSQL-compatible but is more than 4x faster for transactional workloads, making it ideal for e-commerce, financial applications, gaming, and any other workload that demands high throughput and fast response times. Independent scaling of compute and storage, an ultra-fast cache, and intelligent database management give your applications room to grow and eliminate tedious database administration tasks.Ebook: AlloyDB, the database built for youBlog: Learn how to use Index Advisor to optimize performanceQuickstart: Create and connect to a databaseLearning resourcesThe future of PostgreSQL is hereAlloyDB is fully PostgreSQL-compatible but is more than 4x faster for transactional workloads, making it ideal for e-commerce, financial applications, gaming, and any other workload that demands high throughput and fast response times. Independent scaling of compute and storage, an ultra-fast cache, and intelligent database management give your applications room to grow and eliminate tedious database administration tasks.Ebook: AlloyDB, the database built for youBlog: Learn how to use Index Advisor to optimize performanceQuickstart: Create and connect to a databaseAnalytical workloadsGet real-time business insightsAs a data analyst, you can get real-time insights by running queries directly on your AlloyDB operational database. With its built-in, automatically managed columnar engine, AlloyDB is up to 100x faster than standard PostgreSQL for analytical queries, making it ideal for real-time business intelligence dashboards, fraud detection, customer behavior analysis, inventory optimization, and personalization.Video: Learn how the columnar engine can boost performanceBlog: AlloyDB columnar engine under the hoodLab: Accelerating analytical queries with the columnar engineLearning resourcesGet real-time business insightsAs a data analyst, you can get real-time insights by running queries directly on your AlloyDB operational database. With its built-in, automatically managed columnar engine, AlloyDB is up to 100x faster than standard PostgreSQL for analytical queries, making it ideal for real-time business intelligence dashboards, fraud detection, customer behavior analysis, inventory optimization, and personalization.Video: Learn how the columnar engine can boost performanceBlog: AlloyDB columnar engine under the hoodLab: Accelerating analytical queries with the columnar engineGenerative AI applicationsBuild gen AI apps with state-of-the-art modelsAs generative AI gets integrated into every type of application, the operational database becomes essential for storing and searching vectors and grounding models in enterprise data. AlloyDB AI offers fast vector search, helps you generate vector embeddings from within your database, and offers access to AI models on Vertex AI and other platforms.Documentation: Query and index embeddings using pgvectorBlog: Build gen AI apps with LangChain and Google Cloud databasesBlog: AlloyDB supercharges PostgreSQL vector searchLearning resourcesBuild gen AI apps with state-of-the-art modelsAs generative AI gets integrated into every type of application, the operational database becomes essential for storing and searching vectors and grounding models in enterprise data. 
Generative AI applications: Build gen AI apps with state-of-the-art models
As generative AI gets integrated into every type of application, the operational database becomes essential for storing and searching vectors and grounding models in enterprise data. AlloyDB AI offers fast vector search, helps you generate vector embeddings from within your database, and offers access to AI models on Vertex AI and other platforms.
Learning resources: Documentation: Query and index embeddings using pgvector | Blog: Build gen AI apps with LangChain and Google Cloud databases | Blog: AlloyDB supercharges PostgreSQL vector search
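As a rough sketch of what vector search looks like in practice, the following uses the standard pgvector syntax covered by the documentation above. The table, embedding dimensionality, and query vector are invented for illustration; a real application would generate embeddings with an embedding model (for example, through Vertex AI) rather than hard-coding them.

```python
# Minimal pgvector sketch against an AlloyDB (PostgreSQL-compatible) database.
# Table names, dimensions, and vectors are illustrative placeholders.
import psycopg2

conn = psycopg2.connect(host="10.0.0.5", dbname="genai", user="app", password="your-password")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id BIGSERIAL PRIMARY KEY,
            content TEXT,
            embedding vector(3)  -- real embeddings typically have hundreds of dimensions
        );
    """)
    cur.execute(
        "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
        ("refund policy for damaged items", "[0.12, 0.80, 0.31]"),
    )
    # Nearest-neighbour search: `<->` is pgvector's Euclidean distance operator.
    cur.execute(
        "SELECT content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.10, 0.75, 0.30]",),
    )
    print([row[0] for row in cur.fetchall()])
conn.close()
```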
Database modernization: Move your apps to modern, fully managed databases
Whether you're an existing PostgreSQL user or you're looking to transition from legacy databases, you have a path to fully managed databases in the cloud. Database Migration Service (DMS) can help with assessment, conversion, and migration so you can experience the enhanced performance, scalability, and reliability of AlloyDB while maintaining full PostgreSQL compatibility.
Learning resources: Documentation: An overview of migration to AlloyDB | Demo: Build and modernize an e-commerce application | Blog: How B4A achieves beautiful performance for its beauty platform

Multicloud and hybrid cloud: Run anywhere, at a fraction of the cost
While the cloud is the typical destination for database migrations, you may need to keep some workloads on premises to meet regulatory or data sovereignty requirements, and you might need to run them on other clouds or at the edge. AlloyDB Omni is a downloadable edition of AlloyDB that's powered by the same engine as the cloud-based service and offers the same functionality, so you can standardize on a single database across all platforms.
Learning resources: Blog: AlloyDB Omni, the downloadable edition of AlloyDB | Quickstart: Learn how to install AlloyDB Omni | Video: AlloyDB Omni on Google Compute Engine Quickstart

Pricing: How AlloyDB pricing works
Pricing for AlloyDB for PostgreSQL is transparent and predictable, with no expensive proprietary licensing and no opaque I/O charges.
Compute: vCPUs starting at $0.06608 per vCPU hour; memory starting at $0.0112 per GB hour.
Storage: regional cluster storage starting at $0.0004109 per GB hour; backup storage starting at $0.000137 per GB hour.
Networking: data transfer in is free; data transfer within a region is free; inter-region data transfer starts at $0.02 per GB; outbound traffic to the internet starts at $0.08 per GB; specialty networking services vary.
Get full details on pricing and learn about committed use discounts. *Based on Google Cloud performance tests.
PRICING CALCULATOR: Estimate your monthly AlloyDB costs, including region-specific pricing and fees. Estimate your costs
CUSTOM QUOTE: Connect with our sales team to get a custom quote for your organization. Request a quote
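To make the rate card concrete, here is a small back-of-the-envelope estimator using the list prices quoted above. It assumes a single always-on primary instance and ignores read pools, committed use discounts, networking, and regional price differences, so treat the result as a rough illustration rather than a quote; the pricing calculator is the authoritative tool.

```python
# Rough monthly cost sketch for one AlloyDB primary instance, using the list
# prices quoted on this page. Assumptions: 730 hours/month, no read pool,
# no discounts, no networking or regional adjustments.
VCPU_PER_HOUR = 0.06608           # USD per vCPU hour
MEM_GB_PER_HOUR = 0.0112          # USD per GB hour
STORAGE_GB_PER_HOUR = 0.0004109   # USD per GB hour (regional cluster storage)
BACKUP_GB_PER_HOUR = 0.000137     # USD per GB hour (backup storage)
HOURS_PER_MONTH = 730

def estimate_monthly_cost(vcpus, memory_gb, storage_gb, backup_gb):
    compute = vcpus * VCPU_PER_HOUR * HOURS_PER_MONTH
    memory = memory_gb * MEM_GB_PER_HOUR * HOURS_PER_MONTH
    storage = storage_gb * STORAGE_GB_PER_HOUR * HOURS_PER_MONTH
    backup = backup_gb * BACKUP_GB_PER_HOUR * HOURS_PER_MONTH
    return compute + memory + storage + backup

# Example: a 16 vCPU / 128 GB instance with 1 TB of data and 2 TB of backups.
print(f"${estimate_monthly_cost(16, 128, 1024, 2048):,.2f} per month (rough estimate)")
```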
Start your proof of concept
Get started with a 30-day AlloyDB free trial instance: Start free trial | Learn how to use AlloyDB: View training | Learn about the key benefits of AlloyDB: Watch video | Compare AlloyDB to self-managed PostgreSQL: Learn more | Build gen AI applications with AlloyDB: Learn more

Business Case: Learn from customers using AlloyDB
"As our customer base grew, we ran into PostgreSQL vertical scalability limits and problems like CPU, memory, and connection exhaustion. We were thrilled that AlloyDB gave us a drop-in PostgreSQL replacement with much more efficient reads and writes. AlloyDB requires less CPUs to hit our throughput and latency goals, lowering our cost by 40-50% and preparing us for the next phase of customer growth." JP Grace, Chief Technology Officer, Endear. Read customer story
Related content: FLUIDEFI nets 3x gains in processing speed with AlloyDB | How Bayer Crop Science unlocked harvest data efficiency with AlloyDB | How Character.ai uses AlloyDB to scale its growing Gen AI platform

Featured benefits and customers
Demand more from your database? AlloyDB offers superior speed, reliability, and ease of use. With a downloadable version, AlloyDB can tackle any project, anywhere. AlloyDB is perfect for building gen AI apps and is equipped with its own impressive AI capabilities.

Partners & Integration: Accelerate your workloads by working with a partner
Data integration and migration | Business intelligence and analytics | Data governance, security, and observability | Systems integration - global | Systems integration - regional
Streamline the process of moving to, building, and working with AlloyDB. Read about Google Cloud Ready - AlloyDB validated partners and visit the partner directory for a full list of AlloyDB partners.
Analyst_reports.txt
ADDED
@@ -0,0 +1,5 @@
+ URL: https://cloud.google.com/analyst-reports
+ Date Scraped: 2025-02-23T11:57:37.341Z
+
+ Content:
Analyst reportsLearn what top industry analyst firms are saying about Google Cloud.Contact usMore from Google CloudSolutionsWhitepapersExecutive insightsFilter byFiltersLearn more about Google Cloud’s momentumsearchsendLearn more about Google Cloud’s momentumRead what industry analysts are saying about Google Cloud. The reports listed here are written by third-party industry analysts that cover Google Cloud’s strategy, product portfolio, and differentiation. You can also learn more by reading whitepapers written by Google and the Google community.Google Cloud NGFW Enterprise Certified Secure Test ReportRead Miercom's test results on Google Cloud Next Generation Firewall Enterprise.Google Cloud NGFW Enterprise CyberRisk Validation ReportRead SecureIQlab's test results on Google Cloud Next Generation Firewall Enterprise.Google is a Leader in The Forrester Wave™: AI Foundation Models for Language, Q2 2024Access your complimentary copy of the report to learn why Google was named a Leader.Google is a Leader in the 2024 Gartner® Magic Quadrant™ for Cloud AI Developer Services (CAIDS) Access your complimentary copy of the report to learn why Google was named a Leader.Boston Consulting Group: Any company can become a resilient data championInsights from 700 global business leaders reveal the secrets to data maturity.Google is a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Access your complimentary copy of the report to learn why Google was named a Leader.Google is a Leader in the 2023 Gartner® Magic Quadrant™ for Cloud Database Management Systems (DBMS)Access your complimentary copy of the report to learn why Google was named a leader and further in vision among vendors.Google Cloud is named a Leader in The Forrester Wave™: Streaming Data Platforms, Q4 2023Access your complimentary copy of the report to learn why Google Cloud was named a leader.Google is named a Leader in The Forrester Wave™: Cloud Data Warehouses, Q2 2023Access your complimentary copy of the report to learn why Google Cloud was named a leader.Forrester names Google Cloud a Leader in The Forrester Wave™: IaaS Platform Native Security Q2 2023Access your complimentary copy of the report to learn why Google Cloud was named a Leader.Forrester names Google Cloud a Leader in The Forrester Wave™: Data Security Platforms Q1 2023Access your complimentary copy of the report to learn why Google Cloud was named a LeaderGoogle is a Leader in the 2023 Gartner® Magic Quadrant™ for Enterprise Conversational AI PlatformsAccess your complimentary copy of the report to learn why Google was named a LeaderThe Total Economic Impact™ Of Google Cloud’s Operations SuiteCost Savings And Business Benefits Enabled By Google Cloud’s Operations Suite.The Forrester Wave™: Web Application Firewalls, Q3 2022Report shows how Cloud Armor stacks up against web application firewall (WAF) providers.Gartner® Magic Quadrant™ for Cloud Infrastructure and Platform Services (CIPS) 2022Google is a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud Infrastructure and Platform Services (CIPS).Forrester’s Total Economic Impact of Google Cloud AnthosThis report outlines customers cost savings and business benefits enabled by Anthos.IDC: A Built-In Observability Tool Adoption Blueprint for Public CloudWhitepaper for DevOps, Development, Operations, and SRE Forrester’s Opportunity Snapshot: State Of Public Cloud Migration, 2022Get things right through your cloud migration journeyThe 2022 Gartner® Magic Quadrant ™ for Full Life Cycle API ManagementAccess 
your complimentary copy of the report to learn why Google Cloud Apigee was named a leader.The Forrester Wave™: Public Cloud Container Platforms, Q1 2022Access your complimentary copy of the report to learn why Google Cloud was named a Leader.Gartner® Magic Quadrant™ for Cloud AI Developer Services Gartner names Google a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud AI Developer Services report.IDC names Google a Leader in the IDC MarketScape: Asia/Pacific (Excluding Japan) AI Life-Cycle Software Tools and Platforms 2022 Vendor AssessmentGet your complimentary copy of the report excerpt to learn why Google was named a Leader.Strategies for Migration to Public Clouds: Lessons Learned from Industry LeadersIDC surveyed 204 US-based IT decision makers with experience in successfully migrating.The Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022Access your complimentary copy of the report to learn why Google was named a Leader The Forrester Wave™: API Management Solutions, 2022Forrester names Google a Leader in the 2022 Forrester Wave™ for API Management Solutions WaveGoogle Cloud wins Frost & Sullivan Technology Innovation Award 2022Solve your complex healthcare challenges with us.Gartner® names Google Cloud a leader in the 2022 Cloud Database Management Systems Magic QuadrantLearn where the cloud database and analytics market stands and why Google Cloud was named a Leader.Forrester Research names Google as a Leader in The Forrester Wave™: AI Infrastructure, Q4 2021Access your complimentary copy of the report to learn why Google was named a Leader in AI Infrastructure.Forrester's Total Economic Impact of Cloud RunThis report outlines cost savings and business benefits enabled by Cloud Run.The Total Economic Impact™ Of Migrating Expensive OSes and Software to Google CloudCost savings and benefits by modernizing with open platforms and managed services in Google Cloud.Modernize With AIOps To Maximize Your ImpactEstablish your competitive edge by leveraging AIOps to address cloud operations challenges.Forrrester's Total Economic Impact of GKEThe report outlines cost savings and business benefits enabled by Google Kubernetes Engine (GKE).Forrester Research names Google Cloud a Leader among Public Cloud Development and Infrastructure PlatformsIn this Forrester Wave™: Public Cloud Development and Infrastructure Platforms report, Forrester evaluated the top six cloud vendors and identified Google as a Leader in this year’s report.Forrester Research names Google Cloud a Leader in The Forrester Wave™: Streaming Analytics, Q2 2021Download your copy of this report to explore how Dataflow empowers customers like you to process and enrich data at scale for streaming analytics.Forrester Research names Google Cloud a Leader in The Forrester Wave™: Unstructured Data Security Platforms, Q2 2021 reportFor this report, Forrester evaluated the capabilities of 11 providers that help customers secure and protect their unstructured data, naming Google Cloud a Leader and rated the highest in current offering. 
ISG Provider Lens™ Quadrant Report for Mainframe services and solutionsDownload your copy of the report to explore the strengths that help modernize mainframes451 Research, S&P Global Market IntelligenceAccess your complimentary copy of the Spanner Market Insight Report which showcases Spanner and how Google Cloud continues to innovate on the product.ESG Technical Validation: Google Cloud for GamingAnalyst report validating the scalable, secure, and reliable gaming infrastructure.The Forrester Wave™: Cloud Data Warehouse, Q1 2021Download your copy of the report to explore the strengths that help empower our customers to create big opportunities with big data.Gartner® names Google a Leader in the 2021 Magic Quadrant for Cloud AI Developer Services reportAccess your complimentary copy of the MQ to learn why Google was named as a Leader in this evaluation for the second year in a row.Gartner® names Google Cloud a leader in the 2020 Cloud Database Management Systems Magic QuadrantAccess your complimentary copy of the Cloud DBMS MQ report to learn why Google Cloud was named a Leader when compared to 17 vendors in the space.IDC MarketScape names Google a Leader in Cloud Data Analytics Platforms in Asia PacificIDC MarketScape evaluated vendors in Asia Pacific and noted that Google Cloud is built for cloud-native agility and outcome-based digital innovation, and is steadily expanding its customer base from digital natives to very large organizations actively working to become Intelligent Enterprises.Google is named a Leader in 2020 Magic Quadrant for Cloud Infrastructure and Platform ServicesGoogle is named a Leader in the Gartner® Magic Quadrant for Cloud Infrastructure and Platform Services for the third year in a row.Forrester Research Names Google Cloud a Leader among Public Cloud Development and Infrastructure Platforms in ANZIn this Forrester Wave™: Public Cloud Development and Infrastructure Platforms Australia/New Zealand, Q3 2020 report, Forrester evaluated seven top cloud vendors and identified Google as a Leader in this report.The Forrester Wave™: Data Management for Analytics, Q1 2020Forrester names Google Cloud a Leader in the 2020 Data Management for Analytics Forrester Wave™. Google Cloud received the highest score possible in categories such as: roadmap, performance, high availability, scalability, data ingestion, data storage, data security, and customer use cases.IDC Research: The Power of the Database for the Cloud: SaaS Developer PerspectivesCurious about how SaaS developers are evaluating database technology? This whitepaper examines SaaS developer perspectives, wants, and behaviors through IDC’s qualitative and quantitative research method.Gartner® 2020 Magic Quadrant for Cloud AI Developer ServicesGartner positions Google Cloud as a Leader in Cloud AI Developer Services.Forrester New Wave™: Computer Vision (CV) Platforms Q4, 2019Forrester positions Google Cloud a Leader in Computer Vision Platforms. Google Cloud received the highest score among the vendors evaluated and was also the only provider to receive the highest possible score of “differentiated” across all 10 evaluation criteria.Gartner® 2019 Magic Quadrant for Operational Database Management SystemsGartner names Google a Leader in the 2019 Magic Quadrant for Operational Database Management Systems.The Forrester Wave™: Streaming Analytics, Q3 2019Forrester names Google Cloud a Leader in its evaluation for stream analytics solutions. 
Forrester identified the most significant providers for evaluation, with Google receiving the highest score possible in 11 different categories.The Forrester Wave™: Cloud Native Continuous Integration Tools, Q3 2019Google comes out on top and named a Leader in the cloud native continuous integration tools market. Forrester identified the 10 most significant providers and evaluated them against 27 key criteria.Google Cloud industries: AI Acceleration among ManufacturersLearn how the pandemic may have sparked an increase in the use of AI among manufacturers.IDC MarketScape names Google a Leader in Vision AI Software Platforms in Asia PacificGet your complimentary copy excerpt of the report to learn why Google was named a leader.Gartner® Critical Capabilities for Cloud AI Developer ServicesDownload your copy of the report to explore Gartner’s analysis of this market.IDC whitepaper: Deploy faster and reduce costs for MySQL and PostgreSQL databasesMigrating your databases to Cloud SQL can lower costs, boost agility, and speed up deployments. Get details in this IDC report.Google is a Leader in the 2023 Gartner® Magic Quadrant™ for Cloud AI Developer ServicesAccess your complimentary copy of the report to learn why Google was named a Leader.The Forrester Wave™: Data Lakehouses, Q2 2024Google is named a leader in The Forrester Wave™: Data Lakehouses Q2 2024 report. The 2024 Gartner® Magic Quadrant™ for Analytics and Business Intelligence PlatformsAccess your complimentary copy of the report to learn why Google was named a leader.Google is named a Leader in The Forrester Wave™: Data Lakehouses, Q2 2024Access your complimentary copy of the report to learn why Google was named a leader.The 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning PlatformsAccess your complimentary copy of the report to learn why Google was named a Leader.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith [email protected]
Analytics_Hub.txt
ADDED
@@ -0,0 +1,5 @@
+ URL: https://cloud.google.com/analytics-hub
+ Date Scraped: 2025-02-23T12:03:49.209Z
+
+ Content:
Learn more about privacy-centric data sharing with our recent announcement of BigQuery data clean rooms.Jump to Analytics HubAnalytics HubAnalytics Hub is a data exchange that allows you to efficiently and securely exchange data assets across organizations to address challenges of data reliability and cost. Curate a library of internal and external assets, including unique datasets like Google Trends, backed by the power of BigQuery.Go to consoleIncrease the ROI of data initiatives by exchanging data, ML models, or other analytics assetsDrive innovation with unique datasets from Google, commercial data providers, or your partnersSave time publishing or subscribing to shared datasets in a secure and privacy-safe environment2:01Analytics Hub in a minute: Learn how to share analytics assets with easeBenefitsSave costs and efficiently share and exchange data Analytics Hub builds on the scalability and flexibility of BigQuery to streamline how you publish, discover, and subscribe to data exchanges and incorporate into your analysis, without the need to move data. Centralized management of data and analytics assetsAnalytics Hub streamlines the accessibility of data and analytics assets. In addition to internal datasets, access public, industry, and Google datasets, like Looker Blocks, or Google Trends data.Privacy-safe, secure data sharing with governanceData shared within Analytics Hub automatically includes in-depth governance, encryption, and security from BigQuery, Cloud KMS, Cloud IAM, VPC Security Controls, and more.Key featuresThe Analytics Hub differenceBuilt on a decade of data sharing in BigQuerySince 2010, BigQuery has supported always-live, in-place data sharing within an organization’s security perimeter (intra-organizational sharing) as well as data sharing across boundaries to external organizations, like vendor or partner ecosystems. Looking at usage over a one week period in September 2022, more than 6,000 organizations shared over 275 petabytes of data in BigQuery, not accounting for intra-organizational sharing. Analytics Hub makes the administration of sharing assets across any boundary even easier and more scalable, while retaining access to key capabilities of BigQuery like its built-in ML, real-time, and geospatial analytics.Privacy-centric sharing with data clean roomsCreate a low-trust environment for you and your partners to collaborate without copying or moving the underlying data right within BigQuery. This allows you to perform privacy-enhancing transformations in BigQuery SQL interfaces and monitor usage to detect privacy threats on shared data. Benefit from BigQuery scale without needing to manage any infrastructure and built-in BI and AI/ML. Explore use cases for data clean rooms.Curated exchanges with subscription management and governanceExchanges are collections of data and analytics assets designed for sharing. Administrators can easily curate an exchange by managing the dataset listings within the exchange. Rich metadata can help subscribers find the data they're looking for, and even leverage analytics assets associated with that data. Exchanges within Analytics Hub are private by default, but granular roles and permissions can be set easily for you to deliver data at scale to exactly the right audiences. Data publishers can now easily view and manage subscriptions for all their shared datasets. 
Administrators can now monitor the usage of Analytics Hub through Audit Logging and Information Schema, while enforcing VPC Service Controls to securely share data.A sharing model for scalability, security, and flexibility Shared datasets are collections of tables and views in BigQuery defined by a data publisher and make up the unit of cross-project/cross-organizational sharing. Data subscribers get an opaque, read-only, linked dataset inside their project and VPC perimeter that they can combine with their own datasets and connect to solutions from Google Cloud or our partners. For example, a retailer might create a single exchange to share demand forecasts to the thousands of vendors in their supply chain—having joined historical sales data with weather, web clickstream, and Google Trends data in their own BigQuery project—then sharing real-time outputs via Analytics Hub. The publisher can add metadata, track subscribers, and see aggregated usage metrics.Search and discovery for internal, public, or commercial datasetsExplore the revamped search experience to browse and quickly find relevant datasets. In addition to easily finding your organization's internal datasets in Analytics Hub, this also includes Google datasets like Google Trends and Earth Engine, commercial datasets from our partners like Crux, and public datasets available in Google Cloud Marketplace.CustomersDelivering business value and innovation through secure data sharingVideoLearn how TelevisaUnivision uses Analytics Hub for secure internal data sharing20:00Blog postHear from Navin Warerkar, Managing Director and US Google Cloud Data & Analytics GTM Lead5-min readBlog postHear from Di Mayze, Global Head of Data and Artificial Intelligence5-min readBlog postHear from Will Freiberg, Chief Executive Officer5-min readSee all customersWe are excited to partner with Google to leverage Analytics Hub and BigQuery to deliver data to over 400 statisticians and data modelers as well as securely sharing data with our partner financial institutions.Kumar Menon, SVP Data Fabric and Decision Science, EquifaxLearn moreWhat's newWant to learn more about Analytics Hub? Blog postSoundCommerce: Power Retail Profitable Growth with Analytics Hub Read the blogVideoWatch Analytics Hub demo and more from Data Cloud Summit 2022Watch videoBlog postSecurely exchange data at scale with Analytics Hub, now available in PreviewRead the blogBlog postInternational Google Trends dataset now in Analytics HubRead the blogDocumentationDocumentationArchitectureIntroduction to Analytics HubWith Analytics Hub, you can discover and access a data library curated by various data providers. Explore architecture for publisher and subscriber workflows.Learn moreGoogle Cloud BasicsManage data exchangesGet started by learning how to create, update, or delete a data exchange and manage Analytics Hub users.Learn moreGoogle Cloud BasicsManage listingsA listing is a reference to a shared dataset that a publisher lists in a data exchange. 
Learn how to manage listings as an Analytics Hub publisher. Learn more
Not seeing what you're looking for? View all product documentation | Explore more docs | Get a quick intro to using this product | Learn to complete specific tasks with this product | Browse guides and tutorials for this product | View APIs, references, and other resources for this product

Pricing: Simple and logical pricing
Pricing for Analytics Hub is based on the underlying pricing structure of BigQuery, with the following distinctions for data publishers and data subscribers. Organizations publishing data into an exchange pay for the storage of that data according to BigQuery storage pricing. Organizations subscribing to data from an exchange only pay for query processing from within their organization, and according to their BigQuery pricing plan (flat-rate or on-demand). For detailed pricing information, please view the BigQuery pricing guide. View pricing details

Partners: Thousands of datasets made available via public and commercial data providers
If you're interested in becoming a data provider or learning about our Data Gravity initiative, please contact Google Cloud sales. See all data analytics partners. Explore public datasets in the marketplace. This product is in early access. For more information on our product launch stages, see here.

Take the next step
Start your next project, explore interactive tutorials, and manage your account. Go to console | Need help getting started? Contact sales | Work with a trusted partner: Find a partner | Get tips and best practices: See tutorials
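As a concrete illustration of the subscriber side described above, the sketch below queries a dataset that was shared through Analytics Hub. Once you subscribe to a listing, the shared data appears as a linked dataset in your own project, and you query it like any other BigQuery dataset; the project, dataset, and table names here are placeholders.

```python
# Minimal sketch: querying a linked dataset created by subscribing to an
# Analytics Hub listing. Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-subscriber-project")

query = """
    SELECT region, SUM(sales) AS total_sales
    FROM `my-subscriber-project.linked_retail_dataset.daily_sales`
    GROUP BY region
    ORDER BY total_sales DESC
    LIMIT 10
"""

# Subscribers pay only for query processing; storage is billed to the publisher.
for row in client.query(query).result():
    print(row.region, row.total_sales)
```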
Analytics_hybrid_and_multicloud_patterns.txt
ADDED
@@ -0,0 +1,5 @@
+ URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/analytics-hybrid-multicloud-pattern
+ Date Scraped: 2025-02-23T11:50:05.163Z
+
+ Content:
Home Docs Cloud Architecture Center Send feedback Analytics hybrid and multicloud pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC

This document discusses how to use the analytics hybrid and multicloud pattern to capitalize on the split between transactional and analytics workloads. In enterprise systems, most workloads fall into these categories:
Transactional workloads include interactive applications like sales, financial processing, enterprise resource planning, or communication.
Analytics workloads include applications that transform, analyze, refine, or visualize data to aid decision-making processes.
Analytics systems obtain their data from transactional systems by either querying APIs or accessing databases. In most enterprises, analytics and transactional systems tend to be separate and loosely coupled. The objective of the analytics hybrid and multicloud pattern is to capitalize on this pre-existing split by running transactional and analytics workloads in two different computing environments. Raw data is first extracted from workloads that are running in the private computing environment and then loaded into Google Cloud, where it's used for analytical processing. Some of the results might then be fed back to transactional systems.
The following diagram conceptually illustrates possible architectures by showing potential data pipelines. Each path or arrow represents a possible data movement and transformation pipeline option that can be based on ETL or ELT, depending on the available data quality and targeted use case. To move your data into Google Cloud and unlock value from it, use data movement services, a complete suite of data ingestion, integration, and replication services. As shown in the preceding diagram, connecting Google Cloud with on-premises environments and other cloud environments can enable various data analytics use cases, such as data streaming and database backups. To power the foundational transport of a hybrid and multicloud analytics pattern that requires a high volume of data transfer, Cloud Interconnect and Cross-Cloud Interconnect provide dedicated connectivity to on-premises and other cloud providers.

Advantages
Running analytics workloads in the cloud has several key advantages:
Inbound traffic—moving data from your private computing environment or other clouds to Google Cloud—might be free of charge.
Analytics workloads often need to process substantial amounts of data and can be bursty, so they're especially well suited to being deployed in a public cloud environment. By dynamically scaling compute resources, you can quickly process large datasets while avoiding upfront investments or having to overprovision computing equipment.
Google Cloud provides a rich set of services to manage data throughout its entire lifecycle, ranging from initial acquisition through processing and analyzing to final visualization. Data movement services on Google Cloud provide a complete suite of products to move, integrate, and transform data seamlessly in different ways.
Cloud Storage is well suited for building a data lake.
Google Cloud helps you to modernize and optimize your data platform to break down data silos. Using a data lakehouse helps to standardize across different storage formats. It can also provide the flexibility, scalability, and agility needed to help ensure that your data generates value for your business, rather than inefficiencies. For more information, see BigLake.
BigQuery Omni provides compute power that runs locally to the storage on AWS or Azure. It also helps you query your own data stored in Amazon Simple Storage Service (Amazon S3) or Azure Blob Storage. This multicloud analytics capability lets data teams break down data silos. For more information about querying data stored outside of BigQuery, see Introduction to external data sources.
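To make the pattern concrete, here is a minimal, hypothetical sketch of the "extract on-premises, load into Google Cloud, analyze" flow: a CSV extract that has already been staged in a Cloud Storage bucket is loaded into BigQuery with a load job, where it can then be queried alongside other datasets. The bucket, dataset, and table names are placeholders, and this is only one of the many ETL/ELT options the diagram alludes to.

```python
# Minimal sketch of the load step in the analytics hybrid/multicloud pattern:
# a transactional extract staged in Cloud Storage is loaded into BigQuery.
# Bucket, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="analytics-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the extract for this sketch
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://onprem-extracts/sales/2025-02-23/*.csv",
    "analytics-project.staging.sales_raw",
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish

table = client.get_table("analytics-project.staging.sales_raw")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")
```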
Best practices
To implement the analytics hybrid and multicloud architecture pattern, consider the following general best practices:
Use the handover networking pattern to enable the ingestion of data. If analytical results need to be fed back to transactional systems, you might combine both the handover and the gated egress pattern.
Use Pub/Sub queues or Cloud Storage buckets to hand over data to Google Cloud from transactional systems that are running in your private computing environment. These queues or buckets can then serve as sources for data-processing pipelines and workloads (see the sketch after this list).
To deploy ETL and ELT data pipelines, consider using Cloud Data Fusion or Dataflow depending on your specific use case requirements. Both are fully managed, cloud-first data processing services for building and managing data pipelines.
To discover, classify, and protect your valuable data assets, consider using Google Cloud Sensitive Data Protection capabilities, like de-identification techniques. These techniques let you mask, encrypt, and replace sensitive data—like personally identifiable information (PII)—using a randomly generated or pre-determined key, where applicable and compliant.
When you have existing Hadoop or Spark workloads, consider migrating jobs to Dataproc and migrating existing HDFS data to Cloud Storage.
When you're performing an initial data transfer from your private computing environment to Google Cloud, choose the transfer approach that is best suited for your dataset size and available bandwidth. For more information, see Migration to Google Cloud: Transferring your large datasets.
If data transfer or exchange between Google Cloud and other clouds is required for the long term with high traffic volume, evaluate Google Cloud Cross-Cloud Interconnect to establish high-bandwidth dedicated connectivity between Google Cloud and other cloud service providers (available in certain locations).
If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cross-Cloud Interconnect.
Use consistent tooling and processes across environments. In an analytics hybrid scenario, this practice can help increase operational efficiency, although it's not a prerequisite.
Previous: Partitioned multicloud pattern | Next: Edge hybrid pattern | Send feedback
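The following is a minimal, hypothetical sketch of the handover step described in the best practices above: an on-premises transactional system publishes change records to a Pub/Sub topic, from which a Dataflow pipeline or another consumer in Google Cloud can pick them up. The project and topic names are placeholders, and a production publisher would add batching, retry handling, and schema governance.

```python
# Minimal sketch of the handover pattern: an on-premises transactional system
# publishes records to Pub/Sub for downstream analytics pipelines in Google Cloud.
# Project and topic names are hypothetical placeholders.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("analytics-project", "transactional-handover")

record = {
    "order_id": "A-1001",
    "amount": 42.50,
    "currency": "USD",
    "event_time": "2025-02-23T11:50:05Z",
}

# Pub/Sub messages carry bytes; attributes can help downstream routing.
future = publisher.publish(
    topic_path,
    data=json.dumps(record).encode("utf-8"),
    source_system="on-prem-erp",
)
print(f"Published message {future.result()} to {topic_path}")
```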
Analytics_lakehouse.txt
ADDED
@@ -0,0 +1,5 @@
+ URL: https://cloud.google.com/architecture/big-data-analytics/analytics-lakehouse
+ Date Scraped: 2025-02-23T11:48:50.653Z
+
+ Content:
Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Analytics lakehouse Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This guide helps you understand, deploy, and use the Analytics lakehouse Jump Start Solution. This solution demonstrates how you can unify data lakes and data warehouses by creating an analytics lakehouse to store, process, analyze, and activate data using a unified data stack. Common use cases for building an analytics lakehouse include the following: Large scale analysis of telemetry data combined with reporting data. Unifying structured and unstructured data analysis. Providing real-time analytics capabilities for a data warehouse. This document is intended for developers who have some background with data analysis and have used a database or data lake to perform an analysis. It assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives Learn how to set up an analytics lakehouse. Secure an analytics lakehouse using a common governance layer. Build dashboards from the data to perform data analysis. Create a machine learning model to predict data values over time. Products used The solution uses the following Google Cloud products: BigQuery: A fully managed, highly scalable data warehouse with built-in machine learning capabilities. Dataproc: A fully managed service for data lake modernization, ETL, and secure data science, at scale. Looker Studio: Self-service business intelligence platform that helps you create and share data insights. Dataplex: Centrally discover, manage, monitor, and govern data at scale. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. BigLake: BigLake is a storage engine that unifies data warehouses and lakes by enabling BigQuery and open source frameworks like Spark to access data with fine-grained access control. The following Google Cloud products are used to stage data in the solution for first use: Workflows: A fully managed orchestration platform that executes services in a specified order as a workflow. Workflows can combine services, including custom services hosted on Cloud Run or Cloud Run functions, Google Cloud services such as BigQuery, and any HTTP-based API. Architecture The example lakehouse architecture that this solution deploys analyzes an ecommerce dataset to understand a retailer's performance over time. The following diagram shows the architecture of the Google Cloud resources that the solution deploys. Solution flow The architecture represents a common data flow to populate and transform data in an analytics lakehouse architecture: Data lands in Cloud Storage buckets. A data lake is created in Dataplex. Data in the buckets are organized into entities, or tables, in the data lake. Tables in the data lake are immediately available in BigQuery as BigLake: tables. Data transformations using Dataproc or BigQuery, and using open file formats including Apache Iceberg. 
Data can be secured using policy tags and row access policies. Machine learning can be applied on the tables. Dashboards are created from the data to perform more analytics by using Looker Studio. Cost For an estimate of the cost of the Google Cloud resources that the analytics lakehouse solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. 
roles/biglake.admin roles/bigquery.admin roles/compute.admin roles/datalineage.viewer roles/dataplex.admin roles/dataproc.admin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/resourcemanager.projectIamAdmin roles/servicenetworking.serviceAgent roles/serviceusage.serviceUsageViewer roles/vpcaccess.admin roles/storage.admin roles/workflows.admin Deploy the solution This section guides you through the process of deploying the solution. Note: To ensure the solution deploys successfully, make sure that the organizational policy constraint constraints/compute.requireOsLogin is not enforced in the project you want to deploy to. Go to the Policy details page for your project, and confirm that the Status is Not enforced. To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Analytics lakehouse solution. Go to the Analytics lakehouse solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view and use the solution, return to the Solution deployments page in the console. Click the more_vert Actions menu. Select View Looker Studio Dashboard to open a dashboard that's built on top of the sample data that's transformed by using the solution. 
Select Open BigQuery Editor to run queries and build machine learning (ML) models using the sample data in the solution. Select View Colab to run queries in a notebook environment. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-google-analytics-lakehouse/ Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-google-analytics-lakehouse/ directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. 
# ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Whether or not to enable underlying apis in this solution. # Example: true enable_apis = true # Whether or not to protect Cloud Storage and BigQuery resources from deletion when solution is modified or changed. # Example: false force_destroy = false Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! The Terraform output also lists the following additional information that you'll need: The Looker Studio URL of the dashboard that was deployed. The link to open the BigQuery editor for some sample queries. The link to open the Colab tutorial. 
The following example shows what the output looks like: lookerstudio_report_url = "https://lookerstudio.google.com/reporting/create?c.reportId=79675b4f-9ed8-4ee4-bb35-709b8fd5306a&ds.ds0.datasourceName=vw_ecommerce&ds.ds0.projectId=${var.project_id}&ds.ds0.type=TABLE&ds.ds0.datasetId=gcp_lakehouse_ds&ds.ds0.tableId=view_ecommerce" bigquery_editor_url = "https://console.cloud.google.com/bigquery?project=my-cloud-project&ws=!1m5!1m4!6m3!1smy-cloud-project!2sds_edw!3ssp_sample_queries" lakehouse_colab_url = "https://colab.research.google.com/github/GoogleCloudPlatform/terraform-google-analytics-lakehouse/blob/main/assets/ipynb/exploratory-analysis.ipynb" To view and use the dashboard and to run queries in BigQuery, copy the output URLs from the previous step and open the URLs in new browser tabs. The dashboard, notebook, and BigQuery editors appear in the new tabs. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Customize the solution This section provides information that Terraform developers can use to modify the analytics lakehouse solution in order to meet their own technical and business requirements. The guidance in this section is relevant only if you deploy the solution by using the Terraform CLI. Note: Changing the Terraform code for this solution requires familiarity with the Terraform configuration language. If you modify the Google-provided Terraform configuration, and then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. After you've seen how the solution works with the sample data, you might want to work with your own data. To use your own data, you put it into the Cloud Storage bucket named edw-raw-hash. The hash is a random set of 8 characters that's generated during the deployment. You can change the Terraform code in the following ways: Dataset ID. Change the Terraform code so that when the code creates the BigQuery dataset, it uses the dataset ID that you want to use for your data. Schema. Change the Terraform code so that it creates the BigQuery table ID that you want to use to store your data. This includes the external table schema so that BigQuery can read the data from Cloud Storage. Zone. Create the lake zones that match your business need (usually a two or three tier zoning based on data quality and usage). Looker dashboards. Change the Terraform code that creates a Looker dashboard so that the dashboard reflects the data that you're using. PySpark jobs. Change the Terraform code to execute PySpark jobs using Dataproc. The following are common analytics lakehouse objects, showing the Terraform example code in main.tf. BigQuery dataset: The schema where database objects are grouped and stored. resource "google_bigquery_dataset" "ds_edw" { project = module.project-services.project_id dataset_id = "DATASET_PHYSICAL_ID" friendly_name = "DATASET_LOGICAL_NAME" description = "DATASET_DESCRIPTION" location = "REGION" labels = var.labels delete_contents_on_destroy = var.force_destroy } BigQuery table: A database object that represents data that's stored in BigQuery or that represents a data schema that's stored in Cloud Storage. resource "google_bigquery_table" "tbl_edw_taxi" { dataset_id = google_bigquery_dataset.ds_edw.dataset_id table_id = "TABLE_NAME" project = module.project-services.project_id deletion_protection = var.deletion_protection ... 
} BigQuery stored procedure: A database object that represents one or more SQL statements to be executed when called. This could be to transform data from one table to another or to load data from an external table into a standard table. resource "google_bigquery_routine" "sp_sample_translation_queries" { project = module.project-services.project_id dataset_id = google_bigquery_dataset.ds_edw.dataset_id routine_id = "sp_sample_translation_queries" routine_type = "PROCEDURE" language = "SQL" definition_body = templatefile("${path.module}/assets/sql/sp_sample_translation_queries.sql", { project_id = module.project-services.project_id }) } Cloud Workflows workflow: A Workflows workflow represents a combination of steps to be executed in a specific order. This could be used to set up data or perform data transformations along with other execution steps. resource "google_workflows_workflow" "copy_data" { name = "copy_data" project = module.project-services.project_id region = var.region description = "Copies data and performs project setup" service_account = google_service_account.workflows_sa.email source_contents = templatefile("${path.module}/src/yaml/copy-data.yaml", { public_data_bucket = var.public_data_bucket, textocr_images_bucket = google_storage_bucket.textocr_images_bucket.name, ga4_images_bucket = google_storage_bucket.ga4_images_bucket.name, tables_bucket = google_storage_bucket.tables_bucket.name, dataplex_bucket = google_storage_bucket.dataplex_bucket.name, images_zone_name = google_dataplex_zone.gcp_primary_raw.name, tables_zone_name = google_dataplex_zone.gcp_primary_staging.name, lake_name = google_dataplex_lake.gcp_primary.name }) } To customize the solution, complete the following steps in Cloud Shell: Verify that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse. If it isn't, go to that directory: cd $HOME/cloudshell_open/terraform-google-analytics-lakehouse Open main.tf and make the changes you want to make. For more information about the effects of such customization on reliability, security, performance, cost, and operations, see Design recommendations. Validate and review the Terraform configuration. Provision the resources. Design recommendations This section provides recommendations for using the analytics lakehouse solution to develop an architecture that meets your requirements for security, reliability, cost, and performance. As you begin to scale your lakehouse solution, you have available a number of ways to help improve your query performance and to reduce your total spend. These methods include changing how your data is physically stored, modifying your SQL queries, and changing how your queries are executed using different technologies. To learn more about methods for optimizing your Spark workloads, see Dataproc best practices for production. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. 
For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Delete the solution deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete the deployment through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete the deployment using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. 
If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying the solution through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying the solution using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Errors accessing data in BigQuery or Looker Studio There is a provisioning step that runs after the Terraform provisioning steps that loads data to the environment. If you get an error when the data is being loaded into the Looker Studio dashboard, or if there are no objects when you start exploring BigQuery, wait a few minutes and try again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. 
After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/ Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Create a data lake using Dataplex Create BigLake external tables for Apache Iceberg Use Apache Spark on Google Cloud Learn about BigQuery Send feedback
|
Analyze_FHIR_data_in_BigQuery.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/architecture/analyzing-fhir-data-in-bigquery
|
2 |
+
Date Scraped: 2025-02-23T11:49:19.830Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Analyzing FHIR data in BigQuery Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-29 UTC This document explains to researchers, data scientists, and business analysts the processes and considerations for analyzing Fast Healthcare Interoperability Resources (FHIR) data in BigQuery. Specifically, this document focuses on patient resource data that is exported from the FHIR store in the Cloud Healthcare API. This document also steps through a series of queries that demonstrate how FHIR schema data works in a relational format, and shows you how to access these queries for reuse through views. Using BigQuery for analyzing FHIR data The FHIR-specific API of the Cloud Healthcare API is designed for real-time transactional interaction with FHIR data at the level of a single FHIR resource or a collection of FHIR resources. However, the FHIR API is not designed for analytics use cases. For these use cases, we recommend exporting your data from the FHIR API to BigQuery. BigQuery is a serverless, scalable data warehouse that lets you analyze large quantities of data retrospectively or prospectively. Additionally, BigQuery conforms to ANSI:2011 SQL, which makes data accessible to data scientists and business analysts through tools that they typically use, such as Tableau, Looker, or Vertex AI Workbench. In some applications, such as Vertex AI Workbench, you get access through built-in clients, such as the Python Client library for BigQuery. In these cases, the data returned to the application is available through built-in language data structures. Accessing BigQuery You can access BigQuery through the BigQuery web UI in the Google Cloud console, and also with the following tools: The BigQuery command-line tool The BigQuery REST API or client libraries ODBC and JDBC drivers By using these tools, you can integrate BigQuery into almost any application. Working with the FHIR data structure The built-in FHIR standard data structure is complex, with nested and embedded FHIR data types throughout any FHIR resource. These embeddable FHIR data types are referred to as complex data types. Arrays and structures are also referred to as complex data types in relational databases. The built-in FHIR standard data structure works well serialized as XML or JSON files in a document-oriented system, but the structure can be challenging to work with when translated into relational databases. The following screenshot shows a partial view of a FHIR patient resource data type that illustrates the complex nature of the built-in FHIR standard data structure. The preceding screenshot shows the primary components of a FHIR patient resource data type. For example, the cardinality column (indicated in the table as Card.) shows several items that can have zero, one, or more than one entries. The Type column shows the Identifier, HumanName, and Address data types, which are examples of complex data types that comprise the patient resource data type. Each of these rows can be recorded multiple times, as an array of structures. Using arrays and structures BigQuery supports arrays and STRUCT data types—nested, repeated data structures—as they are represented in FHIR resources, which makes data conversion from FHIR to BigQuery possible. In BigQuery, an array is an ordered list consisting of zero or more values of the same data type. 
You can construct arrays of simple data types, such as the INT64 data type, and complex data types, such as the STRUCT data type. The exception is the ARRAY data type, because arrays of arrays are not currently supported. In BigQuery, an array of structures appears as a repeatable record. You can specify nested data, or nested and repeated data, in the BigQuery UI or in a JSON schema file. To specify nested columns, or nested and repeated columns, use the RECORD (STRUCT) data type. The Cloud Healthcare API supports the SQL on FHIR schema in BigQuery. This analytics schema is the default schema on the ExportResources() method and is supported by the FHIR community. BigQuery supports denormalized data. This means that when you store your data, instead of creating a relational schema such as a star or snowflake schema, you can denormalize your data and use nested and repeated columns. Nested and repeated columns maintain relationships between data elements without the performance impact of preserving a relational (normalized) schema. Accessing your data through the UNNEST operator Every FHIR resource in the FHIR API is exported into BigQuery as one row of data. You can think of an array or structure inside any row as an embedded table. You can access data in that "table" in either the SELECT clause or the WHERE clause of your query by flattening the array or structure by using the UNNEST operator. The UNNEST operator takes an array and returns a table with a single row for each element in the array. For more information, see working with arrays in standard SQL. The UNNEST operation doesn't preserve the order of the array elements, but you can reorder the table by using the optional WITH OFFSET clause. This returns an additional column with the OFFSET clause for each array element. You can then use the ORDER BY clause to order the rows by their offset. When joining unnested data, BigQuery uses a correlated CROSS JOIN operation that references the column of arrays from each item in the array with the source table, which is the table that directly precedes the call to UNNEST in the FROM clause. For each row in the source table, the UNNEST operation flattens the array from that row into a set of rows containing the array elements. The correlated CROSS JOIN operation joins this new set of rows with the single row from the source table. Investigating the schema with queries To query FHIR data in BigQuery, it's important to understand the schema that's created through the export process. BigQuery lets you inspect the column structure of every table in the dataset through the INFORMATION_SCHEMA feature, a series of views that display metadata. The remainder of this document refers to the SQL on FHIR schema, which is designed to be accessible for retrieving data. Note: To view tables in BigQuery, we recommend using the preview mode instead of SELECT * commands. For more information, see BigQuery best practices. The following sample query explores the column details for the patient table in the SQL on FHIR schema. The query references the Synthea Generated Synthetic Data in FHIR public dataset, which hosts over 1 million synthetic patient records generated in the Synthea and FHIR formats. When you query the INFORMATION_SCHEMA.COLUMNS view, the query results contain one row for each column (field) in a table. 
The following query returns all columns in the patient table: SELECT * FROM `bigquery-public-data.fhir_synthea.INFORMATION_SCHEMA.COLUMNS` WHERE table_name='patient' The following screenshot of the query result shows the identifier data type, and the array within the data type that contains the STRUCT data types. Using the FHIR patient resource in BigQuery The patient medical record number (MRN), a critical piece of information stored in your FHIR data, is used throughout an organization's clinical and operational data systems for all patients. Any method of accessing data for an individual patient or a set of patients must filter for or return the MRN, or do both. The following sample query returns the internal FHIR server identifier to the patient resource itself, including the MRN and the date of birth for all patients. The filter to query on a specific MRN is also included, but is commented out in this example. In this query, you unnest the identifier complex data type twice. You also use correlated CROSS JOIN operations to join unnested data with its source table. The bigquery-public-data.fhir_synthea.patient table in the query was created by using the SQL on FHIR schema version of the FHIR to BigQuery export. SELECT id, i.value as MRN, birthDate FROM `bigquery-public-data.fhir_synthea.patient` #This is a correlated cross join ,UNNEST(identifier) i ,UNNEST(i.type.coding) it WHERE # identifier.type.coding.code it.code = "MR" #uncomment to get data for one patient, this MRN exists #AND i.value = "a55c8c2f-474b-4dbd-9c84-effe5c0aed5b" The output is similar to the following: In the preceding query, the identifier.type.coding.code value set is the FHIR identifier value set that enumerates available identity data types, such as the MRN (MR identity data type), driver's license (DL identity data type), and passport number (PPN identity data type). Because the identifier.type.coding value set is an array, there can be any number of identifiers listed for a patient. But in this case, you want the MRN (MR identity data type). Joining the patient table with other tables Building on the patient table query, you can join the patient table with other tables in this dataset, such as the conditions table. The conditions table is where patient diagnoses are recorded. The following sample query retrieves all entries for the medical condition of hypertension. SELECT abatement.dateTime as abatement_dateTime, assertedDate, category, clinicalStatus, code, onset.dateTime as onset_dateTime, subject.patientid FROM `bigquery-public-data.fhir_synthea.condition` ,UNNEST(code.coding) as code WHERE code.system = 'http://snomed.info/sct' #snomed code for Hypertension AND code.code = '38341003' The output is similar to the following: In the preceding query, you reuse the UNNEST method to flatten the code.coding field. The abatement.dateTime and onset.dateTime code elements in the SELECT statement are aliased because they both end in dateTime, which would result in ambiguous column names in the output of a SELECT statement. When you select the Hypertension code, you also need to declare the terminology system that the code comes from—in this case, the SNOMED CT clinical terminology system. As the final step, you use the subject.patientid key to join the condition table with the patient table. This key points to the identifier of the patient resource itself within the FHIR server. Note: The patient resource identifier is different from the patient MRN, which is defined earlier in this document. 
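To reuse a query like the preceding one without rewriting it, you can save it as a view in a dataset that you own and then query or join the view like any other table. The following is a minimal sketch that uses the bq command-line tool mentioned earlier; the dataset name my_fhir_views and the view name v_patient_mrn are hypothetical, and the dataset must already exist in your project:

bq mk \
  --use_legacy_sql=false \
  --view 'SELECT id, i.value AS MRN, birthDate
          FROM `bigquery-public-data.fhir_synthea.patient`,
          UNNEST(identifier) i,
          UNNEST(i.type.coding) it
          WHERE it.code = "MR"' \
  my_fhir_views.v_patient_mrn

After the view is created, you can query my_fhir_views.v_patient_mrn directly and join it with other tables, which keeps the UNNEST logic in one place.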
Bringing the queries together In the following sample query, you use the queries from the two preceding sections and join them by using the WITH clause, while performing some simple calculations. WITH patient AS ( SELECT id as patientid, i.value as MRN, birthDate FROM `bigquery-public-data.fhir_synthea.patient` #This is a correlated cross join ,UNNEST(identifier) i ,UNNEST(i.type.coding) it WHERE # identifier.type.coding.code it.code = "MR" #uncomment to get data for one patient, this MRN exists #AND i.value = "a55c8c2f-474b-4dbd-9c84-effe5c0aed5b" ), condition AS ( SELECT abatement.dateTime as abatement_dateTime, assertedDate, category, clinicalStatus, code, onset.dateTime as onset_dateTime, subject.patientid FROM `bigquery-public-data.fhir_synthea.condition` ,UNNEST(code.coding) as code WHERE code.system = 'http://snomed.info/sct' #snomed code for Hypertension AND code.code = '38341003' ) SELECT patient.patientid, patient.MRN, patient.birthDate as birthDate_string, #current patient age. now - birthdate CAST(DATE_DIFF(CURRENT_DATE(),CAST(patient.birthDate AS DATE),MONTH)/12 AS INT) as patient_current_age_years, CAST(DATE_DIFF(CURRENT_DATE(),CAST(patient.birthDate AS DATE),MONTH) AS INT) as patient_current_age_months, CAST(DATE_DIFF(CURRENT_DATE(),CAST(patient.birthDate AS DATE),DAY) AS INT) as patient_current_age_days, #age at onset. onset date - birthdate DATE_DIFF(CAST(SUBSTR(condition.onset_dateTime,1,10) AS DATE),CAST(patient.birthDate AS DATE),YEAR)as patient_age_at_onset, condition.onset_dateTime, condition.code.code, condition.code.display, condition.code.system FROM patient JOIN condition ON patient.patientid = condition.patientid The output is similar to the following: In the preceding sample query, the WITH clause lets you isolate subqueries into their own defined segments. This approach can help with legibility, which becomes more important as your query grows larger. In this query, you isolate the subquery for patients and conditions into their own WITH segments, and then join them in the main SELECT segment. You can also apply calculations to raw data. The following sample code, a SELECT statement, shows how to calculate a patient's age at disease onset. DATE_DIFF(CAST(SUBSTR(condition.onset_dateTime,1,10) AS DATE),CAST(patient.birthDate AS DATE),YEAR)as patient_age_at_onset As indicated in the preceding code sample, you can perform a number of operations on the supplied dateTime string, condition.onset_dateTime. First, you select the date component of the string by using the SUBSTR function. Then you convert the string into a DATE data type by using the CAST syntax. You also convert the patient.birthDate field to the DATE data type. Finally, you calculate the difference between the two dates by using the DATE_DIFF function. What's next Analyze clinical data using BigQuery and AI Platform Notebooks. Visualizing BigQuery data in a Jupyter notebook. Cloud Healthcare API security. BigQuery access control. Healthcare and life sciences solutions in Google Cloud Marketplace. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback
|
Anti_Money_Laundering_AI.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/anti-money-laundering-ai
|
2 |
+
Date Scraped: 2025-02-23T12:05:00.404Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Jump to Anti Money Laundering AIDetect suspicious, potential money laundering activity faster and more precisely with AI.Contact usFocuses on retail and commercial banking Designed to support model governance requirements in financial servicesExplainable to analysts, risk managers, and auditorsAdopted in production as system of record in multiple jurisdictions for transaction monitoringSupports customer extensible data and featuresCelent names HSBC the Model Risk Manager of the Year 2023 for its AML AI implementationGet the reportBenefitsIncreased risk detectionDetect nearly 2-4x1 more confirmed suspicious activity, strengthening your anti-money laundering program.1As measured by HSBCLower operational costsEliminate over 60% of false positives1 and focus investigation time on high-risk, actionable alerts.1As measured by HSBCRobust governance and defensibilityGain auditable and explainable outputs to support regulatory compliance and internal risk management.Key featuresBring your data out from hiding—and AML risk to the surfaceGenerate ML-powered risk scoresAI-powered transaction monitoring can replace the manually defined, rules-based approach and harness the power of financial institutions’ own data to train advanced machine learning (ML) models to provide a comprehensive view of risk scores.Pinpoint the highest weighted risksTapping into a holistic view of your data, the model directs you to the highest weighted money laundering risks by examining transaction, account, customer relationship, company, and other data to identify patterns, instances, groups, anomalies, and networks for retail and commercial banks.Make explaining risk scores easierEach score provides a breakdown of key risk indicators, enabling business users to easily explain risk scores, expedite the investigation workflow, and facilitate reporting across risk typologies.News"Google Cloud Launches Anti-Money-Laundering Tool for Banks, Betting on the Power of AI", Wall Street JournalWhat's newSee the latest updates about AML AISign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoLearn more about AML AILearn moreNewsGoogle Cloud launches AI-powered AML product for financial institutionsLearn moreBlog postAI in financial services: Applying model risk management guidance in a new worldRead the blogReportApplying the existing AI/ML model risk management guidance in financial servicesRead reportDocumentationDocumentationTutorialSet up AML AISee how to incorporate AML AI into your AML process.Learn moreArchitectureSecurity architecture overviewAML AI is designed with your customers' data in mind. 
It supports security features like data residency and access transparency.Learn moreAPIs & LibrariesFinancial services REST APIAML AI provides a simple JSON HTTP interface that you can call directly.Learn moreAPIs & LibrariesAML input data modelLearn more about the schema and data input requirements for AML AI.Learn moreNot seeing what you’re looking for?View all product documentationPricingAML AI pricing detailsAnti Money Laundering AI has two pricing components:1) AML risk scoring is based on the number of banking customers the service is used for, billed on a daily basis2) Model training and tuning is based on the number of banking customers used in the input datasetsContact sales for full pricing details.PartnersTrusted partner ecosystemOur large ecosystem of trusted industry partners can help financial services institutions solve their complex business challenges.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact us about AML AINeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials
|
Apigee_API_Management(1).txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/apigee
|
2 |
+
Date Scraped: 2025-02-23T12:04:49.900Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Apigee API ManagementManage APIs with unmatched scale, security, and performanceGoogle Cloud’s native API management tool to build, manage, and secure APIs—for any use case, environment, or scale.Get startedContact usExplore Apigee for free in your own sandbox for 60 days.Jumpstart your development with helpful resources.Product highlightsAutomated controls powered by AI/ML to build and secure APIsSupport for REST, SOAP, GraphQL, or gRPC architectural stylesFlexibility to choose pay-as-you-go or subscription pricingProduct overviewLearn Apigee in 5 minutesFeaturesUsing Gemini Code Assist in Apigee API ManagementCreate consistent quality APIs without any specialized expertise. You can create new API specifications with prompts in Apigee plugin, integrated into Cloud Code. In Apigee, Gemini Code Assist considers your prompt and existing artifacts such as security schema or API objects, to create a specification compliant with your enterprise. You can further slash development time by generating mock servers for parallel development and collaborative testing. Lastly, Gemini also assists you in turning your API specifications into proxies or extensions for your AI applications.Explore Gemini Code Assist in ApigeeUniversal catalog for your APIsConsolidate API specifications—built or deployed anywhere—into API hub. Built on open standards, API hub is a universal catalog that allows developers to access APIs and govern them to a consistent quality. Using autogenerated recommendations provided by Gemini, you can create assets like API proxies, integrations, or even plugin extensions that can be deployed to Vertex AI or ChatGPT.Organize your API information in API hubAutomated API Security with ML based abuse detectionAdvanced API Security detects undocumented and unmanaged APIs linked to Google Cloud L7 Load Balancers. Advanced API Security also regularly assesses managed APIs, surfaces API proxies that do not meet security standards, and provides recommended actions when issues are detected. ML-powered dashboards accurately identify critical API abuses by finding patterns within the large number of bot alerts, reducing the time to act on important incidents.Get started with Advanced API SecurityAutomated API Security with ML based abuse detectionHigh-performance API proxiesOrchestrate and manage traffic for demanding applications with unparalleled control and reliability. Apigee supports styles like REST, gRPC, SOAP, and GraphQL, providing flexibility to implement any architecture. Using Apigee, you can also proxy internal microservices managed in a service mesh as REST APIs to enhance security. Get started with your first API proxy todayVIDEOBuild your first API proxy with Apigee7:47Hybrid/multicloud deploymentsAchieve the architectural freedom to deploy your APIs anywhere—in your own data center or public cloud of your choice—by configuring Apigee hybrid. Host and manage containerized runtime services in your own Kubernetes cluster for greater agility and interoperability while managing APIs consistently with Apigee.Learn more about Apigee hybridVIDEOChoose your own deployment environment3:13Traffic management and control policiesApigee uses policies on API proxies to program API behavior without writing any code. Policies provided by Apigee allow you to add common functionality like security, rate limiting, transformation, and mediation. 
You can configure from a robust set of 50+ policies to gain control on behavior, traffic, security, and QoS of every API. You can even write custom scripts and code (such as JavaScript applications) to extend API functionality.Add your first policy to your APIVIDEOHow to add policies to your APIs in Apigee?5:53Developer portals integrated into API life cycleBundle individual APIs or resources into API products—a logical unit that can address a specific use case for a developer. Publish these API products in out-of-the-box integrated developer portals or customized experiences built on Drupal. Drive adoption of your API products with easy onboarding of partners/developers, secure access to your APIs, and engaging experiences without any administrative overhead.Onboard developers to your APIsVIDEOHow to create a developer portal in 5 minutes?5:33Built-in and custom API analytics dashboardsUse built-in dashboards to investigate spikes, improve performance, and identify improvement opportunities by analyzing critical information from your API traffic. Build custom dashboards to analyze API quality and developer engagement to make informed decisions.Start gaining insights from your APIsVIDEOExplore Apigee API Analytics7:13Near real-time API monitoringInvestigate every detail of your API transaction within the console or in any distributed tracing solution by debugging an API proxy flow. Isolate problem areas quickly by monitoring their performance or latency. Use Advanced API Operations to identify anomalous traffic patterns and get notified on unpredictable behaviors without any alert fatigue or overheads.Start monitoring your APIs todayVIDEOHow to debug your APIs?5:42API monetizationCreate rate plans for your API products to monetize your API channels. 
Implementing business models of any complexity by configuring billing, payment model, or revenue share with granular details.Enable API monetization in Apigee todayVIDEOHow to monetize your datasets using Apigee?5:06View all featuresOptions tableProductDescriptionWhen to use this product?ApigeeFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleManaging high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridComprehensive API management for use in any environment—on-premises or any cloudMaintaining and processing API traffic within your own kubernetes clusterCloud EndpointsCustomer managed service to run co-located gateway or private networkingManaging gRPC services with locally hosted gateway for private networkingAPI GatewayFully managed service to package serverless functions as REST APIsBuilding proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.ApigeeDescriptionFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleWhen to use this product?Managing high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridDescriptionComprehensive API management for use in any environment—on-premises or any cloudWhen to use this product?Maintaining and processing API traffic within your own kubernetes clusterCloud EndpointsDescriptionCustomer managed service to run co-located gateway or private networkingWhen to use this product?Managing gRPC services with locally hosted gateway for private networkingAPI GatewayDescriptionFully managed service to package serverless functions as REST APIsWhen to use this product?Building proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.How It WorksApigee provides an abstraction or facade for your backend services by fronting services with API proxies. Using these proxies you can control traffic to your backend services with granular controls like security, rate limiting, quotas, and much more.Build an API proxyGet started with ApigeeCommon UsesCloud-first application developmentUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshTutorials, quickstarts, & labsUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. 
Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshModernize legacy apps and architecturesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceLearning resourcesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceNew business channels and opportunitiesPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesTutorials, quickstarts, & labsPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesUniform hybrid or multicloud operationsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. 
This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsTutorials, quickstarts, & labsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsWeb application and API securityImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsTutorials, quickstarts, & labsImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsPricingHow Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsPricing modelDescriptionPrice (USD)EvaluationExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysFreePay-as-you-goAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyStarting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. 
Additional deployments are available for purchase only in Comprehensive environments$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityStarting at$20 per 1M API callsSubscriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemContact us for a custom quote or any further questionsCheck out this pricing page for further details.How Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsEvaluationDescriptionExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysPrice (USD)FreePay-as-you-goDescriptionAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyPrice (USD)Starting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveDescriptionStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environmentsDescription$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityDescriptionStarting at$20 per 1M API callsSubscriptionDescriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemPrice (USD)Contact us for a custom quote or any further questionsCheck out this pricing page for further details.Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptExplore Apigee in your own sandbox Try Apigee for freeStart using Apigee with no commitmentGo to consoleBuild your first API proxy on Apigee todayQuickstartExplore helpful resources and examplesResourcesJoin our Google Cloud Innovator communityBecome an Apigee innovatorFAQExpand allWhy is an API used?APIs enable seamless communication between applications, servers, and users in today's tech-driven world. As their numbers grow, API management has become crucial, encompassing design, development, testing, deployment, governance, security, monitoring, and monetization within the software development life cycle.What is a RESTful API?RESTful APIs adhere to REST (Representational State Transfer) architecture constraints. Following these architecture constraints enables APIs to offer scalability, speed, and data versatility. An API of this kind accesses data by using HTTP requests and is the most common type of API used in modern applications today.What makes Apigee different from other API management solutions?Apigee is Google Cloud’s fully managed API management solution. 
Trusted by enterprises across the globe, Apigee is developer-friendly and provides comprehensive capabilities to support diverse API architectural styles, deployment environments, and use cases. Apigee also provides flexible pricing options for every business to get started and become successful on the platform.Which API protocols does Apigee support?Apigee currently supports REST, SOAP, GraphQL, gRPC, or OpenAPI protocols.Why should you secure your APIs?Companies worldwide rely on application programming interfaces, or APIs, to facilitate digital experiences and unleash the potential energy of their own data and processes. But the proliferation and importance of APIs comes with a risk. As a gateway to a wealth of information and systems, APIs have become a favorite target for hackers. Due to the prevalence of such attacks, there is a need for a proactive approach to secure APIs.What makes an API secure?Apigee helps organizations stay ahead of security threats by offering protection in three layers:1. Robust policies that protect every API transaction from unauthorized users.2. Advanced API security provides automated controls to identify API misconfigurations, malicious bot attacks, and anomalous traffic patterns without overhead and alert fatigue. 3. Web application and API security based on the same technology used by Google to protect its public-facing services against web application vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. Google Cloud WAAP combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats and fraud. Explore resources and examplesResources pageHave questions about Apigee?
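As a minimal, hypothetical illustration of how a proxy fronts a backend service, a client calls the Apigee-managed endpoint rather than the backend directly; the hostname, base path, and API key header name below are placeholders and depend on how your environment and policies (for example, a VerifyAPIKey policy) are configured:

curl "https://api.example.com/v1/orders" \
  -H "x-apikey: $APIKEY"    # API key issued to the client app; header name depends on your policy configuration

The proxy applies the configured policies (authentication, rate limiting, and so on) before routing the request to the backend target.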
|
Apigee_API_Management(2).txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/apigee
|
2 |
+
Date Scraped: 2025-02-23T12:05:20.817Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Apigee API ManagementManage APIs with unmatched scale, security, and performanceGoogle Cloud’s native API management tool to build, manage, and secure APIs—for any use case, environment, or scale.Get startedContact usExplore Apigee for free in your own sandbox for 60 days.Jumpstart your development with helpful resources.Product highlightsAutomated controls powered by AI/ML to build and secure APIsSupport for REST, SOAP, GraphQL, or gRPC architectural stylesFlexibility to choose pay-as-you-go or subscription pricingProduct overviewLearn Apigee in 5 minutesFeaturesUsing Gemini Code Assist in Apigee API ManagementCreate consistent quality APIs without any specialized expertise. You can create new API specifications with prompts in Apigee plugin, integrated into Cloud Code. In Apigee, Gemini Code Assist considers your prompt and existing artifacts such as security schema or API objects, to create a specification compliant with your enterprise. You can further slash development time by generating mock servers for parallel development and collaborative testing. Lastly, Gemini also assists you in turning your API specifications into proxies or extensions for your AI applications.Explore Gemini Code Assist in ApigeeUniversal catalog for your APIsConsolidate API specifications—built or deployed anywhere—into API hub. Built on open standards, API hub is a universal catalog that allows developers to access APIs and govern them to a consistent quality. Using autogenerated recommendations provided by Gemini, you can create assets like API proxies, integrations, or even plugin extensions that can be deployed to Vertex AI or ChatGPT.Organize your API information in API hubAutomated API Security with ML based abuse detectionAdvanced API Security detects undocumented and unmanaged APIs linked to Google Cloud L7 Load Balancers. Advanced API Security also regularly assesses managed APIs, surfaces API proxies that do not meet security standards, and provides recommended actions when issues are detected. ML-powered dashboards accurately identify critical API abuses by finding patterns within the large number of bot alerts, reducing the time to act on important incidents.Get started with Advanced API SecurityAutomated API Security with ML based abuse detectionHigh-performance API proxiesOrchestrate and manage traffic for demanding applications with unparalleled control and reliability. Apigee supports styles like REST, gRPC, SOAP, and GraphQL, providing flexibility to implement any architecture. Using Apigee, you can also proxy internal microservices managed in a service mesh as REST APIs to enhance security. Get started with your first API proxy todayVIDEOBuild your first API proxy with Apigee7:47Hybrid/multicloud deploymentsAchieve the architectural freedom to deploy your APIs anywhere—in your own data center or public cloud of your choice—by configuring Apigee hybrid. Host and manage containerized runtime services in your own Kubernetes cluster for greater agility and interoperability while managing APIs consistently with Apigee.Learn more about Apigee hybridVIDEOChoose your own deployment environment3:13Traffic management and control policiesApigee uses policies on API proxies to program API behavior without writing any code. Policies provided by Apigee allow you to add common functionality like security, rate limiting, transformation, and mediation. 
You can configure from a robust set of 50+ policies to gain control on behavior, traffic, security, and QoS of every API. You can even write custom scripts and code (such as JavaScript applications) to extend API functionality.Add your first policy to your APIVIDEOHow to add policies to your APIs in Apigee?5:53Developer portals integrated into API life cycleBundle individual APIs or resources into API products—a logical unit that can address a specific use case for a developer. Publish these API products in out-of-the-box integrated developer portals or customized experiences built on Drupal. Drive adoption of your API products with easy onboarding of partners/developers, secure access to your APIs, and engaging experiences without any administrative overhead.Onboard developers to your APIsVIDEOHow to create a developer portal in 5 minutes?5:33Built-in and custom API analytics dashboardsUse built-in dashboards to investigate spikes, improve performance, and identify improvement opportunities by analyzing critical information from your API traffic. Build custom dashboards to analyze API quality and developer engagement to make informed decisions.Start gaining insights from your APIsVIDEOExplore Apigee API Analytics7:13Near real-time API monitoringInvestigate every detail of your API transaction within the console or in any distributed tracing solution by debugging an API proxy flow. Isolate problem areas quickly by monitoring their performance or latency. Use Advanced API Operations to identify anomalous traffic patterns and get notified on unpredictable behaviors without any alert fatigue or overheads.Start monitoring your APIs todayVIDEOHow to debug your APIs?5:42API monetizationCreate rate plans for your API products to monetize your API channels. 
Implementing business models of any complexity by configuring billing, payment model, or revenue share with granular details.Enable API monetization in Apigee todayVIDEOHow to monetize your datasets using Apigee?5:06View all featuresOptions tableProductDescriptionWhen to use this product?ApigeeFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleManaging high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridComprehensive API management for use in any environment—on-premises or any cloudMaintaining and processing API traffic within your own kubernetes clusterCloud EndpointsCustomer managed service to run co-located gateway or private networkingManaging gRPC services with locally hosted gateway for private networkingAPI GatewayFully managed service to package serverless functions as REST APIsBuilding proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.ApigeeDescriptionFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleWhen to use this product?Managing high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridDescriptionComprehensive API management for use in any environment—on-premises or any cloudWhen to use this product?Maintaining and processing API traffic within your own kubernetes clusterCloud EndpointsDescriptionCustomer managed service to run co-located gateway or private networkingWhen to use this product?Managing gRPC services with locally hosted gateway for private networkingAPI GatewayDescriptionFully managed service to package serverless functions as REST APIsWhen to use this product?Building proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.How It WorksApigee provides an abstraction or facade for your backend services by fronting services with API proxies. Using these proxies you can control traffic to your backend services with granular controls like security, rate limiting, quotas, and much more.Build an API proxyGet started with ApigeeCommon UsesCloud-first application developmentUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshTutorials, quickstarts, & labsUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. 
Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshModernize legacy apps and architecturesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceLearning resourcesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceNew business channels and opportunitiesPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesTutorials, quickstarts, & labsPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesUniform hybrid or multicloud operationsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. 
This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsTutorials, quickstarts, & labsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsWeb application and API securityImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsTutorials, quickstarts, & labsImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsPricingHow Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsPricing modelDescriptionPrice (USD)EvaluationExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysFreePay-as-you-goAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyStarting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. 
Additional deployments are available for purchase only in Comprehensive environments$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityStarting at$20 per 1M API callsSubscriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemContact us for a custom quote or any further questionsCheck out this pricing page for further details.How Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsEvaluationDescriptionExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysPrice (USD)FreePay-as-you-goDescriptionAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyPrice (USD)Starting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveDescriptionStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environmentsDescription$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityDescriptionStarting at$20 per 1M API callsSubscriptionDescriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemPrice (USD)Contact us for a custom quote or any further questionsCheck out this pricing page for further details.Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptExplore Apigee in your own sandbox Try Apigee for freeStart using Apigee with no commitmentGo to consoleBuild your first API proxy on Apigee todayQuickstartExplore helpful resources and examplesResourcesJoin our Google Cloud Innovator communityBecome an Apigee innovatorFAQExpand allWhy is an API used?APIs enable seamless communication between applications, servers, and users in today's tech-driven world. As their numbers grow, API management has become crucial, encompassing design, development, testing, deployment, governance, security, monitoring, and monetization within the software development life cycle.What is a RESTful API?RESTful APIs adhere to REST (Representational State Transfer) architecture constraints. Following these architecture constraints enables APIs to offer scalability, speed, and data versatility. An API of this kind accesses data by using HTTP requests and is the most common type of API used in modern applications today.What makes Apigee different from other API management solutions?Apigee is Google Cloud’s fully managed API management solution. 
Trusted by enterprises across the globe, Apigee is developer-friendly and provides comprehensive capabilities to support diverse API architectural styles, deployment environments, and use cases. Apigee also provides flexible pricing options for every business to get started and become successful on the platform. Which API protocols does Apigee support? Apigee currently supports REST, SOAP, GraphQL, gRPC, and OpenAPI. Why should you secure your APIs? Companies worldwide rely on application programming interfaces, or APIs, to facilitate digital experiences and unleash the potential of their own data and processes. But the proliferation and importance of APIs come with a risk. As a gateway to a wealth of information and systems, APIs have become a favorite target for hackers. Due to the prevalence of such attacks, a proactive approach to securing APIs is essential. What makes an API secure? Apigee helps organizations stay ahead of security threats by offering protection in three layers: 1. Robust policies that protect every API transaction from unauthorized users. 2. Advanced API Security, which provides automated controls to identify API misconfigurations, malicious bot attacks, and anomalous traffic patterns without overhead and alert fatigue. 3. Web application and API security based on the same technology used by Google to protect its public-facing services against web application vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats; Google Cloud WAAP combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats and fraud. Explore resources and examples: Resources page. Have questions about Apigee?
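To make the pay-as-you-go figures above concrete, here is a minimal Python sketch that estimates a monthly Apigee bill under stated assumptions: one Base environment at the listed $365 per month per region, Standard API proxy calls at the listed $20 per 1M calls (within the first 50M), and extra proxy deployments at $0.04 per hour per region. The workload numbers are hypothetical, and the official pricing page remains authoritative.

# Rough monthly cost estimate for Apigee pay-as-you-go pricing.
# Rates come from the figures listed above; the workload numbers
# (API call volume, regions, extra deployments) are hypothetical.

HOURS_PER_MONTH = 730  # approximate hours in a month

def estimate_monthly_cost(api_calls_millions: float,
                          environments: int = 1,
                          env_rate_per_month: float = 365.0,    # Base environment, per region
                          call_rate_per_million: float = 20.0,  # Standard API proxy, first 50M calls
                          extra_proxy_deployments: int = 0,
                          deployment_rate_per_hour: float = 0.04,
                          regions: int = 1) -> float:
    """Return an estimated monthly cost in USD under the assumptions above."""
    calls_cost = api_calls_millions * call_rate_per_million
    env_cost = environments * env_rate_per_month * regions
    deploy_cost = extra_proxy_deployments * deployment_rate_per_hour * HOURS_PER_MONTH * regions
    return calls_cost + env_cost + deploy_cost

if __name__ == "__main__":
    # Example: 10M Standard proxy calls, one Base environment in one region,
    # and two additional proxy deployments.
    total = estimate_monthly_cost(api_calls_millions=10, extra_proxy_deployments=2)
    print(f"Estimated monthly cost: ${total:,.2f}")
    # 10 * 20 + 1 * 365 + 2 * 0.04 * 730 = 200 + 365 + 58.40 = $623.40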
|
Apigee_API_Management.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/apigee
|
2 |
+
Date Scraped: 2025-02-23T12:01:43.328Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Apigee API ManagementManage APIs with unmatched scale, security, and performanceGoogle Cloud’s native API management tool to build, manage, and secure APIs—for any use case, environment, or scale.Get startedContact usExplore Apigee for free in your own sandbox for 60 days.Jumpstart your development with helpful resources.Product highlightsAutomated controls powered by AI/ML to build and secure APIsSupport for REST, SOAP, GraphQL, or gRPC architectural stylesFlexibility to choose pay-as-you-go or subscription pricingProduct overviewLearn Apigee in 5 minutesFeaturesUsing Gemini Code Assist in Apigee API ManagementCreate consistent quality APIs without any specialized expertise. You can create new API specifications with prompts in Apigee plugin, integrated into Cloud Code. In Apigee, Gemini Code Assist considers your prompt and existing artifacts such as security schema or API objects, to create a specification compliant with your enterprise. You can further slash development time by generating mock servers for parallel development and collaborative testing. Lastly, Gemini also assists you in turning your API specifications into proxies or extensions for your AI applications.Explore Gemini Code Assist in ApigeeUniversal catalog for your APIsConsolidate API specifications—built or deployed anywhere—into API hub. Built on open standards, API hub is a universal catalog that allows developers to access APIs and govern them to a consistent quality. Using autogenerated recommendations provided by Gemini, you can create assets like API proxies, integrations, or even plugin extensions that can be deployed to Vertex AI or ChatGPT.Organize your API information in API hubAutomated API Security with ML based abuse detectionAdvanced API Security detects undocumented and unmanaged APIs linked to Google Cloud L7 Load Balancers. Advanced API Security also regularly assesses managed APIs, surfaces API proxies that do not meet security standards, and provides recommended actions when issues are detected. ML-powered dashboards accurately identify critical API abuses by finding patterns within the large number of bot alerts, reducing the time to act on important incidents.Get started with Advanced API SecurityAutomated API Security with ML based abuse detectionHigh-performance API proxiesOrchestrate and manage traffic for demanding applications with unparalleled control and reliability. Apigee supports styles like REST, gRPC, SOAP, and GraphQL, providing flexibility to implement any architecture. Using Apigee, you can also proxy internal microservices managed in a service mesh as REST APIs to enhance security. Get started with your first API proxy todayVIDEOBuild your first API proxy with Apigee7:47Hybrid/multicloud deploymentsAchieve the architectural freedom to deploy your APIs anywhere—in your own data center or public cloud of your choice—by configuring Apigee hybrid. Host and manage containerized runtime services in your own Kubernetes cluster for greater agility and interoperability while managing APIs consistently with Apigee.Learn more about Apigee hybridVIDEOChoose your own deployment environment3:13Traffic management and control policiesApigee uses policies on API proxies to program API behavior without writing any code. Policies provided by Apigee allow you to add common functionality like security, rate limiting, transformation, and mediation. 
You can configure from a robust set of 50+ policies to gain control on behavior, traffic, security, and QoS of every API. You can even write custom scripts and code (such as JavaScript applications) to extend API functionality.Add your first policy to your APIVIDEOHow to add policies to your APIs in Apigee?5:53Developer portals integrated into API life cycleBundle individual APIs or resources into API products—a logical unit that can address a specific use case for a developer. Publish these API products in out-of-the-box integrated developer portals or customized experiences built on Drupal. Drive adoption of your API products with easy onboarding of partners/developers, secure access to your APIs, and engaging experiences without any administrative overhead.Onboard developers to your APIsVIDEOHow to create a developer portal in 5 minutes?5:33Built-in and custom API analytics dashboardsUse built-in dashboards to investigate spikes, improve performance, and identify improvement opportunities by analyzing critical information from your API traffic. Build custom dashboards to analyze API quality and developer engagement to make informed decisions.Start gaining insights from your APIsVIDEOExplore Apigee API Analytics7:13Near real-time API monitoringInvestigate every detail of your API transaction within the console or in any distributed tracing solution by debugging an API proxy flow. Isolate problem areas quickly by monitoring their performance or latency. Use Advanced API Operations to identify anomalous traffic patterns and get notified on unpredictable behaviors without any alert fatigue or overheads.Start monitoring your APIs todayVIDEOHow to debug your APIs?5:42API monetizationCreate rate plans for your API products to monetize your API channels. 
Implementing business models of any complexity by configuring billing, payment model, or revenue share with granular details.Enable API monetization in Apigee todayVIDEOHow to monetize your datasets using Apigee?5:06View all featuresOptions tableProductDescriptionWhen to use this product?ApigeeFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleManaging high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridComprehensive API management for use in any environment—on-premises or any cloudMaintaining and processing API traffic within your own kubernetes clusterCloud EndpointsCustomer managed service to run co-located gateway or private networkingManaging gRPC services with locally hosted gateway for private networkingAPI GatewayFully managed service to package serverless functions as REST APIsBuilding proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.ApigeeDescriptionFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleWhen to use this product?Managing high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridDescriptionComprehensive API management for use in any environment—on-premises or any cloudWhen to use this product?Maintaining and processing API traffic within your own kubernetes clusterCloud EndpointsDescriptionCustomer managed service to run co-located gateway or private networkingWhen to use this product?Managing gRPC services with locally hosted gateway for private networkingAPI GatewayDescriptionFully managed service to package serverless functions as REST APIsWhen to use this product?Building proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.How It WorksApigee provides an abstraction or facade for your backend services by fronting services with API proxies. Using these proxies you can control traffic to your backend services with granular controls like security, rate limiting, quotas, and much more.Build an API proxyGet started with ApigeeCommon UsesCloud-first application developmentUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshTutorials, quickstarts, & labsUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. 
Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshModernize legacy apps and architecturesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceLearning resourcesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceNew business channels and opportunitiesPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesTutorials, quickstarts, & labsPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesUniform hybrid or multicloud operationsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. 
This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsTutorials, quickstarts, & labsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsWeb application and API securityImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsTutorials, quickstarts, & labsImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsPricingHow Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsPricing modelDescriptionPrice (USD)EvaluationExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysFreePay-as-you-goAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyStarting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. 
Additional deployments are available for purchase only in Comprehensive environments$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityStarting at$20 per 1M API callsSubscriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemContact us for a custom quote or any further questionsCheck out this pricing page for further details.How Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsEvaluationDescriptionExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysPrice (USD)FreePay-as-you-goDescriptionAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyPrice (USD)Starting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveDescriptionStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environmentsDescription$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityDescriptionStarting at$20 per 1M API callsSubscriptionDescriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemPrice (USD)Contact us for a custom quote or any further questionsCheck out this pricing page for further details.Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptExplore Apigee in your own sandbox Try Apigee for freeStart using Apigee with no commitmentGo to consoleBuild your first API proxy on Apigee todayQuickstartExplore helpful resources and examplesResourcesJoin our Google Cloud Innovator communityBecome an Apigee innovatorFAQExpand allWhy is an API used?APIs enable seamless communication between applications, servers, and users in today's tech-driven world. As their numbers grow, API management has become crucial, encompassing design, development, testing, deployment, governance, security, monitoring, and monetization within the software development life cycle.What is a RESTful API?RESTful APIs adhere to REST (Representational State Transfer) architecture constraints. Following these architecture constraints enables APIs to offer scalability, speed, and data versatility. An API of this kind accesses data by using HTTP requests and is the most common type of API used in modern applications today.What makes Apigee different from other API management solutions?Apigee is Google Cloud’s fully managed API management solution. 
Trusted by enterprises across the globe, Apigee is developer-friendly and provides comprehensive capabilities to support diverse API architectural styles, deployment environments, and use cases. Apigee also provides flexible pricing options for every business to get started and become successful on the platform. Which API protocols does Apigee support? Apigee currently supports REST, SOAP, GraphQL, gRPC, and OpenAPI. Why should you secure your APIs? Companies worldwide rely on application programming interfaces, or APIs, to facilitate digital experiences and unleash the potential of their own data and processes. But the proliferation and importance of APIs come with a risk. As a gateway to a wealth of information and systems, APIs have become a favorite target for hackers. Due to the prevalence of such attacks, a proactive approach to securing APIs is essential. What makes an API secure? Apigee helps organizations stay ahead of security threats by offering protection in three layers: 1. Robust policies that protect every API transaction from unauthorized users. 2. Advanced API Security, which provides automated controls to identify API misconfigurations, malicious bot attacks, and anomalous traffic patterns without overhead and alert fatigue. 3. Web application and API security based on the same technology used by Google to protect its public-facing services against web application vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats; Google Cloud WAAP combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats and fraud. Explore resources and examples: Resources page. Have questions about Apigee?
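As a concrete illustration of the proxy model described above, where client applications call the Apigee-managed endpoint and policies such as API key verification and rate limiting run in the proxy before traffic reaches the backend, here is a minimal Python client sketch. The hostname, base path, and header name are hypothetical placeholders; the actual credential mechanism depends on the policies attached to your proxy.

import urllib.request
import urllib.error

# Hypothetical Apigee-fronted endpoint and API key; substitute the host,
# base path, and credential issued for your own API product.
APIGEE_HOST = "https://api.example.com"      # environment group hostname (placeholder)
PROXY_BASE_PATH = "/v1/orders"               # base path of the deployed API proxy (placeholder)
API_KEY = "replace-with-your-consumer-key"   # key issued to a developer app

def call_proxy(resource: str) -> str:
    """Call the backend through the Apigee proxy; policies (API key check,
    quotas, transformations) are enforced in the proxy, not in this client."""
    request = urllib.request.Request(
        f"{APIGEE_HOST}{PROXY_BASE_PATH}/{resource}",
        headers={"x-apikey": API_KEY},  # header name depends on how your key-verification policy is configured
    )
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.read().decode("utf-8")
    except urllib.error.HTTPError as err:
        # 401/403 typically indicate a failed key check; 429 a quota or spike limit.
        return f"Proxy rejected the call: HTTP {err.code}"

if __name__ == "__main__":
    print(call_proxy("12345"))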
|
AppSheet_Automation.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/appsheet/automation
|
2 |
+
Date Scraped: 2025-02-23T12:07:59.386Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Jump to AppSheet AutomationAppSheet AutomationReclaim time—and talent—with no-code automation.Contact usImprove efficiency by removing unnecessary barriersMaintain IT governance and security with citizen-led developmentQuickly and easily create custom automations and applications with an open cloudBenefitsFocus on impactReclaim time for high-impact work rather than manual tasks.Reduce context switchingBuild automations and applications on a unified platform. Improve collaborationStreamline processes, such as approvals and onboarding, across your organization. Key featuresProcess automation Intelligent document processingLeverage the power of Google Cloud Document AI to automatically extract data from unstructured sources like W-9s and receipts to run processes more efficiently. Data change eventsConfigure bots to detect data changes and work in concert with external sources, such as Google Sheets and Salesforce, to trigger processes and approvals.ModularityCreate automation bots from completely reusable components—events, processes, and tasks.Seamless connectivityConnect directly with APIs, data sources, webhooks, and legacy software, or use data export to export, back up, or sync application data with external platforms. DocumentationFind resources and documentation for AppSheet AutomationQuickstartAppSheet Automation: the essentialsExplore the fundamentals of creating automations without code.Learn moreQuickstartCreating a botAutomations begin with the configuration of a bot.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for AppSheetPricingPricingPricing for AppSheet is based on the number of users rather than the number of automations or applications. To learn more, click the link below or start creating AppSheet apps and automations for free.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Try it freeNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials
|
App_Engine(1).txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/appengine
|
2 |
+
Date Scraped: 2025-02-23T12:09:41.388Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayJump to App EngineApp EngineBuild monolithic server-side rendered websites. App Engine supports popular development languages with a range of developer tools.Go to consoleContact salesFree up your developers with zero server management and zero configuration deploymentsStay agile with support for popular development languages and a range of developer toolsExplore more products in our serverless portfolioLooking to host or build scalable web applications and websites?Try Cloud RunKey featuresKey featuresPopular programming languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.DocumentationDocumentationGoogle Cloud BasicsChoosing the right App Engine environmentLearn how to run your applications in App Engine using the flexible environment, standard environment, or both.Learn moreGoogle Cloud BasicsApp Engine standard environmentSee how the App Engine standard environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data.Learn moreGoogle Cloud BasicsApp Engine flexible environmentFind out how App Engine allows developers to focus on what they do best: writing code.Learn morePatternLooking for other serverless products?If your desired runtime is not supported by App Engine, take a look at Cloud Run.Learn moreNot seeing what you’re looking for?View all product documentationAll featuresAll featuresPopular languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.Powerful application diagnosticsUse Cloud Monitoring and Cloud Logging to monitor the health and performance of your app and Error Reporting to diagnose and fix bugs quickly.Application versioningEasily host different versions of your app, and easily create development, test, staging, and production environments.Application securityHelp safeguard your application by defining access rules with App Engine firewall and leverage managed SSL/TLS certificates by default on your custom domain at no additional cost.Services ecosystemTap a growing ecosystem of Google Cloud services from your app including an excellent suite of cloud developer tools.PricingPricingApp Engine has competitive cloud pricing that scales with your app’s usage. There are a few basic components you will see in the App Engine billing model such as standard environment instances, flexible environment instances, and App Engine APIs and services. To get an estimate of your bill, please refer to our pricing calculator.App Engine runs as instances within either the standard environment or the flexible environment.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith [email protected]
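To ground the zero-server-management claim above, here is a minimal sketch of a Python service for the App Engine standard environment. The runtime version and companion file contents shown in the comments are illustrative assumptions; check the App Engine documentation for the currently supported runtimes.

# main.py -- minimal Flask app for the App Engine standard environment.
# Alongside this file you would typically have (contents are illustrative):
#   app.yaml:          runtime: python312
#   requirements.txt:  flask
# Deploy with `gcloud app deploy`; App Engine then handles the servers,
# scaling, and managed TLS on the default domain.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local development only; in production App Engine runs the app
    # behind its own HTTP serving layer.
    app.run(host="127.0.0.1", port=8080, debug=True)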
|
App_Engine.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/appengine
|
2 |
+
Date Scraped: 2025-02-23T12:02:35.371Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayJump to App EngineApp EngineBuild monolithic server-side rendered websites. App Engine supports popular development languages with a range of developer tools.Go to consoleContact salesFree up your developers with zero server management and zero configuration deploymentsStay agile with support for popular development languages and a range of developer toolsExplore more products in our serverless portfolioLooking to host or build scalable web applications and websites?Try Cloud RunKey featuresKey featuresPopular programming languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.DocumentationDocumentationGoogle Cloud BasicsChoosing the right App Engine environmentLearn how to run your applications in App Engine using the flexible environment, standard environment, or both.Learn moreGoogle Cloud BasicsApp Engine standard environmentSee how the App Engine standard environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data.Learn moreGoogle Cloud BasicsApp Engine flexible environmentFind out how App Engine allows developers to focus on what they do best: writing code.Learn morePatternLooking for other serverless products?If your desired runtime is not supported by App Engine, take a look at Cloud Run.Learn moreNot seeing what you’re looking for?View all product documentationAll featuresAll featuresPopular languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.Powerful application diagnosticsUse Cloud Monitoring and Cloud Logging to monitor the health and performance of your app and Error Reporting to diagnose and fix bugs quickly.Application versioningEasily host different versions of your app, and easily create development, test, staging, and production environments.Application securityHelp safeguard your application by defining access rules with App Engine firewall and leverage managed SSL/TLS certificates by default on your custom domain at no additional cost.Services ecosystemTap a growing ecosystem of Google Cloud services from your app including an excellent suite of cloud developer tools.PricingPricingApp Engine has competitive cloud pricing that scales with your app’s usage. There are a few basic components you will see in the App Engine billing model such as standard environment instances, flexible environment instances, and App Engine APIs and services. To get an estimate of your bill, please refer to our pricing calculator.App Engine runs as instances within either the standard environment or the flexible environment.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith [email protected]
|
Application_Integration.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/application-integration
|
2 |
+
Date Scraped: 2025-02-23T12:05:17.637Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Explore how you can use Gemini Code Assist in Application Integration to build automations for your use case, without toil.Application IntegrationConnect your applications visually, without codeIntegration Platform as a Service (iPaaS) to automate business processes by connecting any application with point-and-click configurationsGo to consoleContact usExplore Gemini in Application Integration by joining our trusted tester program. Jumpstart your development with our quickstarts and use cases.Product highlightsBuild integrations with natural language, no coding requiredReady-to-use connectors for Google or third-party appsGet started for free with no financial commitment What is Application Integration?FeaturesUsing Gemini Code Assist in Application IntegrationAutomate your SaaS workflows with just clicks or prompts, such as "Update a case in Salesforce when a new issue is created in JIRA." Based on the prompt and existing enterprise context, such as APIs or applications, Gemini suggests multiple flows tailored for your use case. Gemini automatically creates variables and pre-configures tasks, making the integration ready for immediate use. Gemini doesn't just respond to prompts, it intelligently analyzes your flow and proactively suggests optimizations, such as replacing connectors or fine-tuning REST endpoint calls.Using Gemini Code Assist in Application IntegrationPlug and play connectorsOur 90+ pre-built connectors make it easy to connect to any data source, whether it's a Google Cloud service (for example, BigQuery, Pub/Sub) or another business application (such as Salesforce, MongoDB, MySQL). With our connectors, you can quickly and easily connect to a growing pool of applications and systems without the need for protocol-specific knowledge or the use of custom code.Learn how to provision and configure connectorsVisual integration designerIntuitive drag-and-drop interface allows anyone to build workflows quickly and easily without the need for complex coding or manual processes. With the visual integration designer, anyone can simply drag-and-drop individual control elements (such as edges, forks, and joins) to build custom integration patterns of any complexity. Learn how to use the visual integration designerAutomated event-driven triggersAs an entry point to integrations—the event bound to the trigger initiates the execution of tasks in the integration. Application Integration offers a wide range of triggers out-of-the-box, including API, Cloud Pub/Sub, Schedule, Salesforce, and Cloud Scheduler triggers. Associate one or more triggers to different tasks, to automate your integrations. Learn more about event-driven triggersData transformationsTransform and modify the data in your workflow with accuracy and efficiency. Intuitive drag-and-drop interface in the Data Mapping Editor makes it easy to map data fields within your integration, eliminating the need for coding. Comprehensive mapping functions enable you to reduce development time and address tailored business requirements. Learn how to build data mapping across applicationsIntegration performance and usage monitoring Proactively detect issues and ensure smooth operations with pre-built monitoring dashboards and detailed execution log messages. Monitoring dashboards provide a graphical overview of integration performance and usage, customizable by different attributes or timeframes. 
Execution log messages provide valuable insights into the status of each step in an integration, or to troubleshoot a failure.Learn how to proactively identify potential problems Versatile integration tasksUse tasks to define individual actions within a workflow to facilitate seamless data transfer, communication, and synchronization between applications. Automate and streamline business processes using tasks like data mapping, API call integrations, REST API integrations, email notifications, control flow, approval, connectors, and much more.Learn more about ready-to-use integration tasksView all featuresIntegration servicesProduct nameDescriptionDocumentationCategoryApplication IntegrationConnect to third-party applications and enable data consistency without codeRetrieve API payload and send an emailApplications and services integrationWorkflowsCombine Google Cloud services and APIs to build applications and data pipelinesCreate a workflowApplications and services integrationEventarcBuild an event-driven architecture that can connect any serviceReceive direct events from Cloud StorageApplications and services integrationDataflowUnified stream and batch data processing that is serverless, fast, and cost-effectiveCreate a Dataflow pipeline using PythonData integrationPub/SubIngest data without any transformation to get raw data into Google CloudPublish and receive messages in Pub/Sub by using a client libraryData ingestionApplication IntegrationDescriptionConnect to third-party applications and enable data consistency without codeDocumentationRetrieve API payload and send an emailCategoryApplications and services integrationWorkflowsDescriptionCombine Google Cloud services and APIs to build applications and data pipelinesDocumentationCreate a workflowCategoryApplications and services integrationEventarcDescriptionBuild an event-driven architecture that can connect any serviceDocumentationReceive direct events from Cloud StorageCategoryApplications and services integrationDataflowDescriptionUnified stream and batch data processing that is serverless, fast, and cost-effectiveDocumentationCreate a Dataflow pipeline using PythonCategoryData integrationPub/SubDescriptionIngest data without any transformation to get raw data into Google CloudDocumentationPublish and receive messages in Pub/Sub by using a client libraryCategoryData ingestionHow It WorksApplication Integration offers comprehensive tools to connect applications (Google Cloud and others). With an intuitive drag-and-drop designer, out-of-the-box triggers, and plug-and-play connectors, you can create integrations to automate business processes.Explore an exampleCommon UsesBusiness process or workflow automationAutomate sequences of tasks in line with business operationsStreamline business processes by mapping workflows in areas like lead management, procurement, or supply chain management. Initiate and automate task sequences based on a schedule or on a defined external event. Leverage built-in tools to achieve complex configurations like looping, parallel execution, conditional routing, manual approvals, or much more. Map workflows in visual designerStore Salesforce opportunity details in Cloud SQLExplore connectors for business process automationData Mapping taskLearning resourcesAutomate sequences of tasks in line with business operationsStreamline business processes by mapping workflows in areas like lead management, procurement, or supply chain management. 
Initiate and automate task sequences based on a schedule or on a defined external event. Leverage built-in tools to achieve complex configurations like looping, parallel execution, conditional routing, manual approvals, or much more. Map workflows in visual designerStore Salesforce opportunity details in Cloud SQLExplore connectors for business process automationData Mapping task360 degree view of your customer Centralize customer data spread across diverse sourcesSynchronize data across CRM systems (for example, Salesforce), marketing automation platforms (for example, HubSpot), customer support tools (for example, Zendesk), and much more to maintain a consistent, up-to-date view of customer information, enhancing customer relationships and communication. Map and transform data using predefined tasks to ensure ease of use, accuracy, and consistency. Learning resourcesCentralize customer data spread across diverse sourcesSynchronize data across CRM systems (for example, Salesforce), marketing automation platforms (for example, HubSpot), customer support tools (for example, Zendesk), and much more to maintain a consistent, up-to-date view of customer information, enhancing customer relationships and communication. Map and transform data using predefined tasks to ensure ease of use, accuracy, and consistency. Cloud-first application developmentStreamline access to siloed data and capabilitiesConnect applications with external APIs, data sources, and third-party services to enrich application functionality, access external data, or leverage specialized services, such as payment processing, raising support tickets, or much more. Leverage APIs built in Apigee or events from Pub/Sub to orchestrate communication and data exchange between different application components.Retrieve API payload and send an email Listen to Cloud Pub/Sub topic and send an emailCloud Function taskLearning resourcesStreamline access to siloed data and capabilitiesConnect applications with external APIs, data sources, and third-party services to enrich application functionality, access external data, or leverage specialized services, such as payment processing, raising support tickets, or much more. Leverage APIs built in Apigee or events from Pub/Sub to orchestrate communication and data exchange between different application components.Retrieve API payload and send an email Listen to Cloud Pub/Sub topic and send an emailCloud Function taskPricingHow Application Integration pricing worksThe pricing model is based on the number of integrations executed, infrastructure used to process messages, and data processed. 
Pricing modelDescriptionPrice (USD)Free tierUp to 400 integration executions and 20 GiB data processed is free per month along with two connection nodesFree tier is limited to integrations with Google Cloud services onlyFreePay-as-you-goIntegration executionsNumber of integrations processed, whether they are successful or not$0.5For every 1,000 executionsConnection nodes (third-party application)Number of connection nodes used every minute (unit of infrastructure that processes messages to third-party target systems)Billed for a minimum of one minStarting at$0.7per node, per hourConnection nodes (Google Cloud applications)Number of connection nodes used every minute (unit of infrastructure that processes messages to Google Cloud target systems)Billed for a minimum of one min$0.35per node, per hourData processed(Sum of total number of bytes received and sent through Application Integration and connections) / 2^10$10per GiBNetworking usageData transfer and other services when moving, copying, accessing data in Cloud Storage or between Google Cloud servicesCheck your network product for pricing informationSubscriptionStandardMaintain predictable costs while building integrations at scaleContact us for a custom quote or any further questionsPricing details for Application Integration and Integration ConnectorsHow Application Integration pricing worksThe pricing model is based on the number of integrations executed, infrastructure used to process messages, and data processed. Free tierDescriptionUp to 400 integration executions and 20 GiB data processed is free per month along with two connection nodesFree tier is limited to integrations with Google Cloud services onlyPrice (USD)FreePay-as-you-goDescriptionIntegration executionsNumber of integrations processed, whether they are successful or notPrice (USD)$0.5For every 1,000 executionsConnection nodes (third-party application)Number of connection nodes used every minute (unit of infrastructure that processes messages to third-party target systems)Billed for a minimum of one minDescriptionStarting at$0.7per node, per hourConnection nodes (Google Cloud applications)Number of connection nodes used every minute (unit of infrastructure that processes messages to Google Cloud target systems)Billed for a minimum of one minDescription$0.35per node, per hourData processed(Sum of total number of bytes received and sent through Application Integration and connections) / 2^10Description$10per GiBNetworking usageData transfer and other services when moving, copying, accessing data in Cloud Storage or between Google Cloud servicesDescriptionCheck your network product for pricing informationSubscriptionDescriptionStandardMaintain predictable costs while building integrations at scalePrice (USD)Contact us for a custom quote or any further questionsPricing details for Application Integration and Integration ConnectorsPricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptBuild your first integrationGet startedTry the sample integrationQuickstartGet started with your first integrationQuickstartsAccelerate your delivery with samplesExplore code samplesAsk questions or connect with our community Ask an expertGoogle Accountjamzith [email protected]
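As a worked example of the pay-as-you-go rates listed above, the short Python sketch below prices a hypothetical month of usage. The workload figures are invented for illustration, and free-tier allowances plus the exact billing rules on the pricing page take precedence.

# Hypothetical month of Application Integration usage, priced with the
# pay-as-you-go rates listed above (the workload numbers are made up).
executions = 250_000                 # integration executions in the month
third_party_node_hours = 2 * 730     # two third-party connection nodes, all month
data_processed_gib = 40              # GiB processed

execution_cost = (executions / 1_000) * 0.50   # $0.50 per 1,000 executions
node_cost = third_party_node_hours * 0.70      # $0.70 per third-party node, per hour
data_cost = data_processed_gib * 10.00         # $10 per GiB processed

total = execution_cost + node_cost + data_cost
print(f"Executions: ${execution_cost:,.2f}")
print(f"Connection nodes: ${node_cost:,.2f}")
print(f"Data processed: ${data_cost:,.2f}")
print(f"Estimated total: ${total:,.2f}")
# 250 * 0.50 = $125.00; 1,460 * 0.70 = $1,022.00; 40 * 10 = $400.00; total $1,547.00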
|
Application_Migration(1).txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/solutions/application-migration
|
2 |
+
Date Scraped: 2025-02-23T12:06:42.952Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Application migrationMigrating to Google Cloud gives your applications the performance, scalability, reliability, and security they need to deliver the high-quality experience that customers expect in today's digital world.Try Migration CenterFree migration cost assessmentLearn more about migrating your applications to Google Cloud1:45BenefitsMigration is about more than your applications. It's about your business.Improve customer experiencesWith things like price-performance optimized VM families (Tau), automatic sizing recommendations, easy scalability, and custom machine types, every application is empowered to deliver a world-class experience. Migrate quicker and easier than you thought possibleWhether you're moving 1 or 1,000 applications, we've got the automated tools and migration expertise to make sure everything is fast, easy, and low risk.We're your cloud migration partnerWith a true partnership from beginning to end, our migration experts focus on building and executing the right migration plan for you, and delivering the technological and business outcomes that matter to you.Looking to migrate your applications quickly, easily, and efficiently? Google Cloud Migration Center is the unified platform that helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud. With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, Migration Center provides you with what you need for your migration. Start using it today, or watch this short video to learn more. Craft the migration journey that works best for youMigrate workloads to the public cloud: an essential guide & checklistRead the guide and checklistHow to use Migration Center to get your applications into Google CloudExplore Migration Center demoUnderstand your cloud capabilities and identify new competencies for migration successStart assessmentCustomersCustomers see tangible gains when migrating applications to Google CloudCase studyReshaping Flipkart’s technological landscape with a mammoth cloud migration5-min readCase studyMajor League Baseball migrates to Google Cloud to drive fan engagement, increase operational efficiency.10-min readBlog postGoogle's chip design team moves to Google Cloud, increases daily job submissions by 170%. 5-min readCase studyViant partners with Slalom to migrate a data center with 600 VMs and 200+ TB of data to Google Cloud.9-min readVideoCardinal Health successfully performed a large-scale app migration to Google Cloud.49:07Case studySee how Loblaw reduced compute resources by one-third with Google Cloud.7-min readSee all customersPartnersFind the right experts to drive your success with cloud migration servicesExpand allCloud migration services at Google CloudFull-service migration partnersSee all partnersRelated servicesMigrate your applications to Google CloudPick the right migration tools and strategies for your unique set of applications and workloads. 
Migration Center: reduce complexity, time, and cost with Migration Center's centralized, integrated migration and modernization experience. Google Cloud VMware Engine: lift and shift VMware workloads to a VMware SDDC in Google Cloud, providing a fast, easy migration path for VMware. Migrate to Virtual Machines: migrate applications from on-premises or other clouds into Compute Engine with speed and simplicity, plus built-in testing, rightsizing, and rollback. Migrate to Containers: modernize traditional applications away from virtual machines and into native containers on GKE. Rapid Migration and Modernization Program (RaMP): our holistic, end-to-end migration program to help you simplify and accelerate your success, starting with a free assessment of your IT landscape. Architecture Center: discover migration reference architectures, guidance, and best practices for building or migrating your workloads on Google Cloud. *The Migrate to Virtual Machines and Migrate to Containers products/services are at no charge; consumption of Google Cloud resources will be billed at standard rates. Take the next step: tell us what you're solving for. A Google Cloud expert will help you find the best solution. Contact sales. Work with a trusted partner: find a partner. Start using Google Cloud: go to console. Deploy ready-to-go solutions: explore marketplace.
Application_Migration.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/solutions/application-migration
2 + Date Scraped: 2025-02-23T11:59:34.709Z
3 +
4 + Content:
5 +
Application migrationMigrating to Google Cloud gives your applications the performance, scalability, reliability, and security they need to deliver the high-quality experience that customers expect in today's digital world.Try Migration CenterFree migration cost assessmentLearn more about migrating your applications to Google Cloud1:45BenefitsMigration is about more than your applications. It's about your business.Improve customer experiencesWith things like price-performance optimized VM families (Tau), automatic sizing recommendations, easy scalability, and custom machine types, every application is empowered to deliver a world-class experience. Migrate quicker and easier than you thought possibleWhether you're moving 1 or 1,000 applications, we've got the automated tools and migration expertise to make sure everything is fast, easy, and low risk.We're your cloud migration partnerWith a true partnership from beginning to end, our migration experts focus on building and executing the right migration plan for you, and delivering the technological and business outcomes that matter to you.Looking to migrate your applications quickly, easily, and efficiently? Google Cloud Migration Center is the unified platform that helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud. With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, Migration Center provides you with what you need for your migration. Start using it today, or watch this short video to learn more. Craft the migration journey that works best for youMigrate workloads to the public cloud: an essential guide & checklistRead the guide and checklistHow to use Migration Center to get your applications into Google CloudExplore Migration Center demoUnderstand your cloud capabilities and identify new competencies for migration successStart assessmentCustomersCustomers see tangible gains when migrating applications to Google CloudCase studyReshaping Flipkart’s technological landscape with a mammoth cloud migration5-min readCase studyMajor League Baseball migrates to Google Cloud to drive fan engagement, increase operational efficiency.10-min readBlog postGoogle's chip design team moves to Google Cloud, increases daily job submissions by 170%. 5-min readCase studyViant partners with Slalom to migrate a data center with 600 VMs and 200+ TB of data to Google Cloud.9-min readVideoCardinal Health successfully performed a large-scale app migration to Google Cloud.49:07Case studySee how Loblaw reduced compute resources by one-third with Google Cloud.7-min readSee all customersPartnersFind the right experts to drive your success with cloud migration servicesExpand allCloud migration services at Google CloudFull-service migration partnersSee all partnersRelated servicesMigrate your applications to Google CloudPick the right migration tools and strategies for your unique set of applications and workloads. 
Migration Center: reduce complexity, time, and cost with Migration Center's centralized, integrated migration and modernization experience. Google Cloud VMware Engine: lift and shift VMware workloads to a VMware SDDC in Google Cloud, providing a fast, easy migration path for VMware. Migrate to Virtual Machines: migrate applications from on-premises or other clouds into Compute Engine with speed and simplicity, plus built-in testing, rightsizing, and rollback. Migrate to Containers: modernize traditional applications away from virtual machines and into native containers on GKE. Rapid Migration and Modernization Program (RaMP): our holistic, end-to-end migration program to help you simplify and accelerate your success, starting with a free assessment of your IT landscape. Architecture Center: discover migration reference architectures, guidance, and best practices for building or migrating your workloads on Google Cloud. *The Migrate to Virtual Machines and Migrate to Containers products/services are at no charge; consumption of Google Cloud resources will be billed at standard rates. Take the next step: tell us what you're solving for. A Google Cloud expert will help you find the best solution. Contact sales. Work with a trusted partner: find a partner. Start using Google Cloud: go to console. Deploy ready-to-go solutions: explore marketplace.
Application_Modernization.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/solutions/camp
2 + Date Scraped: 2025-02-23T11:58:08.720Z
3 +
4 + Content:
5 +
Cloud App Modernization Program (CAMP)CAMP has been designed as an end-to-end framework to help guide organizations through their modernization journey by assessing where they are today and identifying their most effective path forward. Download CAMP WhitepaperBook a consultationCAMP overviewWatch a quick intro to CAMPSolutionsCommon patterns for modernizationApp Mod efforts fall into three common categories and CAMP provides customers with best practices and tooling for each in addition to assessments that guide them on where to start.Move and improveModernize traditional applicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Explore modernization options for traditional applications like Java and .NET.Migrate from PaaS: Cloud Foundry, OpenshiftMove your containers into Google's managed container services for a more stable and flexible experience.Learn about tools and migration options for modernizing your existing platforms.Unlock legacy with ApigeeAdapt to changing market needs while leveraging legacy systems.Learn how to connect legacy and modern services seamlessly.Migrate from mainframeMove on from proprietary mainframes and innovate with cloud-native services. Learn about Google's automated tools and prescriptive guidance for moving to the cloud.Build and operateModernize software delivery: Secure software supply chain, CI/CD best practices, Developer productivityOptimize your application development environment, improve your software delivery with modern CI/CD, and secure your software supply chain.Learn about modern and secure cloud-based software development environments. DevOps best practicesAchieve elite performance in your software development and delivery.Learn about industry best practices that can help improve your technical and cultural capabilities to drive improved performance. SRE principles Strike the balance between speed and reliability with proven SRE principals.Learn how Google Cloud helps you implement SRE principles through tooling, professional services, and other resources.Day 2 operations for GKESimplify your GKE platform operations and build an effective strategy for managing and monitoring activities.Learn how to create a unified approach for managing all of your GKE clusters for reduced risk and increased efficiency. FinOps and optimization of GKEContinuously deliver business value by running reliable, performant, and cost efficient applications on GKE.Learn how to make signal driven decisions and scale your GKE clusters based on actual usage and industry best practices.Cloud and beyondRun applications at the edgeUse Google's hardware agnostic edge solution to deploy and govern consistent, localized, and low latency applications.Learn how to enhance your customer experience and employee productivity using an edge strategy.Architect for multicloudManage workloads across multiple clouds with a consistent platform. Learn how Google allows for a flexible approach to multicloud environments for container management and application delivery.Go serverlessEasily build enterprise grade applications with Google Cloud Serverless technologies.Learn how to use tools like Cloud Build, Cloud Run, Cloud Functions, and more to speedup your application delivery. API management Leverage API life cycle management to support new business growth and empower your ecosystem.Learn about usage of APIs as a power tool for a flexible and expandable modern application environment. 
Guided assessments. DORA assessment (DevOps best practices): compare your DevOps capabilities to those of the industry, based on the DORA research, and find out how to improve. Learn about the DORA research and contact us to see if a DevOps assessment is right for you. mFit assessment (modernizing traditional apps): platform owners can use our fit assessment tool to evaluate large VMware workloads and determine whether they are good candidates for containerization. Learn about mFit and Google Cloud's container migration options, and schedule a consultation with us to review your strategy. CAST assessment: this code-level analysis of your traditional applications helps you identify the best modernization approach for each application. Learn more about CAST and contact us to see if this is the right assessment for you. Mainframe application portfolio assessment, MAPA (modernizing mainframe platforms): this assessment is designed to help customers build a financial and strategic plan for their migration based on the complexity, risk, and cost of each application. Learn about this survey-based, application-level assessment and contact us to start your mainframe migration today. Feeling inspired? Let's solve your challenges together. Are you ready to learn about the latest application development trends? Contact us. Cloud on Air: watch this webinar to learn how you can build enterprise-ready serverless applications. Customer stories: How British Telecom is leveraging loosely coupled architecture to transform their business (video, 2:35). How CoreLogic is replatforming 10,000+ Cloud Foundry app instances with Google (video, 16:16). How Schlumberger is using DORA recommendations to improve their software delivery and monitoring (video, 2:10). Gordon Food Services goes from four to 2,920 deployments a year using GKE (case study, 5-min read). See all customers. Partners: to help facilitate our customers' modernization journey, Google works closely with a set of experienced partners globally. See all partners. Take the next step: tell us what you're solving for. A Google Cloud expert will help you find the best solution. Contact sales. Work with a trusted partner: find a partner. Start using Google Cloud: go to console. Continue browsing: see all solutions.
Architect_for_Multicloud.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/solutions/architect-multicloud
2 + Date Scraped: 2025-02-23T11:58:32.360Z
3 +
4 + Content:
5 +
Architect for multicloud: understand various multicloud patterns and practical approaches to implementing them using Anthos. Contact us. Benefits: get hands-on as you explore multicloud. Persona-based user journeys: the workshop focuses on user journeys tailored to specific roles. Applications and services across clouds: leveraging a service mesh to manage services across clusters and clouds. Opinionated automation: a deep dive on an opinionated way of automating and deploying infrastructure resources across clouds. Key features: get more out of the workshop. Insights into your workloads: highlighting key SRE golden signals. Keep your applications and services running: see how deploying workloads across clusters can help maximize reliability. Ready to get started? Contact us. Take the next step: tell us what you're solving for. A Google Cloud expert will help you find the best solution. Contact sales. Work with a trusted partner: find a partner. Start using Google Cloud: go to console. Deploy ready-to-go solutions: explore marketplace.
Architect_your_workloads.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/architecture/migrate-across-regions/architect-workloads
2 + Date Scraped: 2025-02-23T11:52:20.556Z
3 +
4 + Content:
5 +
Home Docs Cloud Architecture Center Send feedback Architect your workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-24 UTC This document helps you design workloads in a way that minimizes the impact of a future expansion and migration of workloads to other Google Cloud regions, or the impact of a migration of workloads across regions. This document is useful if you're planning to do any of these activities or if you're evaluating the opportunity to do so in the future and want to explore what the work might look like. This document is part of a series: Get started Design resilient single-region environments on Google Cloud Architect your workloads (this document) Prepare data and batch workloads for the migration The guidance in this series is also useful if you didn't plan for a migration across regions or for an expansion to multiple regions in advance. In this case, you might need to spend additional effort to prepare your infrastructure, workloads, and data for the migration across regions and for the expansion to multiple regions. This document helps you to do the following: Prepare your landing zone Prepare your workloads for a migration across regions Prepare your computing resources Prepare your data storage resources Prepare for decommissioning the source environment Prepare your landing zone This section focuses on the considerations that you must make to extend a landing zone (also called a cloud foundation) when migrating across regions. The first step is to re-evaluate the different aspects of any existing landing zone. Before you can migrate any workload, you must already have a landing zone in place. Although you might already have a landing zone in place for the region that's hosting the workloads, the landing zone might not support the deployment of workloads in a different region, so it must be extended to the target region. Some landing zones that are already in place might have a design that can support another region without significant rework to the landing zone (for example, identity and access management or resource management). However, additional factors such as network or data might require that you do some planning for the extension. Your re-evaluation process should take into account the major requirements of your workloads to allow you to set up a generic foundation that can be specialized later during the migration. Enterprise considerations When it comes to aspects such as industry and government standards, regulations, and certifications, moving workloads to another region can have different requirements. Workloads running on Google regions that are physically located in different countries must follow the laws and regulations of that country. In addition, different industry standards might have particular requirements for workloads running abroad (especially in terms of security). Because Google Cloud regions are built to run resources in a single country, sometimes workloads are migrated from another Google region to that country to adhere to specific regulations. When you perform these "in-country" migrations, it's important to re-evaluate data running on-premise to check if the new region allows the migration of your data to Google Cloud. Identity and access management When you are planning a migration, you probably don't have to plan for many identity and access changes for regions that are already on Google Cloud. 
Identity decisions on Google Cloud and access to resources are usually based on the nature of the resources rather than the region where the resources are running. Some considerations that you might need to make are as follows: Design of teams: Some companies are structured to have different teams to handle different resources. When a workload is migrated to another region, due to change in structure of the resources, a different team may be the best candidate to be responsible for certain resources, in which case, accesses should be adjusted accordingly. Naming conventions: Although naming conventions might not have any technical impact on the functionalities, some consideration might be needed if there are resources defined with name conventions that refer to the specific region. One typical example is when there are already multiple replicated regions in place, such as Compute Engine virtual machines (VMs), which are named with the region as prefix, for example, europe-west1-backend-1. During the migration process, to avoid confusion or, worse, breaking pipelines that rely on a specific naming convention, it's important to change names to reflect the new region. Connectivity and networking Your network design impacts multiple aspects of how the migration is executed, so it's important to address this design before you plan how to move workloads. Keep in mind that on-premises connectivity with Google Cloud is one of the factors that you must re-evaluate in the migration, since it can be designed to be region specific. One example of this factor is Cloud Interconnect, which is connected to Google Cloud through a VLAN attachment to specific regions. You must change the region where the VLAN attachment is connected before dismissing that region to avoid region-to-region traffic. Another factor to consider is that if you're using Partner Interconnect, migrating the region can help you select a different physical location on which to connect your VLAN attachments to Google Cloud. This consideration is also relevant if you use a Cloud VPN and decide to change subnet addresses in the migration: you must reconfigure your routers to reflect the new networking. While virtual private clouds (VPCs) on Google Cloud are global resources, single subnets are always bound to a region, which means you can't use the same subnet for the workloads after migration. Since subnets can't be overlapping IPs, to maintain the same addresses, you should create a new VPC. This process is simplified if you're using Cloud DNS, which can exploit features like DNS peering to route traffic for the migrated workloads before dismissing the old region. For more information about building a foundation on Google Cloud, see Migrate to Google Cloud: Plan and build your foundation. Prepare your workloads for a migration across regions Whether you're setting up your infrastructure on Google Cloud and you plan to later migrate to another region, or you're already on Google Cloud and you need to migrate to another region, you must make sure that your workloads can be migrated in the most straightforward way to reduce effort and minimize risks. To help you ensure that all the workloads are in a state that allows a path to the migration, we recommend that you take the following approach: Prefer network designs that are easily replicable and loosely coupled from the specific network topology. Google Cloud offers different products that can help you to decouple your current network configuration from the resources using that network. 
An example of such a product is Cloud DNS, which lets you decouple internal subnet IPs from VMs. Set up products that support multi-region or global configurations. Products that support a configuration that involves more than one region, usually simplify the process of migrating them to another region. Consider managed services with managed cross region replicas for data. As described in the following sections of this document, some managed services allow you to create a replica in a different region, usually for backup or high availability purposes. This feature can be important to migrate data from one region to another. Some Google Cloud services are designed to support multi-region deployments or global deployment. You don't need to migrate these services, although you might need to adjust some configurations. Prepare your computing resources This section provides an overview of the compute resources on Google Cloud and design principles to prepare for a migration to another region. This document focuses on the following Google Cloud computing products: Compute Engine Google Kubernetes Engine Cloud Run VMware Engine Compute Engine Compute Engine is Google Cloud's service that provides VMs to customers. To migrate Compute Engine resources from one region to another, you must evaluate different factors in addition to networking considerations. We recommend that you do the following: Check compute resources: One of the first limitations you can encounter when changing the hosting region of a VM is the availability of the CPU platform in the new target region. If you have to change a machine series during the migration, check that the operating system of your current VM is supported for the series. Generally speaking, this problem can be extended to every Google Cloud computing service (some new regions may not have services like Cloud Run or Cloud GPU), so before you plan the migration, make sure that all the compute services that you require are available in the destination region. Configure load balancing and scaling: Compute Engine supports load balancing traffic between Compute Engine instances and autoscaling to automatically add or remove virtual machines from MIGs, according to demand. We recommend that you configure load balancing and autoscaling to increase the reliability and the flexibility of your environments, avoiding the management burden of self-managed solutions. For more information about configuring load balancing and scaling for Compute Engine, see Load balancing and scaling. Use zonal DNS names: To mitigate the risk of cross-regional outages, we recommend that you use zonal DNS names to uniquely identify virtual machines using DNS names in your environments. Google Cloud uses zonal DNS names for Compute Engine virtual machines by default. For more information about how the Compute Engine internal DNS works, see Overview of internal DNS. To facilitate a future migration across regions, and to make your configuration more maintainable, we recommend that you consider zonal DNS names as configuration parameters that you can eventually change in the future. Use the same managed instance groups (MIGs) template: Compute Engine lets you create regional MIGs that automatically provision virtual machines across multiple zones in a region automatically. If you're using a template in your old region, you can use the same template to deploy the MIGs in the new region. GKE Google Kubernetes Engine (GKE) helps you deploy, manage, and scale containerized workloads on Kubernetes. 
To prepare your GKE workloads for a migration, consider the following design points and GKE features: Cloud Service Mesh: A managed implementation of Istio mesh. Adopting Cloud Service Mesh for your cluster lets you have a greater level of control on the network traffic into the cluster. One of the key features of Cloud Service Mesh is that it lets you create a service mesh between two clusters. You can use this feature to plan the migration from one region to another by creating the GKE cluster in the new region and adding it to the service mesh. By using this approach, it's possible to start deploying workloads in the new cluster and routing traffic to them gradually, allowing you to test the new deploy while having the option to rollback by editing mesh routing. Config Sync: A GitOps service built on an open source core that lets cluster operators and platform administrators deploy configurations from a single source. Config Sync can support one or many clusters, allowing you to use a single source of truth to configure of the clusters. You can use this Config Sync function to replicate the configuration of the existing cluster on the cluster for the new region, and potentially customize a specific resource for the region. Backup for GKE: This feature lets you back up your cluster persistent data periodically and restore the data to the same cluster or to a new one. Cloud Run Cloud Run offers a lightweight approach to deploy containers on Google Cloud. Cloud Run services are regional resources, and are replicated across multiple zones in the region they are in. When you deploy a Cloud Run service, you can choose a region where to deploy the instance, and then use this feature to deploy the workload in a different region. VMware Engine Google Cloud VMware Engine is a fully managed service that lets you run the VMware platform in Google Cloud. The VMware environment runs natively on Google Cloud bare metal infrastructure in Google Cloud locations including vSphere, vCenter, vSAN, NSX-T, HCX, and corresponding tools. To migrate VMware Engine instances to a different region you should create your private cloud in the new region and then use VMware tools to move the instances. You should also consider DNS and load balancing in Compute Engine environments when you plan your migration. VMware Engine uses Google Cloud DNS, which is a managed DNS hosting service that provides authoritative DNS hosting published to the public internet, private zones visible to VPC networks, and DNS forwarding and peering for managing name resolution on VPC networks. Your migration plan can support testing of multi-region load balancing and DNS configurations. Prepare your data storage resources This section provides an overview of the data storage resources on Google Cloud and the basics on how to prepare for a migration to another region. The presence of the data already on Google Cloud simplifies the migration, because it implies that a solution to host them without any transformation exists or can be hosted on Google Cloud. The ability to copy database data into a different region and restore the data elsewhere is a common pattern in Disaster Recovery (DR). For this reason, some of the patterns described in this document rely on DR mechanisms such as database backup and recovery. 
The following managed services are described in this document: Cloud Storage Filestore Bigtable Firestore This document assumes that the storage solutions that you are using are regional instances which are co-located with compute resources. Cloud Storage Cloud Storage offers Storage Transfer Service, which automates the transfer of files from different systems to Cloud Storage. It can be used to replicate data to a different region for backup, and also for region to region migration. Cloud SQL Cloud SQL offers a relational database service to host different types of databases. Cloud SQL offers a cross-region replication functionality that allows instance data to be replicated in a different region. This feature is a common pattern for backup and restore of Cloud SQL instances, but also lets you promote the second replica in the other region to the main replica. You can use this feature to create a read replica in the second region and then promote it to the main replica once you migrate workloads. In general, for databases, managed services simplify the process of data replication, to make it easier to create a replica in the new region during migration. Another way to handle the migration is by using Database Migration Service, which lets you migrate SQL databases from different sources to Google Cloud. Among the supported sources there is also another Cloud SQL instance, with the only limitation that you can migrate to a different region, but not to a different project. Filestore As explained earlier in this document, the backup and restore feature of Filestore lets you create a backup of a file share that can be restored to another region. This feature can be used to perform region to region migration. Bigtable As with Cloud SQL, Bigtable supports replication. You can use this feature to replicate the same pattern described. Check in the Bigtable location list if the service is available in the destination region. In addition, as with Filestore, Bigtable supports backup and restore. This feature can be used, as with Filestore, to implement the migration by creating a backup and restoring it in another instance in the new region. The last option is exporting tables, for example, on Cloud Storage. These exports will host data in another service, and the data is then available to import to the instance in the region. Firestore Firestore locations might be bound to the presence of App Engine in your project, which in some scenarios forces the Firestore instance to be multi-region. In these migration scenarios, it's also necessary to take into account App Engine to design the right solution for Firestore. In fact, if you already have an App Engine app with a location of either us-central or europe-west, your Firestore database is considered multi-regional. If you have a regional location and you want to migrate to a different location, the managed export and import service lets you import and export Firestore entities by using a Cloud Storage bucket. This method can be used to move instances from one region to another. The other option is to use the Firestore backup and restore feature. This option is less expensive and more straightforward than import and export. Prepare for decommissioning the source environment You must prepare in advance before you decommission your source environment and switch to the new one. 
At a high level, you should consider the following before you decommission the source environment: New environment tests: Before you switch the traffic from the old environment to the new environment, you can do tests to validate the correctness of the applications. Other than the classic unit and integration tests that can be done on newly migrated applications, there are different strategies of testing. The new environment can be treated as a new version of the software and the migration of traffic can be implemented with common patterns such as A/B testing used for validation. Another approach is to replicate the incoming traffic in the source environment and in the new environment to check that functions are preserved. Downtime planning: If you select a strategy of migration like blue-green, where you switch traffic from an environment to another, consider the adoption of planned downtime. The downtime allows the transition to be better monitored and to avoid unpredictable errors on the client side. Rollback: Depending on the strategies adopted for migrating the traffic, it might be necessary to implement a rollback in the case of errors or misconfiguration of the new environment. To be able to rollback the environment, you must have a monitoring infrastructure in place to detect the status of the new environment. It's only possible to shut down services in the first region after you perform extended tests in the new region and go live in the new region without error. We recommend that you keep backups of the first region for a limited amount of time, until you're sure that there are no issues in the newly migrated region. You should also consider if you want to promote the old region to a disaster recovery site, assuming there isn't already a solution in place. This approach requires additional design to ensure that the site is reliable. For more information on how to correctly design and plan for DR, see the Disaster recovery planning guide. What's Next For more general design principles for designing reliable single and multi-region environments and about how Google achieves better reliability with regional and multi-region services, see Architecting disaster recovery for cloud infrastructure outages: Common themes. Learn more about the Google Cloud products used in this design guide: Compute Engine GKE Cloud Run VMware Engine Cloud Storage Filestore Bigtable Firestore For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Valerio Ponza | Technical Solution ConsultantOther contributors: Marco Ferrari | Cloud Solutions ArchitectTravis Webb | Solution ArchitectLee Gates | Group Product ManagerRodd Zurcher | Solutions Architect Send feedback
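To make the Cloud Storage portion of this guidance concrete, here is a minimal Python sketch (using the google-cloud-storage client) that copies every object from a bucket in the source region into a new bucket created in the target region. The bucket names and target region are hypothetical placeholders; for large datasets, Storage Transfer Service, as described earlier in this document, is usually the better tool.

# Minimal sketch: copy objects from an existing bucket into a new bucket in the
# target region, as one way to stage data during a cross-region migration.
# Bucket names and the region below are hypothetical placeholders.
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()

source_bucket = client.bucket("my-app-data-europe-west1")
target_bucket = client.create_bucket("my-app-data-europe-west3", location="europe-west3")

for blob in client.list_blobs(source_bucket):
    # Server-side copy; object bytes are not downloaded to the machine running this script.
    source_bucket.copy_blob(blob, target_bucket, blob.name)
    print(f"copied {blob.name}")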
Architecting_for_cloud_infrastructure_outages.txt
ADDED
The diff for this file is too large to render.
See raw diff
Architecting_for_locality-restricted_workloads.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/architecture/architecting-disaster-recovery-for-locality-restricted-workloads
2 + Date Scraped: 2025-02-23T11:54:37.340Z
3 +
4 + Content:
5 +
Home Docs Cloud Architecture Center Send feedback Architecting disaster recovery for locality-restricted workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-20 UTC This document discusses how you can use Google Cloud to architect for disaster recovery (DR) to meet location-specific requirements. For some regulated industries, workloads must adhere to these requirements. In this scenario, one or more of the following requirements apply: Data at rest must be restricted to a specified location. Data must be processed in the location where it resides. Workloads are accessible only from predefined locations. Data must be encrypted by using keys that the customer manages. If you are using cloud services, each cloud service must provide a minimum of two locations that are redundant to each other. For an example of location redundancy requirements, see the Cloud Computing Compliance Criteria Catalogue (C5). The series consists of these parts: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads (this document) Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Terminology Before you begin architecting for DR for locality-restricted workloads, it's a good idea to review locality terminology used in Google Cloud. Google Cloud provides services in regions throughout the Americas, Europe and the Middle East, and Asia Pacific. For example, London (europe-west2) is a region in Europe, and Oregon (us-west1) is a region in North America. Some Google Cloud products group multiple regions into a specific multi-region location which is accessible in the same way that you would use a region. Regions are further divided into zones where you deploy certain Google Cloud resources such as virtual machines, Kubernetes clusters, or Cloud SQL databases. Resources on Google Cloud are multi-regional, regional, or zonal. Some resources and products that are by default designated as multi-regional can also be restricted to a region. The different types of resources are explained as follows: Multi-regional resources are designed by Google Cloud to be redundant and distributed in and across regions. Multi-regional resources are resilient to the failure of a single region. Regional resources are redundantly deployed across multiple zones in a region, and are resilient to the failure of a zone within the region. Note: For more information about region-specific considerations, see Geography and regions. Zonal resources operate in a single zone. If a zone becomes unavailable, all zonal resources in that zone are unavailable until service is restored. Consider a zone as a single-failure domain. You need to architect your applications to mitigate the effects of a single zone becoming unavailable. For more information, see Geography and regions. Planning for DR for locality-restricted workloads The approach you take to designing your application depends on the type of workload and the locality requirements you must meet. Also consider why you must meet those requirements because what you decide directly influences your DR architecture. Start by reading the Google Cloud disaster recovery planning guide. And as you consider locality-restricted workloads, focus on the requirements discussed in this planning section. 
Define your locality requirements Before you start your design, define your locality requirements by answering these questions: Where is the data at rest? The answer dictates what services you can use and the high availability (HA) and DR methods you can employ to achieve your RTO/RPO values. Use the Cloud locations page to determine what products are in scope. Can you use encryption techniques to mitigate the requirement? If you are able to mitigate locality requirements by employing encryption techniques using Cloud External Key Manager and Cloud Key Management Service, you can use multi-regional and dual-regional services and follow the standard HA/DR techniques outlined in Disaster recovery scenarios for data. Can data be processed outside of where it rests? You can use products such as GKE Enterprise to provide a hybrid environment to address your requirements or implement product-specific controls such as load-balancing Compute Engine instances across multiple zones in a region. Use the Organization policy Resource Location constraint to restrict where resources can be deployed . If data can be processed outside of where it needs to be at rest, you can design the "processing" parts of your application by following the guidance in Disaster recovery building blocks and Disaster recovery scenarios for applications. Configure a VPC Security Controls perimeter to control who can access the data and to restrict what resources can process the data. Can you use more than one region? If you can use more than one region, you can use many of the techniques outlined in the Disaster Recovery series. Check the multi-region and region constraints for Google Cloud products. Do you need to restrict who can access your application? Google Cloud has several products and features that help you restrict who can access your applications: Identity-Aware Proxy (IAP). Verifies a user's identity and then determines whether that user should be permitted to access an application. Organization policy uses the domain-restricted sharing constraint to define the allowed Cloud Identity or Google Workspace IDs that are permitted in IAM policies. Product-specific locality controls. Refer to each product you want to use in your architecture for appropriate locality constraints. For example, if you're using Cloud Storage, create buckets in specified regions. Identify the services that you can use Identify what services can be used based on your locality and regional granularity requirements. Designing applications that are subject to locality restrictions requires understanding what products can be restricted to what region and what controls can be applied to enforce location restriction requirements. Identify the regional granularity for your application and data Identify the regional granularity for your application and data by answering these questions: Can you use multi-regional services in your design? By using multi-regional services, you can create highly available resilient architectures. Does access to your application have location restrictions? Use these Google Cloud products to help enforce where your applications can be accessed from: Google Cloud Armor. Lets you implement IP and geo-based constraints. VPC Service Controls. Provides context-based perimeter security. Is your data at rest restricted to a specific region? If you use managed services, ensure that the services you are using can be configured so that your data stored in the service is restricted to a specific region. 
For example, use BigQuery locality restrictions to dictate where your datasets are stored and backed up to. What regions do you need to restrict your application to? Some Google Cloud products do not have regional restrictions. Use the Cloud locations page and the product-specific pages to validate what regions you can use the product in and what mitigating features if any are available to restrict your application to a specific region. Meeting locality restrictions using Google Cloud products This section details features and mitigating techniques for using Google Cloud products as part of your DR strategy for locality-restricted workloads. We recommend reading this section along with Disaster recovery building blocks. Organization policies The Organization Policy Service gives you centralized control over your Google Cloud resources. Using organization policies, you can configure restrictions across your entire resource hierarchy. Consider the following policy constraints when architecting for locality-restricted workloads: Domain-restricted sharing: By default, all user identities are allowed to be added to IAM policies. The allowed/denied list must specify one or more Cloud Identity or Google Workspace customer identities. If this constraint is active, only identities in the allowed list are eligible to be added to IAM policies. Location-restricted resources: This constraint refers to the set of locations where location-based Google Cloud resources can be created. Policies for this constraint can specify as allowed or denied locations any of the following: multi-regions such as Asia and Europe, regions such as us-east1 or europe-west1, or individual zones such as europe-west1-b. For a list of supported services, see Resource locations supported services. Encryption If your data locality requirements concern restricting who can access the data, then implementing encryption methods might be an applicable strategy. By using external key management systems to manage keys that you supply outside of Google Cloud, you might be able to deploy a multi-region architecture to meet your locality requirements. Without the keys available, the data cannot be decrypted. Google Cloud has two products that let you use keys that you manage: Cloud External Key Manager (Cloud EKM): Cloud EKM lets you encrypt data in BigQuery and Compute Engine with encryption keys that are stored and managed in a third-party key management system that's deployed outside Google's infrastructure. Customer-supplied encryption keys (CSEK): You can use CSEK with Cloud Storage and Compute Engine. Google uses your key to protect the Google-generated keys that are used to encrypt and decrypt your data. If you provide a customer-supplied encryption key, Google does not permanently store your key on Google's servers or otherwise manage your key. Instead, you provide your key for each operation, and your key is purged from Google's servers after the operation is complete. When managing your own key infrastructure, you must carefully consider latency and reliability issues and ensure that you implement appropriate HA and recovery processes for your external key manager. You must also understand your RTO requirements. The keys are integral to writing the data, so RPO isn't the critical concern because no data can be safely written without the keys. The real concern is RTO because without your keys you cannot unencrypt or safely write data. 
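As a small example of the CSEK approach described above, the following Python sketch writes and reads a Cloud Storage object with a customer-supplied AES-256 key. The bucket and object names are placeholders, and in a real deployment the key would come from your own key management system rather than being generated inline; if the key is lost, the object cannot be decrypted.

# Sketch: customer-supplied encryption keys (CSEK) with Cloud Storage.
# The bucket and object names are hypothetical. Keep the key in your own key
# management system: Google does not store it, and losing it makes the object unreadable.
import os
from google.cloud import storage  # pip install google-cloud-storage

csek = os.urandom(32)  # 256-bit AES key that you supply and manage yourself

client = storage.Client()
bucket = client.bucket("locality-restricted-data-eu")

# The same key must be presented for every operation on this object.
blob = bucket.blob("records/customer-data.csv", encryption_key=csek)
blob.upload_from_string("id,name\n1,example\n")

print(blob.download_as_bytes().decode())  # fails without the matching key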
Storage When architecting DR for locality-restricted workloads, you must ensure that data at rest is located in the region you require. You can configure Google Cloud object and file store services to meet your requirements Cloud Storage You can create Cloud Storage buckets that meet locality restrictions. Beyond the features discussed in the Cloud Storage section of the Disaster Recovery Building Blocks article, when you architect for DR for locality-restricted workloads, consider whether redundancy across regions is a requirement: objects stored in multi-regions and dual-regions are stored in at least two geographically separate areas, regardless of their storage class. This redundancy ensures maximum availability of your data, even during large-scale disruptions, such as natural disasters. Dual-regions achieve this redundancy by using a pair of regions that you choose. Multi-regions achieve this redundancy by using any combination of data centers in the specified multi-region, which might include data centers that are not explicitly listed as available regions. Data synchronization between the buckets occurs asynchronously. If you need a high degree of confidence that the data has been written to an alternative region to meet your RTO and RPO values, one strategy is to use two single-region buckets. You can then either dual-write the object or write to one bucket and have Cloud Storage copy it to the second bucket. Single-region mitigation strategies when using Cloud Storage If your requirements restrict you to using a single region, then you can't implement an architecture that is redundant across geographic locations using Google Cloud alone. In this scenario, consider using one or more of the following techniques: Adopt a multi-cloud or hybrid strategy. This approach lets you choose another cloud or on-premises solution in the same geographic area as your Google Cloud region. You can store copies of your data in Cloud Storage buckets on-premises, or alternatively, use Cloud Storage as the target for your backup data. To use this approach, do the following: Ensure that distance requirements are met. If you are using AWS as your other cloud provider, refer to the Cloud Storage interoperability guide for how to configure access to Amazon S3 using Google Cloud tools. For other clouds and on-premises solutions, consider open source solutions such as minIO and Ceph to provide an on-premises object store. Consider using Cloud Composer with the gcloud storage command-line utility to transfer data from an on-premises object store to Cloud Storage. Use the Transfer service for on-premises data to copy data stored on-premises to Cloud Storage. Implement encryption techniques. If your locality requirements permit using encryption techniques as a workaround, you can then use multi-region or dual-region buckets. Filestore Filestore provides managed file storage that you can deploy in regions and zones according to your locality restriction requirements. Managed databases Disaster recovery scenarios for data describes methods for implementing backup and recovery strategies for Google Cloud managed database services. In addition to using these methods, you must also consider locality restrictions for each managed database service that you use in your architecture—for example: Bigtable is available in zonal locations in a region. Production instances have a minimum of two clusters, which must be in unique zones in the region. 
Replication between clusters in a Bigtable instance is automatically managed by Google. Bigtable synchronizes your data between the clusters, creating a separate, independent copy of your data in each zone where your instance has a cluster. Replication makes it possible for incoming traffic to fail over to another cluster in the same instance. BigQuery has locality restrictions that dictate where your datasets are stored. Dataset locations can be regional or multi-regional. To provide resilience during a regional disaster, you need to back up data to another geographic location. In the case of BigQuery multi-regions, we recommend that you avoid backing up to regions within the scope of the multi-region. If you select the EU multi-region, you exclude Zürich and London from being part of the multi-region configuration. For guidance on implementing a DR solution for BigQuery that addresses the unlikely event of a physical regional loss, see Loss of region. To understand the implications of adopting single-region or multi-region BigQuery configurations, see the BigQuery documentation. You can use Firestore to store your Firestore data in either a multi-region location or a regional location. Data in a multi-region location operates in a multi-zone and multi-region replicated configuration. Select a multi-region location if your locality restriction requirements permit it and you want to maximize the availability and durability of your database. multi-region locations can withstand loss of entire regions and maintain availability without data loss. Data in a regional location operates in a multi-zone replicated configuration. You can configure Cloud SQL for high availability. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone in the configured region. In a regional instance, the configuration is made up of a primary instance and a standby instance. Ensure that you understand the typical failover time from the primary to the standby instance. If your requirements permit, you can configure Cloud SQL with cross-region replicas. If a disaster occurs, the read replica in a different region can be promoted. Because read replicas can be configured for HA in advance, they don't need to go through additional changes after that promotion for HA. You can also configure read replicas to have their own cross-region replicas that can offer immediate protection from regional failures after replica promotion. You can configure Spanner as either regional or multi-region. For any regional configuration, Spanner maintains three read-write replicas, each in a different Google Cloud zone in that region. Each read-write replica contains a full copy of your operational database that is able to serve read/write and read-only requests. Spanner uses replicas in different zones so that if a single-zone failure occurs, your database remains available. A Spanner multi-region deployment provides a consistent environment across multiple regions, including two read-write regions and one witness region containing a witness replica. You must validate that the locations of all the regions meet your locality restriction requirements. Compute Engine Compute Engine resources are global, regional, or zonal. Compute Engine resources such as virtual machine instances or zonal persistent disks are referred to as zonal resources. Other resources, such as static external IP addresses, are regional. 
Regional resources can be used by any resources in that region, regardless of zone, while zonal resources can only be used by other resources in the same zone. Putting resources in different zones in a region isolates those resources from most types of physical infrastructure failure and infrastructure software-service failures. Also, putting resources in different regions provides an even higher degree of failure independence. This approach lets you design robust systems with resources spread across different failure domains. For more information, see regions and zones. Using on-premises or another cloud as a production site You might be using a Google Cloud region that prevents you from using dual or multi-region combinations for your DR architecture. To meet locality restrictions in this case, consider using your own data center or another cloud as the production site or as the failover site. This section discusses Google Cloud products that are optimized for hybrid workloads. DR architectures that use on-premises and Google Cloud are discussed in Disaster recovery scenarios for applications. GKE Enterprise GKE Enterprise is Google Cloud's open hybrid and multi-cloud application platform that helps you securely run your container-based workloads anywhere. GKE Enterprise enables consistency between on-premises and cloud environments, letting you have a consistent operating model and a single view of your Google Kubernetes Engine (GKE) clusters, no matter where you are running them. As part of your DR strategy, GKE Enterprise simplifies the configuration and operation of HA and failover architectures across dissimilar environments (between Google Cloud and on-premises or another cloud). You can run your production GKE Enterprise clusters on-premises and if a disaster occurs, you can fail over to run the same workloads on GKE Enterprise clusters in Google Cloud. GKE Enterprise on Google Cloud has three types of clusters: Single-zone cluster. A single-zone cluster has a single control plane running in one zone. This control plane manages workloads on nodes that are running in the same zone. Multi-zonal cluster. A multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones Regional cluster. Regional clusters replicate cluster primaries and nodes across multiple zones in a single region. For example, a regional cluster in the us-east1 region creates replicas of the control plane and nodes in three us-east1 zones: us-east1-b, us-east1-c, and us-east1-d. Regional clusters are the most resilient to zonal outages. Note: For more information about region-specific considerations, see Geography and regions. Google Cloud VMware Engine Google Cloud VMware Engine lets you run VMware workloads in the cloud. If your on-premises workloads are VMware based, you can architect your DR solution to run on the same virtualization solution that you are running on-premises. You can select the region that meets your locality requirements. Networking When your DR plan is based on moving data from on-premises to Google Cloud or from another cloud provider to Google Cloud, then you must address your networking strategy. For more information, see the Transferring data to and from Google Cloud section of the "Disaster recovery building blocks" document. VPC Service Controls When planning your DR strategy, you must ensure that the security controls that apply to your production environment also extend to your failover environment. 
By using VPC Service Controls, you can define a security perimeter from on-premises networks to your projects in Google Cloud. VPC Service Controls enables a context-aware access approach to controlling your cloud resources. You can create granular access control policies in Google Cloud based on attributes like user identity and IP address. These policies help ensure that the appropriate security controls are in place in your on-premises and Google Cloud environments. What's next Read other articles in this DR series: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Read the whitepaper Data residency, operational transparency, and privacy for European customers on Google Cloud (PDF). For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback
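Returning to the BigQuery locality restrictions mentioned in the managed databases section, the sketch below shows one way to pin a dataset to a single region at creation time using the google-cloud-bigquery client. The project ID, dataset name, and region are hypothetical, and the same requirement can be enforced more broadly with the resource locations organization policy constraint described earlier.

# Sketch: create a BigQuery dataset pinned to one region so that data at rest
# stays in that location. Project ID, dataset name, and region are placeholders.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-regulated-project")

dataset = bigquery.Dataset("my-regulated-project.restricted_dataset")
dataset.location = "europe-west3"  # the location cannot be changed after creation

dataset = client.create_dataset(dataset, exists_ok=True)
print(f"{dataset.dataset_id} created in {dataset.location}")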
Architectural_approaches.txt
ADDED
@@ -0,0 +1,5 @@
1 + URL: https://cloud.google.com/architecture/network-architecture
2 + Date Scraped: 2025-02-23T11:52:54.677Z
3 +
4 + Content:
5 +
Home Docs Cloud Architecture Center Send feedback Designing networks for migrating enterprise workloads: Architectural approaches Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This document introduces a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. These architectures emphasize advanced connectivity, zero-trust security principles, and manageability across a hybrid environment. As described in an accompanying document, Architectures for Protecting Cloud Data Planes, enterprises deploy a spectrum of architectures that factor in connectivity and security needs in the cloud. We classify these architectures into three distinct architectural patterns: lift-and-shift, hybrid services, and zero-trust distributed. The current document considers different security approaches, depending on which architecture an enterprise has chosen. It also describes how to realize those approaches using the building blocks provided by Google Cloud. You should use these security guidances in conjunction with other architectural guidances covering reliability, availability, scale, performance, and governance. This document is designed to help systems architects, network administrators, and security administrators who are planning to migrate on-premises workloads to the cloud. It assumes the following: You are familiar with data center networking and security concepts. You have existing workloads in your on-premises data center and are familiar with what they do and who their users are. You have at least some workloads that you plan to migrate. You are generally familiar with the concepts described in Architectures for Protecting Cloud Data Planes. The series consists of the following documents: Designing networks for migrating enterprise workloads: Architectural approaches (this document) Networking for secure intra-cloud access: Reference architectures Networking for internet-facing application delivery: Reference architectures Networking for hybrid and multi-cloud workloads: Reference architectures This document summarizes the three primary architectural patterns and introduces the resource building blocks that you can use to create your infrastructure. Finally, it describes how to assemble the building blocks into a series of reference architectures that match the patterns. You can use these reference architectures to guide your own architecture. This document mentions virtual machines (VMs) as examples of workload resources. The information applies to other resources that use VPC networks, like Cloud SQL instances and Google Kubernetes Engine nodes. Overview of architectural patterns Typically, network engineers have focused on building the physical networking infrastructure and security infrastructure in on-premises data centers. The journey to the cloud has changed this approach because cloud networking constructs are software-defined. In the cloud, application owners have limited control of the underlying infrastructure stack. They need a model that has a secure perimeter and provides isolation for their workloads. In this series, we consider three common architectural patterns. These patterns build on one another, and they can be seen as a spectrum rather than a strict choice. Lift-and-shift pattern In the lift-and-shift architectural pattern, enterprise application owners migrate their workloads to the cloud without refactoring those workloads. 
Network and security engineers use Layer 3 and Layer 4 controls to provide protection using a combination of network virtual appliances that mimic on-premises physical devices and cloud firewall rules in the VPC network. Workload owners deploy their services in VPC networks. Hybrid services pattern Workloads that are built using lift-and-shift might need access to cloud services such as BigQuery or Cloud SQL. Typically, access to such cloud services is at Layer 4 and Layer 7. In this context, isolation and security cannot be done strictly at Layer 3. Therefore, service networking and VPC Service Controls are used to provide connectivity and security, based on the identities of the service that's being accessed and the service that's requesting access. In this model, it's possible to express rich access-control policies. Zero-trust distributed pattern In a zero-trust architecture, enterprise applications extend security enforcement beyond perimeter controls. Inside the perimeter, workloads can communicate with other workloads only if their IAM identity has specific permission, which is denied by default. In a Zero Trust Distributed Architecture, trust is identity-based and enforced for each application. Workloads are built as microservices that have centrally issued identities. That way, services can validate their callers and make policy-based decisions for each request about whether that access is acceptable. This architecture is often implemented using distributed proxies (a service mesh) instead of using centralized gateways. Enterprises can enforce zero-trust access from users and devices to enterprise applications by configuring Identity-Aware Proxy (IAP). IAP provides identity- and context-based controls for user traffic from the internet or intranet. Combining patterns Enterprises that are building or migrating their business applications to the cloud usually use a combination of all three architectural patterns. Google Cloud offers a portfolio of products and services that serve as building blocks to implement the cloud data plane that powers the architectural patterns. These building blocks are discussed later in this document. The combination of controls that are provided in the cloud data plane, together with administrative controls to manage cloud resources, form the foundation of an end-to-end security perimeter. The perimeter that's created by this combination lets you govern, deploy, and operate your workloads in the cloud. Resource hierarchy and administrative controls This section presents a summary of the administrative controls that Google Cloud provides as resource containers. The controls include Google Cloud organization resources, folders, and projects that let you group and hierarchically organize cloud resources. This hierarchical organization provides you with an ownership structure and with anchor points for applying policy and controls. A Google organization resource is the root node in the hierarchy and is the foundation for creating deployments in the cloud. An organization resource can have folders and projects as children. A folder has projects or other folders as children. All other cloud resources are the children of projects. You use folders as a method of grouping projects. Projects form the basis for creating, enabling, and using all Google Cloud services. Projects let you manage APIs, enable billing, add and remove collaborators, and manage permissions. 
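To make the resource hierarchy concrete, the following Terraform sketch creates an environment folder and a project under it. This is a minimal, hedged illustration: the organization ID, billing account, and all names are hypothetical placeholders rather than values from this series.

resource "google_folder" "production" {
  display_name = "production"
  parent       = "organizations/123456789012"   # hypothetical organization ID
}

resource "google_project" "payments_prod" {
  name            = "payments-prod"
  project_id      = "payments-prod-example"     # hypothetical; project IDs must be globally unique
  folder_id       = google_folder.production.name
  billing_account = "000000-AAAAAA-BBBBBB"      # hypothetical billing account
}

Grouping projects under environment folders like this gives you anchor points for the policies and controls that are described next.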
Using Google Identity and Access Management (IAM), you can assign roles and define access policies and permissions at all resource hierarchy levels. IAM policies are inherited by resources lower in the hierarchy. These policies can't be altered by resource owners who are lower in the hierarchy. In some cases, the identity and access management is provided at a more granular level, for example at the scope of objects in a namespace or cluster as in Google Kubernetes Engine. Design considerations for Google Virtual Private Cloud networks When you're designing a migration strategy to the cloud, it's important to develop a strategy for how your enterprise will use VPC networks. You can think of a VPC network as a virtual version of your traditional physical network. It is a completely isolated, private network partition. By default, workloads or services that are deployed in one VPC network cannot communicate with jobs in another VPC network. VPC networks therefore enable workload isolation by forming a security boundary. Because each VPC network in the cloud is a fully virtual network, each has its own private IP address space. You can therefore use the same IP address in multiple VPC networks without conflict. A typical on-premises deployment might consume a large portion of the RFC 1918 private IP address space. On the other hand, if you have workloads both on-premises and in VPC networks, you can reuse the same address ranges in different VPC networks, as long as those networks aren't connected or peered, thus using up IP address space less quickly. VPC networks are global VPC networks in Google Cloud are global, which means that resources deployed in a project that has a VPC network can communicate with each other directly using Google's private backbone. As figure 1 shows, you can have a VPC network in your project that contains subnetworks in different regions that span multiple zones. The VMs in any region can communicate privately with each other using the local VPC routes. Figure 1. Google Cloud global VPC network implementation with subnetworks configured in different regions. Sharing a network using Shared VPC Shared VPC lets an organization resource connect multiple projects to a common VPC network so that they can communicate with each other securely using internal IP addresses from the shared network. Network administrators for that shared network apply and enforce centralized control over network resources. When you use Shared VPC, you designate a project as a host project and attach one or more service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network. Enterprises typically use Shared VPC networks when they need network and security administrators to centralize management of network resources such as subnets and routes. At the same time, Shared VPC networks let application and development teams create and delete VM instances and deploy workloads in designated subnets using the service projects. Isolating environments by using VPC networks Using VPC networks to isolate environments has a number of advantages, but you need to consider a few disadvantages as well. This section addresses these tradeoffs and describes common patterns for implementing isolation. Reasons to isolate environments Because VPC networks represent an isolation domain, many enterprises use them to keep environments or business units in separate domains. 
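As a concrete illustration of VPC-level isolation, the following hedged Terraform sketch creates two custom-mode VPC networks in separate projects, one for development and one for production. Project IDs, names, and ranges are hypothetical; note that the two subnets can reuse the same CIDR range only because the networks aren't connected or peered.

resource "google_compute_network" "dev" {
  project                 = "dev-project-example"    # hypothetical project
  name                    = "dev-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "dev_us_central1" {
  project       = "dev-project-example"
  name          = "dev-us-central1"
  region        = "us-central1"
  network       = google_compute_network.dev.id
  ip_cidr_range = "10.10.0.0/20"
}

resource "google_compute_network" "prod" {
  project                 = "prod-project-example"   # hypothetical project
  name                    = "prod-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "prod_us_central1" {
  project       = "prod-project-example"
  name          = "prod-us-central1"
  region        = "us-central1"
  network       = google_compute_network.prod.id
  ip_cidr_range = "10.10.0.0/20"   # same range as dev is acceptable only while the networks stay unconnected
}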
Common reasons to create VPC-level isolation are the following: An enterprise wants to establish default-deny communications between one VPC network and another, because these networks represent an organizationally meaningful distinction. For more information, see Common VPC network isolation patterns later in this document. An enterprise needs to have overlapping IP address ranges because of pre-existing on-premises environments, because of acquisitions, or because of deployments to other cloud environments. An enterprise wants to delegate full administrative control of a network to a portion of the enterprise. Disadvantages of isolating environments Creating isolated environments with VPC networks can have some disadvantages. Having multiple VPC networks can increase the administrative overhead of managing the services that span multiple networks. This document discusses techniques that you can use to manage this complexity. Common VPC network isolation patterns There are some common patterns for isolating VPC networks: Isolate development, staging, and production environments. This pattern lets enterprises fully segregate their development, staging, and production environments from each other. In effect, this structure maintains multiple complete copies of applications, with progressive rollout between each environment. In this pattern, VPC networks are used as security boundaries. Developers have a high degree of access to development VPC networks to do their day-to-day work. When development is finished, an engineering production team or a QA team can migrate the changes to a staging environment, where the changes can be tested in an integrated fashion. When the changes are ready to be deployed, they are sent to a production environment. Isolate business units. Some enterprises want to impose a high degree of isolation between business units, especially in the case of units that were acquired or ones that demand a high degree of autonomy and isolation. In this pattern, enterprises often create a VPC network for each business unit and delegate control of that VPC to the business unit's administrators. The enterprise uses techniques that are described later in this document to expose services that span the enterprise or to host user-facing applications that span multiple business units. Recommendation for creating isolated environments We recommend that you design your VPC networks to have the broadest domain that aligns with the administrative and security boundaries of your enterprise. You can achieve additional isolation between workloads that run in the same VPC network by using security controls such as firewalls. For more information about designing and building an isolation strategy for your organization, see Best practices and reference architectures for VPC design and Networking in the Google Cloud enterprise foundations blueprint. Building blocks for cloud networking This section discusses the important building blocks for network connectivity, network security, service networking, and service security. Figure 2 shows how these building blocks relate to one another. You can use one or more of the products that are listed in a given row. Figure 2. Building blocks in the realm of cloud network connectivity and security. The following sections discuss each of the building blocks and which Google Cloud services you can use for each of the blocks. Network connectivity The network connectivity block is at the base of the hierarchy. 
It's responsible for connecting Google Cloud resources to on-premises data centers or other clouds. Depending on your needs, you might need only one of these products, or you might use all of them to handle different use cases. Cloud VPN Cloud VPN lets you connect your remote branch offices or other cloud providers to Google VPC networks through IPsec VPN connections. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway, thereby helping to protect data as it traverses the internet. Cloud VPN lets you connect your on-premises environment and Google Cloud for less cost, but lower bandwidth, than Cloud Interconnect (described in the next section). You can provision an HA VPN to meet an SLA requirement of up to 99.99% availability if you have the conforming architecture. For example, Cloud VPN is a good choice for non-mission-critical use cases or for extending connectivity to other cloud providers. Cloud Interconnect Cloud Interconnect provides enterprise-grade dedicated connectivity to Google Cloud that has higher throughput and more reliable network performance compared to using VPN or internet ingress. Dedicated Interconnect provides direct physical connectivity to Google's network from your routers. Partner Interconnect provides dedicated connectivity through an extensive network of partners, who might offer broader reach or other bandwidth options than Dedicated Interconnect does. Cross-Cloud Interconnect provides dedicated direct connectivity from your VPC networks to other cloud providers. Dedicated Interconnect requires that you connect at a colocation facility where Google has a presence, but Partner Interconnect does not. Cloud Interconnect ensures that the traffic between your on-premises network or other cloud network and your VPC network doesn't traverse the public internet. You can provision these Cloud Interconnect connections to meet an SLA requirement of up to 99.99% availability if you provision the appropriate architecture. You can consider using Cloud Interconnect to support workloads that require low latency, high bandwidth, and predictable performance while ensuring that all of your traffic stays private. Network Connectivity Center for hybrid Network Connectivity Center provides site-to-site connectivity among your on-premises and other cloud networks. It does this using Google's backbone network to deliver reliable connectivity among your sites. Additionally, you can extend your existing SD-WAN overlay network to Google Cloud by configuring a VM or a third-party vendor router appliance as a logical spoke attachment. You can access resources inside the VPC networks using the router appliance, VPN, or Cloud Interconnect network as spoke attachments. You can use Network Connectivity Center to consolidate connectivity between your on-premises sites, your presences in other clouds, and Google Cloud and manage it all using a single view. Network Connectivity Center for VPC networks Network Connectivity Center also lets you create a mesh or star topology among many VPC networks using VPC spokes. You can connect the hub to on-premises or other clouds using Network Connectivity Center hybrid spokes. VPC Network Peering VPC Network Peering lets you connect Google VPC networks so that workloads in different VPC networks can communicate internally regardless of whether they belong to the same project or to the same organization resource. Traffic stays within Google's network and doesn't traverse the public internet. 
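If you do need private connectivity between two VPC networks, a peering is defined in both directions and becomes active only when both sides exist. The following Terraform sketch is a minimal, hedged example; the network references are hypothetical.

resource "google_compute_network_peering" "a_to_b" {
  name         = "peer-a-to-b"
  network      = google_compute_network.vpc_a.id   # hypothetical network A
  peer_network = google_compute_network.vpc_b.id   # hypothetical network B
}

resource "google_compute_network_peering" "b_to_a" {
  name         = "peer-b-to-a"
  network      = google_compute_network.vpc_b.id
  peer_network = google_compute_network.vpc_a.id
}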
VPC Network Peering requires that the networks to be peered don't have overlapping IP addresses. Network security The network security block sits on top of the network connectivity block. It's responsible for allowing or denying access to resources based on the characteristics of IP packets. Cloud NGFW Cloud Next Generation Firewall (Cloud NGFW) is a distributed firewall service that lets you apply firewall policies at the organization, folder, and network level. Enabled firewall rules are always enforced, protecting your instances regardless of their configuration or the operating system, or even whether the VMs have fully booted. The rules are applied on a per-instance basis, meaning that the rules protect connections between VMs within a given network as well as connections to outside the network. Rule application can be governed using IAM-governed Tags, which allow you to control which VMs are covered by particular rules. Cloud NGFW also offers the option to perform Layer 7 inspection of packets. Packet mirroring Packet mirroring clones the traffic of specific instances in your VPC network and forwards it to collectors for examination. Packet mirroring captures all traffic and packet data, including payloads and headers. You can configure mirroring for both ingress and egress traffic, for only ingress traffic, or for only egress traffic. The mirroring happens on the VM instances, not on the network. Network virtual appliance Network virtual appliances let you apply security and compliance controls to the virtual network that are consistent with controls in the on-premises environment. You can do this by deploying VM images that are available in the Google Cloud Marketplace to VMs that have multiple network interfaces, each attached to a different VPC network, to perform a variety of network virtual functions. Typical use cases for virtual appliances are as follows: Next-generation firewall (NGFW). NGFW NVAs deliver protection in situations that aren't covered by Cloud NGFW, or they provide management consistency with on-premises NGFW installations. Intrusion detection system/intrusion prevention system (IDS/IPS). A network-based IDS provides visibility into potentially malicious traffic. To prevent intrusions, IPS devices can block malicious traffic from reaching its destination. Google Cloud offers Cloud Intrusion Detection System (Cloud IDS) as a managed service. Secure web gateway (SWG). A SWG blocks threats from the internet by letting enterprises apply corporate policies on traffic that's traveling to and from the internet. This is done by using URL filtering, malicious code detection, and access control. Google Cloud offers Secure Web Proxy as a managed service. Network address translation (NAT) gateway. A NAT gateway translates IP addresses and ports. For example, this translation helps avoid overlapping IP addresses. Google Cloud offers Cloud NAT as a managed service. Web application firewall (WAF). A WAF is designed to block malicious HTTP(S) traffic that's going to a web application. Google Cloud offers WAF functionality through Google Cloud Armor security policies. The exact functionality differs between WAF vendors, so it's important to determine what you need. Cloud IDS Cloud IDS is an intrusion detection service that provides threat detection for intrusions, malware, spyware, and command-and-control attacks on your network. Cloud IDS works by creating a Google-managed peered network containing VMs that will receive mirrored traffic.
The mirrored traffic is then inspected by Palo Alto Networks threat protection technologies to provide advanced threat detection. Cloud IDS provides full visibility into intra-subnet traffic, letting you monitor VM-to-VM communication and to detect lateral movement. Cloud NAT Cloud NAT provides fully managed, software-defined network address translation support for applications. It enables source network address translation (source NAT or SNAT) for internet-facing traffic from VMs that don't have external IP addresses. Firewall Insights Firewall Insights helps you understand and optimize your firewall rules. It provides data about how your firewall rules are being used, exposes misconfigurations, and identifies rules that could be made more strict. It also uses machine learning to predict future usage of your firewall rules so that you can make informed decisions about whether to remove or tighten rules that seem overly permissive. Network logging You can use multiple Google Cloud products to log and analyze network traffic. Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule that's designed to deny traffic is functioning as intended. Firewall Rules Logging is also useful if you need to determine how many connections are affected by a given firewall rule. You enable Firewall Rules Logging individually for each firewall rule whose connections you need to log. Firewall Rules Logging is an option for any firewall rule, regardless of the action (allow or deny) or direction (ingress or egress) of the rule. VPC Flow Logs records a sample of network flows that are sent from and received by VM instances, including instances used as Google Kubernetes Engine (GKE) nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. Service networking Service networking blocks are responsible for providing lookup services that tell services where a request should go (DNS, Service Directory) and with getting requests to the correct place (Private Service Connect, Cloud Load Balancing). Cloud DNS Workloads are accessed using domain names. Cloud DNS offers reliable, low-latency translation of domain names to IP addresses that are located anywhere in the world. Cloud DNS offers both public zones and private managed DNS zones. A public zone is visible to the public internet, while a private zone is visible only from one or more VPC networks that you specify. Cloud Load Balancing Within Google Cloud, load balancers are a crucial component—they route traffic to various services to ensure speed and efficiency, and to help ensure security globally for both internal and external traffic. Our load balancers also let traffic be routed and scaled across multiple clouds or hybrid environments. This makes Cloud Load Balancing the "front door" through which any application can be scaled no matter where it is or in how many places it's hosted. Google offers various types of load balancing: global and regional, external and internal, and Layer 4 and Layer 7. Service Directory Service Directory lets you manage your service inventory, providing a single secure place to publish, discover, and connect services, all operations underpinned by identity-based access control. It lets you register named services and their endpoints. Registration can be either manual or by using integrations with Private Service Connect, GKE, and Cloud Load Balancing. 
Service discovery is possible by using explicit HTTP and gRPC APIs, as well as by using Cloud DNS. Cloud Service Mesh Cloud Service Mesh is designed to run complex, distributed applications by enabling a rich set of traffic management and security policies in service mesh architectures. Cloud Service Mesh supports Kubernetes-based regional and global deployments, both on Google Cloud and on-premises, that benefit from a managed Istio product. On Google Cloud, it also supports deployments that use proxies on VMs or proxyless gRPC. Private Service Connect Private Service Connect creates service abstractions by making workloads accessible across VPC networks through a single endpoint. This allows two networks to communicate in a client-server model that exposes just the service to the consumer instead of the entire network or the workload itself. A service-oriented network model allows network administrators to reason about the services they expose between networks rather than subnets or VPC networks, enabling consumption of the services in a producer-consumer model for both first-party and third-party (SaaS) services. With Private Service Connect, a consumer VPC network can use a private IP address to connect to a Google API or a service in another VPC network. You can extend Private Service Connect to your on-premises network to access endpoints that connect to Google APIs or to managed services in another VPC network. Private Service Connect allows consumption of services at Layer 4 or Layer 7. At Layer 4, Private Service Connect requires the producer to create one or more subnets specific to Private Service Connect. These subnets are also referred to as NAT subnets. Private Service Connect performs source NAT using an IP address that's selected from one of the Private Service Connect subnets to route the requests to a service producer. This approach lets you use overlapping IP addresses between consumers and producers. At Layer 7, you can create a Private Service Connect backend using an internal Application Load Balancer. The internal Application Load Balancer lets you choose which services are available using a URL map. For more information, see About Private Service Connect backends. Private services access Private services access is a private connection between your VPC network and a network that's owned by Google or by a third party. Google or the third parties who offer services are known as service producers. Private services access uses VPC Network Peering to establish the connectivity, and it requires the producer and consumer VPC networks to be peered with each other. This is different from Private Service Connect, which lets you project a single private IP address into your subnet. The private connection lets VM instances in your VPC network and the services that you access communicate exclusively by using internal IP addresses. VM instances don't need internet access or external IP addresses to reach services that are available through private services access. Private services access can also be extended to the on-premises network by using Cloud VPN or Cloud Interconnect to provide a way for the on-premises hosts to reach the service producer's network. For a list of Google-managed services that are supported using private services access, see Supported services in the Virtual Private Cloud documentation. Serverless VPC Access Serverless VPC Access makes it possible for you to connect directly to your VPC network from services hosted in serverless environments such as Cloud Run, App Engine, or Cloud Run functions.
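For example, a Serverless VPC Access connector is a regional resource that you create in the VPC network that you want to reach from your serverless services. The following Terraform sketch is a hedged illustration; the project, network name, and IP range are hypothetical, and the /28 range must be unused in that network.

resource "google_vpc_access_connector" "serverless" {
  name          = "svpc-connector"          # hypothetical name
  project       = "app-project-example"     # hypothetical project
  region        = "us-central1"
  network       = "app-vpc"                 # hypothetical VPC network
  ip_cidr_range = "10.8.0.0/28"             # dedicated, unused /28 reserved for the connector
}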
Configuring Serverless VPC Access lets your serverless environment send requests to your VPC network using internal DNS and internal IP addresses. The responses to these requests also use your virtual network. Serverless VPC Access sends internal traffic from your VPC network to your serverless environment only when that traffic is a response to a request that was sent from your serverless environment through the Serverless VPC Access connector. Serverless VPC Access has the following benefits: Requests sent to your VPC network are never exposed to the internet. Communication through Serverless VPC Access can have less latency compared to communication over the internet. Direct VPC egress Direct VPC egress lets your Cloud Run service send traffic to a VPC network without setting up a Serverless VPC Access connector. Service security The service security blocks control access to resources based on the identity of the requestor or based on higher-level understanding of packet patterns instead of just the characteristics of an individual packet. Google Cloud Armor for DDoS/WAF Google Cloud Armor is a web-application firewall (WAF) and distributed denial-of-service (DDoS) mitigation service that helps you defend your web applications and services from multiple types of threats. These threats include DDoS attacks, web-based attacks such as cross-site scripting (XSS) and SQL injection (SQLi), and fraud and automation-based attacks. Google Cloud Armor inspects incoming requests on Google's global edge. It has a built-in set of web application firewall rules to scan for common web attacks and an advanced ML-based attack detection system that builds a model of good traffic and then detects bad traffic. Finally, Google Cloud Armor integrates with Google reCAPTCHA to help detect and stop sophisticated fraud and automation-based attacks by using both endpoint telemetry and cloud telemetry. Identity Aware Proxy (IAP) Identity-Aware Proxy (IAP) provides context-aware access controls to cloud-based applications and VMs that are running on Google Cloud or that are connected to Google Cloud using any of the hybrid networking technologies. IAP verifies the user identity and determines if the user request is originating from trusted sources, based on various contextual attributes. IAP also supports TCP tunneling for SSH/RDP access from enterprise users. VPC Service Controls VPC Service Controls helps you mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery. Using VPC Service Controls helps ensure that use of your Google Cloud services happens only from approved environments. You can use VPC Service Controls to create perimeters that protect the resources and data of services that you specify by limiting access to specific cloud-native identity constructs like service accounts and VPC networks. After a perimeter has been created, access to the specified Google services is denied unless the request comes from within the perimeter. Content delivery The content delivery blocks control the optimization of delivery of applications and content. Cloud CDN Cloud CDN provides static content acceleration by using Google's global edge network to deliver content from a point closest to the user. This helps reduce latency for your websites and applications. Media CDN Media CDN is Google's media delivery solution and is built for high-throughput egress workloads. 
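To illustrate the content delivery block, enabling Cloud CDN for an external Application Load Balancer is typically a single flag on the backend service. The following Terraform sketch is hedged and intentionally incomplete: the instance group, health check, and names are hypothetical, and the load balancer frontend resources are omitted.

resource "google_compute_backend_service" "web" {
  name          = "web-backend"                                        # hypothetical backend service
  protocol      = "HTTP"
  enable_cdn    = true                                                 # serve cacheable responses from Google's edge
  health_checks = [google_compute_health_check.web.id]                 # hypothetical health check

  backend {
    group = google_compute_instance_group_manager.web.instance_group   # hypothetical managed instance group
  }
}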
Observability The observability blocks give you visibility into your network and provide insight that you can use to troubleshoot, document, and investigate issues. Network Intelligence Center Network Intelligence Center comprises several products that address various aspects of network observability. Each product has a different focus and provides rich insights to inform administrators, architects, and practitioners about network health and issues. Reference architectures The following documents present reference architectures for different types of workloads: intra-cloud, internet-facing, and hybrid. These workload architectures are built on top of a cloud data plane that is realized using the building blocks and the architectural patterns that were outlined in earlier sections of this document. You can use the reference architectures to design ways to migrate or build workloads in the cloud. Your workloads are then underpinned by the cloud data plane and by these architectures. Although these documents don't provide an exhaustive set of reference architectures, they do cover the most common scenarios. As with the security architecture patterns that are described in Architectures for Protecting Cloud Data Planes, real-world services might use a combination of these designs. These documents discuss each workload type and the considerations for each security architecture. Networking for secure intra-cloud access: Reference architectures Networking for internet-facing application delivery: Reference architectures Networking for hybrid and multi-cloud workloads: Reference architectures What's next Migration to Google Cloud can help you to plan, design, and implement the process of migrating your workloads to Google Cloud. Landing zone design in Google Cloud has guidance for creating a landing zone network. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback
|
Architectural_approaches_to_adopt_a_hybrid_or_multicloud_architecture.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/adopt
|
2 |
+
Date Scraped: 2025-02-23T11:49:46.536Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Architectural approaches to adopt a hybrid or multicloud architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC This document provides guidance on common and proven approaches and considerations to migrate your workload to the cloud. It expands on guidance in Design a hybrid and multicloud architecture strategy, which discusses several possible, and recommended, steps to design a strategy for adopting a hybrid or multicloud architecture. Note: The phrase migrate your workload to the cloud refers to hybrid and multicloud scenarios, not to a complete cloud migration. Cloud first A common way to begin using the public cloud is the cloud-first approach. In this approach, you deploy your new workloads to the public cloud while your existing workloads stay where they are. In that case, consider a classic deployment to a private computing environment only if a public cloud deployment is impossible for technical or organizational reasons. The cloud-first strategy has advantages and disadvantages. On the positive side, it's forward looking. You can deploy new workloads in a modernized fashion while avoiding (or at least minimizing) the hassles of migrating existing workloads. While a cloud-first approach can provide certain advantages, it could potentially result in missed opportunities for improving or using existing workloads. New workloads might represent a fraction of the overall IT landscape, and their effect on IT expenses and performance can be limited. Allocating time and resources to migrating an existing workload could potentially lead to more substantial benefits or cost savings compared to attempting to accommodate a new workload in the cloud environment. Following a strict cloud-first approach also risks increasing the overall complexity of your IT environment. This approach might create redundancies, lower performance due to potential excessive cross-environment communication, or result in a computing environment that isn't well suited for the individual workload. Also, compliance with industry regulations and data privacy laws can restrict enterprises from migrating certain applications that hold sensitive data. Considering these risks, you might be better off using a cloud-first approach only for selected workloads. Using a cloud-first approach lets you concentrate on the workloads that can benefit the most from a cloud deployment or migration. This approach also considers the modernization of existing workloads. A common example of a cloud-first hybrid architecture is when legacy applications and services holding critical data must be integrated with new data or applications. To complete the integration, you can use a hybrid architecture that modernizes legacy services by using API interfaces, which unlocks them for consumption by new cloud services and applications. With a cloud API management platform, like Apigee, you can implement such use cases with minimal application changes and add security, analytics, and scalability to the legacy services. Migration and modernization Hybrid multicloud and IT modernization are distinct concepts that are linked in a virtuous circle. Using the public cloud can facilitate and simplify the modernization of IT workloads. Modernizing your IT workloads can help you get more from the cloud. The primary goals of modernizing workloads are as follows: Achieve greater agility so that you can adapt to changing requirements. 
Reduce the costs of your infrastructure and operations. Increase reliability and resiliency to minimize risk. However, it might not be feasible to modernize every application in the migration process at the same time. As described in Migration to Google Cloud, you can implement one of the following migration types, or even combine multiple types as needed: Rehost (lift and shift) Replatform (lift and optimize) Refactor (move and improve) Rearchitect (continue to modernize) Rebuild (remove and replace, sometimes called rip and replace) Repurchase When making strategic decisions about your hybrid and multicloud architectures, it's important to consider the feasibility of your strategy from a cost and time perspective. You might want to consider a phased migration approach, starting with lifting and shifting or replatforming and then refactoring or rearchitecting as the next step. Typically, lifting and shifting helps to optimize applications from an infrastructure perspective. After applications are running in the cloud, it's easier to use and integrate cloud services to further optimize them using cloud-first architectures and capabilities. Also, these applications can still communicate with other environments over a hybrid network connection. For example, you can refactor or rearchitect a large, monolithic VM-based application and turn it into several independent microservices, based on a cloud-based microservice architecture. In this example, the microservices architecture uses Google Cloud managed container services like Google Kubernetes Engine (GKE) or Cloud Run. However, if the architecture or infrastructure of an application isn't supported in the target cloud environment as it is, you might consider starting with replatforming, refactoring, or rearchitecting your migration strategy to overcome those constraints where feasible. When using any of these migration approaches, consider modernizing your applications (where applicable and feasible). Modernization can require adopting and implementing Site Reliability Engineering (SRE) or DevOps principles, such that you might also need to extend application modernization to your private environment in a hybrid setup. Even though implementing SRE principles involves engineering at its core, it's more of a transformation process than a technical challenge. As such, it will likely require procedural and cultural changes. To learn more about how the first step to implementing SRE in an organization is to get leadership buy-in, see With SRE, failing to plan is planning to fail. Mix and match migration approaches Each migration approach discussed here has certain strengths and weaknesses. A key advantage of following a hybrid and multicloud strategy is that it isn't necessary to settle on a single approach. Instead, you can decide which approach works best for each workload or application stack, as shown in the following diagram. This conceptual diagram illustrates the various migration and modernization paths or approaches that can be simultaneously applied to different workloads, driven by the unique business, technical requirements, and objectives of each workload or application. In addition, it's not necessary that the same application stack components follow the same migration approach or strategy. For example: The backend on-premises database of an application can be replatformed from self-hosted MySQL to a managed database using Cloud SQL in Google Cloud. 
The application frontend virtual machines can be refactored to run on containers using GKE Autopilot, where Google manages the cluster configuration, including nodes, scaling, security, and other preconfigured settings. The on-premises hardware load balancing solution and web application firewall (WAF) capabilities can be replaced with Cloud Load Balancing and Google Cloud Armor. Choose rehost (lift and shift) if any of the following is true of the workloads: They have a relatively small number of dependencies on their environment. They aren't considered worth refactoring, or refactoring before migration isn't feasible. They are based on third-party software. Consider refactor (move and improve) for these types of workloads: They have dependencies that must be untangled. They rely on operating systems, hardware, or database systems that can't be accommodated in the cloud. They aren't making efficient use of compute or storage resources. They can't be deployed in an automated fashion without some effort. Consider whether rebuild (remove and replace) meets your needs for these types of workloads: They no longer satisfy current requirements. They can be incorporated with other applications that provide similar capabilities without compromising business requirements. They are based on third-party technology that has reached its end of life. They require third-party license fees that are no longer economical. The Rapid Migration Program shows how Google Cloud helps customers to use best practices, lower risk, control costs, and simplify their path to cloud success. Previous arrow_back Plan a hybrid and multicloud strategy Next Other considerations arrow_forward Send feedback
|
Architecture.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/architecture/enterprise-application-blueprint/architecture
|
2 |
+
Date Scraped: 2025-02-23T11:47:03.160Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC The following diagram shows the high-level architecture that is deployed by the blueprint for a single environment. You deploy this architecture across three separate environments: production, non-production, and development. This diagram includes the following: Cloud Load Balancing distributes application traffic across regions to Kubernetes service objects. Behind each service is a logical grouping of related pods. Cloud Service Mesh lets Kubernetes services communicate with each other. Kubernetes services are grouped into tenants, which are represented as Kubernetes namespaces. Tenants are an abstraction that represent multiple users and workloads that operate in a cluster, with separate RBAC for access control. Each tenant also has its own project for tenant-specific cloud resources such as databases, storage buckets, and Pub/Sub subscriptions. Namespaces with their own identities for accessing peer services and cloud resources. The identity is consistent across the same namespace in different clusters because of fleet Workload Identity Federation for GKE. Each environment has a separate workload identity pool to mitigate privilege escalation between environments. Each service has a dedicated pipeline that builds and deploys that service. The same pipeline is used to deploy the service into the development environment, then deploy the service into the non-production environment, and finally deploy the service into the production environment. Key architectural decisions for developer platform The following table describes the architecture decisions that the blueprint implements. Decision area Decision Reason Deployment archetype Deploy across multiple regions. Permit availability of applications during region outages. Organizational architecture Deploy on top of the enterprise foundation blueprint. Use the organizational structure and security controls that are provided by the foundation. Use the three environment folders that are set up in the foundation: development, nonproduction, and production. Provide isolation for environments that have different access controls. Developer platform cluster architecture Package and deploy applications as containers. Support separation of responsibilities, efficient operations, and application portability. Run applications on GKE clusters. Use a managed container service that is built by the company that pioneered containers. Replicate and run application containers in an active-active configuration. Achieve higher availability and rapid progressive rollouts, improving development velocity. Provision the production environment with two GKE clusters in two different regions. Achieve higher availability than a single cloud region. Provision the non-production environment with two GKE clusters in two different regions. Stage changes to cross-regional settings, such as load balancers, before deployment to production. Provision the development environment with a single GKE cluster instance. Helps reduce cost. Configure highly-available control planes for each GKE cluster. Ensure that the cluster control plane is available during upgrade and resizing. Use the concept of sameness across namespaces, services, and identity in each GKE cluster. Ensure that Kubernetes objects with the same name in different clusters are treated as the same thing. 
This normalization is done to make administering fleet resources easier. Enable private IP address spaces for GKE clusters through Private Service Connect access to the control plane and private node pools. Help protect the Kubernetes cluster API from scanning attacks. Enable administrative access to the GKE clusters through the Connect gateway. Use one command to fetch credentials for access to multiple clusters. Use groups and third-party identity providers to manage cluster access. Use Cloud NAT to provide GKE pods with access to resources with public IP addresses. Improve the overall security posture of the cluster, because pods are not directly exposed to the internet, but are still able to access internet-facing resources. Configure nodes to use Container-Optimized OS and Shielded GKE Nodes. Limit the attack surface of the nodes. Associate each environment with a GKE fleet. Permit management of sets of GKE clusters as a unit. Use the foundation infrastructure pipeline to deploy the application factory, fleet-scope pipeline, and multi-tenant infrastructure pipeline. Provide a controllable, auditable, and repeatable mechanism to deploy application infrastructure. Configure GKE clusters using GKE Enterprise configuration and policy management features. Provide a service that allows configuration-as-code for GKE clusters. Use an application factory to deploy the application CI/CD pipelines used in the blueprint. Provide a repeatable pattern to deploy application pipelines more easily. Use an application CI/CD pipeline to build and deploy the blueprint application components. Provide a controllable, auditable, and repeatable mechanism to deploy applications. Configure the application CI/CD pipeline to use Cloud Build, Cloud Deploy, and Artifact Registry. Use managed build and deployment services to optimize for security, scale, and simplicity. Use immutable containers across environments, and sign the containers with Binary Authorization. Provide clear code provenance and ensure that code has been tested across environments. Use Google Cloud Observability, which includes Cloud Logging and Cloud Monitoring. Simplify operations by using an integrated managed service of Google Cloud. Enable Container Threat Detection (a service in Security Command Center) to monitor the integrity of containers. Use a managed service that enhances security by continually monitoring containers. Control access to the GKE clusters by Kubernetes role-based access control (RBAC), which is based on Google Groups for GKE. Enhance security by linking access control to Google Cloud identities. Service architecture Use a unique Kubernetes service account for each Kubernetes service. This account acts as an IAM service account through the use of Workload Identity Federation for GKE. Enhance security by minimizing the permissions each service needs to be provided. Expose services through the GKE Gateway API. Simplify configuration management by providing a declarative-based and resource-based approach to managing ingress rules and load-balancing configurations. Run services as distributed services through the use of Cloud Service Mesh with Certificate Authority Service. Provide enhanced security through enforcing authentication between services and also provides automatic fault tolerance by redirecting traffic away from unhealthy services. Use cross-region replication for AlloyDB for PostgreSQL. Provide for high-availability in the database layer. 
Network architecture Shared VPC instances are configured in each environment and GKE clusters are created in service projects. Shared VPC provides centralized network configuration management while maintaining separation of environments. Use Cloud Load Balancing in a multi-cluster, multi-region configuration. Provide a single anycast IP address to access regionalized GKE clusters for high availability and low-latency services. Use HTTPS connections for client access to services. Redirect any client HTTP requests to HTTPS. Help protect sensitive data in transit and help prevent person-in-the-middle-attacks. Use Certificate Manager to manage public certificates. Manage certificates in a unified way. Protect the web interface with Google Cloud Armor. Enhance security by protecting against common web application vulnerabilities and volumetric attacks. Your decisions might vary from the blueprint. For information about alternatives, see Alternatives to default recommendations. What's next Read about developer platform controls (next document in this series). Send feedback
|
Architecture_and_functions_in_a_data_mesh.txt
ADDED
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
URL: https://cloud.google.com/architecture/data-mesh
|
2 |
+
Date Scraped: 2025-02-23T11:48:52.839Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Architecture and functions in a data mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-03 UTC A data mesh is an architectural and organizational framework which treats data as a product (referred to in this document as data products). In this framework, data products are developed by the teams that best understand that data, and who follow an organization-wide set of data governance standards. Once data products are deployed to the data mesh, distributed teams in an organization can discover and access data that's relevant to their needs more quickly and efficiently. To achieve such a well-functioning data mesh, you must first establish the high-level architectural components and organizational roles that this document describes. This document is part of a series which describes how to implement a data mesh on Google Cloud. It assumes that you have read and are familiar with the concepts described in Build a modern, distributed Data Mesh with Google Cloud. The series has the following parts: Architecture and functions in a data mesh (this document) Design a self-service data platform for a data mesh Build data products in a data mesh Discover and consume data products in a data mesh In this series, the data mesh that's described is internal to an organization. Although it's possible to extend a data mesh architecture to provide data products to third parties, this extended approach is outside the scope of this document. Extending a data mesh involves additional considerations beyond just the usage within an organization. Architecture The following key terms are used to define the architectural components which are described in this series: Data product: A data product is a logical container or grouping of one or more related data resources. Data resource: A data resource is a physical asset in a storage system which holds structured data or stores a query that yields structured data. Data attribute: A data attribute is a field or element of a data resource. The following diagram provides an overview of the key architectural components in a data mesh implemented on Google Cloud. The preceding diagram shows the following: Central services enable the creation and management of data products, including organizational policies that affect the data mesh participants, access controls (through Identity and Access Management groups), and infrastructure-specific artifacts, such as commitments and reservations. Examples of infrastructure that facilitates the functioning of the data mesh are described in Create platform components and solutions. Central services primarily supply the Data Catalog for all the data products in the data mesh and the discovery mechanism for potential customers of these products. Data domains expose subsets of their data as data products through well-defined data consumption interfaces. These data products could be a table, view, structured file, topic, or stream. In BigQuery, it would be a dataset, and in Cloud Storage, it would be a folder or bucket. Different types of interfaces can be exposed as a data product. An example of an interface is a BigQuery view over a BigQuery table. The types of interfaces most commonly used for analytical purposes are discussed in Build data products in a data mesh. Data mesh reference implementation You can find a reference implementation of this architecture in the data-mesh-demo repository.
The Terraform scripts that are used in the reference implementation demonstrate data mesh concepts and are not intended for production use. By running these scripts, you'll learn how to do the following: Separate product definitions from the underlying data. Create Data Catalog templates for describing product interfaces. Tag product interfaces with these templates. Grant permissions to the product consumers. For the product interfaces, the reference implementation creates and uses the following interface types: Authorized views over BigQuery tables. Data streams based on Pub/Sub topics. For further details, refer to the README file in the repository. Functions in a data mesh For a data mesh to operate well, you must define clear roles for the people who perform tasks within the data mesh. Ownership is assigned to team archetypes, or functions. These functions hold the core user journeys for people who work in the data mesh. To clearly describe user journeys, they have been assigned to user roles. These user roles can be split and combined based on the circumstances of each enterprise. You don't need to map the roles directly with employees or teams in your organization. A data domain is aligned with a business unit (BU), or a function within an enterprise. Common examples of business domains might be the mortgage department in a bank, or the customer, distribution, finance, or HR departments of an enterprise. Conceptually, there are two domain-related functions in a data mesh: the data producer teams and the data consumer teams. It's important to understand that a single data domain is likely to serve both functions at once. A data domain team produces data products from data that it owns. The team also consumes data products for business insight, and to produce derived-data products for the use of other domains. In addition to the domain-based functions, a data mesh also has a set of functions that are performed by centralized teams within the organization. These central teams enable the operation of the data mesh by providing cross-domain oversight, services, and governance. They reduce the operational burden for data domains in producing and consuming data products, and facilitate the cross-domain relationships that are required for the data mesh to operate. This document only describes functions that have a data mesh-specific role. There are several other roles that are required in any enterprise, regardless of the architecture being employed for the platform. However, these other roles are out of scope for this document. The four main functions in a data mesh are as follows: Data domain-based producer teams: Create and maintain data products over their lifecycle. These teams are often referred to as the data producers. Data domain-based consumer teams: Discover data products and use them in various analytic applications. These teams might consume data products to create new data products. These teams are often referred to as the data consumers. Central data governance team: Defines and enforces data governance policies among data producers, ensuring high data quality and data trustworthiness for consumers. This team is often referred to as the data governance team. Central self-service data infrastructure platform team: Provides a self-service data platform for data producers. This team also provides the tooling for central data discovery and data product observability that both data consumers and data producers use. This team is often referred to as the data platform team. 
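To make the producer and consumer functions more concrete, the following Terraform sketch shows one way a producer team might publish a BigQuery authorized view as a product interface and grant a consumer group read access to it. This is a hedged illustration only; all project, dataset, table, and group names are hypothetical, and the data-mesh-demo repository remains the authoritative reference implementation.

# Dataset that holds the product interface (the authorized view).
resource "google_bigquery_dataset" "orders_product" {
  project    = "sales-domain-example"          # hypothetical producer project
  dataset_id = "sales_orders_product"
  location   = "US"
}

# The product interface: a view over a private source table.
resource "google_bigquery_table" "orders_v1" {
  project    = "sales-domain-example"
  dataset_id = google_bigquery_dataset.orders_product.dataset_id
  table_id   = "orders_v1"

  view {
    query          = "SELECT order_id, order_date, total FROM `sales-domain-example.sales_raw.orders`"   # hypothetical source table
    use_legacy_sql = false
  }
}

# Authorize the view to read the private source dataset without exposing that dataset.
resource "google_bigquery_dataset_access" "authorize_view" {
  project    = "sales-domain-example"
  dataset_id = "sales_raw"                     # hypothetical private dataset owned by the producer
  view {
    project_id = "sales-domain-example"
    dataset_id = google_bigquery_dataset.orders_product.dataset_id
    table_id   = google_bigquery_table.orders_v1.table_id
  }
}

# Grant a consumer group read access to the product dataset only.
resource "google_bigquery_dataset_access" "consumer_read" {
  project        = "sales-domain-example"
  dataset_id     = google_bigquery_dataset.orders_product.dataset_id
  role           = "READER"
  group_by_email = "data-consumers@example.com"   # hypothetical consumer group
}

Consumers then query the view in the product dataset, while the raw dataset stays private to the producer domain.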
An optional extra function to consider is that of a Center of Excellence (COE) for the data mesh. The purpose of the COE is to provide management of the data mesh. The COE is also the designated arbitration team that resolves any conflicts raised by any of the other functions. This function is useful for helping to connect the other four functions. Data domain-based producer team Typically, data products are built on top of a physical repository of data (either single or multiple data warehouses, lakes, or streams). An organization needs traditional data platform roles to create and maintain these physical repositories. However, these traditional data platform roles are not typically the people who create the data product. To create data products from these physical repositories, an organization needs a mix of data practitioners, such as data engineers and data architects. The following table lists all the domain-specific user roles that are needed in data producer teams. Role Responsibilities Required skills Desired outcomes Data product owner Acts as the primary business point of contact for the data product. Is accountable for the definitions, policies, business decisions, and application of business rules for the data exposed as products. Acts as a point of contact for business questions. As such, the owner represents the data domain when meeting with the data consumer teams or the centralized teams (data governance and data infrastructure platform). Data analytics Data architecture Product management The data product is driving value for consumers. There's robust management of the lifecycle of the data product, including deciding when to retire a product or release a new version. There's coordination of universal data elements with other data domains. Data product technical lead Acts as the primary technical point of contact for the product. Is accountable for implementing and publishing product interfaces. Acts as a point of contact for technical questions. As such, the lead represents the data domain when meeting with the data consumer teams or the centralized teams (data governance and data infrastructure platform). Works with the data governance team to define and implement data mesh standards in the organization. Works with the data platform team to help to develop the platform in tandem with the technical needs that production and consumption generate. Data engineering Data architecture Software engineering The data product meets business requirements and adheres to the data mesh technical standards. The data consumer teams use the data product and it appears in the results generated by the data product discovery experience. The use of the data product can be analyzed (for example, the number of daily queries). Data product support Acts as the point of contact for production support. Is accountable for maintaining product Service Level Agreement (SLA). Software engineering Site reliability engineering (SRE) The data product is meeting the stated SLA. Data consumer questions about the use of the data product are addressed and resolved. Subject matter expert (SME) for data domain Represents the data domain when meeting with SMEs from other data domains to establish data element definitions and boundaries that are common across the organization. Helps new data producers within the domain define their product scopes. 
Data analytics Data architecture Collaborates with other SMEs from across data domains to establish and maintain comprehensive understanding of the data in the organization and the data models that it uses. Facilitates the creation of interoperable data products which match the overall data model of the organization. There are clear standards for data product creation and lifecycle management. The data products from the data domain provide business value. Data owner Is accountable for a content area. Is responsible for data quality and accuracy. Approves access requests. Contributes to data product documentation. Any skill, but must have full knowledge of the business function. Any skill, but must have full knowledge of what the data means and business rules around it. Any skill, but must be able to determine the best possible resolution to data quality issues. Data that cross-functional areas use is accurate. Stakeholders understand the data. Data use is in accordance with usage policies. Data domain-based consumer teams In a data mesh, the people that consume a data product are typically data users who are outside of the data product domain. These data consumers use a central data catalog to find data products that are relevant to their needs. Because it's possible that more than one data product might meet their needs, data consumers can end up subscribing to multiple data products. If data consumers are unable to find the required data product for their use case, it's their responsibility to consult directly with the data mesh COE. During that consultation, data consumers can raise their data needs and seek advice on how to get those needs met by one or more domains. When looking for a data product, data consumers are looking for data that help them achieve various use cases such as persistent analytics dashboards and reports, individual performance reports, and other business performance metrics. Alternatively, data consumers might be looking for data products that can be used in artificial intelligence (AI) and machine learning (ML) use cases. To achieve these various use cases, data consumers require a mix of data practitioner personas, which are as follows: Role Responsibilities Required skills Desired outcomes Data analyst Searches for, identifies, evaluates, and subscribes to single-domain or cross-domain data products to create a foundation for business intelligence frameworks to operate. Analytics engineering Business analytics Provides clean, curated, and aggregated datasets for data visualization specialists to consume. Creates best practices for how to use data products. Aggregates and curates cross-domain datasets to meet the analytical needs of their domain. Application developer Develops an application framework for consumption of data across one or more data products, either inside or outside of the domain. Application development Data engineering Creates, serves, and maintains applications that consume data from one or more data products. Creates data applications for end-user consumption. Data visualization specialist Translates data engineering and data analysis jargon into information which business stakeholders can understand. Defines processes to populate business reports from data products. Creates and monitors reports that describe strategic business goals. Collaborates with engineers within the organization to design datasets which are aggregated from consumed data products. Implements reporting solutions. 
Translates high-level business requirements into technical requirements. Requirement analysis Data visualization Provides valid, accurate datasets and reports to end users. Business requirements are met through the dashboards and reports that are developed. Data scientist Searches for, identifies, evaluates, and subscribes to data products for data science use cases. Extracts data products and metadata from multiple data domains. Trains predictive models and deploys those models to optimize domain business processes. Provides feedback on possible data curation and data annotation techniques for multiple data domains. ML engineering Analytics engineering Creates predictive and prescriptive models to optimize business processes. Model training and model deployment are done in a timely manner. Central data governance team The data governance team enables data producers and consumers to safely share, aggregate, and compute data in a self-service manner, without introducing compliance risks to the organization. To meet the compliance requirements of the organization, the data governance team is a mix of data practitioner personas, which are as follows: Role Responsibilities Required skills Desired outcomes Data governance specialist Provides oversight and coordinates a single view of compliance. Recommends mesh-wide privacy policies on data collection, data protection, and data retention. Ensures that data stewards know about policies and can access them. Informs and consults on the latest data privacy regulations as required. Informs and consults on security questions as required. Performs internal audits and shares regular reports on risk and control plans. Legal SME Security SME Data privacy SME Privacy regulations in policies are up to date. Data producers are informed of policy changes in a timely manner. Management receives timely and regular reports on policy compliance for all published data products. Data steward (sits within each domain) Codifies the policies created by the data governance specialists. Defines and updates the taxonomy that an organization uses for annotating data products, data resources, and data attributes with discovery and privacy-related metadata. Coordinates across various stakeholders inside and outside of their respective domain. Ensures that the data products in their domain meet the metadata standards and privacy policies of the organization. Provides guidance to the data governance engineers on how to design and prioritize data platform features. Data architecture Data stewardship Required metadata has been created for all data products in the domain, and the data products for the domain are described accurately. The self-service data infrastructure platform team is building the right tooling to automate metadata annotations of data products, policy creation and verification. Data governance engineer Develops tools which auto-generate data annotations and can be used by all data domains, and then uses these annotations for policy enforcement. Implements monitoring to check the consistency of annotations and alerts when problems are found. Ensures that employees in the organization are informed of the status of data products by implementing alerts, reporting, and dashboards. Software engineering Data governance annotations are automatically verified. Data products comply with data governance policies. Data product violations are detected in a timely fashion. 
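To illustrate the kind of artifact that the data governance team codifies, the following hedged Terraform sketch creates a Data Catalog taxonomy and policy tag that a data steward might define for privacy annotations, and grants a group fine-grained read access to tagged columns. The project ID, names, and group are hypothetical and are not taken from any published blueprint.

```hcl
# Hypothetical governance project and taxonomy; a sketch only.
resource "google_data_catalog_taxonomy" "privacy" {
  project                = "governance-prod-project"
  region                 = "us-central1"
  display_name           = "Data privacy"
  description            = "Privacy classifications applied to data product columns"
  activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"]
}

resource "google_data_catalog_policy_tag" "pii" {
  taxonomy     = google_data_catalog_taxonomy.privacy.id
  display_name = "PII"
  description  = "Columns that contain personally identifiable information"
}

# Only an approved group can read columns annotated with the PII policy tag.
resource "google_data_catalog_policy_tag_iam_member" "pii_readers" {
  policy_tag = google_data_catalog_policy_tag.pii.name
  role       = "roles/datacatalog.categoryFineGrainedReader"
  member     = "group:pii-readers@example.com"
}
```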
Central self-service data infrastructure platform team The self-service data infrastructure platform team, or just the data platform team, is responsible for creating a set of data infrastructure components. Distributed data domain teams use these components to build and deploy their data products. The data platform team also promotes best practices and introduces tools and methodologies which help to reduce cognitive load for distributed teams when adopting new technology. Platform infrastructure should provide easy integration with operations toolings for global observability, instrumentation, and compliance automation. Alternatively, the infrastructure should facilitate such integration to set up distributed teams for success. The data platform team has a shared responsibility model that it uses with the distributed domain teams and the underlying infrastructure team. The model shows what responsibilities are expected from the consumers of the platform, and what platform components the data platform team supports. As the data platform is itself an internal product, the platform doesn't support every use case. Instead, the data platform team continuously releases new services and features according to a prioritized roadmap. The data platform team might have a standard set of components in place and in development. However, data domain teams might choose to use a different, unique set of components if the needs of a team don't align with those provided by the data platform. If data domain teams choose a different approach, they must ensure that any platform infrastructure that they build and maintain complies with organization-wide policies and guardrails for security and data governance. For data platform infrastructure that is developed outside of the central data platform team, the data platform team might either choose to co-invest or embed their own engineers into the domain teams. Whether the data platform team chooses to co-invest or embed engineers might depend on the strategic importance of the data domain platform infrastructure to the organization. By staying involved in the development of infrastructure by data domain teams, organizations can provide the alignment and technical expertise required to repackage any new platform infrastructure components that are in development for future reuse. You might need to limit autonomy in the early stages of building a data mesh if your initial goal is to get approval from stakeholders for scaling up the data mesh. However, limiting autonomy risks creating a bottleneck at the central data platform team. This bottleneck can inhibit the data mesh from scaling. So, any centralization decisions should be taken carefully. For data producers, making their technical choices from a limited set of available options might be preferable to evaluating and choosing from an unlimited list of options themselves. Promoting autonomy of data producers doesn't equate to creating an ungoverned technology landscape. Instead, the goal is to drive compliance and platform adoption by striking the right balance between freedom of choice and standardization. Finally, a good data platform team is a central source of education and best practices for the rest of the company. Some of the most impactful activities that we recommend central data platform teams undertake are as follows: Fostering regular architectural design reviews for new functional projects and proposing common ways of development across development teams. 
Sharing knowledge and experiences, and collectively defining best practices and architectural guidelines. Ensuring engineers have the right tools in place to validate and check for common pitfalls like issues with code, bugs, and performance degradations. Organizing internal hackathons so development teams can surface their requirements for internal tooling needs. Example roles and responsibilities for the central data platform team might include the following: Role Responsibilities Required skills Desired outcomes Data platform product owner Creates an ecosystem of data infrastructure and solutions to empower distributed teams to build data products. Lowers the technical barrier to entry, ensures that governance is embedded, and minimizes collective technical debt for data infrastructure. Interfaces with leadership, data domain owners, data governance team, and technology platform owners to set the strategy and roadmap for the data platform. Data strategy and operations Product management Stakeholder management Establishes an ecosystem of successful data products. There are robust numbers of data products in production. There's a reduction in time-to-minimum viable product and time-to-production for data product releases. A portfolio of generalized infrastructure and components is in place that addresses the most common needs for data producers and data consumers. There's a high satisfaction score from data producers and data consumers. Data platform engineer Creates reusable and self-service data infrastructure and solutions for data ingestion, storage, processing, and consumption through templates, deployable architecture blueprints, developer guides, and other documentation. Also creates Terraform templates, data pipeline templates, container templates, and orchestration tooling. Develops and maintains central data services and frameworks to standardize processes for cross-functional concerns such as data sharing, pipelines orchestration, logging and monitoring, data governance, continuous integration and continuous deployment (CI/CD) with embedded guardrails, security and compliance reporting, and FinOps reporting. Data engineering Software engineering There are standardized, reusable infrastructure components and solutions for data producers to do data ingestion, storage, processing, curation, and sharing, along with necessary documentation. Releases of components, solutions, and end-user documentation align with the roadmap. Users report a high level of customer satisfaction. There are robust shared services for all functions in the data mesh. There is high uptime for shared services. The support response time is short. Platform and security engineer (a representative from the central IT teams such as networking and security, who is embedded in the data platform team) Ensures that data platform abstractions are aligned to enterprise-wide technology frameworks and decisions. Supports engineering activities by building the technology solutions and services in their core team that are necessary for data platform delivery. Infrastructure engineering Software engineering Platform infrastructure components are developed for the data platform. Releases of components, solutions, and end-user documentation align with the roadmap. The central data platform engineers report a high level of customer satisfaction. The health of the infrastructure platform improves for components that are used by the data platform (for example, logging). Underlying technology components have a high uptime. 
When data platform engineers have issues, the support response time is short. Enterprise architect Aligns data mesh and data platform architecture with enterprise-wide technology and data strategy. Provides advisory and design authority and assurance for both data platform and data product architectures to ensure alignment with enterprise-level strategy and best-practices. Data architecture Solution iteration and problem solving Consensus building A successful ecosystem is built that includes robust numbers of data products for which there is a reduction in time to both create minimum viable products and to release those products into production. Architecture standards have been established for critical data journeys, such as by establishing common standards for metadata management and for data sharing architecture. Additional considerations for a data mesh There are multiple architectural options for an analytics data platform, each option with different prerequisites. To enable each data mesh architecture, we recommend that your organization follow the best practices described in this section. Acquire platform funding As explained in the blog post, "If you want to transform start with finance", the platform is never finished: it's always operating based on a prioritized roadmap. Therefore, the platform must be funded as a product, not as a project with a fixed endpoint. The first adopter of the data mesh bears the cost. Usually, the cost is shared between the business that forms the first data domain to initiate the data mesh, and the central technology team, which generally houses the central data platform team. To convince finance teams to approve funding for the central platform, we recommend that you make a business case for the value of the centralized platform being realized over time. That value comes from reimplementing the same components in individual delivery teams. Define the minimum viable platform for the data mesh To help you to define the minimum viable platform for the data mesh, we recommend that you pilot and iterate with one or more business cases. For your pilot, find use cases that are needed, and where there's a consumer ready to adopt the resulting data product. The use cases should already have funding to develop the data products, but there should be a need for input from technical teams. Make sure the team that is implementing the pilot understands the data mesh operating model as follows: The business (that is, the data producer team) owns the backlog, support, and maintenance. The central team defines the self-service patterns and helps the business build the data product, but passes the data product to the business to run and own when it's complete. The primary goal is to prove the business operating model (domains produce, domains consume). The secondary goal is to prove the technical operating model (self-service patterns developed by the central team). Because platform team resources are limited, use the trunk and branch teams model to pool knowledge but still allow for the development of specialized platform services and products. We also recommend that you do the following: Plan roadmaps rather than letting services and features evolve organically. Define minimum viable platform capabilities spanning ingest, storage, processing, analysis, and ML. Embed data governance in every step, not as a separate workstream. Put in place the minimum capabilities across governance, platform, value-stream, and change management. 
Minimum capabilities are those which meet 80% of business cases. Plan for the co-existence of the data mesh with an existing data platform Many organizations that want to implement a data mesh likely already have an existing data platform, such as a data lake, data warehouse, or a combination of both. Before implementing a data mesh, these organizations must make a plan for how their existing data platform can evolve as the data mesh grows. These organizations should consider factors such as the following: The data resources that are most effective on the data mesh. The assets that must stay within the existing data platform. Whether assets have to move, or whether they can be maintained on the existing platform and still participate in the data mesh. What's next To learn more about designing and operating a cloud topology, see the Google Cloud Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback
Architecture_decision_records_overview.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://cloud.google.com/architecture/architecture-decision-records
Date Scraped: 2025-02-23T11:47:45.989Z

Content:
Home Docs Cloud Architecture Center Send feedback Architecture decision records overview Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-16 UTC To help explain why your infrastructure or application teams make certain design choices, you can use architecture decision records (ADRs). This document explains when and how to use ADRs as you build and run applications on Google Cloud. An ADR captures the key options available, the main requirements that drive a decision, and the design decisions themselves. You often store ADRs in a Markdown file close to the codebase relevant to that decision. If someone needs to understand the background of a specific architectural decision, such as why you use a regional Google Kubernetes Engine (GKE) cluster, they can review the ADR and then the associated code. ADRs can also help you run more reliable applications and services. The ADR helps you understand your current state and troubleshoot when there's a problem. ADRs also build a collection of engineering decisions to help future decision choices and deployments. When to use ADRs You use ADRs to track the key areas that you think are important to your deployment. The following categories might be in your ADRs: Specific product choices, such as the choice between Pub/Sub and Cloud Tasks. Specific product options and configurations, such as the use of regional GKE clusters with Multi Cluster Ingress for highly available applications. General architectural guidance, such as best practices for Dockerfile manifests. Some specific examples that might prompt you to create an ADR could be for the following choices: How and why do you set up high availability (HA) for your Cloud SQL instances? How do you approach uptime of GKE clusters? Do you use regional clusters? Do you use canary releases? Why or why not? As you evaluate the products to use, the ADR helps to explain each of your decisions. You can revisit the ADR as the team evolves and learns more about the stack and additional decisions are made or adjusted. If you make adjustments, include the previous decision and why a change is made. This history keeps a record of how the architecture has changed as business needs evolve, or where there are new technical requirements or available solutions. The following prompts help you to know when to create ADRs: When you have a technical challenge or question and there's no existing basis for a decision, such as a recommended solution, standard operation procedure, blueprint, or codebase. When you or your team offers a solution that's not documented somewhere accessible to the team. When there are two or more engineering options and you want to document your thoughts and reasons behind the selection. When you write an ADR, it helps to have potential readers in mind. The primary readers are members of the team that work on the technology covered by the ADR. Broader groups of potential readers of the ADR might include adjacent teams who want to understand your decisions, such as architecture and security teams. You should also consider that the application might change owners or include new team members. An ADR helps new contributors understand the background of the engineering choices that were made. An ADR also makes it easier to plan future changes. Format of an ADR A typical ADR includes a set of chapters. Your ADRs should help capture what you feel is important to the application and your organization. 
Some ADRs might be one page long, whereas others require a longer explanation. The following example ADR outline shows how you might format an ADR to include the information that's important for your environment: Authors and the team Context and problem you want to solve Functional and non-functional requirements you want to address Potential critical user journey (CUJ) the decision impacts Overview of the key options Your decision and reasons behind the accepted choice To help keep a record of decisions, you might include a timestamp for each decision to show when the choice was made. How ADRs work ADRs work best when engineers, developers, or application owners can easily access the information they contain. When they have a question about why something is done a certain way, they can look at the ADR to find the answer. To make the ADR accessible, some teams host it in a central wiki that's also accessible to business owners, instead of in their source control repository. When someone has a question about a specific engineering decision, the ADR is there to provide answers. ADRs work well in the following scenarios: Onboarding: New team members can easily learn about the project, and they can review the ADR if they have questions while they're learning a new codebase. Evolution of the architecture: If there's a transfer of technology stack between teams, the new owners can review past decisions to understand the current state. The team can also review past decisions when there's a new technology available to them. The ADR can help teams avoid a repeat of the same discussion points, and it can help provide historical context when teams revisit topics. Sharing best practices: Teams can align on best practices across the organization when ADRs detail why certain decisions were made and alternatives were decided against. An ADR is often written in Markdown to keep it lightweight and text-based. Markdown files can be included in the source control repository with your application code. Store your ADRs close to your application code, ideally in the same version control system. As you make changes to your ADR, you can review previous versions from source control as needed. You could also use another medium like a shared Google Doc or an internal wiki. These alternate locations might be more accessible to users not part of the ADR's team. Another option is to create your ADR in a source control repository, but mirror key decisions into a more accessible wiki. What's next The Cloud Architecture Center and the Architecture Framework provide additional guidance and best practices. For some areas that might be in your ADR, see Patterns for scalable and resilient apps. Send feedback
Architecture_for_connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://cloud.google.com/architecture/hadoop/architecture-for-connecting-visualization-software-to-hadoop-on-google-cloud
Date Scraped: 2025-02-23T11:52:47.061Z

Content:
Home Docs Cloud Architecture Center Send feedback Architecture for connecting visualization software to Hadoop on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-17 UTC This document is intended for operators and IT administrators who want to set up secure data access for data analysts using business intelligence (BI) tools such as Tableau and Looker. It doesn't offer guidance on how to use BI tools, or interact with Dataproc APIs. This document is the first part of a series that helps you build an end-to-end solution to give data analysts secure access to data using BI tools. This document explains the following concepts: A proposed architecture. A high-level view of the component boundaries, interactions, and networking in the architecture. A high-level view of authentication and authorization in the architecture. The second part of this series, Connecting your visualization software to Hadoop on Google Cloud, shows you how to set up the architecture on Google Cloud. Architecture The following diagram shows the architecture and the flow of events that are explained in this document. For more information about the products that are used in this architecture, see Architectural components. Client applications connect through Java Database Connectivity (JDBC) to a single entry point on a Dataproc cluster. The entry point is provided by Apache Knox, which is installed on the cluster master node. The communication with Apache Knox is secured by TLS. Apache Knox delegates authentication through an authentication provider to a system such as an LDAP directory. After authentication, Apache Knox routes the user request to one or more backend clusters. You define the routes and configuration as custom topologies. A data processing service, such as Apache Hive, listens on the selected backend cluster and takes the request. Apache Ranger intercepts the request and determines whether processing should go ahead, depending on if the user has valid authorization. If the validation succeeds, the data processing service analyzes the request and returns the results. Architectural components The architecture is made up of the following components. The managed Hadoop platform: Dataproc. Dataproc is Google Cloud-managed Apache Spark, an Apache Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Dataproc is the platform that underpins the solution described in this document. User authentication and authorization: Apache Knox. Apache Knox acts as a single HTTP access point for all the underlying services in a Hadoop cluster. Apache Knox is designed as a reverse proxy with pluggable providers for authentication, authorization, audit, and other services. Clients send requests to Knox, and, based on the request URL and parameters, Knox routes the request to the appropriate Hadoop service. Because Knox is an entry point that transparently handles client requests and hides complexity, it's at the center of the architecture. Apache Ranger. Apache Ranger provides fine-grained authorization for users to perform specific actions on Hadoop services. It also audits user access and implements administrative actions. Processing engines: Apache Hive. Apache Hive is data warehouse software that enables access and management of large datasets residing in distributed storage using SQL. 
Apache Hive parses the SQL queries, performs semantic analysis, and builds a directed acyclic graph (DAG) of stages for a processing engine to execute. In the architecture shown in this document, Hive acts as the translation point between the user requests. It can also act as one of the multiple processing engines. Apache Hive is ubiquitous in the Hadoop ecosystem and it opens the door to practitioners familiar with standard SQL to perform data analysis. Apache Tez. Apache Tez is the processing engine in charge of executing the DAGs prepared by Hive and of returning the results. Apache Spark. Apache Spark is a unified analytics engine for large-scale data processing that supports the execution of DAGs. The architecture shows the Spark SQL component of Apache Spark to demonstrate the flexibility of the approach presented in this document. One restriction is that Spark SQL doesn't have official Ranger plugin support. For this reason, authorization must be done through the coarse-grained ACLs in Apache Knox instead of using the fine-grained authorization that Ranger provides. Components overview In the following sections, you learn about each of the components in more detail. You also learn how the components interact with each other. Client applications Client applications include tools that can send requests to an HTTPS REST endpoint, but don't necessarily support the Dataproc Jobs API. BI tools such as Tableau and Looker have HiveServer2 (HS2) and Spark SQL JDBC drivers that can send requests through HTTP. This document assumes that client applications are external to Google Cloud, executing in environments such as an analyst workstation, on-premises, or on another cloud. So, the communication between the client applications and Apache Knox must be secured with either a CA-signed or self-signed SSL/TLS certificate. Entry point and user authentication The proxy clusters are one or more long-lived Dataproc clusters that host the Apache Knox Gateway. Apache Knox acts as the single entry point for client requests. Knox is installed on the proxy cluster master node. Knox performs SSL termination, delegates user authentication, and forwards the request to one of the backend services. In Knox, each backend service is configured in what is referred to as a topology. The topology descriptor defines the following actions and permissions: How authentication is delegated for a service. The URI the backend service forwards requests to. Simple per-service authorization access control lists (ACLs). Knox lets you integrate authentication with enterprise and cloud identity management systems. To configure user authentication for each topology, you can use authentication providers. Knox uses Apache Shiro to authenticate against a local demonstration ApacheDS LDAP server by default. Alternatively, you can opt for Knox to use Kerberos. In the preceding diagram, as an example, you can see an Active Directory server hosted on Google Cloud outside of the cluster. For information on how to connect Knox to an enterprise authentication services such as an external ApacheDS server or Microsoft Active Directory (AD), see the Apache Knox user guide and the Google Cloud Managed Active Directory and Federated AD documentation. For the use case in this document, as long as Apache Knox acts as the single gatekeeper to the proxy and backend clusters, you don't have to use Kerberos. Processing engines The backend clusters are the Dataproc clusters hosting the services that process user requests. 
Dataproc clusters can autoscale the number of workers to meet the demand from your analyst team without manual reconfiguration. We recommend that you use long-lived Dataproc clusters in the backend. Long-lived Dataproc clusters allow the system to serve requests from data analysts without interruption. Alternatively, if the cluster only needs to serve requests for a brief time, you can use job-specific clusters, which are also known as ephemeral clusters. Ephemeral clusters can also be more cost effective than long-lived clusters. If you use ephemeral clusters, to avoid modifying the topology configuration, make sure that you recreate the clusters in the same zone and under the same name. Using the same zone and name lets Knox route the requests transparently using the master node internal DNS name when you recreate the ephemeral cluster. HS2 is responsible for servicing user queries made to Apache Hive. HS2 can be configured to use various execution engines such as the Hadoop MapReduce engine, Apache Tez, and Apache Spark. In this document, HS2 is configured to use the Apache Tez engine. Spark SQL is a module of Apache Spark that includes a JDBC/ODBC interface to execute SQL queries on Apache Spark. In the preceding architectural diagram, Spark SQL is shown as an alternative option for servicing user queries. A processing engine, either Apache Tez or Apache Spark, calls the YARN Resource Manager to execute the engine DAG on the cluster worker machines. Finally, the cluster worker machines access the data. For storing and accessing the data in a Dataproc cluster, use the Cloud Storage connector, not Hadoop Distributed File System (HDFS). For more information about the benefits of using the Cloud Storage connector, see the Cloud Storage connector documentation. The preceding architectural diagram shows one Apache Knox topology that forwards requests to Apache Hive, and another that forwards requests to Spark SQL. The diagram also shows other topologies that can forward requests to services in the same or different backend clusters. The backend services can process different datasets. For example, one Hive instance can offer access to personally identifiable information (PII) for a restricted set of users while another Hive instance can offer access to non-PII data for broader consumption. User authorization Apache Ranger can be installed on the backend clusters to provide fine-grained authorization for Hadoop services. In the architecture, a Ranger plugin for Hive intercepts the user requests and determines whether a user is allowed to perform an action over Hive data, based on Ranger policies. As an administrator, you define the Ranger policies using the Ranger Admin page. We strongly recommend that you configure Ranger to store these policies in an external Cloud SQL database. Externalizing the policies has two advantages: It makes them persistent in case any of the backend clusters are deleted. It enables the policies to be centrally managed for all groups or for custom groups of backend clusters. To assign Ranger policies to the correct user identities or groups, you must configure Ranger to sync the identities from the same directory that Knox is connected to. By default, the user identities used by Ranger are taken from the operating system. Apache Ranger can also externalize its audit logs to Cloud Storage to make them persistent. Ranger uses Apache Solr as its indexing and querying engine to make the audit logs searchable. 
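As a rough sketch of a backend cluster that runs Hive with Ranger and Solr, the following Terraform example enables the corresponding Dataproc optional components. The project, network, machine types, Cloud Storage URIs, and Cloud KMS key are placeholders, and the dataproc:ranger.* property names reflect the Dataproc Ranger component documentation; verify them against the current documentation before use.

```hcl
# Placeholder long-lived backend cluster with Hive, Ranger, and Solr.
resource "google_dataproc_cluster" "backend_hive" {
  name    = "backend-cluster"
  project = "hadoop-backend-project"
  region  = "us-central1"

  cluster_config {
    gce_cluster_config {
      subnetwork       = "internal-subnet"
      internal_ip_only = true
      tags             = ["backend-cluster"]
    }

    master_config {
      num_instances = 1
      machine_type  = "n2-standard-4"
    }

    worker_config {
      num_instances = 2
      machine_type  = "n2-standard-4"
    }

    software_config {
      image_version       = "2.2-debian12"
      optional_components = ["RANGER", "SOLR"]

      override_properties = {
        # Ranger admin password, encrypted with Cloud KMS and stored in Cloud Storage.
        "dataproc:ranger.admin.password.uri" = "gs://example-secrets/ranger-admin-password.encrypted"
        "dataproc:ranger.kms.key.uri"        = "projects/hadoop-backend-project/locations/global/keyRings/example-ring/cryptoKeys/example-key"
        # External Cloud SQL instance that stores the Ranger policies.
        "dataproc:ranger.cloud-sql.instance.connection.name" = "hadoop-backend-project:us-central1:ranger-policies"
      }
    }

    # The Ranger optional component requires Kerberos on the cluster.
    security_config {
      kerberos_config {
        kms_key_uri                 = "projects/hadoop-backend-project/locations/global/keyRings/example-ring/cryptoKeys/example-key"
        root_principal_password_uri = "gs://example-secrets/kerberos-root-password.encrypted"
      }
    }
  }
}
```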
Unlike HiveServer2, Spark SQL doesn't have official Ranger plugin support, so you need to use the coarse-grained ACLs available in Apache Knox to manage its authorization. To use these ACLs, add the LDAP identities that are allowed to use each service, such as Spark SQL or Hive, in the corresponding topology descriptor for the service. For more information, see Best practices to use Apache Ranger on Dataproc. High availability Dataproc provides a high availability (HA) mode. In this mode, there are several machines configured as master nodes, one of which is in active state. This mode allows uninterrupted YARN and HDFS operations despite any single-node failures or reboots. However, if the master node fails, the single entry point external IP changes, so you must reconfigure the BI tool connection. When you run Dataproc in HA mode, you should configure an external HTTP(S) load balancer as the entry point. The load balancer routes requests to an unmanaged instance group that bundles your cluster master nodes. As an alternative to a load balancer, you can apply a round-robin DNS technique instead, but there are drawbacks to this approach. These configurations are outside of the scope of this document. Cloud SQL also provides a high availability mode, with data redundancy made possible by synchronous replication between master instances and standby instances located in different zones. If there is an instance or zone failure, this configuration reduces downtime. However, note that an HA-configured instance is charged at double the price of a standalone instance. Cloud Storage acts as the datastore. For more information about Cloud Storage availability, see storage class descriptions. Networking In a layered network architecture, the proxy clusters are in a perimeter network. The backend clusters are in an internal network protected by firewall rules that only let through incoming traffic from the proxy clusters. The proxy clusters are isolated from the other clusters because they are exposed to external requests. Firewall rules only allow a restricted set of source IP addresses to access the proxy clusters. In this case, the firewall rules only allow requests that come from the addresses of your BI tools. The configuration of layered networks is outside of the scope of this document. In Connecting your visualization software to Hadoop on Google Cloud, you use the default network throughout the tutorial. For more information on layered network setups, see the best practices for VPC network security and the overview and examples on how to configure multiple network interfaces. What's next Read the second part of the series, Connecting your visualization software to Hadoop on Google Cloud, and learn how to set up the architecture on Google Cloud. Set up the architecture on Google Cloud using these Terraform configuration files. Read about the best practices for using Apache Ranger on Dataproc. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback
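For illustration, the following Terraform sketch shows firewall rules along the lines described in the Networking section: one rule admits HTTPS traffic from the BI tool address range to the Knox proxy nodes, and another admits traffic from the proxy subnet to the backend cluster. The project, network names, tags, and IP ranges are placeholders, and port 8443 is assumed as the Knox gateway port.

```hcl
# Placeholder networks, tags, and ranges for the layered network described above.
resource "google_compute_firewall" "bi_tools_to_knox_proxy" {
  name      = "allow-bi-tools-to-knox-proxy"
  project   = "hadoop-network-project"
  network   = "perimeter-network"
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["8443"] # assumed Apache Knox gateway port
  }

  source_ranges = ["203.0.113.0/24"] # example egress range of the BI tools
  target_tags   = ["knox-proxy"]
}

resource "google_compute_firewall" "proxy_to_backend" {
  name      = "allow-knox-proxy-to-backend"
  project   = "hadoop-network-project"
  network   = "internal-network"
  direction = "INGRESS"

  allow {
    protocol = "tcp"
  }

  source_ranges = ["10.128.10.0/24"] # placeholder proxy cluster subnet range
  target_tags   = ["backend-cluster"]
}
```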
Architecture_patterns.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/architecture-patterns
Date Scraped: 2025-02-23T11:50:26.218Z

Content:
Home Docs Cloud Architecture Center Send feedback Architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-29 UTC The documents in this series discuss networking architecture patterns that are designed based on the required communication models between applications residing in Google Cloud and in other environments (on-premises, in other clouds, or both). These patterns should be incorporated into the overall organization landing zone architecture, which can include multiple networking patterns to address the specific communication and security requirements of different applications. The documents in this series also discuss the different design variations that can be used with each architecture pattern. The following networking patterns can help you to meet communication and security requirements for your applications: Mirrored pattern Meshed pattern Gated patterns Gated egress Gated ingress Gated egress and gated ingress Handover pattern Previous arrow_back Design considerations Next Mirrored pattern arrow_forward Send feedback
Architecture_using_Cloud_Functions.txt
ADDED
@@ -0,0 +1,5 @@
URL: https://cloud.google.com/architecture/serverless-functions-blueprint
Date Scraped: 2025-02-23T11:56:14.777Z

Content:
Home Docs Cloud Architecture Center Send feedback Deploy a secured serverless architecture using Cloud Run functions Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-06 UTC Serverless architectures let you develop software and services without provisioning or maintaining servers. You can use serverless architectures to build applications for a wide range of services. This document provides opinionated guidance for DevOps engineers, security architects, and application developers on how to help protect serverless applications that use Cloud Run functions (2nd gen). The document is part of a security blueprint that consists of the following: A GitHub repository that contains a set of Terraform configurations and scripts. A guide to the architecture, design, and security controls that you implement with the blueprint (this document). Though you can deploy this blueprint without deploying the Google Cloud enterprise foundations blueprint first, this document assumes that you've already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. The architecture that's described in this document helps you to layer additional controls onto your foundation to help protect your serverless applications. To help define key security controls that are related to serverless applications, the Cloud Security Alliance (CSA) published Top 12 Critical Risks for Serverless Applications. The security controls used in this blueprint are designed to address the risks that are relevant to the various use cases described in this document. Serverless use cases The blueprint supports the following use cases: Deploying a serverless architecture using Cloud Run functions (this document) Deploying a serverless architecture using Cloud Run Differences between Cloud Run functions and Cloud Run include the following: Cloud Run functions is triggered by events, such as changes to data in a database or the receipt of a message from a messaging system such as Pub/Sub. Cloud Run is triggered by requests, such as HTTP requests. Cloud Run functions is limited to a set of supported runtimes. You can use Cloud Run with any programming language. Cloud Run functions manages containers and the infrastructure that controls the web server or language runtime so that you can focus on your code. Cloud Run provides the flexibility for you to run these services yourself, so that you have control of the container configuration. For more information about differences between Cloud Run and Cloud Run functions, see Choosing a Google Cloud compute option. Architecture This blueprint uses a Shared VPC architecture, in which Cloud Run functions is deployed in a service project and can access resources that are located in other VPC networks. The following diagram shows a high-level architecture, which is further described in the example architectures that follow it. The architecture that's shown in the preceding diagram uses a combination of the following Google Cloud services and features: Cloud Run functions lets you run functions as a service and manages the infrastructure on your behalf. By default, this architecture deploys Cloud Run functions with an internal IP address only and without access to the public internet. The triggering event is the event that triggers Cloud Run functions. As further described in the example architectures, this can be a Cloud Storage event, a scheduled interval, or a change in BigQuery. 
Artifact Registry stores the source containers for your Cloud Run functions application. Shared VPC lets you connect a Serverless VPC Access connector in your service project to the host project. You deploy a separate Shared VPC network for each environment (production, non-production, and development). This networking design provides network isolation between the different environments. A Shared VPC network lets you centrally manage network resources in a common network while delegating administrative responsibilities for the service project. The Serverless VPC Access connector connects your serverless application to your VPC network using Serverless VPC Access. Serverless VPC Access helps to ensure that requests from your serverless application to the VPC network aren't exposed to the internet. Serverless VPC Access lets Cloud Run functions communicate with other services, storage systems, and resources that support VPC Service Controls. You can configure Serverless VPC Access in the Shared VPC host project or a service project. By default, this blueprint deploys Serverless VPC access in the Shared VPC host project to align with the Shared VPC model of centralizing network configuration resources. For more information, see Comparison of configuration methods. VPC Service Controls creates a security perimeter that isolates your Cloud Run functions services and resources by setting up authorization, access controls, and secure data exchange. This perimeter is designed to isolate your application and managed services by setting up additional access controls and monitoring, and to separate your governance of Google Cloud from the application. Your governance includes key management and logging. The consumer service is the application that is acted on by Cloud Run functions. The consumer service can be an internal server or another Google Cloud service such as Cloud SQL. Depending on your use case, this service might be behind Cloud Next Generation Firewall, in another subnet, in the same service project as Cloud Run functions, or in another service project. Secure Web Proxy is designed to secure the egress web traffic, if required. It enables flexible and granular policies based on cloud identities and web applications. This blueprint uses Secure Web Proxy for granular access policies to egress web traffic during the build phase of Cloud Run functions. The blueprint adds an allowed list of URLs to the Gateway Security Policy Rule. Cloud NAT provides outbound connection to the internet, if required. Cloud NAT supports source network address translation (SNAT) for compute resources without public IP addresses. Inbound response packets use destination network address translation (DNAT). You can disable Cloud NAT if Cloud Run functions doesn't require access to the internet. Cloud NAT implements the egress network policy that is attached to Secure Web Proxy. Cloud Key Management Service (Cloud KMS) stores the customer-managed encryption keys (CMEKs) that are used by the services in this blueprint, including your serverless application, Artifact Registry, and Cloud Run functions. Secret Manager stores the Cloud Run functions secrets. The blueprint mounts secrets as a volume to provide a higher level of security than passing secrets as environment variables. Identity and Access Management (IAM) and Resource Manager help to restrict access and isolate resources. The access controls and resource hierarchy follow the principle of least privilege. 
Cloud Logging collects all the logs from Google Cloud services for storage and retrieval by your analysis and investigation tools. Cloud Monitoring collects and stores performance information and metrics about Google Cloud services. Example architecture with a serverless application using Cloud Storage The following diagram shows how you can run a serverless application that accesses an internal server when a particular event occurs in Cloud Storage. In addition to the services described in Architecture, this example architecture uses a combination of the following Google Cloud services and features: Cloud Storage emits an event when any cloud resource, application, or user creates a web object on a bucket. Eventarc routes events from different resources. Eventarc encrypts events in transit and at rest. Pub/Sub queues events that are used as the input and a trigger for Cloud Run functions. Virtual Private Cloud (VPC) firewall rules control the flow of data into the subnet that hosts your resources, such as an internal server. The internal server runs on Compute Engine or Google Kubernetes Engine and hosts your internal application. If you deploy the Secure Cloud Run functions with Internal Server Example, you deploy an Apache server with a Hello World HTML page. This example simulates access to an internal application that runs VMs or containers. Example architecture with Cloud SQL The following diagram shows how you can run a serverless application that accesses a Cloud SQL hosted service at a regular interval that is defined in Cloud Scheduler. You can use this architecture when you must gather logs, aggregate data, and so on. In addition to the services described in Architecture, this example architecture uses a combination of the following Google Cloud services and features: Cloud Scheduler emits events on a regular basis. Pub/Sub queues events that are used as the input and a trigger for Cloud Run functions. Virtual Private Cloud (VPC) firewall rules control the flow of data into the subnet that hosts your resources, such as company data stored in Cloud SQL. Cloud SQL Auth Proxy controls access to Cloud SQL. Cloud SQL hosts a service that is peered to the VPC network and that the serverless application can access. If you deploy the Secure Cloud Run functions with Cloud SQL example, you deploy a MySQL database with a sample database. Example architecture with BigQuery data warehouse The following diagram shows how you can run a serverless application that is triggered when an event occurs in BigQuery (for example, data is added or a table is created). In addition to the services described in Architecture, this example architecture uses a combination of the following Google Cloud services and features: BigQuery hosts a data warehouse. If you deploy the Secure Cloud Run functions triggered by BigQuery example, you deploy a sample BigQuery dataset and table. Eventarc triggers Cloud Run functions when a particular event occurs in BigQuery. Organization structure Resource Manager lets you logically group resources by project, folder, and organization. The following diagram shows a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or testing), and development. This resource hierarchy is based on the hierarchy that's described in the enterprise foundations blueprint. You deploy the projects that the blueprint specifies into the following folders: Common, Production, Non-production, and Dev. 
The following sections describe this diagram in more detail. Folders You use folders to isolate your production environment and governance services from your non-production and testing environments. The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint. Folder Description Bootstrap Contains resources required to deploy the enterprise foundations blueprint. Common Contains centralized services for the organization, such as the security project. Production Contains projects that have cloud resources that have been tested and are ready to be used by customers. In this blueprint, the Production folder contains the service project and host project. Non-production Contains projects that have cloud resources that are currently being tested and staged for release. In this blueprint, the Non-production folder contains the service project and host project. Development Contains projects that have cloud resources that are currently being developed. In this blueprint, the Development folder contains the service project and host project. You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see Organization structure. For other folder structures, see Decide a resource hierarchy for your Google Cloud landing zone. Projects You isolate resources in your environment using projects. The following table describes the projects that are needed within the organization. You can change the names of these projects, but we recommend that you maintain a similar project structure. Project Description Shared VPC host project This project includes the firewall ingress rules and any resources that have internal IP addresses (as described in Connect to a VPC network). When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys Serverless VPC Access connector, Cloud NAT, and Cloud Secure Web Proxy. Shared VPC service project This project includes your serverless application, Cloud Run functions, and the Serverless VPC Access connector. You attach the service project to the host project so that the service project can participate in the Shared VPC network. When you apply the Terraform code, you specify the name of this project. The blueprint deploys Cloud Run functions and services needed for your use case, such as Cloud SQL, Cloud Scheduler, Cloud Storage, or BigQuery. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys Cloud KMS. If you use the Secure Serverless Harness module in the serverless blueprint for Cloud Run functions, Artifact Registry is also deployed. Security project This project includes your security-specific services, such as Cloud KMS and Secret Manager. The default name of the security project is prj-bu1-p-sec. If you deploy this blueprint after you deploy the security foundations blueprint, the security project is created in addition to the enterprise foundations blueprint's secrets project (prj-bu1-p-env-secrets). For more information about the enterprise foundations blueprint projects, see Projects. If you deploy multiple instances of this blueprint without the enterprise foundations blueprint, each instance has its own security project. 
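As a simplified sketch of how the host and service projects described above can be wired together (not the blueprint's actual module), the following Terraform example enables Shared VPC on the host project, attaches the service project, and creates the Serverless VPC Access connector in the host project. The project IDs, network name, and CIDR range are placeholders.

```hcl
# Placeholder project IDs; the blueprint's Terraform takes these as variables.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "prj-p-shared-host"
}

resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "prj-p-serverless-svc"
}

# Serverless VPC Access connector created in the host project, as described above.
resource "google_vpc_access_connector" "serverless" {
  name          = "serverless-connector"
  project       = google_compute_shared_vpc_host_project.host.project
  region        = "us-central1"
  network       = "vpc-p-shared"    # placeholder Shared VPC network
  ip_cidr_range = "10.8.0.0/28"     # placeholder /28 range for the connector
  min_instances = 2
  max_instances = 3
}
```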
Mapping roles and groups to projects You must give different user groups in your organization access to the projects that make up the serverless architecture. The following table describes the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment. Group Project Roles Serverless administrator [email protected] Service project Cloud Run functions Admin (roles/cloudfunctions.admin) Compute Network User (roles/compute.networkUser) Compute Network Viewer (roles/compute.networkViewer) Cloud Run Admin (roles/run.admin) Serverless security administrator [email protected] Security project Artifact Registry Reader (roles/artifactregistry.reader) Cloud Run functions Admin (roles/cloudfunctions.admin) Cloud KMS Viewer (roles/cloudkms.viewer) Cloud Run Viewer (roles/run.viewer) Cloud Run functions developer [email protected] Security project Artifact Registry Writer (roles/artifactregistry.writer) Cloud Run functions Developer (roles/cloudfunctions.developer) Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter) Cloud Run functions user [email protected] Shared VPC service project Cloud Run functions Invoker (roles/cloudfunctions.invoker) Security controls This section discusses the security controls in Google Cloud that you use to help secure your serverless architecture. The key security principles to consider are as follows: Secure access according to the principle of least privilege, giving principals only the privileges required to perform tasks. Secure network connections through trust boundary design, which includes network segmentation, organization policies, and firewall policies. Secure configuration for each of the services. Identify any compliance or regulatory requirements for the infrastructure that hosts serverless workloads and assign a risk level. Configure sufficient monitoring and logging to support audit trails for security operations and incident management. Build system controls When you deploy your serverless application, you use Artifact Registry to store the container images and binaries. Artifact Registry supports CMEK so that you can encrypt the repository using your own encryption keys. Network and firewall rules Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters. You create firewall rules that deny all egress, except for specific TCP port 443 connections from restricted.googleapis.com special domain names. Using the restricted.googleapis.com domain has the following benefits: It helps to reduce your network attack surface by using Private Google Access when workloads communicate with Google APIs and services. It ensures that you use only services that support VPC Service Controls. In addition, you create a DNS record to resolve *.googleapis.com to restricted.googleapis.com. For more information, see Configuring Private Google Access. Perimeter controls As shown in the Architecture section, you place the resources for the serverless application in a separate VPC Service Controls security perimeter. This perimeter helps reduce the broad impact from a compromise of systems or services. However, this security perimeter doesn't apply to the Cloud Run functions build process when Cloud Build automatically builds your code into a container image and pushes that image to Artifact Registry. 
In this scenario, create an ingress rule for the Cloud Build service account in the service perimeter. Access policy To help ensure that only specific principals (users or services) can access resources and data, you enable IAM groups and roles. To help ensure that only specific resources can access your projects, you enable an access policy for your Google organization. For more information, see Access level attributes. Service accounts and access controls Service accounts are accounts for applications or compute workloads instead of for individual end users. To implement the principle of least privilege and the principle of separation of duties, you create service accounts with granular permissions and limited access to resources. The service accounts are as follows: A Cloud Run functions service account (cloudfunction_sa) that has the following roles: Compute Network Viewer (roles/compute.networkViewer) Eventarc Event Receiver (roles/eventarc.eventReceiver) Cloud Run Invoker (roles/run.invoker) Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) For more information, see Allow Cloud Run functions to access a secret. Cloud Run functions uses this service account to grant permission to specific Pub/Sub topics only and to restrict the Eventarc event system from Cloud Run functions compute resources in Example architecture with a serverless application using Cloud Storage and Example architecture with BigQuery data warehouse. A Serverless VPC Access connector account (gcp_sa_vpcaccess) that has the Compute Network User (roles/compute.networkUser) role. A second Serverless VPC Access connector account (cloud_services) that has the Compute Network User (roles/compute.networkUser) role. These service accounts for the Serverless VPC Access connector are required so that the connector can create the firewall ingress and egress rules in the host project. For more information, see Grant permissions to service accounts in your service projects. A service identity to run Cloud Run functions (cloudfunction_sa) that has the Serverless VPC Access User (roles/vpcaccess.user) and the Service Account User (roles/iam.serviceAccountUser) roles. A service account for the Google APIs (cloud_services_sa) that has the Compute Network User (roles/compute.networkUser) role to run internal Google processes on your behalf. A service identity for Cloud Run functions (cloud_serverless_sa) that has the Artifact Registry Reader (roles/artifactregistry.reader) role. This service account provides access to Artifact Registry and CMEKs. A service identity for Eventarc (eventarc_sa) that has the Cloud KMS CryptoKey Decrypter (roles/cloudkms.cryptoKeyDecrypter) and the Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter) roles. A service identity for Artifact Registry (artifact_sa) that has the Cloud KMS CryptoKey Decrypter (roles/cloudkms.cryptoKeyDecrypter) and the Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter) roles. Key management To validate integrity and help protect your data at rest, you use CMEKs with Artifact Registry, Cloud Run functions, Cloud Storage, and Eventarc. CMEKs provide you with greater control over your encryption keys. The following CMEKs are used: A software key for Artifact Registry that attests the code for your serverless application. An encryption key to encrypt the container images that Cloud Run functions deploys. An encryption key for Eventarc events that encrypts the messaging channel at rest. 
An encryption key to help protect data in Cloud Storage. When you apply the Terraform configuration, you specify the CMEK location, which determines the geographical location where the keys are stored. You must ensure that your CMEKs are in the same region as your resources. By default, CMEKs are rotated every 30 days. Secret management Cloud Run functions supports Secret Manager to store the secrets that your serverless application might require. These secrets can include API keys and database usernames and passwords. To expose the secret as a mounted volume, use the service_configs object variables in the main module. When you deploy this blueprint with the enterprise foundations blueprint, you must add your secrets to the secrets project before you apply the Terraform code. The blueprint will grant the Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) role to the Cloud Run functions service account. For more information, see Using secrets. Organization policies This blueprint adds constraints to the organization policy constraints that the enterprise foundations blueprint uses. For more information about the constraints that the enterprise foundations blueprint uses, see Organization policy constraints. The following table describes the additional organization policy constraints that are defined in the Secure Cloud Run functions Security module of this blueprint. Policy constraint Description Recommended value Allowed ingress settings (Cloud Run functions) constraints/cloudfunctions.allowedIngressSettings Allow ingress traffic only from internal services or the external HTTP(S) load balancer. The default is ALLOW_ALL. ALLOW_INTERNAL_ONLY Require VPC Connector (Cloud Run functions) constraints/cloudfunctions.requireVPCConnector Require specifying a Serverless VPC Access connector when deploying a function. When this constraint is enforced, functions must specify a Serverless VPC Access connector. The default is false. true Allowed VPC Connector egress settings (Cloud Run functions) constraints/cloudfunctions.allowedVpcConnectorEgressSettings Require all egress traffic for Cloud Run functions to use a Serverless VPC Access connector. The default is PRIVATE_RANGES_ONLY. ALL_TRAFFIC For an illustrative Terraform sketch of these constraints, see the example at the end of this document. Operational controls You can enable logging and Security Command Center Premium tier features such as security health analytics and threat detection. These controls help you to do the following: Monitor data access. Ensure that proper auditing is in place. Support security operations and incident management capabilities of your organization. Logging To help you meet auditing requirements and get insight into your projects, you configure Google Cloud Observability with data access logs for the services that you want to track. Deploy Cloud Logging in the projects before you apply the Terraform code to ensure that the blueprint can configure logging for the firewall, load balancer, and VPC network. After you deploy the blueprint, we recommend that you configure the following: Create an aggregated log sink across all projects. Add CMEKs to your logging sink. For all services within the projects, ensure that your logs include information about data writes and administrative access. For more information about logging best practices, see Detective controls. Monitoring and alerts After you deploy the blueprint, you can set up alerts to notify your security operations center (SOC) that a security event has occurred. 
For example, you can use alerts to let your security analysts know when a permission was changed on an IAM role. For more information about configuring Security Command Center alerts, see Setting up finding notifications. The Cloud Run functions Monitoring dashboard helps you to monitor the performance and health of your Cloud Run functions. It provides a variety of metrics and logs, which you can use to identify and troubleshoot problems. The dashboard also includes a number of features that can help you to improve the performance of your functions, such as the ability to set alerts and quotas. For more information, see Monitoring Cloud Run functions. To export alerts, see the following documents: Introduction to alerting Cloud Monitoring metric export Debugging and troubleshooting You can run Connectivity Tests to help you debug network configuration issues between Cloud Run functions and the resources within your subnet. Connectivity Tests simulates the expected path of a packet and provides details about the connectivity, including resource-to-resource connectivity analysis. Connectivity Tests isn't enabled by the Terraform code; you must set it up separately. For more information, see Create and run Connectivity Tests. Terraform deployment modes The following table describes the ways that you can deploy this blueprint, and which Terraform modules apply for each deployment mode. Deployment mode Terraform modules Deploy this blueprint after deploying the enterprise foundations blueprint (recommended). This option deploys the resources for this blueprint in the same VPC Service Controls perimeter that is used by the enterprise foundations blueprint. For more information, see How to customize Foundation v3.0.0 for Secure Cloud Run functions deployment. This option also uses the secrets project that you created when you deployed the enterprise foundations blueprint. Use these Terraform modules: secure-cloud-function-core secure-serverless-net secure-web-proxy Install this blueprint without installing the security foundations blueprint. This option requires that you create a VPC Service Controls perimeter. Use these Terraform modules: secure-cloud-function-core secure-serverless-harness secure-serverless-net secure-cloud-function-security secure-web-proxy secure-cloud-function Bringing it all together To implement the architecture described in this document, do the following: Review the README for the blueprint to ensure that you meet all the prerequisites. In your testing environment, to see the blueprint in action, deploy one of the examples. These examples match the architecture examples described in Architecture. As part of your testing process, consider doing the following: Use Security Command Center to scan the projects against common compliance requirements. Replace the sample application with a real application (for example 1) and run through a typical deployment scenario. Work with the application engineering and operations teams in your enterprise to test their access to the projects and to verify whether they can interact with the solution in the way that they would expect. Deploy the blueprint into your environment. What's next Review the Google Cloud enterprise foundations blueprint for a baseline secure environment. To see the details of the blueprint, read the Terraform configuration README. To deploy a serverless application using Cloud Run, see Deploy a secured serverless architecture using Cloud Run. Send feedback
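The three organization policy constraints listed in the table earlier in this document are applied by the blueprint's Secure Cloud Run functions Security module. The following is a rough, stand-alone sketch of the same constraints using the google_project_organization_policy resource; the project ID is a placeholder, and the blueprint's own module interface differs.

resource "google_project_organization_policy" "allowed_ingress" {
  # Only allow ingress from internal services or a load balancer.
  project    = "example-service-project"
  constraint = "cloudfunctions.allowedIngressSettings"

  list_policy {
    allow {
      values = ["ALLOW_INTERNAL_ONLY"]
    }
  }
}

resource "google_project_organization_policy" "require_vpc_connector" {
  # Functions must be deployed with a Serverless VPC Access connector.
  project    = "example-service-project"
  constraint = "cloudfunctions.requireVPCConnector"

  boolean_policy {
    enforced = true
  }
}

resource "google_project_organization_policy" "connector_egress" {
  # Route all egress traffic through the Serverless VPC Access connector.
  project    = "example-service-project"
  constraint = "cloudfunctions.allowedVpcConnectorEgressSettings"

  list_policy {
    allow {
      values = ["ALL_TRAFFIC"]
    }
  }
}

Depending on your governance model, constraints like these can also be set at the folder or organization level instead of on a single project.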
Architecture_using_Cloud_Run.txt
ADDED
@@ -0,0 +1,5 @@
1
+
URL: https://cloud.google.com/architecture/serverless-blueprint
2
+
Date Scraped: 2025-02-23T11:56:16.852Z
3
+
4
+
Content:
5
+
Home Docs Cloud Architecture Center Send feedback Deploy a secured serverless architecture using Cloud Run Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-03-10 UTC This content was last updated in March 2023, and represents the status quo as of the time it was written. Google's security policies and systems may change going forward, as we continually improve protection for our customers. Serverless architectures let you develop software and services without provisioning or maintaining servers. You can use serverless architectures to build applications for a wide range of services. This document provides opinionated guidance for DevOps engineers, security architects, and application developers on how to help protect serverless applications that use Cloud Run. The document is part of a security blueprint that consists of the following: A GitHub repository that contains a set of Terraform configurations and scripts. A guide to the architecture, design, and security controls that you implement with the blueprint (this document). Though you can deploy this blueprint without deploying the Google Cloud enterprise foundations blueprint first, this document assumes that you've already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. The architecture that's described in this document helps you to layer additional controls onto your foundation to help protect your serverless applications. To help define key security controls that are related to serverless applications, the Cloud Security Alliance (CSA) published Top 12 Critical Risks for Serverless Applications. The security controls used in this blueprint are designed to address the risks that are relevant to the various use cases described in this document. Serverless use cases The blueprint supports the following use cases: Deploying a serverless architecture using Cloud Run (this document) Deploying a serverless architecture using Cloud Run functions Differences between Cloud Run functions and Cloud Run include the following: Cloud Run functions is triggered by events, such as changes to data in a database or the receipt of a message from a messaging system such as Pub/Sub. Cloud Run is triggered by requests, such as HTTP requests. Cloud Run functions is limited to a set of supported runtimes. You can use Cloud Run with any programming language. Cloud Run functions manages containers and the infrastructure that controls the web server or language runtime so that you can focus on your code. Cloud Run provides the flexibility for you to run these services yourself, so that you have control of the container configuration. For more information about differences between Cloud Run and Cloud Run functions, see Choosing a Google Cloud compute option. Architecture This blueprint lets you run serverless applications on Cloud Run with Shared VPC. We recommend that you use Shared VPC because it centralizes network policy and control for all networking resources. In addition, Shared VPC is deployed in the enterprise foundations blueprint. Recommended architecture: Cloud Run with a Shared VPC network The following image shows how you can run your serverless applications in a Shared VPC network. 
The architecture that's shown in the preceding diagram uses a combination of the following Google Cloud services and features: An external Application Load Balancer receives the data that serverless applications require from the internet and forwards it to Cloud Run. The external Application Load Balancer is a Layer 7 load balancer. Google Cloud Armor acts as the web application firewall to help protect your serverless applications against denial of service (DoS) and web attacks. Cloud Run lets you run application code in containers and manages the infrastructure on your behalf. In this blueprint, the Internal and Cloud Load Balancing ingress setting restricts access to Cloud Run so that Cloud Run will accept requests only from the external Application Load Balancer. The Serverless VPC Access connector connects your serverless application to your VPC network using Serverless VPC Access. Serverless VPC Access helps to ensure that requests from your serverless application to the VPC network aren't exposed to the internet. Serverless VPC Access lets Cloud Run communicate with other services, storage systems, and resources that support VPC Service Controls. By default, you create the Serverless VPC Access connector in the service project. You can create the Serverless VPC Access connector in the host project by specifying true for the connector_on_host_project input variable when you run the Secure Cloud Run Network module. For more information, see Comparison of configuration methods. Virtual Private Cloud (VPC) firewall rules control the flow of data into the subnet that hosts your resources, such as a company server hosted on Compute Engine, or company data stored in Cloud Storage. VPC Service Controls creates a security perimeter that isolates your Cloud Run services and resources by setting up authorization, access controls, and secure data exchange. This perimeter is designed to protect incoming content, to isolate your application by setting up additional access controls and monitoring, and to separate your governance of Google Cloud from the application. Your governance includes key management and logging. Shared VPC lets you connect the Serverless VPC Access connector in your service project to the host project. Cloud Key Management Service (Cloud KMS) stores the customer-managed encryption keys (CMEKs) that are used by the services in this blueprint, including your serverless application, Artifact Registry, and Cloud Run. Identity and Access Management (IAM) and Resource Manager help to restrict access and isolate resources. The access controls and resource hierarchy follow the principle of least privilege. Alternative architecture: Cloud Run without a Shared VPC network If you're not using a Shared VPC network, you can deploy Cloud Run and your serverless application in a VPC Service Control perimeter without a Shared VPC network. You might implement this alternative architecture if you're using a hub-and-spoke topology. The following image shows how you can run your serverless applications without Shared VPC. The architecture that's shown in the preceding diagram uses a combination of Google Cloud services and features that's similar to those that are described in the previous section, Recommended architecture: Cloud Run with a shared VPC. Organization structure You group your resources so that you can manage them and separate your development and testing environments from your production environment. 
Resource Manager lets you logically group resources by project, folder, and organization. The following diagram shows a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or testing), and development. This resource hierarchy is based on the hierarchy that's described in the enterprise foundations blueprint. You deploy the projects that the blueprint specifies into the following folders: Common, Production, Non-production, and Dev. The following sections describe this diagram in more detail. Folders You use folders to isolate your production environment and governance services from your non-production and testing environments. The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint. Folder Description Bootstrap Contains resources required to deploy the enterprise foundations blueprint. Common Contains centralized services for the organization, such as the security project. Production Contains projects that have cloud resources that have been tested and are ready to use by customers. In this blueprint, the Production folder contains the service project and host project. Non-production Contains projects that have cloud resources that are currently being tested and staged for release. In this blueprint, the Non-production folder contains the service project and host project. Dev Contains projects that have cloud resources that are currently being developed. In this blueprint, the Dev folder contains the service project and host project. You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see Organization structure. For other folder structures, see Decide a resource hierarchy for your Google Cloud landing zone. Projects You isolate resources in your environment using projects. The following table describes the projects that are needed within the organization. You can change the names of these projects, but we recommend that you maintain a similar project structure. Project Description Host project This project includes the firewall ingress rules and any resources that have internal IP addresses (as described in Connect to a VPC network). When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys the services. Service project This project includes your serverless application, Cloud Run, and the Serverless VPC Access connector. You attach the service project to the host project so that the service project can participate in the Shared VPC network. When you apply the Terraform code, you specify the name of this project. The blueprint deploys Cloud Run, Google Cloud Armor, Serverless VPC Access connector, and the load balancer. Security project This project includes your security-specific services, such as Cloud KMS and Secret Manager. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys Cloud KMS. If you use the Secure Cloud Run Harness module, Artifact Registry is also deployed. If you deploy this blueprint after you deploy the security foundations blueprint, this project is the secrets project created by the enterprise foundations blueprint. For more information about the enterprise foundations blueprint projects, see Projects. 
If you deploy multiple instances of this blueprint without the enterprise foundations blueprint, each instance has its own security project. Mapping roles and groups to projects You must give different user groups in your organization access to the projects that make up the serverless architecture. The following table describes the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment. Group Project Roles Serverless administrator [email protected] Service project roles/run.admin roles/compute.networkViewer compute.networkUser Serverless security administrator [email protected] Security project roles/run.viewer roles/cloudkms.viewer roles/artifactregistry.reader Cloud Run developer [email protected] Security project roles/run.developer roles/artifactregistry.writer roles/cloudkms.cryptoKeyEncrypter Cloud Run user [email protected] Service project roles/run.invoker Security controls This section discusses the security controls in Google Cloud that you use to help secure your serverless architecture. The key security principles to consider are as follows: Secure access according to the principle of least privilege, giving entities only the privileges required to perform their tasks. Secure network connections through segmentation design, organization policies, and firewall policies. Secure configuration for each of the services. Understand the risk levels and security requirements for the environment that hosts your serverless workloads. Configure sufficient monitoring and logging to allow detection, investigation, and response. Security controls for serverless applications You can help to protect your serverless applications using controls that protect traffic on the network, control access, and encrypt data. Build system controls When you deploy your serverless application, you use Artifact Registry to store the container images and binaries. Artifact Registry supports CMEK so that you can encrypt the repository using your own encryption keys. SSL traffic To support HTTPS traffic to your serverless application, you configure an SSL certificate for your external Application Load Balancer. By default, you use a self-signed certificate that you can change to a managed certificate after you apply the Terraform code. For more information about installing and using managed certificates, see Using Google-managed SSL certificates. Network and firewall rules Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters. You create firewall rules that deny all egress, except for specific TCP port 443 connections from restricted.googleapis.com special domain names. Using the restricted.googleapis.com domain has the following benefits: It helps reduce your network attack surface by using Private Google Access when workloads communicate with Google APIs and services. It ensures that you use only services that support VPC Service Controls. For more information, see Configuring Private Google Access. Perimeter controls As shown in the recommended-architecture diagram, you place the resources for the serverless application in a separate perimeter. This perimeter helps protect the serverless application from unintended access and data exfiltration. 
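The deny-all-egress posture with an exception for restricted.googleapis.com that is described in the Network and firewall rules section is created by the blueprint's network module. The following minimal Terraform sketch shows one way to express equivalent rules by hand; the network and resource names are placeholders, 199.36.153.4/30 is the documented address range for restricted.googleapis.com, and a private DNS zone that maps *.googleapis.com to restricted.googleapis.com typically accompanies these rules.

resource "google_compute_firewall" "deny_all_egress" {
  # Lowest-priority rule that blocks all outbound traffic by default.
  name               = "deny-all-egress"
  network            = "example-shared-vpc"
  direction          = "EGRESS"
  priority           = 65535
  destination_ranges = ["0.0.0.0/0"]

  deny {
    protocol = "all"
  }
}

resource "google_compute_firewall" "allow_restricted_googleapis_egress" {
  # Higher-priority exception for TCP 443 to the restricted.googleapis.com range.
  name               = "allow-restricted-googleapis-egress"
  network            = "example-shared-vpc"
  direction          = "EGRESS"
  priority           = 1000
  destination_ranges = ["199.36.153.4/30"]

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }
}

resource "google_dns_managed_zone" "googleapis" {
  # Private zone so that *.googleapis.com resolves only inside the VPC network.
  name       = "googleapis"
  dns_name   = "googleapis.com."
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = "projects/example-host-project/global/networks/example-shared-vpc"
    }
  }
}

resource "google_dns_record_set" "restricted_googleapis_a" {
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "restricted.googleapis.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["199.36.153.4", "199.36.153.5", "199.36.153.6", "199.36.153.7"]
}

resource "google_dns_record_set" "googleapis_wildcard_cname" {
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["restricted.googleapis.com."]
}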
Access policy To help ensure that only specific identities (users or services) can access resources and data, you enable IAM groups and roles. To help ensure that only specific resources can access your projects, you enable an access policy for your Google organization. For more information, see Access level attributes. Identity-Aware Proxy If your environment already includes Identity-Aware Proxy (IAP), you can configure the external Application Load Balancer to use IAP to authorize traffic for your serverless application. IAP lets you establish a central authorization layer for your serverless application so that you can use application-level access controls instead of relying on network-level firewalls. To enable IAP for your application, in the loadbalancer.tf file, set iap_config.enable to true. For more information about IAP, see Identity-Aware Proxy overview. Service accounts and access controls Service accounts are identities that Google Cloud can use to run API requests on your behalf. To implement separation of duties, you create service accounts that have different roles for specific purposes. The service accounts are as follows: A Cloud Run service account (cloud_run_sa) that has the following roles: roles/run.invoker roles/secretmanager.secretAccessor For more information, see Allow Cloud Run to access a secret. A Serverless VPC Access connector account (gcp_sa_vpcaccess) that has the roles/compute.networkUser role. A second Serverless VPC Access connector account (cloud_services) that has the roles/compute.networkUser role. These service accounts for the Serverless VPC Access connector are required so that the connector can create the firewall ingress and egress rules in the host project. For more information, see Grant permissions to service accounts in your service projects. A service identity to run Cloud Run (run_identity_services) that has the roles/vpcaccess.user role. A service agent for the Google APIs (cloud_services_sa) that has the roles/editor role. This service account lets Cloud Run communicate with the Serverless VPC Access connector. A service identity for Cloud Run (serverless_sa) that has the roles/artifactregistry.reader role. This service account provides access to Artifact Registry and CMEK encryption and decryption keys. Key management You use the CMEK keys to help protect your data in Artifact Registry and in Cloud Run. You use the following encryption keys: A software key for Artifact Registry that attests the code for your serverless application. An encryption key to encrypt the container images that Cloud Run deploys. When you apply the Terraform configuration, you specify the CMEK location, which determines the geographical location where the keys are stored. You must ensure that your CMEK keys are in the same region as your resources. By default, CMEK keys are rotated every 30 days. Secret management Cloud Run supports Secret Manager to store the secrets that your serverless application might require. These secrets can include API keys and database usernames and passwords. To expose the secret as a mounted volume, use the volume_mounts and volumes variables in the main module. When you deploy this blueprint with the enterprise foundations blueprint, you must add your secrets to the secrets project before you apply the Terraform code. The blueprint will grant the Secret Manager Secret Accessor role to the Cloud Run service account. For more information, see Use secrets. 
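The volume_mounts and volumes variables mentioned in the Secret management section belong to the blueprint's main module. Underneath, they correspond roughly to a Secret Manager secret that is mounted into the Cloud Run service as a file. The following stand-alone sketch is illustrative only: the project IDs, service account, secret name, and image path are placeholders, and the blueprint adds further controls (CMEK, the Serverless VPC Access connector, and ingress restrictions) that are omitted here.

resource "google_secret_manager_secret" "db_password" {
  # Secrets live in the security (secrets) project.
  project   = "example-security-project"
  secret_id = "db-password"

  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_iam_member" "cloud_run_sa_access" {
  # Grant the Cloud Run service account read access to the secret payload.
  project   = google_secret_manager_secret.db_password.project
  secret_id = google_secret_manager_secret.db_password.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:cloud-run-sa@example-service-project.iam.gserviceaccount.com"
}

resource "google_cloud_run_v2_service" "app" {
  project  = "example-service-project"
  name     = "example-app"
  location = "europe-west1"
  ingress  = "INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER"

  template {
    containers {
      image = "europe-west1-docker.pkg.dev/example-service-project/example-repo/app:latest"

      # The secret appears to the application as the file /secrets/db-password.
      volume_mounts {
        name       = "app-secrets"
        mount_path = "/secrets"
      }
    }

    volumes {
      name = "app-secrets"
      secret {
        # Full resource name because the secret lives in a different project.
        secret = google_secret_manager_secret.db_password.id
        items {
          version = "latest"
          path    = "db-password"
        }
      }
    }
  }
}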
Organization policies This blueprint adds constraints to the organization policy constraints. For more information about the constraints that the enterprise foundations blueprint uses, see Organization policy constraints. The following table describes the additional organization policy constraints that are defined in the Secure Cloud Run Security module of this blueprint. Policy constraint Description Recommended value constraints/run.allowedIngress Allow ingress traffic only from internal services or the external Application Load Balancer. internal-and-cloud-load-balancing constraints/run.allowedVPCEgress Require a Cloud Run service's revisions to use a Serverless VPC Access connector, and ensure that the revisions' VPC egress settings are set to allow private ranges only. private-ranges-only Operational controls You can enable logging and Security Command Center Premium tier features such as security health analytics and threat detection. These controls help you to do the following: Monitor who is accessing your data. Ensure that proper auditing is in place. Support the ability of your incident management and operations teams to respond to issues that might occur. Logging To help you meet auditing requirements and get insight into your projects, you configure the Google Cloud Observability with data logs for the services that you want to track. Deploy Cloud Logging in the projects before you apply the Terraform code to ensure that the blueprint can configure logging for the firewall, load balancer, and VPC network. After you deploy the blueprint, we recommend that you configure the following: Create an aggregated log sink across all projects. Select the appropriate region to store your logs. Add CMEK keys to your logging sink. For all services within the projects, ensure that your logs include information about data reads and writes, and ensure that they include information about what administrators access. For more information about logging best practices, see Detective controls. Monitoring and alerts After you deploy the blueprint, you can set up alerts to notify your security operations center (SOC) that a security incident might be occurring. For example, you can use alerts to let your security analysts know when a permission has changed in an IAM role. For more information about configuring Security Command Center alerts, see Setting up finding notifications. The Cloud Run Monitoring dashboard, which is part of the sample dashboard library, provides you with the following information: Request count Request latency Billable instance time Container CPU allocation Container memory allocation Container CPU utilization Container memory utilization For instructions on importing the dashboard, see Install sample dashboards. To export alerts, see the following documents: Introduction to alerting Cloud Monitoring metric export Debugging and troubleshooting You can run Connectivity Tests to help you debug network configuration issues between Cloud Run and the resources within your subnet. Connectivity Tests simulates the expected path of a packet and provides details about the connectivity, including resource-to-resource connectivity analysis. Connectivity Tests isn't enabled by the Terraform code; you must set it up separately. For more information, see Create and run Connectivity Tests. Detective controls This section describes the detective controls that are included in the blueprint. 
Google Cloud Armor and WAF You use an external Application Load Balancer and Google Cloud Armor to provide distributed denial of service (DDoS) protection for your serverless application. Google Cloud Armor is the web application firewall (WAF) included with Google Cloud. You configure the Google Cloud Armor rules described in the following table to help protect the serverless application. The rules are designed to help mitigate against OWASP Top 10 risks. Google Cloud Armor rule name ModSecurity rule name Remote code execution rce-v33-stable Local file include lfi-v33-stable Protocol attack protocolattack-v33-stable Remote file inclusion rfi-v33-stable Scanner detection scannerdetection-v33-stable Session fixation attack sessionfixation-v33-stable SQL injection sqli-v33-stable Cross-site scripting xss-v33-stable When these rules are enabled, Google Cloud Armor automatically denies any traffic that matches the rule. For more information about these rules, see Tune Google Cloud Armor preconfigured WAF rules. Security issue detection in Cloud Run You can detect potential security issues in Cloud Run using Recommender. Recommender can detect security issues such as the following: API keys or passwords that are stored in environment variables instead of in Secret Manager. Containers that include hard-coded credentials instead of using service identities. About a day after you deploy Cloud Run, Recommender starts providing its findings and recommendations. Recommender displays its findings and recommended corrective actions in the Cloud Run service list or the Recommendation Hub. Terraform deployment modes The following table describes the ways that you can deploy this blueprint, and which Terraform modules apply for each deployment mode. Deployment mode Terraform modules Deploy this blueprint after deploying the enterprise foundations blueprint (recommended). This option deploys the resources for this blueprint in the same VPC Service Controls perimeter that is used by the enterprise foundations blueprint. For more information, see How to customize Foundation v2.3.1 for Secured Serverless deployment. This option also uses the secrets project that you created when you deployed the enterprise foundations blueprint. Use these Terraform modules: secure-cloud-run-core secure-serverless-net secure-cloud-run-security secure-cloud-run Install this blueprint without installing the enterprise foundations blueprint. This option requires that you create a VPC Service Controls perimeter. Use these Terraform modules: secure-cloud-run-core secure-serverless-harness secure-serverless-net secure-cloud-run-security secure-cloud-run Bringing it all together To implement the architecture described in this document, do the following: Review the README for the blueprint and ensure that you meet all the prerequisites. Create an SSL certificate for use with the external Application Load Balancer. If you do not complete this step, the blueprint uses a self-signed certificate to deploy the load balancer, and your browser will display warnings about insecure connections when you attempt to access your serverless application. In your testing environment, deploy the Secure Cloud Run Example to see the blueprint in action. As part of your testing process, consider doing the following: Use Security Command Center to scan the projects against common compliance requirements. Replace the sample application with a real application and run through a typical deployment scenario. 
Work with the application engineering and operations teams in your enterprise to test their access to the projects and to verify whether they can interact with the solution in the way that they would expect. Deploy the blueprint into your environment. Compliance mappings To help define key security controls that are related to serverless applications, the Cloud Security Alliance (CSA) published Top 12 Critical Risks for Serverless Applications. The security controls used in this blueprint help you address most of these risks, as described in the following table. Risk Blueprint mitigation Your responsibility 1. Function event-data injection Google Cloud Armor and external Application Load Balancers help protect against OWASP Top 10, as described in OWASP Top 10 2021 mitigation options on Google Cloud Secure coding practices such as exception handling, as described in the OWASP Secure Coding Practices and Supply chain Levels for Software Artifacts (SLSA) 2. Broken authentication None IAP and Identity Platform to authenticate users to the service 3. Insecure serverless deployment configuration CMEK with Cloud KMS Management of your own encryption keys 4. Over-privileged function permissions and roles Custom service account for service authentication (not the default Compute Engine service account) Tightly-scoped IAM roles on the Cloud Run service account VPC Service Controls to limit scope of Google Cloud API access (as provided using the Google Cloud enterprise foundations blueprint) None 5. Inadequate function monitoring and logging Cloud Logging Cloud Monitoring dashboards and alerting structure 6. Insecure third-party dependencies None Protect the CI/CD pipeline using code scanning and pre-deployment analysis 7. Insecure application secrets storage Secret Manager Secret management in application code 8. Denial of service and financial resource exhaustion Google Cloud Armor Cloud Run service timeouts (default is 120 seconds) None 9. Serverless business logic manipulation VPC Service Controls to limit scope of Google Cloud API access (provided using enterprise foundations blueprint) None 10. Improper exception handling and verbose error messages None Secure programming best practices 11. Obsolete functions, cloud resources, and event triggers Use revisions to minimize the attack surface. Revisions help to reduce the likelihood of accidentally enabling a previous, obsolete iteration of a service. Revisions also help you test a new revision's security posture using A/B testing along with monitoring and logging tools. Infrastructure as code (IaC) to manage cloud resources Cloud resources monitoring using Security Command Center Cloud Billing monitoring Cleanup of unused cloud resources to minimize attack surface 12. Cross-execution data persistency None None What's next For a baseline secure environment, review the Google Cloud enterprise foundations blueprint. To see the details of the blueprint that's described in this document, read the Terraform configuration README file. To read about security and compliance best practices, see Google Cloud Architecture Framework: Security, privacy, and compliance. For more best practices and blueprints, see the security best practices center. Send feedback
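The preconfigured WAF rules listed in the Google Cloud Armor and WAF section are configured for you by the blueprint and attached to the external Application Load Balancer. As an illustrative sketch only (not the blueprint's code), two of those rules could be expressed in a Cloud Armor security policy as follows; the policy name is a placeholder.

resource "google_compute_security_policy" "serverless_waf" {
  name = "serverless-waf-policy"

  rule {
    # Deny requests that match the preconfigured SQL injection signatures.
    action   = "deny(403)"
    priority = 1000
    match {
      expr {
        expression = "evaluatePreconfiguredWaf('sqli-v33-stable')"
      }
    }
    description = "Block SQL injection attempts"
  }

  rule {
    # Deny requests that match the preconfigured cross-site scripting signatures.
    action   = "deny(403)"
    priority = 1001
    match {
      expr {
        expression = "evaluatePreconfiguredWaf('xss-v33-stable')"
      }
    }
    description = "Block cross-site scripting attempts"
  }

  rule {
    # Default rule: allow traffic that matches no other rule.
    action   = "allow"
    priority = 2147483647
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
    description = "Default allow rule"
  }
}

A Cloud Armor policy takes effect once it is attached to the backend service that fronts the serverless application behind the load balancer.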
Artifact_Registry(1).txt
ADDED
@@ -0,0 +1,5 @@
1
+
URL: https://cloud.google.com/artifact-registry/docs
2
+
Date Scraped: 2025-02-23T12:04:16.424Z
3
+
4
+
Content:
5
+
Home Documentation Artifact Registry Artifact Registry documentation Stay organized with collections Save and categorize content based on your preferences. Artifact Registry documentation View all product documentation A universal package manager for all your build artifacts and dependencies. Fast, scalable, reliable and secure. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstarts Transitioning from Container Registry Managing repositories Configuring access control Working with container images Working with Java packages Working with Node.js packages Working with Python packages find_in_page Reference Support for the Docker Registry API REST API RPC API info Resources Artifact Registry Pricing Release notes Getting support Quotas and limits Related videos
Artifact_Registry.txt
ADDED
@@ -0,0 +1,5 @@
1
+
URL: https://cloud.google.com/artifact-registry/docs
2
+
Date Scraped: 2025-02-23T12:03:02.560Z
3
+
4
+
Content:
5
+
Home Documentation Artifact Registry Artifact Registry documentation Stay organized with collections Save and categorize content based on your preferences. Artifact Registry documentation View all product documentation A universal package manager for all your build artifacts and dependencies. Fast, scalable, reliable and secure. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstarts Transitioning from Container Registry Managing repositories Configuring access control Working with container images Working with Java packages Working with Node.js packages Working with Python packages find_in_page Reference Support for the Docker Registry API REST API RPC API info Resources Artifact Registry Pricing Release notes Getting support Quotas and limits Related videos
Artificial_Intelligence.txt
ADDED
@@ -0,0 +1,5 @@
1
+
URL: https://cloud.google.com/solutions/ai
2
+
Date Scraped: 2025-02-23T11:58:37.866Z
3
+
4
+
Content:
5
+
Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceAI and machine learning solutionsAt Google, AI is in our DNA. In partnership with Google Cloud, business leaders can leverage the power of purpose-built AI solutions to transform their organizations and solve real-world problems.Try it in consoleContact salesSummarize large documents with generative AIDeploy a preconfigured solution that uses generative AI to quickly extract text and summarize large documents.Deploy an AI/ML image processing pipelineLaunch a preconfigured, interactive solution that uses pre-trained machine learning models to analyze images and generate image annotations.Create a chat app using retrieval-augmented generation (RAG)Deploy a preconfigured solution with a chat-based experience that provides questions and answers based on embeddings stored as vectors.Turn ideas into reality with Google Cloud AIAI SOLUTIONSRELATED PRODUCTS AND SERVICESCustomer Engagement Suite with Google AIDelight customers with an end-to-end application that combines our most advanced conversational AI, with multimodal and omnichannel functionality to deliver exceptional customer experiences at every touchpoint.Conversational AgentsAgent Assist Conversational InsightsContact Center as a ServiceDocument AIImprove your operational efficiency by bringing AI-powered document understanding to unstructured data workflows across a variety of document formats.Document AIBase OCREnterprise Knowledge Graph enrichmentHuman in the LoopGemini for Google CloudGemini for Google Cloud helps you be more productive and creative. It can be your writing and coding assistant, creative designer, expert adviser, or even your data analyst.Gemini Code Assist Gemini Cloud AssistGemini in SecurityGemini in BigQueryVertex AI Search for commerceIncrease conversion rate across digital properties with AI solutions that help brands to deliver personalized consumer experiences across channels. 
Recommendations AIVision Product SearchRetail SearchLet's solve your challenges together.See how you can transform your business with Google Cloud.Contact usLearn from our customersVideoRetailer Marks & Spencer created better customer experience with Contact Center AI from Google Cloud.46:04VideoAES uses AutoML Vision to cut wind turbine inspection time from two weeks to two days.02:30Case StudyThe City of Memphis uses Google Cloud AI to detect potholes with 90%+ accuracy.5-min readSee all customersCloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.Start your AI journey todayTry Google Cloud AI and machine learning products in the console.Go to my consoleHave a large project?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials
Assess_and_discover_your_workloads.txt
ADDED
@@ -0,0 +1,5 @@
1
+
URL: https://cloud.google.com/architecture/migration-to-gcp-assessing-and-discovering-your-workloads
2
+
Date Scraped: 2025-02-23T11:51:33.632Z
3
+
4
+
Content:
5
+
Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Assess and discover your workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-02 UTC This document can help you plan, design, and implement the assessment phase of your migration to Google Cloud. Discovering your workloads and services inventory, and mapping their dependencies, can help you identify what you need to migrate and in what order. When planning and designing a migration to Google Cloud, you first need a deep knowledge of your current environment and of the workloads to migrate. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads (this document) Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs The following diagram illustrates the path of your migration journey. This document is useful if you're planning a migration from an on-premises environment, a private hosting environment, another cloud provider, or if you're evaluating the opportunity to migrate and exploring what the assessment phase might look like. In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. Build an inventory of your workloads To scope your migration, you must first understand how many items, such as workloads and hardware appliances, exist in your current environment, along with their dependencies. Building the inventory is a non-trivial task that requires a significant effort, especially when you don't have any automatic cataloging system in place. To have a comprehensive inventory, you need to use the expertise of the teams that are responsible for the design, deployment, and operation of each workload in your current environment, as well as the environment itself. The inventory shouldn't be limited to workloads only, but should at least contain the following: Dependencies of each workload, such as databases, message brokers, configuration storage systems, and other components. Services supporting your workload infrastructure, such as source repositories, continuous integration and continuous deployment (CI/CD) tools, and artifact repositories. 
Servers, either virtual or physical, and runtime environments. Physical appliances, such as network devices, firewalls, and other dedicated hardware. When compiling this list, you should also gather information about each item, including: Source code location and if you're able to modify this source code. Deployment method for the workload in a runtime environment, for example, if you use an automated deployment pipeline or a manual one. Network restrictions or security requirements. IP address requirements. How you're exposing the workload to clients. Licensing requirements for any software or hardware. How the workload authenticates against your identity and access management system. For example, for each hardware appliance, you should know its detailed specifications, such as its name, vendor, technologies, and dependencies on other items in your inventory. For example: Name: NAS Appliance Vendor and model: Vendor Y, Model Z Technologies: NFS, iSCSI Dependencies: Network connectivity with Jumbo frames to VM compute hardware. This list should also include non-technical information, for example, under which licensing terms you're allowed to use each item and any other compliance requirements. While some licenses let you deploy a workload in a cloud environment, others explicitly forbid cloud deployment. Some licenses are assigned based on the number of CPUs or sockets in use, and these concepts might not be applicable when running on cloud technology. Some of your data might have restrictions regarding the geographical region where it's stored. Finally, some sensitive workloads can require sole tenancy. Along with the inventory, it's useful to provide aids for a visual interpretation of the data you gathered. For example, you can provide a dependency graph and charts to highlight aspects of interest, such as how your workloads are distributed in an automated or manual deployment process. How to build your inventory There are different ways to build a workload inventory. Although the quickest way to get started is to proceed manually, this approach can be difficult for a large production environment. Information in manually built inventories can quickly become outdated, and the resulting migration might fail because you didn't confirm the contents of your inventories. Building the inventory is not a one-time exercise. If your current environment is highly dynamic, you should also spend effort in automating the inventory creation and maintenance, so you eventually have a consistent view of all the items in your environment at any given time. For information about how to build an inventory of your workloads, see Migration Center: Start an asset discovery. Example of a workload inventory This example is an inventory of an environment supporting an ecommerce app. The inventory includes workloads, dependencies, services supporting multiple workloads, and hardware appliances. Note: The system resources requirements refer to the current environment. These requirements must be re-evaluated to consider the resources of the target environment. For example, the CPU cores of the target environment might be more performant due to a more modern architecture and higher clock speeds, so your workloads might require fewer cores. Workloads For each workload in the environment, the following table highlights the most important technologies, its deployment procedure, and other requirements. 
Name Source code location Technologies Deployment procedure Other requirements Dependencies System resources requirements Marketing website Corporate repository Angular frontend Automated Legal department must validate content Caching service 5 CPU cores 8 GB of RAM Back office Corporate repository Java backend, Angular frontend Automated N/A SQL database 4 CPU cores 4 GB of RAM Ecommerce workload Proprietary workload Vendor X Model Y Version 1.2.0 Manual Customer data must reside inside the European Union SQL database 10 CPU cores 32 GB of RAM Enterprise resource planning (ERP) Proprietary workload Vendor Z, Model C, Version 7.0 Manual N/A SQL database 10 CPU cores 32 GB of RAM Stateless microservices Corporate repository Java Automated N/A Caching service 4 CPU cores 8 GB of RAM Dependencies The following table is an example of the dependencies of the workloads listed in the inventory. These dependencies are necessary for the workloads to correctly function. Name Technologies Other requirements Dependencies System resources requirements SQL database PostgreSQL Customer data must reside inside the European Union Backup and archive system 30 CPU cores 512 GB of RAM Supporting services In your environment, you might have services that support multiple workloads. In this ecommerce example, there are the following services: Name Technologies Other requirements Dependencies System resources requirements Source code repositories Git N/A Backup and archive system 2 CPU cores 4 GB of RAM Backup and archive system Vendor G, Model H, version 2.3.0 By law, long-term storage is required for some items N/A 10 CPU cores 8 GB of RAM CI tool Jenkins N/A Source code repositories artifact repository backup and archive system 32 CPU cores 128 GB of RAM Artifact repository Vendor A Model N Version 5.0.0 N/A Backup and archive system 4 CPU cores 8 GB of RAM Batch processing service Cron jobs running inside the CI tool N/A CI tool 4 CPU cores 8 GB of RAM Caching service Memcached Redis N/A N/A 12 CPU cores 50 GB of RAM Hardware The example environment has the following hardware appliances: Name Technologies Other requirements Dependencies System resources requirements Firewall Vendor H Model V N/A N/A N/A Instances of Server j Vendor K Model B Must be decommissioned because no longer supported N/A N/A NAS Appliance Vendor Y Model Z NFS iSCSI N/A N/A N/A Assess your deployment and operational processes It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there. Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else. In addition to the artifact type, consider how you complete the following tasks: Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads? Generate the artifacts that you deploy in your source environment. 
To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud. Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following: Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment. Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first. Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration. Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments. Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment. Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment. Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes. Authentication. Assess how you're authenticating against your source environment. Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. 
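Whether or not your source environment is already managed with Terraform, the same infrastructure-as-code practices carry over to the target environment, and resource labels are one convenient way to carry inventory and catalog attributes forward. The following sketch is purely illustrative: the project, subnetwork, machine type, and label values are hypothetical and loosely based on the example inventory above.

resource "google_compute_instance" "back_office" {
  # Placeholder project, zone, and sizing; re-evaluate sizing for the target environment.
  project      = "example-nonprod-project"
  name         = "back-office-app"
  machine_type = "e2-highcpu-4" # 4 vCPUs and 4 GB of RAM, per the example inventory
  zone         = "europe-west1-b"

  labels = {
    "owner"                = "back-office-team"
    "business-criticality" = "non-mission-critical"
    "migration-complexity" = "hard"
    "depends-on"           = "sql-database"
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = "projects/example-host-project/regions/europe-west1/subnetworks/example-subnet"
  }
}

Labels like these can later be used to filter billing exports, monitoring dashboards, and Cloud Asset Inventory queries by criticality or migration wave.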
Assess your infrastructure After you assess your deployment and operational processes, we recommend that you assess the infrastructure that supports your workloads in the source environment. To assess that infrastructure, consider the following: How you organized resources in your source environment. For example, some environments support a logical separation between resources using constructs that isolate groups of resources from each other, such as organizations, projects, and namespaces. How you connected your environment to other environments, such as on-premises environments and other cloud providers. Categorize your workloads After you complete the inventory, you need to organize your workloads into different categories. This categorization can help you prioritize the workloads to migrate according to their complexity and risk in moving to the cloud. A catalog matrix should have one dimension for each assessment criterion you're considering in your environment. Choose a set of criteria that covers all the requirements of your environment, including the system resources each workload needs. For example, you might want to know whether a workload has any dependencies, or whether it's stateless or stateful. When you design the catalog matrix, consider that each criterion you add is another dimension to represent. The resulting matrix might be difficult to visualize. A possible solution to this problem is to use multiple smaller matrices instead of a single, complex one. Also, next to each workload you should add a migration complexity indicator. This indicator estimates how difficult each workload is to migrate. The granularity of this indicator depends on your environment. For a basic example, you might have three categories: easy to migrate, hard to migrate, or cannot be migrated. To complete this activity, you need experts for each item in the inventory to estimate its migration complexity. Drivers of this migration complexity are unique to each business. When the catalog is complete, you can also build visuals and graphs to help you and your team quickly evaluate metrics of interest. For example, draw a graph that highlights how many components have dependencies, or one that highlights the migration difficulty of each component. For information about how to build an inventory of your workloads, see Migration Center: Start an asset discovery. Example of a workload catalog The following assessment criteria are used in this example, one for each matrix axis: How critical a workload is to the business. Whether a workload has dependencies, or is a dependency for other workloads. Maximum allowable downtime for the workload. How difficult a workload is to migrate. Importance to the business Doesn't have dependencies or dependents Has dependencies or dependents Maximum allowable downtime Difficulty Mission critical Stateless microservices 2 minutes Easy ERP 24 hours Hard Ecommerce workload No downtime Hard Hardware firewall No downtime Can't move SQL database 10 minutes Easy Source code repositories 12 hours Easy Non-mission critical Marketing website 2 hours Easy Backup and archive 24 hours Easy Batch processing service 48 hours Easy Caching service 30 minutes Easy Back office 48 hours Hard CI tool 24 hours Easy Artifact repository 30 minutes Easy To help you visualize the results in the catalog, you can build visuals and charts.
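To illustrate how a catalog like this can be turned into simple metrics, the following Python sketch tallies the example catalog above by migration difficulty. The dictionary layout is only an illustrative assumption about how you might represent the catalog; a spreadsheet or database would work equally well.

# Sketch: tally the example workload catalog by migration difficulty.
# The entries mirror the example catalog above.
from collections import Counter

catalog = [
    {"name": "Stateless microservices", "mission_critical": True, "max_downtime": "2 minutes", "difficulty": "Easy"},
    {"name": "ERP", "mission_critical": True, "max_downtime": "24 hours", "difficulty": "Hard"},
    {"name": "Ecommerce workload", "mission_critical": True, "max_downtime": "No downtime", "difficulty": "Hard"},
    {"name": "Hardware firewall", "mission_critical": True, "max_downtime": "No downtime", "difficulty": "Can't move"},
    {"name": "SQL database", "mission_critical": True, "max_downtime": "10 minutes", "difficulty": "Easy"},
    {"name": "Source code repositories", "mission_critical": True, "max_downtime": "12 hours", "difficulty": "Easy"},
    {"name": "Marketing website", "mission_critical": False, "max_downtime": "2 hours", "difficulty": "Easy"},
    {"name": "Backup and archive", "mission_critical": False, "max_downtime": "24 hours", "difficulty": "Easy"},
    {"name": "Batch processing service", "mission_critical": False, "max_downtime": "48 hours", "difficulty": "Easy"},
    {"name": "Caching service", "mission_critical": False, "max_downtime": "30 minutes", "difficulty": "Easy"},
    {"name": "Back office", "mission_critical": False, "max_downtime": "48 hours", "difficulty": "Hard"},
    {"name": "CI tool", "mission_critical": False, "max_downtime": "24 hours", "difficulty": "Easy"},
    {"name": "Artifact repository", "mission_critical": False, "max_downtime": "30 minutes", "difficulty": "Easy"},
]

for difficulty, count in Counter(item["difficulty"] for item in catalog).items():
    print(difficulty, count)
# Prints: Easy 9, Hard 3, Can't move 1

A tally like this is the input for the kind of chart described next.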
The following chart highlights the migration difficulty: In the preceding chart, most of the workloads are easy to move, three of them are hard to move, and one of them is not possible to move. Educate your organization about Google Cloud To take full advantage of Google Cloud, your organization needs to start learning about the services, products, and technologies that your business can use on Google Cloud. Your staff can begin with Google Cloud free trial accounts that contain credits to help them experiment and learn. Creating a free environment for testing and learning is critical to the learning experience of your staff. You have several training options: Public and open resources: You can get started learning Google Cloud with free hands-on labs, video series, Cloud OnAir webinars, and Cloud OnBoard training events. In-depth courses: If you want a deeper understanding of how Google Cloud works, you can attend on-demand courses from Google Cloud Skills Boost or Google Cloud Training Specializations from Coursera that you can attend online at your own pace or classroom training by our world-wide authorized training partners. These courses typically span from one to several days. Role-based learning paths: You can train your engineers according to their role in your organization. For example, you can train your workload developers or infrastructure operators how to best use Google Cloud services. You can also certify your engineers' knowledge of Google Cloud with various certifications, at different levels: Associate certifications: A starting point for those new to Google Cloud that can open the door to professional certifications, such as the associate cloud engineer certification. Professional certifications: If you want to assess advanced design and implementation skills for Google Cloud from years of experience, you can get certifications, such as the professional cloud architect or the professional data engineer. Google Workspace certifications: You can demonstrate collaboration skills using Google Workspace tools with a Google Workspace certification. Apigee certifications: With the Apigee certified API engineer certification, you can demonstrate the ability to design and develop robust, secure, and scalable APIs. Google developers certifications: You can demonstrate development skills with the Associate Android developer (This certification is being updated) and mobile web specialist certifications. In addition to training and certification, one of the best ways to get experience with Google Cloud is to begin using the product to build business proofs-of-concept. Experiment and design proofs of concept To show the value and efficacy of Google Cloud, consider designing and developing one or more proofs of concept (PoCs) for each category of workload in your workload catalog. Experimentation and testing let you validate assumptions and demonstrate the value of cloud to business leaders. At a minimum, your PoC should include the following: A comprehensive list of the use cases that your workloads support, including uncommon ones and corner cases. All the requirements for each use case, such as performance, scalability, and consistency requirements, failover mechanisms, and network requirements. A potential list of technologies and products that you want to investigate and test. You should design PoCs and experiments to validate all the use cases on the list. Each experiment should have a precise validity context, scope, expected outputs, and measurable business impact. 
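One lightweight way to keep PoC experiments measurable is to record the scope, the expected output, and the measured result of each experiment so that it can be compared against a target. The following Python sketch is an illustrative assumption, not a prescribed framework; the timed operation is a placeholder for whatever provisioning or scaling call the experiment exercises.

# Sketch: record a PoC experiment with a measurable outcome.
import time
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Experiment:
    name: str
    scope: str
    expected_output: str
    target_seconds: float
    measured_seconds: Optional[float] = None

    def passed(self) -> bool:
        return self.measured_seconds is not None and self.measured_seconds <= self.target_seconds

def run_timed(experiment: Experiment, operation: Callable[[], None]) -> Experiment:
    """Run the operation under test and record how long it took."""
    start = time.monotonic()
    operation()  # Placeholder for the real provisioning or scaling call.
    experiment.measured_seconds = time.monotonic() - start
    return experiment

scale_up = Experiment(
    name="Scale-up latency",
    scope="Add compute capacity for a CPU-bound workload in one zone",
    expected_output="New capacity is usable within the target time",
    target_seconds=120.0,
)
run_timed(scale_up, lambda: time.sleep(1))
print(scale_up.name, scale_up.measured_seconds, scale_up.passed())

The examples that follow describe experiments of exactly this kind.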
For example, if one of your CPU-bound workloads needs to quickly scale to satisfy peaks in demand, you can run an experiment to verify that a zone can create many virtual CPU cores, and how much time it takes to do so. If you experience a significant value-add, such as reducing new workload scale-up time by 95% compared to your current environment, this experiment can demonstrate instant business value. If you're interested in evaluating how the performance of your on-premises databases compares to Cloud SQL, Spanner, Firestore, or Bigtable, you could implement a PoC where the same business logic uses different databases. This PoC gives you a low-risk opportunity to identify the right managed database solution for your workload across multiple benchmarks and operating costs. If you want to evaluate the performance of the VM provisioning process in Google Cloud, you can use a third-party tool, such as PerfKit Benchmarker, and compare Google Cloud with other cloud providers. You can measure the end-to-end time to provision resources in the cloud, in addition to reporting on standard metrics of peak performance, including latency, throughput, and time-to-complete. For example, you might be interested in how much time and effort it takes to provision many Kubernetes clusters. PerfKit Benchmarker is an open source community effort involving over 500 participants, such as researchers, academic institutions, and companies, including Google. Calculate total cost of ownership When you have a clear view of the resources you need in the new environment, you can build a total cost of ownership model that lets you compare your costs on Google Cloud with the costs of your current environment. When building this cost model, you should consider not only the costs for hardware and software, but also all the operational costs of running your own data center, such as power, cooling, maintenance, and other support services. Consider that it's also typically easier to reduce costs, thanks to the elastic scalability of Google Cloud resources, compared to a more rigid on-premises data center. A commonly overlooked cost when considering cloud migrations is the use of a cloud network. In a data center, purchasing network infrastructure, such as routers and switches, and then running appropriate network cabling are one-time costs that let you use the entire capacity of the network. In a cloud environment, there are many ways that you might be billed for network utilization. For data intensive workloads, or those that generate a large amount of network traffic, you might need to consider new architectures and network flows to lower networking costs in the cloud. Google Cloud also provides a wide range of options for intelligent scaling of resources and costs. For example, in Compute Engine you can rightsize during your migration with Migrate for Compute Engine, or after VMs are already running, or building autoscaling groups of instances. These options can have a large impact on the costs of running services and should be explored to calculate the total cost of ownership (TCO). To calculate the total cost of Google Cloud resources, you can use the price calculator. Choose the migration strategy for your workloads For each workload to migrate, evaluate and select a migration strategy that best suits their use case. For example, your workloads might have the following conditions: They don't tolerate any downtime or data loss, such as mission-critical workloads. 
For these workloads, you can choose zero or near-zero downtime migration strategies. They tolerate downtime, such as secondary or backend workloads. For these workloads, you can choose migration strategies that require a downtime. When you choose migration strategies, consider that zero and near-zero downtime migration strategies are usually more costly and complex to design and implement than migration strategies that require a downtime. Choose your migration tools After you choose a migration strategy for your workloads, review and decide on the migration tools. There are many migration tools available, each optimized for certain migration use cases. Use cases can include the following: Migration strategy Source and target environments Data and workload size Frequency of changes to data and workloads Availability of managed services for the migration To ensure a seamless migration and cut-over, you can use application deployment patterns, infrastructure orchestration, and custom migration applications. However, specialized tools called managed migration services can facilitate the process of moving data, workloads, or even entire infrastructures from one environment to another. These services encapsulate the complex logic of the migration and offer migration monitoring capabilities. Define the migration plan and timeline Now that you have an exhaustive view of your current environment, you need to complete your migration plan by: Grouping the workloads and data to migrate in batches (also called sprints in some contexts). Choosing the order in which you want to migrate the batches. Choosing the order in which you want to migrate the workloads inside each batch. As part of your migration plan, we recommend that you also produce the following documents: Technical design document RACI matrix Timeline (such as a T-Minus plan) As you gain experience with Google Cloud, momentum with the migration, and a deeper understanding of your environment, you can do the following: Refine the grouping of workloads and data to migrate. Increase the size of migration batches. Update the order in which you migrate batches and workloads inside batches. Update the composition of the batches. To group the workloads and data to migrate in batches, and to define migration ordering, you assess your workloads against several criteria, such as the following: Business value of the workload. Whether the workload is deployed or run in a unique way compared to the rest of your infrastructure. Teams responsible for development, deployment, and operations of the workload. Number, type, and scope of dependencies of the workload. Refactoring effort to make the workload work in the new environment. Compliance and licensing requirements of the workload. Availability and reliability requirements of the workload. The workloads you migrate first are the ones that let your teams build their knowledge and experience on Google Cloud. Greater cloud exposure and experience from your team can lower the risk of complications during the migration phase, and make subsequent migrations easier and quicker. For this reason, choosing the right first-movers is crucial for a successful migration. Business value Choosing a workload that isn't business critical protects your main line of business, and decreases the impact on the business from undiscovered risks and mistakes while your team is learning cloud technologies.
For example, if you choose the component where the main financial transactions logic of your ecommerce workload is implemented as a first-mover, any mistake during the migration might cause an impact on your main line of business. A better choice is the SQL database supporting your workloads, or better yet, the staging database. You should avoid rarely used workloads. For example, if you choose a workload that's used only a few times per year by a low number of users, although it's a low risk migration, it doesn't increase the momentum of your migration, and it can be hard to detect and respond to problems. Edge cases You should also avoid edge cases, so you can discover patterns that you can apply to other workloads to migrate. A primary goal when selecting a first mover is to gain experience with common patterns in your organization so you can build a knowledge base. You can apply what you learned with these first movers when migrating future workloads later. For example, if most of your workloads are designed following a test-driven development methodology and are developed using the Python programming language, choosing a workload with little test coverage and developed using the Java programming language, doesn't let you discover any pattern that you can apply when migrating the Python workloads. Teams When choosing your first-movers, pay attention to the teams responsible for each workload. The team responsible for a first-mover should be highly motivated, and eager to try Google Cloud and its services. Moreover, business leadership should have clear goals for the first-mover teams and actively work to sponsor and support them through the process. For example, a high performing team that sits in the main office with a proven history of implementing modern development practices such as DevOps and disciplines such as site reliability engineering can be a good candidate. If they also have top-down leadership sponsors and clear goals around each workloads migration, they can be a superb candidate. Dependencies Also, you should focus on workloads that have the fewest number of dependencies, either from other workloads or services. The migration of a workload with no dependencies is easier when you have limited experience with Google Cloud. If you have to choose workloads that have dependencies on other components, pick the ones that are loosely coupled to their dependencies. If a workload is already designed for the eventual unavailability of its dependencies, it can reduce the friction when migrating the workload to the target environment. For example, loosely coupled candidates are workloads that communicate by using a message broker, or that work offline, or are designed to tolerate the unavailability of the rest of the infrastructure. Although there are strategies to migrate data of stateful workloads, a stateless workload rarely requires any data migration. Migrating a stateless workload can be easier because you don't need to worry about a transitory phase where data is partially in your current environment and partially in your target environment. For example, stateless microservices are good first-mover candidates, because they don't rely on any local stateful data. Refactoring effort A first-mover should require a minimal amount of refactoring, so you can focus on the migration itself and on Google Cloud, instead of spending a large effort on changes to the code and configuration of your workloads. 
The refactoring should focus on the necessary changes that allow your workloads to run in the target environment instead of focusing on modernizing and optimizing your workloads, which is tackled in later migration phases. For example, a workload that requires only configuration changes is a good first-mover, because you don't have to implement any change to codebase, and you can use the existing artifacts. Licensing and compliance Licenses also play a role in choosing the first-movers, because some of your workloads might be licensed under terms that affect your migration. For example, some licenses explicitly forbid running workloads in a cloud environment. When examining the licensing terms, don't forget the compliance requirements because you might have sole tenancy requirements for some of your workloads. For these reasons, you should choose workloads that have the least amount of licensing and compliance restrictions as first-movers. For example, your customers might have the legal right to choose in which region you store their data, or your customers' data might be restricted to a particular region. Availability and reliability Good first-movers are the ones that can afford a downtime caused by a cutover window. If you choose a workload that has strict availability requirements, you have to implement a zero-downtime data migration strategy such as Y (writing and reading) or by developing a data-access microservice. While this approach is possible, it distracts your teams from gaining the necessary experience with Google Cloud, because they have to spend time to implement such strategies. For example, the availability requirements of a batch processing engine can tolerate a longer downtime than the customer-facing workload of your ecommerce site where your users finalize their transactions. Validate your migration plan Before taking action to start your migration plan, we recommend that you validate its feasibility. For more information, see Best practices for validating a migration plan. What's next Learn how to plan your migration and build your foundation on Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback
|
Assess_existing_user_accounts.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/architecture/identity/assessing-existing-user-accounts
|
2 |
+
Date Scraped: 2025-02-23T11:55:21.194Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Assess existing user accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC Google supports two types of user accounts, managed user accounts and consumer user accounts. Managed user accounts are under the full control of a Cloud Identity or Google Workspace administrator. In contrast, consumer accounts are fully owned and managed by the people who created them. A core tenet of identity management is to have a single place to manage identities across your organization: If you use Google as your identity provider (IdP), then Cloud Identity or Google Workspace should be the single place to manage identities. Employees should rely exclusively on user accounts that you manage in Cloud Identity or Google Workspace. If you use an external IdP, then that provider should be the single place to manage identities. The external IdP needs to provision and manage user accounts in Cloud Identity or Google Workspace, and employees should rely exclusively on these managed user accounts when they use Google services. If employees use consumer user accounts, then the premise of having a single place to manage identities is compromised: consumer accounts aren't managed by Cloud Identity, Google Workspace, or your external IdP. Therefore, you must identify the consumer user accounts that you want to convert to managed accounts, as explained in the authentication overview. To convert consumer accounts to managed accounts using the transfer tool, described later in this document, you must have a Cloud Identity or Google Workspace identity with a Super Admin role. This document helps you to understand and assess the following: Which existing user accounts that your organization's employees might be using and how to identify those accounts. Which risks might be associated with these existing user accounts. Example scenario To illustrate the different sets of user accounts that employees might be using, this document uses an example scenario for a company named Example Organization. Example Organization has six employees and former employees who have all been using Google services such as Google Google Docs and Google Ads. Example Organization now intends to consolidate their identity management and establish their external IdP as the single place to manage identities. Each employee has an identity in the external IdP, and that identity matches the employee's email address. There are two consumer user accounts, Carol and Chuck, that use an example.com email address: Carol created a consumer account using her corporate email address ([email protected]). Chuck, a former employee, created a consumer account using his corporate email address ([email protected]). Two employees, Glen and Grace, decided to use Gmail accounts: Glen signed up for a Gmail account ([email protected]), which he uses to access private and corporate documents and other Google services. Grace also uses a Gmail account ([email protected]), but she added her corporate email address, [email protected], as an alternate email address. Finally, two employees, Mary and Mike, are already using Cloud Identity: Mary has a Cloud Identity user account ([email protected]). Mike is the administrator of the Cloud Identity account and created a user ([email protected]) for himself. 
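Before looking at the diagram, it can help to write the scenario down as data. The following Python sketch is purely illustrative: it lists each account by owner, account type, and whether its identity matches an identity in the external IdP. The email addresses themselves are not reproduced here because they are redacted in the page above.

# Sketch: the example scenario as data. Only Mary's account is both managed
# and matched to an identity in the external IdP.
accounts = [
    # (owner, account type, identity matches an identity in the external IdP?)
    ("Carol", "consumer account with a corporate email address", True),
    ("Chuck", "consumer account with a corporate email address (former employee)", False),
    ("Glen", "Gmail account used for corporate purposes", False),
    ("Grace", "Gmail account with a corporate address as an alternate email", False),
    ("Mary", "managed Cloud Identity account", True),
    ("Mike", "managed Cloud Identity account (identity differs from the IdP)", False),
]

for owner, kind, matches_idp in accounts:
    ready = kind.startswith("managed") and matches_idp
    print(f"{owner}: {kind}; matches external IdP identity: {matches_idp}; ready for federation: {ready}")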
The following diagram illustrates the different sets of user accounts: To establish the external IdP as the single place to manage identities, you must link the identities of the existing Google user accounts to the identities in the external IdP. The following diagram therefore adds an account set that depicts the identities in the external IdP. Recall that if employees want to establish an external IdP as the single place to manage identities, they must rely exclusively on managed user accounts, and that the external IdP must control those user accounts. In this scenario, only Mary meets these requirements. She uses a Cloud Identity user, which is a managed user account, and her user account's identity matches her identity in the external IdP. All other employees either use consumer accounts, or the identity of their accounts doesn't match their identity in the external IdP. The risks and implications of not meeting the requirements are different for each of these users. Each user represents a different set of user accounts that might require further investigation. User account sets to investigate The following sections examine potentially problematic sets of user accounts. Consumer accounts This set of user accounts consists of accounts for which either of the following is true: They were created by employees using the Sign up feature offered by many Google services. They use a corporate email address as their identity. In the example scenario, this description fits Carol and Chuck. A consumer account that's used for business purposes and that uses a corporate email address can pose a risk to your business, such as the following: You cannot control the lifecycle of the consumer account. An employee who leaves the company might continue to use the user account to access corporate resources or to generate corporate expenses. Even if you revoke access to all resources, the account might still pose a social engineering risk. Because the user account uses a seemingly trustworthy identity like [email protected], the former employee might be able to convince current employees or business partners to grant access to resources again. Similarly, a former employee might use the user account to perform activities that aren't in line with your organization's policies, which could put your company's reputation at risk. You cannot enforce security policies like MFA verification or password complexity rules on the account. You cannot restrict which geographic location Google Docs and Google Drive data is stored in, which might be a compliance risk. You cannot restrict which Google services can be accessed by using this user account. If ExampleOrganization decides to use Google as their IdP, then the best way for them to deal with consumer accounts is to either migrate them to Cloud Identity or Google Workspace or to evict them by forcing the owners to rename the user account. If ExampleOrganization decides to use an external IdP, they need to further distinguish between the following: Consumer accounts that have a matching identity in the external IdP. Consumer accounts that don't have a matching identity in the external IdP. The following two sections look at these two subclasses in detail. Consumer accounts with a matching identity in the external IdP This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a corporate email address as the primary email address. Their identity matches an identity in the external IdP. 
In the example scenario, this description fits Carol. The fact that these consumer accounts have a matching identity in your external IdP suggests that these user accounts belong to current employees and should be retained. You should therefore consider migrating these accounts to Cloud Identity or Google Workspace. You can identify consumer accounts that have matching identity in the external IdP as follows: Add all domains to Cloud Identity or Google Workspace that you suspect might have been used for consumer account signups. In particular, the list of domains in Cloud Identity or Google Workspace should include all domains that your email system supports. Use the transfer tool for unmanaged users to identify consumer accounts that use an email address that matches one of the domains you've added to Cloud Identity or Google Workspace. The tool also lets you export the list of affected users as a CSV file. Compare the list of consumer accounts with the identities in your external IdP, and find consumer accounts that have a counterpart. Consumer accounts without a matching identity in the external IdP This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a corporate email address as their identity. Their identity does not match any identity in the external IdP. In the example scenario, this description fits Chuck. There can be several causes for consumer accounts without a matching identity in the external IdP, including the following: The employee who created the account might have left the company, so the corresponding identity no longer exists in the external IdP. There might be a mismatch between the email address used for the consumer account sign-up and the identity known in the external IdP. Mismatches like these can occur if your email system allows variations in email addresses such as the following: Using alternate domains. For example, [email protected] and [email protected] might be aliases for the same mailbox, but the user might only be known as [email protected] in your IdP. Using alternate handles. For example [email protected] and [email protected] might also refer to the same mailbox, but your IdP might recognize only one spelling. Using different casing. For example, the variants [email protected] and [email protected] might not be recognized as the same user. You can handle consumer accounts that don't have a matching identity in the external IdP in the following ways: You can migrate the consumer account to Cloud Identity or Google Workspace and then reconcile any mismatches caused by alternate domains, handles, or casing. If you think the user account is illegitimate or shouldn't be used anymore, you can evict the consumer account by forcing the owner to rename it. You can identify consumer accounts without a matching identity in the external IdP as follows: Add all domains to Cloud Identity or Google Workspace that you suspect might have been used for consumer account signups. In particular, the list of domains in Cloud Identity or Google Workspace should include all domains that your email system supports as aliases. Use the transfer tool for unmanaged users to identify consumer accounts that use an email address that matches one of the domains you've added to Cloud Identity or Google Workspace. The tool also lets you export the list of affected users as a CSV file. Compare the list of consumer accounts with the identities in your external IdP and find consumer accounts that lack a counterpart. 
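To make the comparison step concrete, the following Python sketch contrasts the transfer tool's CSV export with a list of identities exported from the external IdP. The file names and the "email" column name are assumptions for illustration; the logic itself is just a set comparison, with addresses normalized to lowercase so that casing variants are compared consistently.

# Sketch: split consumer accounts into those with and without a matching
# identity in the external IdP. File names and column names are hypothetical.
import csv

def load_emails(path, column="email"):
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

consumer_accounts = load_emails("unmanaged_users_export.csv")  # Exported from the transfer tool.
idp_identities = load_emails("idp_identities.csv")             # Exported from the external IdP.

with_match = sorted(consumer_accounts & idp_identities)        # Candidates to migrate.
without_match = sorted(consumer_accounts - idp_identities)     # Reconcile or evict.

print("Consumer accounts with a matching IdP identity:", len(with_match))
print("Consumer accounts without a matching IdP identity:", len(without_match))

The same comparison, run against an export of your managed user accounts instead, identifies the managed accounts without a matching identity in the external IdP that are discussed next.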
Managed accounts without a matching identity in the external IdP This set of user accounts consists of accounts that match all of the following: They were manually created by a Cloud Identity or Google Workspace administrator. Their identity doesn't match any identity in the external IdP. In the example scenario, this description fits Mike, who used the identity [email protected] for his managed account. The potential causes for managed accounts without a matching identity in the external IdP are similar to those for consumer accounts without a matching identity in the external IdP: The employee for whom the account was created might have left the company, so the corresponding identity no longer exists in the external IdP. The corporate email address that matches the identity in the external IdP might have been set as an alternate email address or alias rather than as the primary email address. The email address that's used for the user account in Cloud Identity or Google Workspace might not match the identity known in the external IdP. Neither Cloud Identity nor Google Workspace verifies that the email address used as the identity exists. A mismatch can therefore not only occur because of alternate domains, alternate handles, or different casing, but also because of a typo or other human error. Regardless of their cause, managed accounts without a matching identity in the external IdP are a risk because they can become subject to inadvertent reuse and name squatting. We recommend that you reconcile these accounts. You can identify managed accounts without a matching identity in the external IdP as follows: Using the Admin Console or the Directory API, export the list of user accounts in Cloud Identity or Google Workspace. Compare the list of accounts with the identities in your external IdP and find accounts that lack a counterpart. Gmail accounts used for corporate purposes This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a gmail.com email address as their identity. Their identities don't match any identity in the external IdP. In the example scenario, this description fits Grace and Glen. Gmail accounts that are used for corporate purposes are subject to similar risks as consumer accounts without a matching identity in the external IdP: You cannot control the lifecycle of the consumer account. An employee who leaves the company might continue to use the user account to access corporate resources or to generate corporate expenses. You cannot enforce security policies like MFA verification or password complexity rules on the account. The best way to deal with Gmail accounts is therefore to revoke access for those user accounts to all corporate resources and provide affected employees with new managed user accounts as replacements. Because Gmail accounts use gmail.com as their domain, there is no clear affiliation with your organization. The lack of a clear affiliation implies that there is no systematic way, other than scrubbing existing access control policies, to identify Gmail accounts that have been used for corporate purposes. Gmail accounts with a corporate email address as alternate email This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a gmail.com email address as their identity. They use a corporate email address as an alternate email address. Their identities don't match any identity in the external IdP.
In the example scenario, this description fits Grace. From a risk perspective, Gmail accounts that use a corporate email address as an alternate email address are equivalent to consumer accounts without a matching identity in the external IdP. Because these accounts use a seemingly trustworthy corporate email address as their second identity, they are subject to the risk of social engineering. If you want to maintain the access rights and some of the data associated with the Gmail account, you can ask the owner to remove Gmail from the user account so that you can then migrate them to Cloud Identity or Google Workspace. The best way to handle Gmail accounts that use a corporate email address as an alternate email address is to sanitize them. When you sanitize an account, you force the owner to give up the corporate email address by creating a managed user account with that same corporate email address. Additionally, we recommend that you revoke access to all corporate resources and provide the affected employees with the new managed user accounts as replacements. What's next Learn more about the different types of user accounts on Google Cloud. Find out how the migration process for consumer accounts works. Review best practices for federating Google Cloud with an external identity provider. Send feedback
|
Assess_onboarding_plans.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/architecture/identity/assessing-onboarding-plans
|
2 |
+
Date Scraped: 2025-02-23T11:55:23.370Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Assess onboarding plans Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC Cloud Identity and Google Workspace let you manage corporate identities and control access to Google services. To take advantage of the features that Cloud Identity and Google Workspace provide, you first have to onboard existing and new identities to Cloud Identity or Google Workspace. Onboarding involves the following steps: Prepare your Cloud Identity or Google Workspace accounts. If you've decided to use an external identity provider (IdP), set up federation. Create user accounts for your corporate identities. Consolidate existing user accounts. This document helps you assess the best order in which to approach these steps. Select an onboarding plan When you select an onboarding plan, consider the following critical decisions: Select a target architecture. Most importantly, you have to decide whether you want to make Google your primary IdP or whether you prefer to use an external IdP. If you have not yet decided, see the reference architectures overview to learn more about possible options. Decide whether to migrate existing consumer accounts. If you haven't been using Cloud Identity or Google Workspace, it's possible that your organization's employees might be using consumer accounts to access Google services. If you want to keep these user accounts and their data, you must migrate them to Cloud Identity or Google Workspace. For details on consumer accounts, how to identify them, and what risk they might pose to your organization, see Assessing existing user accounts. If you've decided to use an external IdP and to migrate existing consumer accounts, then you have a third decision to make: whether to set up federation first or migrate existing user accounts first. Take the following factors into account: Migrating consumer accounts requires the owner's consent. The more user accounts you have to migrate, the longer it might take to get the consent of all affected account owners. If you need to migrate 100 or more consumer accounts, consider setting up federation before you migrate the existing consumer accounts. By setting up federation first, you ensure that all new identities and each migrated user account can immediately benefit from single sign-on, two-step verification, and other security features offered by Cloud Identity and Google Workspace. Setting up federation therefore helps you to quickly improve your overall security posture. However, setting up federation first requires you to configure your identity provider in a way that still allows existing user accounts to be migrated. This configuration can increase the complexity of your overall setup. If you need to migrate fewer than 100 consumer accounts, you can expect the process of migrating these user accounts to be reasonably quick. In this case, consider migrating existing user accounts before setting up federation. By completing the user account migration first, you can avoid the extra complexity of having to configure your identity provider in a way that still allows existing user accounts to be migrated. However, delaying the federation setup might slow down the process of improving your overall security posture. The following diagram summarizes how to select the best onboarding plan. This diagram shows the following decision paths to select an onboarding plan: If you're using Google as an IdP, select plan 1.
If you aren't using Google as an IdP, and you don't want to migrate existing accounts, select plan 2. Select plan 3 in the following scenario: You aren't using Google as an IdP. You want to migrate existing accounts. You want to set up federation first. Select plan 4 in the following scenario: You aren't using Google as an IdP. You want to migrate existing accounts. You don't want to set up federation first. Onboarding plans This section outlines a set of onboarding plans that correspond to the scenarios discussed in the previous section. Plan 1: No federation Consider using this plan if all of the following are true: You want to use Google as your primary IdP. You might need to migrate existing user accounts to Cloud Identity or Google Workspace. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved, see Prepare your Cloud Identity or Google Workspace accounts. If some of the identities you want to onboard have existing consumer accounts, don't create user accounts in Cloud Identity or Google Workspace for these identities because doing so would result in a conflicting account. To minimize the risk of inadvertently creating conflicting accounts, start by creating user accounts for only a small, initial set of identities. We recommend that you use the Admin Console to create these accounts instead of using the API or batch upload to create these user accounts because the Admin Console will warn you about an impending creation of a conflicting account. Start the process of consolidating your existing user accounts. For details on how to accomplish this and which stakeholders might need to get involved, see Consolidating existing user accounts. Note: You can perform steps 2 and 3 in any order or in parallel. Finally, create user accounts for all remaining identities that you need to onboard. You can create accounts manually using the Admin Console, or if you're onboarding a large number of identities, consider the following alternatives: Create users in batches by using a CSV file. Automate user and group creation by using open source tools such as Google Apps Manager (GAM). Use the Directory API. Plan 2: Federation without user account consolidation Consider using this plan if all of the following are true: You want to use an external IdP. You don't need to migrate any existing user accounts. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved in this process, see Prepare your Cloud Identity or Google Workspace accounts. Set up federation with your external IdP. Typically, this means configuring automatic user account provisioning and setting up single sign-on. When you configure federation, take into account the recommendations in Best practices for federating Google Cloud with an external identity provider. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for all identities that you need to onboard. 
Ensure that the identities in Cloud Identity or Google Workspace are a subset of the identities in your external IdP. For details, see Reconciling orphaned managed user accounts. Plan 3: Federation with user account consolidation Consider using this plan if all of the following are true: You want to use an external IdP. You need to migrate existing user accounts to Cloud Identity or Google Workspace, but want to set up federation first. This plan lets you start using single sign-on quickly. Any new user accounts that you create in Cloud Identity or Google Workspace are immediately able to use single sign-on, as are existing user accounts after you've migrated them. This integration with an external IdP lets you minimize user account administration—your IdP can handle both identity onboarding and offboarding. Compared to the delayed federation plan explained in the next section, this plan increases your risk of conflicting accounts or locked-out users. This plan therefore requires careful attention when you set up federation. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved in this process, see Prepare your Cloud Identity or Google Workspace accounts. Set up federation with your external IdP. Typically, this means that you configure automatic user account provisioning and setting up single sign-on. Because some of the identities you want to onboard have existing consumer accounts that you still need to migrate, make sure that you prevent your external IdP from interfering with your ability to consolidate existing consumer accounts. For details on how you can configure your external IdP in a way that is safe for account consolidation, see Assessing user account consolidation impact on federation. When you configure federation, take into account the recommendations in Best practices for federating Google Cloud with an external identity provider. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for the initial set of identities that you need to onboard. Be careful to create user accounts only for identities that don't have an existing user account. Start the process of consolidating your existing user accounts. For details on how to accomplish this and which stakeholders might need to get involved, see Consolidating existing user accounts. Note: You can perform steps 3 and 4 in any order or in parallel. To make your setup safe for account consolidation, remove any special configuration that you've applied to your federation setup. Because all existing accounts are already migrated at this point, this special configuration is no longer required. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for all remaining identities that you need to onboard. Plan 4: Delayed federation Consider using this plan if all of the following are true: You want to use an external IdP. You need to migrate existing user accounts to Cloud Identity or Google Workspace before setting up federation. This plan is effectively a combination of no federation and federation without user account consolidation, as discussed earlier. 
A key benefit of this plan over federation with user account consolidation is the lower risk of conflicting accounts or locked-out users. However, because your plan is to eventually use an external IdP for authentication, the approach has the following downsides: You cannot enable single sign-on before all relevant users have been migrated. Depending on the number of unmanaged accounts you're dealing with and how quickly users react to your account transfer requests, this migration might take days or weeks. During the migration, you have to create new user accounts in Cloud Identity or Google Workspace in addition to creating accounts in your external IdP. Similarly, for employees who leave, you must disable or delete their user accounts in Cloud Identity or Google Workspace, and in the external IdP. This redundant administration increases overall effort and can introduce inconsistencies. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved, see Prepare your Cloud Identity or Google Workspace accounts. If some of the identities you want to onboard have existing consumer accounts, don't create user accounts in Cloud Identity or Google Workspace for these identities because doing so would result in conflicting accounts. Start by creating user accounts for only a small, initial set of identities. We recommend that you use the Admin Console to create these accounts instead of using the API or batch upload because the Admin Console will warn you about an impending creation of a conflicting account. Start the process of consolidating your existing user accounts. For details on how to accomplish this and which stakeholders might need to get involved, see Consolidating existing user accounts. Note: You can perform steps 2 and 3 in any order or in parallel. Set up federation with your external IdP. Typically, this means configuring automatic user account provisioning and setting up single sign-on. When you configure federation, take into account the recommendations in Best practices for federating Google Cloud with an external identity provider. Because all existing accounts are already migrated at this point, you don't need to apply any special configuration to make federation safe for account consolidation. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for all identities that you need to onboard. What's next If you decided to use federation with user account consolidation, proceed by assessing user account consolidation impact on federation. Start your onboarding process by preparing Cloud Identity or Google Workspace accounts. Send feedback
|
Assess_reliability_requirements.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/architecture/infra-reliability-guide/requirements
|
2 |
+
Date Scraped: 2025-02-23T11:54:09.213Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Assess the reliability requirements for your cloud workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC The first step toward building reliable infrastructure for your cloud workloads is to identify the reliability requirements of the workloads. This part of the Google Cloud infrastructure reliability guide provides guidelines to help you define the reliability requirements of workloads that you deploy in Google Cloud. Determine workload-specific requirements The reliability requirements of an application depend on the nature of the service that the application provides or the process that it performs. For example, an application that provides ATM services for a bank might need 5-nines availability. A website that supports an online trading platform might need 5-nines availability and a fast response time. A batch process that writes banking transactions to an accounting ledger at the end of every day might have a data-freshness target of eight hours. Within an application, the individual components or operations might have varying reliability requirements. For example, an order-processing application might need higher reliability for operations that write data to the orders database when compared with read requests. Assessing the reliability requirements of your workloads granularly helps you focus your spending and effort on the workloads that are critical for your business. Identify critical periods There might be periods when an application is more business-critical than at other times. These periods are often the times when the application has peak load. Identify these periods, plan adequate capacity, and test the application against peak-load conditions. To avoid the risk of application outages during peak-load periods, you can use appropriate operational practices like freezing the production code. The following are examples of applications that experience seasonal spikes in load: The inventory module of a financial accounting application is typically used more heavily on the days when the monthly, quarterly, or annual inventory audits are scheduled. An ecommerce website would have significant spikes in load during peak shopping seasons or promotional events. A database that supports the student admissions module of a university would have a high volume of write operations during certain months of every year. An online tax-filing service would have a high load during the tax-filing season. An online trading platform might need 5-nines availability and fast response time, but only during trading hours (for example, 8 AM to 5 PM from Monday to Friday). Consider other non-functional requirements Besides reliability requirements, enterprise applications can have other important non-functional requirements for security, performance, cost, and operational efficiency. When you assess the reliability requirements of an application, consider the dependencies and trade-offs with these other requirements. The following are examples of requirements that aren't for reliability, but can involve trade-offs with reliability requirements. Cost optimization: To optimize IT cost, your organization might impose quotas for certain cloud resources. For example, to reduce the cost of third-party software licenses, your organization might set quotas for the number of compute cores that can be provisioned. 
Similar quotas can exist for the amount of data that can be stored and the volume of cross-region network traffic. Consider the effects of these cost constraints on the options available for designing reliable infrastructure. Data residency: To meet regulatory requirements, your application might need to store and process data in specific countries, even if the business serves users globally. Consider such data residency constraints when deciding the regions and zones where your applications can be deployed. Certain design decisions that you make to meet other requirements can help improve the reliability of your applications. The following are some examples: Deployment automation: To operate your cloud deployments efficiently, you might decide to automate the provisioning flow by using infrastructure as code (IaC). Similarly, you might automate the application build and deployment process by using a continuous integration and continuous deployment (CI/CD) pipeline. Using IaC and CI/CD pipelines can help improve not just operational efficiency, but also the reliability of your workloads. Security controls: Security controls that you implement can also help improve the availability of the application. For example, Google Cloud Armor security policies can help ensure that the application remains available during denial of service (DoS) attacks. Content caching: To improve the performance of a content-serving application, you might enable caching as part of your load balancer configuration. With this design, users experience not only faster access to content but also higher availability. They can access cached content even when the origin servers are down. Reassess requirements periodically As your business evolves and grows, the requirements of your applications might change. Reassess your reliability requirements periodically, and make sure that they align with the current business goals and priorities of your organization. Consider an application that provides a standard level of availability for all users. You might have deployed the application in two zones within a region, with a regional load balancer as the frontend. If your organization plans to launch a premium service option that provides higher availability, then the reliability requirements of the application have changed. To meet the new availability requirements, you might need to deploy the application to multiple regions and use a global load balancer with Cloud CDN enabled. Another opportunity to reassess the availability requirements of your applications is after an outage occurs. Outages might expose mismatched expectations across different teams within your business. For example, one team might consider a 45-minute outage once a year (that is, 99.99% annual availability) as acceptable. But another team might expect a maximum downtime of 4.3 minutes per month (that is, 99.99% monthly availability). Depending on how you decide to modify or clarify the availability requirements, you should adjust your architecture to meet the new requirements. Previous arrow_back Building blocks of reliability Next Design reliable infrastructure arrow_forward Send feedback
|
Assess_the_impact_of_user_account_consolidation_on_federation.txt
ADDED
@@ -0,0 +1,5 @@
1 |
+
URL: https://cloud.google.com/architecture/identity/assessing-consolidation-impact-on-federation
|
2 |
+
Date Scraped: 2025-02-23T11:55:25.795Z
|
3 |
+
|
4 |
+
Content:
|
5 |
+
Home Docs Cloud Architecture Center Send feedback Assess the impact of user account consolidation on federation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC If you plan to federate Cloud Identity or Google Workspace with an external identity provider (IdP) but still need to consolidate existing consumer accounts, this document helps you understand and assess the interplay between federation and consolidation. This document also shows you how to configure federation in a way that doesn't interfere with your ability to consolidate existing consumer accounts. Note: This document applies only if you decided to follow Plan 3: Federation with user account consolidation (or a variation thereof). Interplay between federation and user account consolidation In a federated setup, you connect Cloud Identity or Google Workspace to an external authoritative source so that the authoritative source can automatically provision user accounts in Cloud Identity or Google Workspace. These invariants typically hold for a federated setup: The authoritative source is the only source for identities. There are no user accounts in Cloud Identity or Google Workspace other than the ones provisioned by the authoritative source. The SAML identity provider does not allow Google single sign-on for any identities other than the ones for which the authoritative source has provisioned user accounts. Although these invariants reflect the best practices for federating Google Cloud with an external identity provider, they cause problems when you want to migrate existing consumer accounts: Existing consumer accounts don't originate from the authoritative source. These accounts already exist, and they now need to be linked to an identity known by the authoritative source. Existing consumer accounts, once they are migrated to Cloud Identity or Google Workspace, are user accounts that have not been provisioned by the authoritative source. The authoritative source must recognize and "adopt" these migrated accounts. The identities of existing consumer accounts might be unknown to the SAML identity provider, yet they still need to be allowed to use single sign-on. To allow existing consumer accounts to be consolidated, you have to temporarily set up federation in a way that is safe for account consolidation. Make federation safe for account consolidation The following table lists the requirements to consider in order to make federation safe for account consolidation. If you plan to use an external IdP but still need to consolidate existing consumer accounts, then you have to make sure that your setup initially meets these requirements. After you have completed the migration of existing consumer accounts, you are free to change the configuration because the requirements then no longer hold. Requirement Justification Allow single sign-on for identities with consumer accounts Migrating a consumer account requires an account transfer. A Cloud Identity or Google Workspace administrator initiates the account transfer, but in order to complete the transfer, the owner of the consumer account must consent to the transfer. As an administrator, you have limited control over when the consent will be expressed and thus, when the transfer is conducted. Once the owner expresses consent and the transfer is complete, all subsequent sign-ons are subject to single sign-on using your external IdP. 
For single sign-on to succeed, regardless of when the transfer is complete, ensure that your external IdP allows single sign-on for the identities of all consumer accounts that you plan to migrate.

Requirement: Prevent automatic user provisioning for identities with consumer accounts. Justification: If you provision a user account for an identity that already has a consumer account, you create a conflicting account. A conflicting account blocks you from transferring ownership of the consumer account, its configuration, and any associated data to Cloud Identity or Google Workspace. The default behavior of many external IdPs is to proactively create user accounts in Cloud Identity or Google Workspace, which can inadvertently cause conflicting accounts to be created. By preventing automatic user provisioning for identities with existing consumer accounts, you avoid inadvertently creating conflicting accounts and ensure that consumer accounts can be transferred correctly.

If you have identified consumer accounts without a matching identity in the external IdP that you consider legitimate and that you want to migrate to Cloud Identity or Google Workspace, then you also have to make sure that your federation configuration does not interfere with your ability to migrate these consumer accounts.

Requirement: Prevent deletion of migrated accounts without a matching identity in the external IdP. Justification: If you have a user account in Cloud Identity or Google Workspace that does not have a matching identity in your external IdP, then your IdP might consider this user account orphaned and might suspend or delete it. By preventing your external IdP from suspending or deleting migrated accounts without a matching identity in the external IdP, you avoid losing the configuration and data associated with the affected accounts and ensure that you can reconcile those accounts manually.

Make Microsoft Entra ID (formerly Azure AD) federation safe for account consolidation

If you plan to federate Cloud Identity or Google Workspace with Microsoft Entra ID (formerly Azure AD), you can use the Google Workspace gallery app. Note: The Google Workspace gallery app from the Microsoft Azure marketplace is a Microsoft product and is neither maintained nor supported by Google. When you enable provisioning, Microsoft Entra ID ignores existing accounts in Cloud Identity or Google Workspace that don't have a counterpart in Microsoft Entra ID, so the requirement to prevent deletion of migrated accounts without a matching identity in the external IdP is always met. Depending on how you configure the gallery app, you must still ensure that you do the following: Allow single sign-on for identities with consumer accounts. Prevent automatic user provisioning for identities with consumer accounts. There are multiple ways to meet these requirements. Each approach has advantages and disadvantages.

Approach 1: Don't configure provisioning

In this approach, you configure the gallery app to handle single sign-on, but you don't configure automatic user provisioning. By not configuring user provisioning, you prevent automatic user provisioning for identities with consumer accounts. To allow single sign-on for identities with consumer accounts, assign the app to all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. That user can then immediately use single sign-on. For users who don't have a user account in Cloud Identity or Google Workspace, you have to create one manually. Although this approach meets the requirements and is the least complex to set up, it comes with the limitation that any attribute changes or user suspensions performed in Microsoft Entra ID won't be propagated to Cloud Identity or Google Workspace.
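If you have many identities to assign to the gallery app, you can script the assignment instead of assigning users one by one in the Microsoft Entra admin center. The following minimal sketch, which is not part of the official procedure, calls the Microsoft Graph API to create one app role assignment per user. It assumes that you already obtained an OAuth 2.0 access token that is permitted to manage app role assignments, that SERVICE_PRINCIPAL_ID is the object ID of the gallery app's service principal in your tenant, and that the app uses the default app role; every identifier shown is a placeholder.

import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"                          # placeholder: token with app role assignment permissions
SERVICE_PRINCIPAL_ID = "<service-principal-object-id>"   # placeholder: object ID of the Google Workspace gallery app
DEFAULT_APP_ROLE_ID = "00000000-0000-0000-0000-000000000000"  # default access role (assumption)

def assign_user_to_gallery_app(user_object_id: str) -> None:
    """Create an app role assignment so that this identity can use single sign-on to Google."""
    response = requests.post(
        f"{GRAPH_BASE}/users/{user_object_id}/appRoleAssignments",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "principalId": user_object_id,        # the user being assigned
            "resourceId": SERVICE_PRINCIPAL_ID,   # the gallery app's service principal
            "appRoleId": DEFAULT_APP_ROLE_ID,
        },
        timeout=30,
    )
    response.raise_for_status()

# Assign every identity that might eventually need access to Google services,
# including identities whose consumer accounts haven't been migrated yet.
for object_id in ["<user-object-id-1>", "<user-object-id-2>"]:  # placeholders
    assign_user_to_gallery_app(object_id)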
Approach 2: Two apps with manual assignment

In this approach, you overcome the limitation of having to manually create user accounts in Google Workspace or Cloud Identity for users who don't have an existing account. The idea is to use two gallery apps, one for provisioning and one for single sign-on: The first app is used exclusively for provisioning users and groups and has single sign-on disabled. By assigning users to this app, you control which accounts are provisioned to Cloud Identity or Google Workspace. The second app is used exclusively for single sign-on and is not authorized to provision users. By assigning users to this app, you control which users are allowed to sign on. Using these two apps, assign users as follows: Assign all identities that eventually need access to Google services to the single sign-on app. Include identities with existing consumer accounts so that you allow single sign-on for identities with consumer accounts. When assigning identities to the provisioning app, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. This way, you prevent automatic user provisioning for identities with consumer accounts. Important: Any mistake in assignment can immediately lead to a conflicting account being created, which makes this approach more risky than the other approaches.

Approach 3: Two apps with user creation disabled

When configuring provisioning, you need to authorize Microsoft Entra ID to access Cloud Identity or Google Workspace by using a Cloud Identity or Google Workspace account. Normally, it's best to use a dedicated super-admin account for this purpose, because super-admin accounts are exempt from single sign-on (that is, any SSO configuration doesn't apply to them; they continue to use passwords to sign in). However, for this scenario, you can have Microsoft Entra ID use a more restricted account for the migration, one that doesn't allow Microsoft Entra ID to create users. That way, you effectively prevent Microsoft Entra ID from automatically provisioning user accounts for identities with consumer accounts, regardless of which users are assigned to the provisioning app. A restricted administrator account in Cloud Identity or Google Workspace should have only the following privileges: Organizational Units > Read; Users > Read; Users > Update; Groups. Note: Disallowing Microsoft Entra ID from creating user accounts won't stop it from attempting to create them. Therefore, you're likely to find errors in the Microsoft Entra ID audit logs indicating that it failed to create user accounts in Cloud Identity or Google Workspace. A downside of this approach is that for users without unmanaged accounts, you must manually create accounts in Cloud Identity or Google Workspace.
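To create such a restricted administrator account, you can define a custom admin role in the Google Admin console, or script it with the Admin SDK Directory API. The following sketch is only an illustration under assumptions: it assumes that credentials authorized for the Admin SDK role management scope are available, and the privilege names and service IDs in role_body are placeholders that you must replace with the exact values returned by the privileges listing. After creating the role, assign it to the dedicated account that Microsoft Entra ID uses for provisioning.

from googleapiclient.discovery import build
import google.auth

# Assumption: Application Default Credentials with the role management scope
# are available in this environment.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/admin.directory.rolemanagement"]
)
directory = build("admin", "directory_v1", credentials=credentials)

# Step 1: List the available privileges and note the exact privilegeName/serviceId pairs
# that correspond to "Organizational Units > Read", "Users > Read", "Users > Update",
# and "Groups" in the Admin console. (Only top-level privileges are printed here.)
privileges = directory.privileges().list(customer="my_customer").execute()
for privilege in privileges.get("items", []):
    print(privilege.get("privilegeName"), privilege.get("serviceId"))

# Step 2: Create a custom role with only those privileges. The names and service IDs
# below are placeholders; replace them with the values printed in step 1.
role_body = {
    "roleName": "Entra ID provisioning (restricted)",
    "rolePrivileges": [
        {"privilegeName": "<org-units-read-privilege>", "serviceId": "<service-id>"},
        {"privilegeName": "<users-read-privilege>", "serviceId": "<service-id>"},
        {"privilegeName": "<users-update-privilege>", "serviceId": "<service-id>"},
        {"privilegeName": "<groups-privilege>", "serviceId": "<service-id>"},
    ],
}
created_role = directory.roles().insert(customer="my_customer", body=role_body).execute()
print("Created role with ID:", created_role["roleId"])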
Federate with Microsoft Entra ID: Comparison

The following table summarizes the approaches against five criteria: (1) Allow single sign-on for identities with consumer accounts, (2) Prevent automatic user provisioning for identities with consumer accounts, (3) Prevent deletion of migrated accounts without a matching identity in the external IdP, (4) Auto-provision new accounts, (5) Auto-update migrated accounts.
Approach 1 (Don't configure provisioning): (1) ✅, (2) ✅, (3) ✅, (4) X, (5) X
Approach 2 (Two apps with manual assignment): (1) ✅, (2) prone to manual error, (3) ✅, (4) ✅, (5) ✅
Approach 3 (Two apps with user creation disabled): (1) ✅, (2) ✅, (3) ✅, (4) X, (5) ✅

Make Active Directory federation safe for account consolidation

If you plan to federate Cloud Identity or Google Workspace with Active Directory, you can use Google Cloud Directory Sync (GCDS) and Active Directory Federation Services (AD FS). When you configure GCDS and AD FS, you have to make sure to do the following: Allow single sign-on for identities with consumer accounts. Prevent automatic user provisioning for identities with consumer accounts. Prevent the deletion of migrated accounts without a matching identity in the external IdP. There are multiple ways to meet these requirements. Each approach has advantages and disadvantages.

Approach 1: Disable GCDS

In this approach, you set up single sign-on with AD FS, but you don't enable GCDS until you've finished migrating unmanaged user accounts. By disabling GCDS, you prevent automatic user provisioning for identities with consumer accounts. To allow single sign-on for identities with consumer accounts, create a custom access control policy in AD FS and assign all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. By using the custom access control policy, you ensure that the user can immediately use single sign-on. For users who don't have a user account in Cloud Identity or Google Workspace, you have to create one manually. Although this approach meets the requirements and is the least complex to set up, it comes with the limitation that any attribute changes or user suspensions performed in Active Directory won't be propagated to Cloud Identity or Google Workspace.

Approach 2: GCDS with manual assignment

In this approach, you overcome the limitation of having to manually create user accounts in Cloud Identity or Google Workspace for users who don't have an existing account: As in Approach 1, you allow single sign-on for identities with consumer accounts by creating a custom access control policy in AD FS and assigning all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. Create a group in Active Directory that reflects the user accounts that you want GCDS to provision automatically. In the list of members, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. Configure GCDS to provision user accounts only for identities that are members of this group. This way, you prevent automatic user provisioning for identities with consumer accounts. A key limitation of this approach is that you cannot prevent the deletion of migrated accounts without a matching identity in the external IdP. The approach is therefore applicable only if you don't have any consumer accounts without a matching identity in the external IdP. Important: Any mistake in assignment can lead to a conflicting account being created, which makes this approach more risky than the other approaches.
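Building the membership list for this group is mostly a set-difference exercise. The following sketch is a hypothetical illustration: it assumes that you exported the unmanaged consumer accounts to a file named unmanaged_accounts.csv (for example, from the transfer tool) and the Active Directory identities that need access to Google services to directory_users.csv, each with an email column; both file names and the column name are placeholders.

import csv

def read_emails(path: str) -> set[str]:
    """Read the 'email' column of a CSV file into a normalized set."""
    with open(path, newline="", encoding="utf-8") as handle:
        return {row["email"].strip().lower() for row in csv.DictReader(handle)}

consumer_accounts = read_emails("unmanaged_accounts.csv")   # placeholder export of consumer accounts
all_identities = read_emails("directory_users.csv")         # placeholder export of directory identities

# Identities that can be added to the GCDS provisioning group right away.
safe_to_provision = sorted(all_identities - consumer_accounts)

# Identities that must stay out of the group until their consumer account is transferred;
# provisioning them now would create conflicting accounts.
exclude_until_migrated = sorted(all_identities & consumer_accounts)

print(f"Add to the provisioning group now: {len(safe_to_provision)}")
print(f"Exclude until their consumer account is migrated: {len(exclude_until_migrated)}")
for email in exclude_until_migrated:
    print("  ", email)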
Approach 3: Disallow GCDS to create users

When configuring provisioning, you must authorize GCDS to access Cloud Identity or Google Workspace. Normally, it's best to use a dedicated super-admin account for this purpose, because such accounts are exempt from single sign-on (that is, any SSO configuration doesn't apply to them; they continue to use passwords to sign in). However, for this scenario, you can have GCDS use a more restricted account for the migration, one that doesn't allow it to create users. That way, you effectively prevent GCDS from automatically provisioning user accounts for identities with consumer accounts and from deleting migrated accounts without a matching identity in the external IdP. A restricted administrator account in Cloud Identity or Google Workspace should have only the following privileges: Organizational Units; Users > Read; Users > Update; Groups; Schema Management; Domain Management. Note: Disallowing GCDS from creating user accounts won't stop it from attempting to create them. Therefore, you're likely to find errors in the GCDS log indicating that it failed to create user accounts in Cloud Identity or Google Workspace. A downside of this approach is that for users without unmanaged accounts, you must manually create accounts in Cloud Identity or Google Workspace.

Federate with Active Directory: Comparison

The following table summarizes the approaches against the same five criteria: (1) Allow single sign-on for identities with consumer accounts, (2) Prevent automatic user provisioning for identities with consumer accounts, (3) Prevent deletion of migrated accounts without a matching identity in the external IdP, (4) Auto-provision new accounts, (5) Auto-update migrated accounts.
Approach 1 (Disable GCDS): (1) ✅, (2) ✅, (3) ✅, (4) X, (5) X
Approach 2 (GCDS with manual assignment): (1) ✅, (2) prone to manual error, (3) X, (4) ✅, (5) ✅
Approach 3 (Disallow GCDS to create users): (1) ✅, (2) ✅, (3) ✅, (4) X, (5) ✅

Make Okta federation safe for account consolidation

To federate Cloud Identity or Google Workspace with Okta, you can use the Google Workspace app from the Okta app catalog. This app can handle single sign-on and provision users and groups to Cloud Identity or Google Workspace. When you use the Google Workspace app for provisioning, Okta ignores any existing users in Cloud Identity or Google Workspace that don't have a counterpart in Okta, so the requirement to prevent deletion of migrated accounts without a matching identity in the external IdP is always met. Depending on how you configure Okta, you must still do the following: Allow single sign-on for identities with consumer accounts. Prevent automatic user provisioning for identities with consumer accounts. There are multiple ways to meet these requirements. Each approach has advantages and disadvantages.

Approach 1: Don't configure provisioning

In this approach, you configure the Google Workspace app to handle single sign-on but don't configure provisioning at all. By not configuring user provisioning, you prevent automatic user provisioning for identities with consumer accounts. To allow single sign-on for identities with consumer accounts, assign the app to all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. The Google Workspace or Google Cloud icons appear on the Okta homepage of all identities that have been assigned to the app. However, signing in will fail unless a corresponding user account exists on the Google side. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. That user can then immediately use single sign-on. Although this approach meets the requirements and is the least complex to set up, it comes with the limitation that any attribute changes or user suspensions performed in Okta won't be propagated to Cloud Identity or Google Workspace. Another downside is that you must manually create accounts in Cloud Identity or Google Workspace for all users who don't have an existing consumer account.
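If you need to assign the app to a large number of identities, you can script the assignments with the Okta Users and Application Users APIs instead of using the Okta Admin console. The following sketch is an illustration only: the org URL, API token, and application ID are placeholders, and it assumes a token that is allowed to manage application assignments. The same pattern is also useful later in approaches where you assign a user to an app only after their consumer account has been transferred.

import requests

OKTA_ORG = "https://your-org.okta.com"        # placeholder Okta org URL
API_TOKEN = "<okta-api-token>"                # placeholder API token
APP_ID = "<google-workspace-app-id>"          # placeholder ID of the Google Workspace app instance
HEADERS = {"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"}

def assign_user_to_app(login: str) -> None:
    """Assign an Okta user, looked up by login, to the Google Workspace app."""
    # Look up the Okta user ID from the login (usually the email address).
    user_response = requests.get(f"{OKTA_ORG}/api/v1/users/{login}", headers=HEADERS, timeout=30)
    user_response.raise_for_status()
    user_id = user_response.json()["id"]

    # Create the application assignment so that the user can use single sign-on.
    assignment = requests.post(
        f"{OKTA_ORG}/api/v1/apps/{APP_ID}/users",
        headers=HEADERS,
        json={"id": user_id, "scope": "USER"},
        timeout=30,
    )
    assignment.raise_for_status()

# Assign every identity that might eventually need access to Google services.
for login in ["alice@example.com", "bob@example.com"]:  # placeholder identities
    assign_user_to_app(login)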
Approach 2: Provision with manual assignment

In this approach, you configure the Google Workspace app to handle single sign-on and provisioning, but you enable only the following provisioning features: Create users, Update user attributes, and Deactivate users. When you assign identities to the app, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. This way, you prevent automatic user provisioning for identities with consumer accounts. As soon as a user accepts a transfer request, assign the user to the app so that they can use single sign-on and access Google Workspace or Google Cloud. One downside of this approach is that any mistake in assignment can immediately lead to a conflicting account being created, which makes this approach much riskier than some of the other approaches. Another downside is that it causes temporary lockouts of migrated accounts: after accepting a transfer request, a user has to perform any subsequent sign-ons through Okta, and these sign-on attempts fail until you have assigned the user to the app in Okta.

Approach 3: Provision with user creation disabled

In this approach, you configure the Google Workspace app to handle single sign-on and provisioning, but you enable only the following provisioning features: Update user attributes and Deactivate users. Leave the Create Users option disabled and assign all identities that eventually need access to Google services to the app. Include identities with existing consumer accounts so that you allow single sign-on for identities with consumer accounts. By disallowing Okta from creating accounts, you prevent Okta from automatically provisioning user accounts for identities with consumer accounts. At the same time, this configuration still lets Okta propagate attribute changes and user suspensions to Cloud Identity or Google Workspace for those users who have a corresponding Google Account. For identities that don't have a corresponding user account in Cloud Identity or Google Workspace, Okta might display an error message in the Okta Admin console. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. That user can then immediately use single sign-on. Although the user account is functional at this point, Okta might not display an icon on the user's home page yet and might instead continue to display the error message in the Admin console. To fix this, retry the assignment task in the Okta Admin console. This approach successfully prevents Okta from automatically provisioning user accounts for identities with consumer accounts, but it still allows single sign-on for identities with consumer accounts. The approach is also less prone to accidental misconfiguration than the second approach. One downside is still that for users without existing consumer accounts, you must manually create user accounts in Cloud Identity or Google Workspace.
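For the identities that you have to create manually, you can add the accounts one by one in the Admin console, or script the creation with the Admin SDK Directory API. The sketch below is a hedged illustration: it assumes that Application Default Credentials with the user management scope are available, and the email address and names are placeholders. The random password is only there to satisfy the API; with single sign-on handled by your external IdP, users don't use it for regular sign-in.

import secrets
from googleapiclient.discovery import build
import google.auth

# Assumption: Application Default Credentials that are authorized for the
# Admin SDK user management scope are available in this environment.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/admin.directory.user"]
)
directory = build("admin", "directory_v1", credentials=credentials)

def create_user(primary_email: str, given_name: str, family_name: str) -> None:
    """Pre-create a Cloud Identity or Google Workspace account for an identity
    that has no existing consumer account, so that single sign-on can succeed."""
    body = {
        "primaryEmail": primary_email,
        "name": {"givenName": given_name, "familyName": family_name},
        # A password is required at creation time; generate a random one because
        # the user signs in through the external IdP.
        "password": secrets.token_urlsafe(16),
    }
    directory.users().insert(body=body).execute()

create_user("carol@example.com", "Carol", "Example")  # placeholder identity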
Approach 4: Two apps with manual assignment

You can overcome some of the disadvantages of the previous approaches by using two apps, one for provisioning and one for single sign-on: Configure one instance of the Google Workspace app to handle provisioning only. The single sign-on functionality of this app is not used. By assigning users to this app, you control which accounts are provisioned to Cloud Identity or Google Workspace. You can ensure that this app is effectively hidden from your users by enabling the Do not display application icon to users option. Configure another instance of the Google Workspace app for single sign-on purposes only. By assigning users to this app, you control who is allowed to sign on. Using these two apps, assign users as follows: Assign all identities that eventually need access to Google services to the single sign-on app. Include identities with existing consumer accounts so that you allow single sign-on for identities with consumer accounts. When assigning identities to the provisioning app, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. This way, you prevent automatic user provisioning for identities with consumer accounts. Whenever a user accepts a transfer request, assign the user to the provisioning app as well. Important: Similar to Approach 2, a downside of this approach is that any mistake in assignment can immediately lead to a conflicting account being created, which makes this approach substantially more risky than the other approaches.

Federate with Okta: Comparison

The following table summarizes the approaches against the same five criteria: (1) Allow single sign-on for identities with consumer accounts, (2) Prevent automatic user provisioning for identities with consumer accounts, (3) Prevent deletion of migrated accounts without a matching identity in the external IdP, (4) Auto-provision new accounts, (5) Auto-update migrated accounts.
Approach 1 (Don't configure provisioning): (1) ✅, (2) ✅, (3) ✅, (4) X, (5) X
Approach 2 (Provision with manual assignment): (1) X, (2) risky, (3) ✅, (4) ✅, (5) ✅
Approach 3 (Provision with user creation disabled): (1) ✅, (2) ✅, (3) ✅, (4) X, (5) ✅
Approach 4 (Two apps with manual assignment): (1) ✅, (2) risky, (3) ✅, (4) ✅, (5) ✅

What's next

Review how you can set up federation with Active Directory or Microsoft Entra ID. Start your onboarding process by preparing Cloud Identity or Google Workspace accounts.