---
# ----------------------------------------------------------------------------
#
# *** AUTO GENERATED CODE *** Type: MMv1 ***
#
# ----------------------------------------------------------------------------
#
# This file is automatically generated by Magic Modules and manual
# changes will be clobbered when the file is regenerated.
#
# Please read more about how to change this file in
# .github/CONTRIBUTING.md.
#
# ----------------------------------------------------------------------------
subcategory: "Vertex AI"
description: |-
  A DeploymentResourcePool can be shared by multiple deployed models,
  whose underlying specification consists of dedicated resources.
---
# google_vertex_ai_deployment_resource_pool
A DeploymentResourcePool can be shared by multiple deployed models, whose underlying specification consists of dedicated resources.
To get more information about DeploymentResourcePool, see:
* [API documentation](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.deploymentResourcePools)
<div class = "oics-button" style="float: right; margin: 0 0 -15px">
<a href="https://console.cloud.google.com/cloudshell/open?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fdocs-examples.git&cloudshell_image=gcr.io%2Fcloudshell-images%2Fcloudshell%3Alatest&cloudshell_print=.%2Fmotd&cloudshell_tutorial=.%2Ftutorial.md&cloudshell_working_dir=vertex_ai_deployment_resource_pool&open_in_editor=main.tf" target="_blank">
<img alt="Open in Cloud Shell" src="//gstatic.com/cloudssh/images/open-btn.svg" style="max-height: 44px; margin: 32px auto; max-width: 100%;">
</a>
</div>
## Example Usage - Vertex Ai Deployment Resource Pool
```hcl
resource "google_vertex_ai_deployment_resource_pool" "deployment_resource_pool" {
region = "us-central1"
name = "example-deployment-resource-pool"
dedicated_resources {
machine_spec {
machine_type = "n1-standard-4"
accelerator_type = "NVIDIA_TESLA_K80"
accelerator_count = 1
}
min_replica_count = 1
max_replica_count = 2
autoscaling_metric_specs {
metric_name = "aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle"
target = 60
}
}
}
```
## Argument Reference
The following arguments are supported:
* `name` -
(Required)
The resource name of the deployment resource pool. The maximum length is 63 characters, and valid characters are `/^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/`.
- - -
* `dedicated_resources` -
(Optional)
The underlying dedicated resources that the deployment resource pool uses.
Structure is [documented below](#nested_dedicated_resources).
* `region` -
(Optional)
The region of the deployment resource pool, e.g. `us-central1`.
* `project` - (Optional) The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
<a name="nested_dedicated_resources"></a>The `dedicated_resources` block supports:
* `machine_spec` -
(Required)
The specification of a single machine used for prediction.
Structure is [documented below](#nested_machine_spec).
* `min_replica_count` -
(Required)
The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
* `max_replica_count` -
(Optional)
The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
* `autoscaling_metric_specs` -
(Optional)
A list of metric specifications that override a resource utilization metric.
Structure is [documented below](#nested_autoscaling_metric_specs).
<a name="nested_machine_spec"></a>The `machine_spec` block supports:
* `machine_type` -
(Optional)
The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types).
* `accelerator_type` -
(Optional)
The type of accelerator(s) that may be attached to the machine as per accelerator_count. See possible values [here](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#AcceleratorType).
* `accelerator_count` -
(Optional)
The number of accelerators to attach to the machine.
<a name="nested_autoscaling_metric_specs"></a>The `autoscaling_metric_specs` block supports:
* `metric_name` -
(Required)
The resource metric name. Supported metrics for online prediction: `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` and `aiplatform.googleapis.com/prediction/online/cpu/utilization`.
* `target` -
(Optional)
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the number of machine replicas changes. The default value is 60 (representing 60%) if not provided.
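
For instance, a pool without accelerators can autoscale on CPU utilization instead of accelerator duty cycle. The following is a minimal sketch using only the arguments documented above; the resource name, pool name, machine type, replica counts, and target value are illustrative placeholders:

```hcl
resource "google_vertex_ai_deployment_resource_pool" "cpu_pool" {
  region = "us-central1"
  name   = "example-cpu-resource-pool"

  dedicated_resources {
    # CPU-only machine spec: accelerator_type and accelerator_count are omitted.
    machine_spec {
      machine_type = "n1-standard-8"
    }

    min_replica_count = 1
    max_replica_count = 4

    # Scale on CPU utilization rather than accelerator duty cycle.
    autoscaling_metric_specs {
      metric_name = "aiplatform.googleapis.com/prediction/online/cpu/utilization"
      target      = 70
    }
  }
}
```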
## Attributes Reference
In addition to the arguments listed above, the following computed attributes are exported:
* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{region}}/deploymentResourcePools/{{name}}`
* `create_time` -
The timestamp when the DeploymentResourcePool was created, in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.
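
These computed attributes can be referenced elsewhere in a configuration, for example exposed as outputs. This sketch assumes the resource label `deployment_resource_pool` from the example above:

```hcl
output "deployment_resource_pool_id" {
  value = google_vertex_ai_deployment_resource_pool.deployment_resource_pool.id
}

output "deployment_resource_pool_create_time" {
  value = google_vertex_ai_deployment_resource_pool.deployment_resource_pool.create_time
}
```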
## Timeouts
This resource provides the following
[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options:
- `create` - Default is 20 minutes.
- `delete` - Default is 20 minutes.
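
These defaults can be overridden with a `timeouts` block inside the resource, for example (values shown are illustrative):

```hcl
resource "google_vertex_ai_deployment_resource_pool" "deployment_resource_pool" {
  # ... other configuration ...

  timeouts {
    create = "30m"
    delete = "30m"
  }
}
```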
## Import
DeploymentResourcePool can be imported using any of these accepted formats:
* `projects/{{project}}/locations/{{region}}/deploymentResourcePools/{{name}}`
* `{{project}}/{{region}}/{{name}}`
* `{{region}}/{{name}}`
* `{{name}}`
In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import DeploymentResourcePool using one of the formats above. For example:
```tf
import {
id = "projects/{{project}}/locations/{{region}}/deploymentResourcePools/{{name}}"
to = google_vertex_ai_deployment_resource_pool.default
}
```
When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), DeploymentResourcePool can be imported using one of the formats above. For example:
```
$ terraform import google_vertex_ai_deployment_resource_pool.default projects/{{project}}/locations/{{region}}/deploymentResourcePools/{{name}}
$ terraform import google_vertex_ai_deployment_resource_pool.default {{project}}/{{region}}/{{name}}
$ terraform import google_vertex_ai_deployment_resource_pool.default {{region}}/{{name}}
$ terraform import google_vertex_ai_deployment_resource_pool.default {{name}}
```
## User Project Overrides
This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override).
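
As a sketch, the override is enabled on the provider block; the `billing_project` value below is a placeholder for the project that should be billed and checked for quota:

```hcl
provider "google" {
  user_project_override = true
  billing_project       = "my-quota-project"
}
```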