---
# ----------------------------------------------------------------------------
#
# *** AUTO GENERATED CODE *** Type: MMv1 ***
#
# ----------------------------------------------------------------------------
#
# This file is automatically generated by Magic Modules and manual
# changes will be clobbered when the file is regenerated.
#
# Please read more about how to change this file in
# .github/CONTRIBUTING.md.
#
# ----------------------------------------------------------------------------
subcategory: "Cloud (Stackdriver) Monitoring"
description: |-
  A description of the conditions under which some aspect of your system is
  considered to be "unhealthy" and the ways to notify people or services
  about this state.
---
# google\_monitoring\_alert\_policy
A description of the conditions under which some aspect of your system is
considered to be "unhealthy" and the ways to notify people or services
about this state.
To get more information about AlertPolicy, see:
* [API documentation](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.alertPolicies)
* How-to Guides
* [Official Documentation](https://cloud.google.com/monitoring/alerts/)
## Example Usage - Monitoring Alert Policy Basic
```hcl
resource "google_monitoring_alert_policy" "alert_policy" {
display_name = "My Alert Policy"
combiner = "OR"
conditions {
display_name = "test condition"
condition_threshold {
filter = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
duration = "60s"
comparison = "COMPARISON_GT"
aggregations {
alignment_period = "60s"
per_series_aligner = "ALIGN_RATE"
}
}
}
user_labels = {
foo = "bar"
}
}
```
## Example Usage - Monitoring Alert Policy Evaluation Missing Data
```hcl
resource "google_monitoring_alert_policy" "alert_policy" {
display_name = "My Alert Policy"
combiner = "OR"
conditions {
display_name = "test condition"
condition_threshold {
filter = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
duration = "60s"
comparison = "COMPARISON_GT"
aggregations {
alignment_period = "60s"
per_series_aligner = "ALIGN_RATE"
}
evaluation_missing_data = "EVALUATION_MISSING_DATA_INACTIVE"
}
}
user_labels = {
foo = "bar"
}
}
```
## Example Usage - Monitoring Alert Policy Forecast Options
```hcl
resource "google_monitoring_alert_policy" "alert_policy" {
display_name = "My Alert Policy"
combiner = "OR"
conditions {
display_name = "test condition"
condition_threshold {
filter = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
duration = "60s"
forecast_options {
forecast_horizon = "3600s"
}
comparison = "COMPARISON_GT"
aggregations {
alignment_period = "60s"
per_series_aligner = "ALIGN_RATE"
}
}
}
user_labels = {
foo = "bar"
}
}
```
## Example Usage - Monitoring Alert Policy Promql Condition
```hcl
resource "google_monitoring_alert_policy" "alert_policy" {
display_name = "My Alert Policy"
combiner = "OR"
conditions {
display_name = "test condition"
condition_prometheus_query_language {
query = "compute_googleapis_com:instance_cpu_usage_time > 0"
duration = "60s"
evaluation_interval = "60s"
alert_rule = "AlwaysOn"
rule_group = "a test"
}
}
alert_strategy {
auto_close = "1800s"
}
}
```
## Argument Reference
The following arguments are supported:
* `display_name` -
(Required)
A short name or phrase used to identify the policy in
dashboards, notifications, and incidents. To avoid confusion, don't use
the same display name for multiple policies in the same project. The
name is limited to 512 Unicode characters.
* `combiner` -
(Required)
How to combine the results of multiple conditions to
determine if an incident should be opened.
Possible values are: `AND`, `OR`, `AND_WITH_MATCHING_RESOURCE`.
* `conditions` -
(Required)
A list of conditions for the policy. The conditions are combined by
AND or OR according to the combiner field. If the combined conditions
evaluate to true, then an incident is created. A policy can have from
one to six conditions.
Structure is [documented below](#nested_conditions).
<a name="nested_conditions"></a>The `conditions` block supports:
* `condition_absent` -
(Optional)
A condition that checks that a time series
continues to receive new data points.
Structure is [documented below](#nested_condition_absent).
* `name` -
(Output)
The unique resource name for this condition.
Its syntax is:
projects/[PROJECT_ID]/alertPolicies/[POLICY_ID]/conditions/[CONDITION_ID]
[CONDITION_ID] is assigned by Stackdriver Monitoring when
the condition is created as part of a new or updated alerting
policy.
* `condition_monitoring_query_language` -
(Optional)
A Monitoring Query Language query that outputs a boolean stream.
Structure is [documented below](#nested_condition_monitoring_query_language).
* `condition_threshold` -
(Optional)
A condition that compares a time series against a
threshold.
Structure is [documented below](#nested_condition_threshold).
* `display_name` -
(Required)
A short name or phrase used to identify the
condition in dashboards, notifications, and
incidents. To avoid confusion, don't use the same
display name for multiple conditions in the same
policy.
* `condition_matched_log` -
(Optional)
A condition that checks for log messages matching given constraints.
If set, no other conditions can be present.
Structure is [documented below](#nested_condition_matched_log).
* `condition_prometheus_query_language` -
(Optional)
A condition type that allows alert policies to be defined using
Prometheus Query Language (PromQL).
The PrometheusQueryLanguageCondition message contains information
from a Prometheus alerting rule and its associated rule group.
Structure is [documented below](#nested_condition_prometheus_query_language).
<a name="nested_condition_absent"></a>The `condition_absent` block supports:
* `aggregations` -
(Optional)
Specifies the alignment of data points in
individual time series as well as how to
combine the retrieved time series together
(such as when aggregating multiple streams
on each resource to a single stream for each
resource or when aggregating streams across
all members of a group of resources).
Multiple aggregations are applied in the
order specified.
Structure is [documented below](#nested_aggregations).
* `trigger` -
(Optional)
The number/percent of time series for which
the comparison must hold in order for the
condition to trigger. If unspecified, then
the condition will trigger if the comparison
is true for any of the time series that have
been identified by filter and aggregations.
Structure is [documented below](#nested_trigger).
* `duration` -
(Required)
The amount of time that a time series must
fail to report new data to be considered
failing. Currently, only values that are a
multiple of a minute--e.g. 60s, 120s, or
300s--are supported.
* `filter` -
(Optional)
A filter that identifies which time series
should be compared with the threshold. The
filter is similar to the one that is
specified in the
MetricService.ListTimeSeries request (that
call is useful to verify the time series
that will be retrieved / processed) and must
specify the metric type and optionally may
contain restrictions on resource type,
resource labels, and metric labels. This
field may not exceed 2048 Unicode characters
in length.
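As an illustrative sketch (the metric filter is borrowed from the examples above; the policy name, display names, and `300s` duration are assumptions), a `condition_absent` block that fires when a time series stops reporting for five minutes might look like:

```hcl
resource "google_monitoring_alert_policy" "absence_policy" {
  display_name = "My Absence Policy"
  combiner     = "OR"
  conditions {
    display_name = "data stopped arriving"
    condition_absent {
      filter   = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
      duration = "300s"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
      trigger {
        count = 1
      }
    }
  }
}
```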
<a name="nested_aggregations"></a>The `aggregations` block supports:
* `per_series_aligner` -
(Optional)
The approach to be used to align
individual time series. Not all
alignment functions may be applied
to all time series, depending on
the metric type and value type of
the original time series.
Alignment may change the metric
type or the value type of the time
series. Time series data must be
aligned in order to perform
cross-time series reduction. If
crossSeriesReducer is specified,
then perSeriesAligner must be
specified and not equal ALIGN_NONE
and alignmentPeriod must be
specified; otherwise, an error is
returned.
Possible values are: `ALIGN_NONE`, `ALIGN_DELTA`, `ALIGN_RATE`, `ALIGN_INTERPOLATE`, `ALIGN_NEXT_OLDER`, `ALIGN_MIN`, `ALIGN_MAX`, `ALIGN_MEAN`, `ALIGN_COUNT`, `ALIGN_SUM`, `ALIGN_STDDEV`, `ALIGN_COUNT_TRUE`, `ALIGN_COUNT_FALSE`, `ALIGN_FRACTION_TRUE`, `ALIGN_PERCENTILE_99`, `ALIGN_PERCENTILE_95`, `ALIGN_PERCENTILE_50`, `ALIGN_PERCENTILE_05`, `ALIGN_PERCENT_CHANGE`.
* `group_by_fields` -
(Optional)
The set of fields to preserve when
crossSeriesReducer is specified.
The groupByFields determine how
the time series are partitioned
into subsets prior to applying the
aggregation function. Each subset
contains time series that have the
same value for each of the
grouping fields. Each individual
time series is a member of exactly
one subset. The crossSeriesReducer
is applied to each subset of time
series. It is not possible to
reduce across different resource
types, so this field implicitly
contains resource.type. Fields not
specified in groupByFields are
aggregated away. If groupByFields
is not specified and all the time
series have the same resource
type, then the time series are
aggregated into a single output
time series. If crossSeriesReducer
is not defined, this field is
ignored.
* `alignment_period` -
(Optional)
The alignment period for per-time
series alignment. If present,
alignmentPeriod must be at least
60 seconds. After per-time series
alignment, each time series will
contain data points only on the
period boundaries. If
perSeriesAligner is not specified
or equals ALIGN_NONE, then this
field is ignored. If
perSeriesAligner is specified and
does not equal ALIGN_NONE, then
this field must be defined;
otherwise an error is returned.
* `cross_series_reducer` -
(Optional)
The approach to be used to combine
time series. Not all reducer
functions may be applied to all
time series, depending on the
metric type and the value type of
the original time series.
Reduction may change the metric
type or value type of the time
series. Time series data must be
aligned in order to perform
cross-time series reduction. If
crossSeriesReducer is specified,
then perSeriesAligner must be
specified and not equal ALIGN_NONE
and alignmentPeriod must be
specified; otherwise, an error is
returned.
Possible values are: `REDUCE_NONE`, `REDUCE_MEAN`, `REDUCE_MIN`, `REDUCE_MAX`, `REDUCE_SUM`, `REDUCE_STDDEV`, `REDUCE_COUNT`, `REDUCE_COUNT_TRUE`, `REDUCE_COUNT_FALSE`, `REDUCE_FRACTION_TRUE`, `REDUCE_PERCENTILE_99`, `REDUCE_PERCENTILE_95`, `REDUCE_PERCENTILE_50`, `REDUCE_PERCENTILE_05`.
<a name="nested_trigger"></a>The `trigger` block supports:
* `percent` -
(Optional)
The percentage of time series that
must fail the predicate for the
condition to be triggered.
* `count` -
(Optional)
The absolute number of time series
that must fail the predicate for the
condition to be triggered.
<a name="nested_condition_monitoring_query_language"></a>The `condition_monitoring_query_language` block supports:
* `query` -
(Required)
Monitoring Query Language query that outputs a boolean stream.
* `duration` -
(Required)
The amount of time that a time series must
violate the threshold to be considered
failing. Currently, only values that are a
multiple of a minute--e.g., 0, 60, 120, or
300 seconds--are supported. If an invalid
value is given, an error will be returned.
When choosing a duration, it is useful to
keep in mind the frequency of the underlying
time series data (which may also be affected
by any alignments specified in the
aggregations field); a good duration is long
enough so that a single outlier does not
generate spurious alerts, but short enough
that unhealthy states are detected and
alerted on quickly.
* `trigger` -
(Optional)
The number/percent of time series for which
the comparison must hold in order for the
condition to trigger. If unspecified, then
the condition will trigger if the comparison
is true for any of the time series that have
been identified by filter and aggregations,
or by the ratio, if denominator_filter and
denominator_aggregations are specified.
Structure is [documented below](#nested_trigger).
* `evaluation_missing_data` -
(Optional)
A condition control that determines how
metric-threshold conditions are evaluated when
data stops arriving.
Possible values are: `EVALUATION_MISSING_DATA_INACTIVE`, `EVALUATION_MISSING_DATA_ACTIVE`, `EVALUATION_MISSING_DATA_NO_OP`.
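A minimal sketch of a `condition_monitoring_query_language` condition; the MQL query text itself is an assumption and should be replaced with a query appropriate to your metrics:

```hcl
resource "google_monitoring_alert_policy" "mql_policy" {
  display_name = "My MQL Policy"
  combiner     = "OR"
  conditions {
    display_name = "MQL condition"
    condition_monitoring_query_language {
      # Illustrative MQL; adjust the fetch/condition to your own metric.
      query    = "fetch gce_instance | metric 'compute.googleapis.com/instance/cpu/utilization' | group_by 1m, [value_utilization_mean: mean(value.utilization)] | every 1m | condition val() > 0.9"
      duration = "60s"
      trigger {
        count = 1
      }
    }
  }
}
```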
<a name="nested_trigger"></a>The `trigger` block supports:
* `percent` -
(Optional)
The percentage of time series that
must fail the predicate for the
condition to be triggered.
* `count` -
(Optional)
The absolute number of time series
that must fail the predicate for the
condition to be triggered.
<a name="nested_condition_threshold"></a>The `condition_threshold` block supports:
* `threshold_value` -
(Optional)
A value against which to compare the time
series.
* `denominator_filter` -
(Optional)
A filter that identifies a time series that
should be used as the denominator of a ratio
that will be compared with the threshold. If
a denominator_filter is specified, the time
series specified by the filter field will be
used as the numerator. The filter is similar
to the one that is specified in the
MetricService.ListTimeSeries request (that
call is useful to verify the time series
that will be retrieved / processed) and must
specify the metric type and optionally may
contain restrictions on resource type,
resource labels, and metric labels. This
field may not exceed 2048 Unicode characters
in length.
* `denominator_aggregations` -
(Optional)
Specifies the alignment of data points in
individual time series selected by
denominatorFilter as well as how to combine
the retrieved time series together (such as
when aggregating multiple streams on each
resource to a single stream for each
resource or when aggregating streams across
all members of a group of resources). When
computing ratios, the aggregations and
denominator_aggregations fields must use the
same alignment period and produce time
series that have the same periodicity and
labels. This field is similar to the one in
the MetricService.ListTimeSeries request. It
is advisable to use the ListTimeSeries
method when debugging this field.
Structure is [documented below](#nested_denominator_aggregations).
* `duration` -
(Required)
The amount of time that a time series must
violate the threshold to be considered
failing. Currently, only values that are a
multiple of a minute--e.g., 0, 60, 120, or
300 seconds--are supported. If an invalid
value is given, an error will be returned.
When choosing a duration, it is useful to
keep in mind the frequency of the underlying
time series data (which may also be affected
by any alignments specified in the
aggregations field); a good duration is long
enough so that a single outlier does not
generate spurious alerts, but short enough
that unhealthy states are detected and
alerted on quickly.
* `forecast_options` -
(Optional)
When this field is present, the `MetricThreshold`
condition forecasts whether the time series is
predicted to violate the threshold within the
`forecastHorizon`. When this field is not set, the
`MetricThreshold` tests the current value of the
timeseries against the threshold.
Structure is [documented below](#nested_forecast_options).
* `comparison` -
(Required)
The comparison to apply between the time
series (indicated by filter and aggregation)
and the threshold (indicated by
threshold_value). The comparison is applied
on each time series, with the time series on
the left-hand side and the threshold on the
right-hand side. Only COMPARISON_LT and
COMPARISON_GT are supported currently.
Possible values are: `COMPARISON_GT`, `COMPARISON_GE`, `COMPARISON_LT`, `COMPARISON_LE`, `COMPARISON_EQ`, `COMPARISON_NE`.
* `trigger` -
(Optional)
The number/percent of time series for which
the comparison must hold in order for the
condition to trigger. If unspecified, then
the condition will trigger if the comparison
is true for any of the time series that have
been identified by filter and aggregations,
or by the ratio, if denominator_filter and
denominator_aggregations are specified.
Structure is [documented below](#nested_trigger).
* `aggregations` -
(Optional)
Specifies the alignment of data points in
individual time series as well as how to
combine the retrieved time series together
(such as when aggregating multiple streams
on each resource to a single stream for each
resource or when aggregating streams across
all members of a group of resources).
Multiple aggregations are applied in the
order specified. This field is similar to the
one in the MetricService.ListTimeSeries
request. It is advisable to use the
ListTimeSeries method when debugging this
field.
Structure is [documented below](#nested_aggregations).
* `filter` -
(Optional)
A filter that identifies which time series
should be compared with the threshold. The
filter is similar to the one that is
specified in the
MetricService.ListTimeSeries request (that
call is useful to verify the time series
that will be retrieved / processed) and must
specify the metric type and optionally may
contain restrictions on resource type,
resource labels, and metric labels. This
field may not exceed 2048 Unicode characters
in length.
* `evaluation_missing_data` -
(Optional)
A condition control that determines how
metric-threshold conditions are evaluated when
data stops arriving.
Possible values are: `EVALUATION_MISSING_DATA_INACTIVE`, `EVALUATION_MISSING_DATA_ACTIVE`, `EVALUATION_MISSING_DATA_NO_OP`.
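The sketch below shows how `denominator_filter` and `denominator_aggregations` pair with `filter` and `aggregations` to alert on a ratio. The load-balancer metric, resource type, and label filter are illustrative assumptions; the important point is that both aggregation blocks use the same alignment period and produce matching time series:

```hcl
resource "google_monitoring_alert_policy" "ratio_policy" {
  display_name = "My Ratio Policy"
  combiner     = "OR"
  conditions {
    display_name = "error ratio above 5%"
    condition_threshold {
      # Numerator: 5xx responses (metric and label filter are illustrative).
      filter             = "metric.type=\"loadbalancing.googleapis.com/https/request_count\" AND resource.type=\"https_lb_rule\" AND metric.labels.response_code_class = 500"
      # Denominator: all responses for the same metric.
      denominator_filter = "metric.type=\"loadbalancing.googleapis.com/https/request_count\" AND resource.type=\"https_lb_rule\""
      duration           = "60s"
      comparison         = "COMPARISON_GT"
      threshold_value    = 0.05
      aggregations {
        alignment_period     = "60s"
        per_series_aligner   = "ALIGN_RATE"
        cross_series_reducer = "REDUCE_SUM"
      }
      denominator_aggregations {
        alignment_period     = "60s"
        per_series_aligner   = "ALIGN_RATE"
        cross_series_reducer = "REDUCE_SUM"
      }
    }
  }
}
```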
<a name="nested_denominator_aggregations"></a>The `denominator_aggregations` block supports:
* `per_series_aligner` -
(Optional)
The approach to be used to align
individual time series. Not all
alignment functions may be applied
to all time series, depending on
the metric type and value type of
the original time series.
Alignment may change the metric
type or the value type of the time
series. Time series data must be
aligned in order to perform
cross-time series reduction. If
crossSeriesReducer is specified,
then perSeriesAligner must be
specified and not equal ALIGN_NONE
and alignmentPeriod must be
specified; otherwise, an error is
returned.
Possible values are: `ALIGN_NONE`, `ALIGN_DELTA`, `ALIGN_RATE`, `ALIGN_INTERPOLATE`, `ALIGN_NEXT_OLDER`, `ALIGN_MIN`, `ALIGN_MAX`, `ALIGN_MEAN`, `ALIGN_COUNT`, `ALIGN_SUM`, `ALIGN_STDDEV`, `ALIGN_COUNT_TRUE`, `ALIGN_COUNT_FALSE`, `ALIGN_FRACTION_TRUE`, `ALIGN_PERCENTILE_99`, `ALIGN_PERCENTILE_95`, `ALIGN_PERCENTILE_50`, `ALIGN_PERCENTILE_05`, `ALIGN_PERCENT_CHANGE`.
* `group_by_fields` -
(Optional)
The set of fields to preserve when
crossSeriesReducer is specified.
The groupByFields determine how
the time series are partitioned
into subsets prior to applying the
aggregation function. Each subset
contains time series that have the
same value for each of the
grouping fields. Each individual
time series is a member of exactly
one subset. The crossSeriesReducer
is applied to each subset of time
series. It is not possible to
reduce across different resource
types, so this field implicitly
contains resource.type. Fields not
specified in groupByFields are
aggregated away. If groupByFields
is not specified and all the time
series have the same resource
type, then the time series are
aggregated into a single output
time series. If crossSeriesReducer
is not defined, this field is
ignored.
* `alignment_period` -
(Optional)
The alignment period for per-time
series alignment. If present,
alignmentPeriod must be at least
60 seconds. After per-time series
alignment, each time series will
contain data points only on the
period boundaries. If
perSeriesAligner is not specified
or equals ALIGN_NONE, then this
field is ignored. If
perSeriesAligner is specified and
does not equal ALIGN_NONE, then
this field must be defined;
otherwise an error is returned.
* `cross_series_reducer` -
(Optional)
The approach to be used to combine
time series. Not all reducer
functions may be applied to all
time series, depending on the
metric type and the value type of
the original time series.
Reduction may change the metric
type or value type of the time
series. Time series data must be
aligned in order to perform
cross-time series reduction. If
crossSeriesReducer is specified,
then perSeriesAligner must be
specified and not equal ALIGN_NONE
and alignmentPeriod must be
specified; otherwise, an error is
returned.
Possible values are: `REDUCE_NONE`, `REDUCE_MEAN`, `REDUCE_MIN`, `REDUCE_MAX`, `REDUCE_SUM`, `REDUCE_STDDEV`, `REDUCE_COUNT`, `REDUCE_COUNT_TRUE`, `REDUCE_COUNT_FALSE`, `REDUCE_FRACTION_TRUE`, `REDUCE_PERCENTILE_99`, `REDUCE_PERCENTILE_95`, `REDUCE_PERCENTILE_50`, `REDUCE_PERCENTILE_05`.
<a name="nested_forecast_options"></a>The `forecast_options` block supports:
* `forecast_horizon` -
(Required)
The length of time into the future to forecast
whether a timeseries will violate the threshold.
If the predicted value is found to violate the
threshold, and the violation is observed in all
forecasts made for the configured `duration`,
then the timeseries is considered to be failing.
<a name="nested_trigger"></a>The `trigger` block supports:
* `percent` -
(Optional)
The percentage of time series that
must fail the predicate for the
condition to be triggered.
* `count` -
(Optional)
The absolute number of time series
that must fail the predicate for the
condition to be triggered.
<a name="nested_aggregations"></a>The `aggregations` block supports:
* `per_series_aligner` -
(Optional)
The approach to be used to align
individual time series. Not all
alignment functions may be applied
to all time series, depending on
the metric type and value type of
the original time series.
Alignment may change the metric
type or the value type of the time
series. Time series data must be
aligned in order to perform
cross-time series reduction. If
crossSeriesReducer is specified,
then perSeriesAligner must be
specified and not equal ALIGN_NONE
and alignmentPeriod must be
specified; otherwise, an error is
returned.
Possible values are: `ALIGN_NONE`, `ALIGN_DELTA`, `ALIGN_RATE`, `ALIGN_INTERPOLATE`, `ALIGN_NEXT_OLDER`, `ALIGN_MIN`, `ALIGN_MAX`, `ALIGN_MEAN`, `ALIGN_COUNT`, `ALIGN_SUM`, `ALIGN_STDDEV`, `ALIGN_COUNT_TRUE`, `ALIGN_COUNT_FALSE`, `ALIGN_FRACTION_TRUE`, `ALIGN_PERCENTILE_99`, `ALIGN_PERCENTILE_95`, `ALIGN_PERCENTILE_50`, `ALIGN_PERCENTILE_05`, `ALIGN_PERCENT_CHANGE`.
* `group_by_fields` -
(Optional)
The set of fields to preserve when
crossSeriesReducer is specified.
The groupByFields determine how
the time series are partitioned
into subsets prior to applying the
aggregation function. Each subset
contains time series that have the
same value for each of the
grouping fields. Each individual
time series is a member of exactly
one subset. The crossSeriesReducer
is applied to each subset of time
series. It is not possible to
reduce across different resource
types, so this field implicitly
contains resource.type. Fields not
specified in groupByFields are
aggregated away. If groupByFields
is not specified and all the time
series have the same resource
type, then the time series are
aggregated into a single output
time series. If crossSeriesReducer
is not defined, this field is
ignored.
* `alignment_period` -
(Optional)
The alignment period for per-time
series alignment. If present,
alignmentPeriod must be at least
60 seconds. After per-time series
alignment, each time series will
contain data points only on the
period boundaries. If
perSeriesAligner is not specified
or equals ALIGN_NONE, then this
field is ignored. If
perSeriesAligner is specified and
does not equal ALIGN_NONE, then
this field must be defined;
otherwise an error is returned.
* `cross_series_reducer` -
(Optional)
The approach to be used to combine
time series. Not all reducer
functions may be applied to all
time series, depending on the
metric type and the value type of
the original time series.
Reduction may change the metric
type or value type of the time
series. Time series data must be
aligned in order to perform
cross-time series reduction. If
crossSeriesReducer is specified,
then perSeriesAligner must be
specified and not equal ALIGN_NONE
and alignmentPeriod must be
specified; otherwise, an error is
returned.
Possible values are: `REDUCE_NONE`, `REDUCE_MEAN`, `REDUCE_MIN`, `REDUCE_MAX`, `REDUCE_SUM`, `REDUCE_STDDEV`, `REDUCE_COUNT`, `REDUCE_COUNT_TRUE`, `REDUCE_COUNT_FALSE`, `REDUCE_FRACTION_TRUE`, `REDUCE_PERCENTILE_99`, `REDUCE_PERCENTILE_95`, `REDUCE_PERCENTILE_50`, `REDUCE_PERCENTILE_05`.
<a name="nested_condition_matched_log"></a>The `condition_matched_log` block supports:
* `filter` -
(Required)
A logs-based filter.
* `label_extractors` -
(Optional)
A map from a label key to an extractor expression, which is used to
extract the value for this label key. Each entry in this map is
a specification for how data should be extracted from log entries that
match filter. Each combination of extracted values is treated as
a separate rule for the purposes of triggering notifications.
Label keys and corresponding values can be used in notifications
generated by this condition.
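A sketch of a log-match policy; the log filter and extractor expression are assumptions. Note that `alert_strategy.notification_rate_limit` (documented further below) is required when a `condition_matched_log` condition is used, and no other condition types may be present:

```hcl
resource "google_monitoring_alert_policy" "log_policy" {
  display_name = "My Log Match Policy"
  combiner     = "OR"
  conditions {
    display_name = "error log entries"
    condition_matched_log {
      # Illustrative logs-based filter.
      filter = "severity>=ERROR AND resource.type=\"gce_instance\""
      label_extractors = {
        instance = "EXTRACT(resource.labels.instance_id)"
      }
    }
  }
  alert_strategy {
    # Required for log-based alert policies.
    notification_rate_limit {
      period = "300s"
    }
    auto_close = "1800s"
  }
}
```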
<a name="nested_condition_prometheus_query_language"></a>The `condition_prometheus_query_language` block supports:
* `query` -
(Required)
The PromQL expression to evaluate. Every evaluation cycle this
expression is evaluated at the current time, and all resultant time
series become pending/firing alerts. This field must not be empty.
* `duration` -
(Optional)
Alerts are considered firing once their PromQL expression evaluated
to be "true" for this long. Alerts whose PromQL expression was not
evaluated to be "true" for long enough are considered pending. The
default value is zero. Must be zero or positive.
* `evaluation_interval` -
(Optional)
How often this rule should be evaluated. Must be a positive multiple
of 30 seconds or missing. The default value is 30 seconds. If this
PrometheusQueryLanguageCondition was generated from a Prometheus
alerting rule, then this value should be taken from the enclosing
rule group.
* `labels` -
(Optional)
Labels to add to or overwrite in the PromQL query result. Label names
must be valid.
Label values can be templatized by using variables. The only available
variable names are the names of the labels in the PromQL result, including
"__name__" and "value". "labels" may be empty. This field is intended to be
used for organizing and identifying the AlertPolicy.
* `rule_group` -
(Optional)
The rule group name of this alert in the corresponding Prometheus
configuration file.
Some external tools may require this field to be populated correctly
in order to refer to the original Prometheus configuration file.
The rule group name and the alert name are necessary to update the
relevant AlertPolicies in case the definition of the rule group changes
in the future. This field is optional.
* `alert_rule` -
(Optional)
The alerting rule name of this alert in the corresponding Prometheus
configuration file.
Some external tools may require this field to be populated correctly
in order to refer to the original Prometheus configuration file.
The rule group name and the alert name are necessary to update the
relevant AlertPolicies in case the definition of the rule group changes
in the future.
This field is optional. If this field is not empty, then it must be a
valid Prometheus label name.
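Building on the PromQL example shown under Example Usage, the sketch below adds a `labels` map to the condition; the label key and value are assumptions:

```hcl
resource "google_monitoring_alert_policy" "promql_policy" {
  display_name = "My PromQL Policy"
  combiner     = "OR"
  conditions {
    display_name = "test condition"
    condition_prometheus_query_language {
      query               = "compute_googleapis_com:instance_cpu_usage_time > 0"
      duration            = "60s"
      evaluation_interval = "60s"
      alert_rule          = "AlwaysOn"
      rule_group          = "a test"
      # Illustrative static label added to the query result.
      labels = {
        team = "infra"
      }
    }
  }
  alert_strategy {
    auto_close = "1800s"
  }
}
```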
- - -
* `enabled` -
(Optional)
Whether or not the policy is enabled. The default is true.
* `notification_channels` -
(Optional)
Identifies the notification channels to which notifications should be
sent when incidents are opened or closed or when new violations occur
on an already opened incident. Each element of this array corresponds
to the name field in each of the NotificationChannel objects that are
returned from the notificationChannels.list method. The syntax of the
entries in this field is
`projects/[PROJECT_ID]/notificationChannels/[CHANNEL_ID]`
* `alert_strategy` -
(Optional)
Control over how this alert policy's notification channels are notified.
Structure is [documented below](#nested_alert_strategy).
* `user_labels` -
(Optional)
This field is intended to be used for organizing and identifying the AlertPolicy
objects. The field can contain up to 64 entries. Each key and value is limited
to 63 Unicode characters or 128 bytes, whichever is smaller. Labels and values
can contain only lowercase letters, numerals, underscores, and dashes. Keys
must begin with a letter.
* `severity` -
(Optional)
The severity of an alert policy indicates how important incidents generated
by that policy are. The severity level will be displayed on the Incident
detail page and in notifications.
Possible values are: `CRITICAL`, `ERROR`, `WARNING`.
* `documentation` -
(Optional)
Documentation that is included with notifications and incidents related
to this policy. Best practice is for the documentation to include information
to help responders understand, mitigate, escalate, and correct the underlying
problems detected by the alerting policy. Notification channels that have
limited capacity might not show this documentation.
Structure is [documented below](#nested_documentation).
* `project` - (Optional) The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
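The sketch below wires the policy-level arguments above (`enabled`, `severity`, `notification_channels`, `user_labels`) into a single policy. The `google_monitoring_notification_channel` resource and its email address are assumptions, included only to produce a channel name to reference:

```hcl
resource "google_monitoring_notification_channel" "email" {
  display_name = "On-call email"
  type         = "email"
  labels = {
    email_address = "oncall@example.com"
  }
}

resource "google_monitoring_alert_policy" "notified_policy" {
  display_name          = "My Notified Policy"
  combiner              = "OR"
  enabled               = true
  severity              = "WARNING"
  notification_channels = [google_monitoring_notification_channel.email.name]
  conditions {
    display_name = "test condition"
    condition_threshold {
      filter     = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
      duration   = "60s"
      comparison = "COMPARISON_GT"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }
  user_labels = {
    foo = "bar"
  }
}
```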
<a name="nested_alert_strategy"></a>The `alert_strategy` block supports:
* `notification_rate_limit` -
(Optional)
Required for alert policies with a LogMatch condition.
This limit is not implemented for alert policies that are not log-based.
Structure is [documented below](#nested_notification_rate_limit).
* `auto_close` -
(Optional)
If an alert policy that was active has no data for this long, any open incidents will close.
* `notification_channel_strategy` -
(Optional)
Control over how the notification channels in `notification_channels`
are notified when this alert fires, on a per-channel basis.
Structure is [documented below](#nested_notification_channel_strategy).
<a name="nested_notification_rate_limit"></a>The `notification_rate_limit` block supports:
* `period` -
(Optional)
Not more than one notification per period.
<a name="nested_notification_channel_strategy"></a>The `notification_channel_strategy` block supports:
* `notification_channel_names` -
(Optional)
The notification channels that these settings apply to. Each of these
correspond to the name field in one of the NotificationChannel objects
referenced in the notification_channels field of this AlertPolicy. The format is
`projects/[PROJECT_ID_OR_NUMBER]/notificationChannels/[CHANNEL_ID]`
* `renotify_interval` -
(Optional)
The frequency at which to send reminder notifications for open incidents.
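A minimal sketch of `notification_channel_strategy`, assuming the email notification channel from the previous sketch; it re-notifies that channel every 30 minutes while an incident stays open:

```hcl
resource "google_monitoring_alert_policy" "renotify_policy" {
  display_name          = "My Renotify Policy"
  combiner              = "OR"
  notification_channels = [google_monitoring_notification_channel.email.name]
  conditions {
    display_name = "test condition"
    condition_threshold {
      filter     = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
      duration   = "60s"
      comparison = "COMPARISON_GT"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }
  alert_strategy {
    auto_close = "1800s"
    notification_channel_strategy {
      notification_channel_names = [google_monitoring_notification_channel.email.name]
      renotify_interval          = "1800s"
    }
  }
}
```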
<a name="nested_documentation"></a>The `documentation` block supports:
* `content` -
(Optional)
The text of the documentation, interpreted according to mimeType.
The content may not exceed 8,192 Unicode characters and may not
exceed more than 10,240 bytes when encoded in UTF-8 format,
whichever is smaller.
* `mime_type` -
(Optional)
The format of the content field. Presently, only the value
"text/markdown" is supported.
* `subject` -
(Optional)
The subject line of the notification. The subject line may not
exceed 10,240 bytes. In notifications generated by this policy the contents
of the subject line after variable expansion will be truncated to 255 bytes
or shorter at the latest UTF-8 character boundary.
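A sketch of the `documentation` block; the subject line and runbook text are placeholders:

```hcl
resource "google_monitoring_alert_policy" "documented_policy" {
  display_name = "My Documented Policy"
  combiner     = "OR"
  conditions {
    display_name = "test condition"
    condition_threshold {
      filter     = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
      duration   = "60s"
      comparison = "COMPARISON_GT"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }
  documentation {
    mime_type = "text/markdown"
    subject   = "Disk write rate above threshold"
    content   = "Check recent deployments and the instance's disk activity before escalating."
  }
}
```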
## Attributes Reference
In addition to the arguments listed above, the following computed attributes are exported:
* `id` - an identifier for the resource with format `{{name}}`
* `name` -
The unique resource name for this policy.
Its syntax is: projects/[PROJECT_ID]/alertPolicies/[ALERT_POLICY_ID]
* `creation_record` -
A read-only record of the creation of the alerting policy.
If provided in a call to create or update, this field will
be ignored.
Structure is [documented below](#nested_creation_record).
<a name="nested_creation_record"></a>The `creation_record` block contains:
* `mutate_time` -
(Output)
When the change occurred.
* `mutated_by` -
(Output)
The email address of the user making the change.
## Timeouts
This resource provides the following
[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options:
- `create` - Default is 20 minutes.
- `update` - Default is 20 minutes.
- `delete` - Default is 20 minutes.
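A sketch of overriding these defaults with a standard Terraform `timeouts` block; the 30-minute values are assumptions:

```hcl
resource "google_monitoring_alert_policy" "alert_policy" {
  display_name = "My Alert Policy"
  combiner     = "OR"
  conditions {
    display_name = "test condition"
    condition_threshold {
      filter     = "metric.type=\"compute.googleapis.com/instance/disk/write_bytes_count\" AND resource.type=\"gce_instance\""
      duration   = "60s"
      comparison = "COMPARISON_GT"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }

  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}
```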
## Import
AlertPolicy can be imported using any of these accepted formats:
* `{{name}}`
In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AlertPolicy using one of the formats above. For example:
```tf
import {
id = "{{name}}"
to = google_monitoring_alert_policy.default
}
```
When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), AlertPolicy can be imported using one of the formats above. For example:
```
$ terraform import google_monitoring_alert_policy.default {{name}}
```
## User Project Overrides
This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override).