Project import generated by Copybara.

GitOrigin-RevId: 36f426c6748516b6b6cbf6761fe4b38c080cae78
Change-Id: I044dcfcd37a014ce790ca23a863faac492c89a4c
diff --git a/v1.4.7/.github/CODE_OF_CONDUCT.md b/v1.4.7/.github/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..0c8b092
--- /dev/null
+++ b/v1.4.7/.github/CODE_OF_CONDUCT.md
@@ -0,0 +1,5 @@
+# Code of Conduct
+
+HashiCorp Community Guidelines apply to you when interacting with the community here on GitHub and contributing code.
+
+Please read the full text at https://www.hashicorp.com/community-guidelines
diff --git a/v1.4.7/.github/CONTRIBUTING.md b/v1.4.7/.github/CONTRIBUTING.md
new file mode 100644
index 0000000..ca460d6
--- /dev/null
+++ b/v1.4.7/.github/CONTRIBUTING.md
@@ -0,0 +1,234 @@
+# Contributing to Terraform
+
+This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository linked from the [Terraform Registry index](https://registry.terraform.io/browse/providers). Instructions for developing each provider are usually in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
+
+**All communication on GitHub, the community forum, and other HashiCorp-provided communication channels is subject to [the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).**
+
+This document provides guidance on recommended practices for contributing to Terraform. It covers what we're looking for, to help set expectations and help you get the most out of participating in this project.
+
+To report a bug, propose an enhancement, or give any other product feedback, please [open a GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose) using the most appropriate issue template. Please fill in all of the information the issue template requests, because we've seen from experience that this maximizes the chance that we'll be able to act on your feedback.
+
+---
+
+<!-- MarkdownTOC autolink="true" -->
+
+- [Contributing Fixes](#contributing-fixes)
+- [Proposing a Change](#proposing-a-change)
+	- [Caveats & areas of special concern](#caveats--areas-of-special-concern)
+		- [State Storage Backends](#state-storage-backends)
+		- [Provisioners](#provisioners)
+		- [Maintainers](#maintainers)
+	- [Pull Request Lifecycle](#pull-request-lifecycle)
+		- [Getting Your Pull Requests Merged Faster](#getting-your-pull-requests-merged-faster)
+	- [PR Checks](#pr-checks)
+- [Terraform CLI/Core Development Environment](#terraform-clicore-development-environment)
+- [Acceptance Tests: Testing interactions with external services](#acceptance-tests-testing-interactions-with-external-services)
+- [Generated Code](#generated-code)
+- [External Dependencies](#external-dependencies)
+
+<!-- /MarkdownTOC -->
+
+## Contributing Fixes
+
+It can be tempting to want to dive into an open source project and help _build the thing_ you believe you're missing. It's a wonderful and helpful intention. However, Terraform is a complex tool. Many seemingly simple changes can have serious effects on other areas of the code and it can take some time to become familiar with the effects of even basic changes. The Terraform team is not immune to unintended and sometimes undesirable changes. We do take our work seriously, and appreciate the globally diverse community that relies on Terraform for workflows of all sizes and criticality. 
+
+As a result of Terraform's complexity and high bar for stability, the most straightforward way to start helping with the Terraform project is to pick an existing bug and [get to work](#terraform-clicore-development-environment). 
+
+For new contributors, we've labeled a few issues with `Good First Issue`. These are issues that will help you get familiar with Terraform development while also providing an onramp to the codebase itself.
+
+Read the documentation, and don't be afraid to [ask questions](https://discuss.hashicorp.com/c/terraform-core/27). 
+
+## Proposing a Change
+
+In order to be respectful of the time of community contributors, we aim to discuss potential changes in GitHub issues prior to implementation. That will allow us to give design feedback up front and set expectations about the scope of the change, and, for larger changes, how best to approach the work such that the Terraform team can review it and merge it along with other concurrent work.
+
+If the bug you wish to fix or enhancement you wish to implement isn't already covered by a GitHub issue that contains feedback from the Terraform team, please do start a discussion (either in [a new GitHub issue](https://github.com/hashicorp/terraform/issues/new/choose) or an existing one, as appropriate) before you invest significant development time. If you mention your intent to implement the change described in your issue, the Terraform team can, as best as possible, prioritize including implementation-related feedback in the subsequent discussion.
+
+At this time, we do not have a formal process for reviewing outside proposals that significantly change Terraform's workflow, its primary usage patterns, and its language. Additionally, some seemingly simple proposals can have deep effects across Terraform, which is why we strongly suggest starting with an issue-based proposal. 
+
+For large proposals that could entail a significant design phase, we wish to be up front with potential contributors that, unfortunately, we are unlikely to be able to give prompt feedback. We are still interested in hearing about your use cases so that we can consider ways to meet them as part of other, larger projects.
+
+Most changes will involve updates to the test suite, and changes to Terraform's documentation. The Terraform team can advise on different testing strategies for specific scenarios, and may ask you to revise the specific phrasing of your proposed documentation prose to match better with the standard "voice" of Terraform's documentation.
+
+This repository is primarily maintained by a small team at HashiCorp along with their other responsibilities, so unfortunately we cannot always respond promptly to pull requests, particularly if they do not relate to an existing GitHub issue where the Terraform team has already participated and indicated willingness to work on the issue or accept PRs for the proposal. We *are* grateful for all contributions however, and will give feedback on pull requests as soon as we're able.
+
+### Caveats & areas of special concern
+
+There are some areas of Terraform which are of special concern to the Terraform team. 
+
+#### State Storage Backends
+
+The Terraform team is not merging PRs for new state storage backends at the current time. Our priority regarding state storage backends is to find maintainers for existing backends and remove those backends without maintainers.
+
+Please see the [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file for the status of a given backend. Community members with an interest in a particular standard backend are welcome to help maintain it.
+
+Currently, merging state storage backends places a significant burden on the Terraform team. The team must set up an environment and cloud service provider account, or a new database/storage/key-value service, in order to build and test remote state storage backends. The time and complexity of doing so prevents us from moving Terraform forward in other ways.
+
+We are working to remove ourselves from the critical path of state storage backends by moving them towards a plugin model. In the meantime, we won't be accepting new remote state backends into Terraform.
+
+#### Provisioners
+
+Provisioners are an area of concern in Terraform for a number of reasons. Chiefly, they are often used in place of configuration management tools or custom providers.
+
+There are two main types of provisioners in Terraform: the generic provisioners (`file`, `local-exec`, and `remote-exec`) and the tool-specific provisioners (`chef`, `habitat`, `puppet`, and `salt-masterless`). **The tool-specific provisioners [are deprecated](https://discuss.hashicorp.com/t/notice-terraform-to-begin-deprecation-of-vendor-tool-specific-provisioners-starting-in-terraform-0-13-4/13997).** In practice, this means we will not be accepting PRs for these areas of the codebase.
+
+From our [documentation](https://www.terraform.io/docs/provisioners/index.html):
+
+> ... they [...] add a considerable amount of complexity and uncertainty to Terraform usage. [...] we still recommend attempting to solve it [your problem] using other techniques first, and use provisioners only if there is no other option.
+
+The Terraform team is in the process of building a way forward which continues to decrease reliance on provisioners. In the meantime, however, as our documentation indicates, they are a tool of last resort. As such, expect that PRs and issues for provisioners are not a high priority.
+
+Please see the [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file for the status of a given provisioner. Community members with an interest in a particular provisioner are welcome to help maintain it.
+
+#### Maintainers
+
+Maintainers are key contributors to our Open Source project. They contribute their time and expertise and we ask that the community take extra special care to be mindful of this when interacting with them.
+
+For code that has a listed maintainer or maintainers in our [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file, the Terraform team will highlight them for participation in PRs which relate to the area of code they maintain. The expectation is that a maintainer will review the code and work with the PR contributor before the code is merged by the Terraform team.
+
+There is no expectation on response time for our maintainers; they may be indisposed for prolonged periods of time. Please be patient. Discussions on when code becomes "unmaintained" will be on a case-by-case basis. 
+
+If an unmaintained area of code interests you and you'd like to become a maintainer, you may simply make a PR against our [CODEOWNERS](https://github.com/hashicorp/terraform/blob/main/CODEOWNERS) file adding your GitHub handle to the appropriate area. If there is a maintainer or team of maintainers for that area, please coordinate with them as necessary.
+
+### Pull Request Lifecycle
+
+1. You are welcome to submit a [draft pull request](https://github.blog/2019-02-14-introducing-draft-pull-requests/) for commentary or review before it is fully completed. It's also a good idea to include specific questions or items you'd like feedback on.
+2. Once you believe your pull request is ready to be merged, create your pull request (or mark your draft as ready for review).
+3. When time permits, Terraform's core team members will look over your contribution and either merge it or provide comments letting you know if there is anything left to do. It may take some time for us to respond. We may also have questions that we need answered about the code, either because something doesn't make sense to us or because we want to understand your thought process. We kindly ask that you do not target specific team members.
+4. If we have requested changes, you can either make those changes or, if you disagree with the suggested changes, we can have a conversation about our reasoning and agree on a path forward. This may be a multi-step process. Our view is that pull requests are a chance to collaborate, and we welcome conversations about how to do things better. It is the contributor's responsibility to address any changes requested. While reviewers are happy to give guidance, it is unsustainable for us to perform the coding work necessary to get a PR into a mergeable state.
+5. Once all outstanding comments and checklist items have been addressed, your contribution will be merged! Merged PRs may or may not be included in the next release, depending on whether the Terraform team deems the changes breaking. The core team takes care of updating the [CHANGELOG.md](https://github.com/hashicorp/terraform/blob/main/CHANGELOG.md) as they merge.
+6. In some cases, we might decide that a PR should be closed without merging. We'll make sure to provide clear reasoning when this happens. Following the recommended process above is one of the ways to ensure you don't spend time on a PR we can't or won't merge.
+
+#### Getting Your Pull Requests Merged Faster
+
+It is much easier to review pull requests that are:
+
+1. Well-documented: Try to explain in the pull request comments what your change does, why you have made the change, and provide instructions for how to produce the new behavior introduced in the pull request. If you can, provide screen captures or terminal output to show what the changes look like. This helps the reviewers understand and test the change.
+2. Small: Try to only make one change per pull request. If you found two bugs and want to fix them both, that's *awesome*, but it's still best to submit the fixes as separate pull requests. This makes it much easier for reviewers to keep in their heads all of the implications of individual code changes, and that means the PR takes less effort and energy to merge. In general, the smaller the pull request, the sooner reviewers will be able to make time to review it.
+3. Passing Tests: Based on how much time we have, we may not review pull requests which aren't passing our tests (look below for advice on how to run unit tests). If you need help figuring out why tests are failing, please feel free to ask, but while we're happy to give guidance it is generally your responsibility to make sure that tests are passing. If your pull request changes an interface or invalidates an assumption that causes a bunch of tests to fail, then you need to fix those tests before we can merge your PR.
+
+If we request changes, try to make those changes in a timely manner. Otherwise, PRs can go stale and be a lot more work for all of us to merge in the future.
+
+Even with everyone making their best effort to be responsive, it can be time-consuming to get a PR merged. It can be frustrating to deal with the back-and-forth as we make sure that we understand the changes fully. Please bear with us, and please know that we appreciate the time and energy you put into the project.
+
+### PR Checks
+
+The following checks run when a PR is opened:
+
+- Contributor License Agreement (CLA): If this is your first contribution to Terraform, you will be asked to sign the CLA.
+- Tests: tests include unit tests and acceptance tests, and all tests must pass before a PR can be merged.
+
+----
+
+## Terraform CLI/Core Development Environment
+
+This repository contains the source code for the Terraform CLI, which is the main component of Terraform and includes the core Terraform engine.
+
+Terraform providers are not maintained in this repository; you can find the
+relevant repository and issue tracker for each provider via the
+[Terraform Registry index](https://registry.terraform.io/browse/providers).
+
+This repository also does not include the source code for some other parts of the Terraform product including Terraform Cloud, Terraform Enterprise, and the Terraform Registry. Those components are not open source, though if you have feedback about them (including bug reports) please do feel free to [open a GitHub issue on this repository](https://github.com/hashicorp/terraform/issues/new/choose).
+
+---
+
+If you wish to work on the Terraform CLI source code, you'll first need to install the [Go](https://golang.org/) compiler and the version control system [Git](https://git-scm.com/).
+
+At this time the Terraform development environment targets only Linux and macOS systems. While Terraform itself is compatible with Windows, unfortunately the unit test suite currently contains Unix-specific assumptions around maximum path lengths, path separators, etc.
+
+Refer to the file [`.go-version`](https://github.com/hashicorp/terraform/blob/main/.go-version) to see which version of Go Terraform is currently built with. Other versions will often work, but if you run into any build or testing problems please try with the specific Go version indicated. You can optionally simplify the installation of multiple specific versions of Go on your system by installing [`goenv`](https://github.com/syndbg/goenv), which reads `.go-version` and automatically selects the correct Go version.
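+
+For example, assuming you've installed `goenv` and added its shims to your `PATH`, one way to pick up the project's Go version is:
+
+```
+goenv install "$(cat .go-version)"
+goenv version   # should now report the version recorded in .go-version
+```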
+
+Use Git to clone this repository into a location of your choice. Terraform uses [Go Modules](https://blog.golang.org/using-go-modules), so you should *not* clone it inside your `GOPATH`.
+
+Switch into the root directory of the cloned repository and build Terraform using the Go toolchain in the standard way:
+
+```
+cd terraform
+go install .
+```
+
+The first time you run the `go install` command, the Go toolchain will download any library dependencies that you don't already have in your Go modules cache. Subsequent builds will be faster because these dependencies will already be available on your local disk.
+
+Once the compilation process succeeds, you can find a `terraform` executable in the Go executable directory. If you haven't overridden it with the `GOBIN` environment variable, the executable directory is the `bin` directory inside the directory returned by the following command:
+
+```
+go env GOPATH
+```
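+
+For example, assuming you haven't set `GOBIN`, you could run the newly built executable like this:
+
+```
+"$(go env GOPATH)/bin/terraform" version
+```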
+
+If you are planning to make changes to the Terraform source code, you should run the unit test suite before you start to make sure everything is initially passing:
+
+```
+go test ./...
+```
+
+As you make your changes, you can re-run the above command to ensure that the tests are *still* passing. If you are working only on a specific Go package, you can speed up your testing cycle by testing only that single package, or packages under a particular package prefix:
+
+```
+go test ./internal/command/...
+go test ./internal/addrs
+```
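+
+To narrow things down further, the standard `go test` `-run` flag accepts a regular expression matching test names. For example (the test name here is illustrative):
+
+```
+go test ./internal/addrs -run TestParseModuleInstance
+```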
+
+## Acceptance Tests: Testing interactions with external services
+
+Terraform's unit test suite is self-contained, using mocks and local files to help ensure that it can run offline and is unlikely to be broken by changes to outside systems.
+
+However, several Terraform components interact with external services, such as the automatic provider installation mechanism, the Terraform Registry, Terraform Cloud, etc.
+
+There are some optional tests in the Terraform CLI codebase that *do* interact with external services, which we collectively refer to as "acceptance tests". You can enable these by setting the environment variable `TF_ACC=1` when running the tests. We recommend focusing only on the specific package you are working on when enabling acceptance tests, both because it can help the test run to complete faster and because you are less likely to encounter failures due to drift in systems unrelated to your current goal:
+
+```
+TF_ACC=1 go test ./internal/initwd
+```
+
+Because the acceptance tests depend on services outside of the Terraform codebase, and because the acceptance tests are usually used only when making changes to the systems they cover, it is common and expected that drift in those external systems will cause test failures. Because of this, prior to working on a system covered by acceptance tests it's important to run the existing tests for that system in an *unchanged* work tree first and respond to any test failures that preexist, to avoid misinterpreting such failures as bugs in your new changes.
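+
+One way to do that, sketched here assuming your in-progress changes are not yet committed, is to temporarily stash them and run the relevant package's acceptance tests against the unchanged tree:
+
+```
+git stash --include-untracked
+TF_ACC=1 go test ./internal/initwd
+git stash pop
+```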
+
+## Generated Code
+
+Some files in the Terraform CLI codebase are generated. In most cases, we update these using `go generate`, which is the standard way to encapsulate code generation steps in a Go codebase.
+
+```
+go generate ./...
+```
+
+Use `git diff` afterwards to inspect the changes and ensure that they are what you expected.
+
+Terraform includes generated Go stub code for the Terraform provider plugin protocol, which is defined using Protocol Buffers. Because the Protocol Buffers tools are not written in Go and thus cannot be automatically installed using `go get`, we follow a different process for generating these, which requires that you've already installed a suitable version of `protoc`:
+
+```
+make protobuf
+```
+
+## External Dependencies
+
+Terraform uses Go Modules for dependency management.
+
+Our dependency licensing policy for Terraform excludes proprietary licenses and "copyleft"-style licenses. We accept the common Mozilla Public License v2, MIT License, and BSD licenses. We will consider other open source licenses in similar spirit to those three, but if you plan to include such a dependency in a contribution we'd recommend opening a GitHub issue first to discuss what you intend to implement and what dependencies it will require, so that the Terraform team can review the relevant licenses and confirm whether they meet our licensing needs.
+
+If you need to add a new dependency to Terraform or update the selected version for an existing one, use `go get` from the root of the Terraform repository as follows:
+
+```
+go get github.com/hashicorp/hcl/v2@2.0.0
+```
+
+This command will download the requested version (2.0.0 in the above example) and record that version selection in the `go.mod` file. It will also record checksums for the module in `go.sum`.
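+
+For example, after the command above you would expect `go.mod` to contain a `require` entry along these lines (exact formatting and version may differ):
+
+```
+require github.com/hashicorp/hcl/v2 v2.0.0
+```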
+
+To complete the dependency change, clean up any redundancy in the module metadata files by running:
+
+```
+go mod tidy
+```
+
+To ensure that the upgrade has worked correctly, be sure to run the unit test suite at least once:
+
+```
+go test ./...
+```
+
+Because dependency changes affect a shared, top-level file, they are more likely than some other change types to become conflicted with other proposed changes during the code review process. For that reason, and to make dependency changes more visible in the change history, we prefer to record dependency changes as separate commits that include only the results of the above commands and the minimal set of changes to Terraform's own code for compatibility with the new version:
+
+```
+git add go.mod go.sum
+git commit -m "go get github.com/hashicorp/hcl/v2@2.0.0"
+```
+
+You can then make use of the new or updated dependency in new code added in subsequent commits.
diff --git a/v1.4.7/.github/ISSUE_TEMPLATE/bug_report.yml b/v1.4.7/.github/ISSUE_TEMPLATE/bug_report.yml
new file mode 100644
index 0000000..c4d9de4
--- /dev/null
+++ b/v1.4.7/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -0,0 +1,124 @@
+name: Bug Report
+description: Let us know about an unexpected error, a crash, or an incorrect behavior.
+labels: ["bug", "new"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        # Thank you for opening an issue.
+
+        The [hashicorp/terraform](https://github.com/hashicorp/terraform) issue tracker is reserved for bug reports relating to the core Terraform CLI application and configuration language.
+
+        For general usage questions, please see: https://www.terraform.io/community.html.
+
+        ## If your issue relates to:
+        * **Terraform Cloud/Enterprise**: please email tf-cloud@hashicorp.support or [open a new request](https://support.hashicorp.com/hc/en-us/requests/new).
+        * **AWS Terraform Provider**: Open an issue at [hashicorp/terraform-provider-aws](https://github.com/hashicorp/terraform-provider-aws/issues/new/choose).
+        * **Azure Terraform Provider**: Open an issue at [hashicorp/terraform-provider-azurerm](https://github.com/hashicorp/terraform-provider-azurerm/issues/new/choose).
+        * **Other Terraform Providers**: Please open an issue in the provider's own repository, which can be found by searching the [Terraform Registry](https://registry.terraform.io/browse/providers).
+
+        ## Filing a bug report
+
+        To fix problems, we need clear reproduction cases - we need to be able to see it happen locally. A reproduction case is ideally something a Terraform Core engineer can git-clone or copy-paste and run immediately, without inventing any details or context.
+
+        * A short example can be directly copy-pasteable; longer examples should be in separate git repositories, especially if multiple files are needed
+        * Please include all needed context. For example, if you figured out that an expression can cause a crash, put the expression in a variable definition or a resource
+        * Set defaults on (or omit) any variables. The person reproducing it should not need to invent variable settings
+        * If multiple steps are required, such as running terraform twice, consider scripting it in a simple shell script. Providing a script can be easier than explaining what changes to make to the config between runs.
+        * Omit any unneeded complexity: remove variables, conditional statements, functions, modules, providers, and resources that are not needed to trigger the bug
+        * When possible, use the [null resource](https://www.terraform.io/docs/providers/null/resource.html) provider rather than a real provider in order to minimize external dependencies. We know this isn't always feasible. The Terraform Core team doesn't have deep domain knowledge in every provider, or access to every cloud platform for reproduction cases.
+
+  - type: textarea
+    id: tf-version
+    attributes:
+      label: Terraform Version
+      description: Run `terraform version` to show the version, and paste the result below. If you are not running the latest version of Terraform, please try upgrading because your issue may have already been fixed.
+      render: shell
+      placeholder: ...output of `terraform version`...
+      value:
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-config
+    attributes:
+      label: Terraform Configuration Files
+      description: Paste the relevant parts of your Terraform configuration between the ``` marks below. For Terraform configs larger than a few resources, or that involve multiple files, please make a GitHub repository that we can clone, rather than copy-pasting multiple files in here.
+      placeholder:
+      value: |
+        ```terraform
+        ...terraform config...
+        ```
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-debug
+    attributes:
+      label: Debug Output
+      description: Full debug output can be obtained by running Terraform with the environment variable `TF_LOG=trace`. Please create a GitHub Gist containing the debug output. Please do _not_ paste the debug output in the issue, since debug output is long. Debug output may contain sensitive information. Please review it before posting publicly.
+      placeholder: ...link to gist...
+      value:
+    validations:
+      required: true
+  - type: textarea
+    id: tf-expected
+    attributes:
+      label: Expected Behavior
+      description: What should have happened?
+      placeholder: What should have happened?
+      value:
+    validations:
+      required: true
+  - type: textarea
+    id: tf-actual
+    attributes:
+      label: Actual Behavior
+      description: What actually happened?
+      placeholder: What actually happened?
+      value:
+    validations:
+      required: true
+  - type: textarea
+    id: tf-repro-steps
+    attributes:
+      label: Steps to Reproduce
+      description: |
+        Please list the full steps required to reproduce the issue, for example:
+          1. `terraform init`
+          2. `terraform apply`
+      placeholder: |
+        1. `terraform init`
+        2. `terraform apply`
+      value:
+    validations:
+      required: true
+  - type: textarea
+    id: tf-add-context
+    attributes:
+      label: Additional Context
+      description: |
+        Is there anything atypical about your situation that we should know?
+        For example: is Terraform running in a wrapper script or in a CI system? Are you passing any unusual command line options or environment variables to opt in to non-default behavior?
+      placeholder: Additional context...
+      value:
+    validations:
+      required: false
+  - type: textarea
+    id: tf-references
+    attributes:
+      label: References
+      description: |
+        Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
+        ```
+          - #6017
+        ```
+      placeholder:
+      value:
+    validations:
+      required: false
+
+  - type: markdown
+    attributes:
+      value: |
+        **Note:** If the submit button is disabled and you have filled out all required fields, please check that you did not forget a **Title** for the issue.
diff --git a/v1.4.7/.github/ISSUE_TEMPLATE/config.yml b/v1.4.7/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 0000000..2c525cb
--- /dev/null
+++ b/v1.4.7/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,20 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Terraform Cloud/Enterprise Troubleshooting and Feature Requests
+    url: https://support.hashicorp.com/hc/en-us/requests/new
+    about: For issues and feature requests related to the Terraform Cloud/Enterprise platform, please submit a HashiCorp support request or email tf-cloud@hashicorp.support
+  - name: AWS Terraform Provider Feedback and Questions
+    url: https://github.com/hashicorp/terraform-provider-aws
+    about: The AWS Terraform Provider has its own repository; any provider-related issues or questions should be directed there.
+  - name: Azure Terraform Provider Feedback and Questions
+    url: https://github.com/hashicorp/terraform-provider-azurerm
+    about: The Azure Terraform Provider has its own repository; any provider-related issues or questions should be directed there.
+  - name: Other Provider-related Feedback and Questions
+    url: https://registry.terraform.io/browse/providers
+    about: Each provider (e.g., GCP, Oracle, K8S) has its own repository; any provider-related issues or questions should be directed to the appropriate issue tracker linked from the Registry.
+  - name: Provider Development Feedback and Questions
+    url: https://github.com/hashicorp/terraform-plugin-sdk/issues/new/choose
+    about: The Plugin SDK has its own repository; any SDK or provider-development-related issues or questions should be directed there.
+  - name: Terraform Usage, Language, or Workflow Questions
+    url: https://discuss.hashicorp.com/c/terraform-core
+    about: Please ask and answer language or workflow related questions through the Terraform Core Community Forum.
\ No newline at end of file
diff --git a/v1.4.7/.github/ISSUE_TEMPLATE/documentation_issue.yml b/v1.4.7/.github/ISSUE_TEMPLATE/documentation_issue.yml
new file mode 100644
index 0000000..321a3b7
--- /dev/null
+++ b/v1.4.7/.github/ISSUE_TEMPLATE/documentation_issue.yml
@@ -0,0 +1,73 @@
+name: Documentation Issue
+description: Report an issue or suggest a change in the documentation.
+labels: ["documentation", "new"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        # Thank you for opening a documentation change request.
+
+        Please only use the [hashicorp/terraform](https://github.com/hashicorp/terraform) `Documentation` issue type to report problems with the documentation on [terraform.io/docs](https://www.terraform.io/docs). Only technical writers (not engineers) monitor this issue type. Report Terraform bugs or feature requests with the `Bug report` or `Feature Request` issue types instead to get engineering attention.
+
+        For general usage questions, please see: https://www.terraform.io/community.html.
+
+  - type: textarea
+    id: tf-version
+    attributes:
+      label: Terraform Version
+      description: Run `terraform version` to show the version, and paste the result below. If you're not using the latest version, please check to see if something related to your request has already been implemented in a later version.
+      render: shell
+      placeholder: ...output of `terraform version`...
+      value:
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-affected-pages
+    attributes:
+      label: Affected Pages
+      description: |
+          Link to the pages relevant to your documentation change request.
+      placeholder:
+      value:
+    validations:
+      required: false
+
+  - type: textarea
+    id: tf-problem
+    attributes:
+      label: What is the docs issue?
+      description: What problems or suggestions do you have about the documentation?
+      placeholder:
+      value:
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-proposal
+    attributes:
+      label: Proposal
+      description: What documentation changes would fix this issue and where would you expect to find them? Are one or more page headings unclear? Do one or more pages need additional context, examples, or warnings? Do we need a new page or section dedicated to a specific topic?  Your ideas help us understand what you and other users need from our documentation and how we can improve the content.
+      placeholder:
+      value:
+    validations:
+      required: false
+
+  - type: textarea
+    id: tf-references
+    attributes:
+      label: References
+      description: |
+        Are there any other open or closed GitHub issues related to the problem or solution you described? If so, list them below. For example:
+        ```
+          - #6017
+        ```
+      placeholder:
+      value:
+    validations:
+      required: false
+
+  - type: markdown
+    attributes:
+      value: |
+        **Note:** If the submit button is disabled and you have filled out all required fields, please check that you did not forget a **Title** for the issue.
diff --git a/v1.4.7/.github/ISSUE_TEMPLATE/feature_request.yml b/v1.4.7/.github/ISSUE_TEMPLATE/feature_request.yml
new file mode 100644
index 0000000..9549ba8
--- /dev/null
+++ b/v1.4.7/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -0,0 +1,87 @@
+name: Feature Request
+description: Suggest a new feature or other enhancement.
+labels: ["enhancement", "new"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        # Thank you for opening a feature request.
+
+        The [hashicorp/terraform](https://github.com/hashicorp/terraform) issue tracker is reserved for feature requests relating to the core Terraform CLI application and configuration language.
+
+        For general usage questions, please see: https://www.terraform.io/community.html.
+
+        ## If your feature request relates to:
+        * **Terraform Cloud/Enterprise**: please email tf-cloud@hashicorp.support or [open a new request](https://support.hashicorp.com/hc/en-us/requests/new).
+        * **AWS Terraform Provider**: Open an issue at [hashicorp/terraform-provider-aws](https://github.com/hashicorp/terraform-provider-aws/issues/new/choose).
+        * **Azure Terraform Provider**: Open an issue at [hashicorp/terraform-provider-azurerm](https://github.com/hashicorp/terraform-provider-azurerm/issues/new/choose).
+        * **Other Terraform Providers**: Please open an issue in the provider's own repository, which can be found by searching the [Terraform Registry](https://registry.terraform.io/browse/providers).
+
+  - type: textarea
+    id: tf-version
+    attributes:
+      label: Terraform Version
+      description: Run `terraform version` to show the version, and paste the result below. If you're not using the latest version, please check to see if something related to your request has already been implemented in a later version.
+      render: shell
+      placeholder: ...output of `terraform version`...
+      value:
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-use-case
+    attributes:
+      label: Use Cases
+      description: |
+        In order to properly evaluate a feature request, it is necessary to understand the use cases for it.
+        Please describe below the _end goal_ you are trying to achieve that has led you to request this feature.
+        Please keep this section focused on the problem and not on the suggested solution. We'll get to that in a moment, below!
+      placeholder:
+      value:
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-attempted-solution
+    attributes:
+      label: Attempted Solutions
+      description: |
+          If you've already tried to solve the problem within Terraform's existing features and found a limitation that prevented you from succeeding, please describe it below in as much detail as possible.
+          Ideally, this would include real configuration snippets that you tried, real Terraform command lines you ran, and what results you got in each case.
+          Please remove any sensitive information such as passwords before sharing configuration snippets and command lines.
+      placeholder:
+      value:
+    validations:
+      required: true
+
+  - type: textarea
+    id: tf-proposal
+    attributes:
+      label: Proposal
+      description: |
+          If you have an idea for a way to address the problem via a change to Terraform features, please describe it below.
+          In this section, it's helpful to include specific examples of how what you are suggesting might look in configuration files, or on the command line, since that allows us to understand the full picture of what you are proposing.
+          If you're not sure of some details, don't worry! When we evaluate the feature request we may suggest modifications as necessary to work within the design constraints of Terraform Core.
+      placeholder:
+      value:
+    validations:
+      required: false
+
+  - type: textarea
+    id: tf-references
+    attributes:
+      label: References
+      description: |
+        Are there any other GitHub issues, whether open or closed, that are related to the problem you've described above or to the suggested solution? If so, please create a list below that mentions each of them. For example:
+        ```
+          - #6017
+        ```
+      placeholder:
+      value:
+    validations:
+      required: false
+
+  - type: markdown
+    attributes:
+      value: |
+        **Note:** If the submit button is disabled and you have filled out all required fields, please check that you did not forget a **Title** for the issue.
diff --git a/v1.4.7/.github/SUPPORT.md b/v1.4.7/.github/SUPPORT.md
new file mode 100644
index 0000000..9ba846c
--- /dev/null
+++ b/v1.4.7/.github/SUPPORT.md
@@ -0,0 +1,4 @@
+# Support
+
+If you have questions about Terraform usage, please feel free to create a topic
+on [the official community forum](https://discuss.hashicorp.com/c/terraform-core).
diff --git a/v1.4.7/.github/actions/equivalence-test/action.yml b/v1.4.7/.github/actions/equivalence-test/action.yml
new file mode 100644
index 0000000..42af6ab
--- /dev/null
+++ b/v1.4.7/.github/actions/equivalence-test/action.yml
@@ -0,0 +1,58 @@
+name: equivalence-test
+description: "Execute the suite of Terraform equivalence tests in testing/equivalence-tests"
+inputs:
+  target-terraform-version:
+    description: "The version of Terraform to use in execution."
+    required: true
+  target-terraform-branch:
+    description: "The branch within this repository to update and compare."
+    required: true
+  target-equivalence-test-version:
+    description: "The version of the Terraform equivalence tests to use."
+    default: "0.3.0"
+  target-os:
+    description: "Current operating system"
+    default: "linux"
+  target-arch:
+    description: "Current architecture"
+    default: "amd64"
+runs:
+  using: "composite"
+  steps:
+    - name: "download equivalence test binary"
+      shell: bash
+      run: |
+        ./.github/scripts/equivalence-test.sh download_equivalence_test_binary \
+          ${{ inputs.target-equivalence-test-version }} \
+          ./bin/equivalence-tests \
+          ${{ inputs.target-os }} \
+          ${{ inputs.target-arch }}
+    - name: "download terraform binary"
+      shell: bash
+      run: |
+        ./.github/scripts/equivalence-test.sh download_terraform_binary \
+          ${{ inputs.target-terraform-version }} \
+          ./bin/terraform \
+          ${{ inputs.target-os }} \
+          ${{ inputs.target-arch }}
+    - name: "run and update equivalence tests"
+      shell: bash
+      run: |
+        ./bin/equivalence-tests update \
+          --tests=testing/equivalence-tests/tests \
+          --goldens=testing/equivalence-tests/outputs \
+          --binary=$(pwd)/bin/terraform
+        
+        changed=$(git diff --quiet -- testing/equivalence-tests/outputs || echo true)
+        if [[ $changed == "true" ]]; then
+          echo "found changes, and pushing new golden files into branch ${{ inputs.target-terraform-branch }}."
+
+          git config user.email "52939924+teamterraform@users.noreply.github.com"
+          git config user.name "The Terraform Team"
+
+          git add ./testing/equivalence-tests/outputs
+          git commit -m "Automated equivalence test golden file update for release ${{ inputs.target-terraform-version }}."
+          git push
+        else
+          echo "found no changes, so not pushing any updates."
+        fi
diff --git a/v1.4.7/.github/actions/go-version/action.yml b/v1.4.7/.github/actions/go-version/action.yml
new file mode 100644
index 0000000..60b3671
--- /dev/null
+++ b/v1.4.7/.github/actions/go-version/action.yml
@@ -0,0 +1,23 @@
+name: 'Determine Go Toolchain Version'
+description: 'Uses the .go-version file to determine which Go toolchain to use for any Go-related actions downstream.'
+outputs:
+  version:
+    description: "Go toolchain version"
+    value: ${{ steps.go.outputs.version }}
+runs:
+  using: "composite"
+  steps:
+    # We use goenv to make sure we're always using the same Go version we'd
+    # use for releases, as recorded in the .go-version file.
+    - name: "Determine Go version"
+      id: go
+      shell: bash
+      # We use .go-version as our source of truth for current Go
+      # version, because "goenv" can react to it automatically.
+      # However, we don't actually use goenv for our automated
+      # steps in GitHub Actions, because it's primarily for
+      # interactive use in shells and makes things unnecessarily
+      # complex for automation.
+      run: |
+        echo "Building with Go $(cat .go-version)"
+        echo "version=$(cat .go-version)" >> "$GITHUB_OUTPUT"
diff --git a/v1.4.7/.github/pull_request_template.md b/v1.4.7/.github/pull_request_template.md
new file mode 100644
index 0000000..dfa6b82
--- /dev/null
+++ b/v1.4.7/.github/pull_request_template.md
@@ -0,0 +1,56 @@
+<!--
+
+Describe in detail the changes you are proposing, and the rationale.
+
+See the contributing guide:
+
+https://github.com/hashicorp/terraform/blob/main/.github/CONTRIBUTING.md
+
+-->
+
+<!--
+
+Link all GitHub issues fixed by this PR, and add references to prior
+related PRs.
+
+-->
+
+Fixes #
+
+## Target Release
+
+<!--
+
+In normal circumstances we only target changes at the upcoming minor
+release, or as a patch to the current minor version. If you need to
+port a security fix to an older release, highlight this here by listing
+all targeted releases.
+
+If targeting the next patch release, also add the relevant x.y-backport
+label to enable the backport bot.
+
+-->
+
+1.4.x
+
+## Draft CHANGELOG entry
+
+<!--
+
+Choose a category, delete the others:
+
+-->
+
+### NEW FEATURES | UPGRADE NOTES | ENHANCEMENTS | BUG FIXES | EXPERIMENTS
+
+<!--
+
+Write a short description of the user-facing change. Examples:
+
+- `terraform show -json`: Fixed crash with sensitive set values.
+- When rendering a diff, Terraform now quotes the name of any object attribute whose string representation is not a valid identifier.
+- The local token configuration in the cloud and remote backend now has higher priority than a token specified in a credentials block in the CLI configuration.
+
+--> 
+
+-  
diff --git a/v1.4.7/.github/scripts/e2e_test_linux_darwin.sh b/v1.4.7/.github/scripts/e2e_test_linux_darwin.sh
new file mode 100755
index 0000000..e0fa69b
--- /dev/null
+++ b/v1.4.7/.github/scripts/e2e_test_linux_darwin.sh
@@ -0,0 +1,15 @@
+#!/usr/bin/env bash
+set -uo pipefail
+
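+# This script expects the calling workflow to set the os, arch, version, and
+# e2e_cache_path environment variables used below.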
+if [[ $arch == 'arm' || $arch == 'arm64' ]]
+then
+    export DIR=$(mktemp -d)
+    unzip -d $DIR "${e2e_cache_path}/terraform-e2etest_${os}_${arch}.zip"
+    unzip -d $DIR "./terraform_${version}_${os}_${arch}.zip"
+    sudo chmod +x $DIR/e2etest
+    docker run --platform=linux/arm64 -v $DIR:/src -w /src arm64v8/alpine ./e2etest -test.v
+else
+    unzip "${e2e_cache_path}/terraform-e2etest_${os}_${arch}.zip"
+    unzip "./terraform_${version}_${os}_${arch}.zip"
+    TF_ACC=1 ./e2etest -test.v
+fi
\ No newline at end of file
diff --git a/v1.4.7/.github/scripts/equivalence-test.sh b/v1.4.7/.github/scripts/equivalence-test.sh
new file mode 100755
index 0000000..78b5476
--- /dev/null
+++ b/v1.4.7/.github/scripts/equivalence-test.sh
@@ -0,0 +1,162 @@
+#!/usr/bin/env bash
+set -uo pipefail
+
+function usage {
+  cat <<-'EOF'
+Usage: ./equivalence-test.sh <command> [<args>] [<options>]
+
+Description:
+  This script will handle various commands related to the execution of the
+  Terraform equivalence tests.
+
+Commands:
+  get_target_branch <version>
+    get_target_branch returns the default target branch for a given Terraform
+    version.
+
+    target_branch=$(./equivalence-test.sh get_target_branch v1.4.3); target_branch=v1.4
+    target_branch=$(./equivalence-test.sh get_target_branch 1.4.3); target_branch=v1.4
+
+  download_equivalence_test_binary <version> <target> <os> <arch>
+    download_equivalence_test_binary downloads the equivalence testing binary
+    for a given version and places it at the target path.
+
+    ./equivalence-test.sh download_equivalence_test_binary 0.3.0 ./bin/terraform-equivalence-testing linux amd64
+
+  download_terraform_binary <version> <target> <os> <arch>
+    download_terraform_binary downloads the terraform release binary for a given
+    version and places it at the target path.
+
+    ./equivalence-test.sh download_terraform_binary 1.4.3 ./bin/terraform linux amd64
+EOF
+}
+
+function download_equivalence_test_binary {
+  VERSION="${1:-}"
+  TARGET="${2:-}"
+  OS="${3:-}"
+  ARCH="${4:-}"
+
+  if [[ -z "$VERSION" || -z "$TARGET" || -z "$OS" || -z "$ARCH" ]]; then
+    echo "missing at least one of [<version>, <target>, <os>, <arch>] arguments"
+    usage
+    exit 1
+  fi
+
+  curl \
+    -H "Accept: application/vnd.github+json" \
+    "https://api.github.com/repos/hashicorp/terraform-equivalence-testing/releases" > releases.json
+
+  ASSET="terraform-equivalence-testing_v${VERSION}_${OS}_${ARCH}.zip"
+  ASSET_ID=$(jq -r --arg VERSION "v$VERSION" --arg ASSET "$ASSET" '.[] | select(.name == $VERSION) | .assets[] | select(.name == $ASSET) | .id' releases.json)
+
+  mkdir -p zip
+  curl -L \
+    -H "Accept: application/octet-stream" \
+    "https://api.github.com/repos/hashicorp/terraform-equivalence-testing/releases/assets/$ASSET_ID" > "zip/$ASSET"
+
+  mkdir -p bin
+  unzip -p "zip/$ASSET" terraform-equivalence-testing > "$TARGET"
+  chmod u+x "$TARGET"
+  rm -r zip
+  rm releases.json
+}
+
+function download_terraform_binary {
+  VERSION="${1:-}"
+  TARGET="${2:-}"
+  OS="${3:-}"
+  ARCH="${4:-}"
+
+  if [[ -z "$VERSION" || -z "$TARGET" || -z "$OS" || -z "$ARCH" ]]; then
+    echo "missing at least one of [<version>, <target>, <os>, <arch>] arguments"
+    usage
+    exit 1
+  fi
+
+  mkdir -p zip
+  curl "https://releases.hashicorp.com/terraform/${VERSION}/terraform_${VERSION}_${OS}_${ARCH}.zip" > "zip/terraform.zip"
+
+  mkdir -p bin
+  unzip -p "zip/terraform.zip" terraform > "$TARGET"
+  chmod u+x "$TARGET"
+  rm -r zip
+}
+
+function get_target_branch {
+  VERSION="${1:-}"
+
+  if [ -z "$VERSION" ]; then
+    echo "missing <version> argument"
+    usage
+    exit 1
+  fi
+
+
+  # Split off the build metadata part, if any
+  # (we won't actually include it in our final version, and handle it only for
+  # completeness against semver syntax.)
+  IFS='+' read -ra VERSION BUILD_META <<< "$VERSION"
+
+  # Separate out the prerelease part, if any
+  IFS='-' read -r BASE_VERSION PRERELEASE <<< "$VERSION"
+
+  # Separate out major, minor and patch versions.
+  IFS='.' read -r MAJOR_VERSION MINOR_VERSION PATCH_VERSION <<< "$BASE_VERSION"
+
+  if [[ "$PRERELEASE" == *"alpha"* ]]; then
+    TARGET_BRANCH=main
+  else
+    if [[ $MAJOR_VERSION = v* ]]; then
+      TARGET_BRANCH=${MAJOR_VERSION}.${MINOR_VERSION}
+    else
+      TARGET_BRANCH=v${MAJOR_VERSION}.${MINOR_VERSION}
+    fi
+  fi
+
+  echo "$TARGET_BRANCH"
+}
+
+function main {
+  case "$1" in
+    get_target_branch)
+      if [ "${#@}" != 2 ]; then
+        echo "invalid number of arguments"
+        usage
+        exit 1
+      fi
+
+      get_target_branch "$2"
+
+      ;;
+    download_equivalence_test_binary)
+      if [ "${#@}" != 5 ]; then
+        echo "invalid number of arguments"
+        usage
+        exit 1
+      fi
+
+      download_equivalence_test_binary "$2" "$3" "$4" "$5"
+
+      ;;
+    download_terraform_binary)
+      if [ "${#@}" != 5 ]; then
+        echo "invalid number of arguments"
+        usage
+        exit 1
+      fi
+
+      download_terraform_binary "$2" "$3" "$4" "$5"
+
+      ;;
+    *)
+      echo "unrecognized command $*"
+      usage
+      exit 1
+
+      ;;
+  esac
+}
+
+main "$@"
+exit $?
diff --git a/v1.4.7/.github/scripts/get_product_version.sh b/v1.4.7/.github/scripts/get_product_version.sh
new file mode 100755
index 0000000..a89e364
--- /dev/null
+++ b/v1.4.7/.github/scripts/get_product_version.sh
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+set -uo pipefail
+
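+# This script expects RAW_VERSION to be set in the environment (e.g. "v1.4.7")
+# and appends the values it computes to the file named by GITHUB_OUTPUT.
+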
+# Trim the "v" prefix, if any.
+VERSION="${RAW_VERSION#v}"
+
+# Split off the build metadata part, if any
+# (we won't actually include it in our final version, and handle it only for
+# completeness against semver syntax.)
+IFS='+' read -ra VERSION BUILD_META <<< "$VERSION"
+
+# Separate out the prerelease part, if any
+# (version.go expects it to be in a separate variable)
+IFS='-' read -r BASE_VERSION PRERELEASE <<< "$VERSION"
+
+EXPERIMENTS_ENABLED=0
+if [[ "$PRERELEASE" == alpha* ]]; then
+EXPERIMENTS_ENABLED=1
+fi
+if [[ "$PRERELEASE" == dev* ]]; then
+EXPERIMENTS_ENABLED=1
+fi
+
+LDFLAGS="-w -s"
+if [[ "$EXPERIMENTS_ENABLED" == 1 ]]; then
+LDFLAGS="${LDFLAGS} -X 'main.experimentsAllowed=yes'"
+fi
+LDFLAGS="${LDFLAGS} -X 'github.com/hashicorp/terraform/version.Version=${BASE_VERSION}'"
+LDFLAGS="${LDFLAGS} -X 'github.com/hashicorp/terraform/version.Prerelease=${PRERELEASE}'"
+
+echo "Building Terraform CLI ${VERSION}"
+if [[ "$EXPERIMENTS_ENABLED" == 1 ]]; then
+echo "This build allows use of experimental features"
+fi
+echo "product-version=${VERSION}" | tee -a "${GITHUB_OUTPUT}"
+echo "product-version-base=${BASE_VERSION}" | tee -a "${GITHUB_OUTPUT}"
+echo "product-version-pre=${PRERELEASE}" | tee -a "${GITHUB_OUTPUT}"
+echo "experiments=${EXPERIMENTS_ENABLED}" | tee -a "${GITHUB_OUTPUT}"
+echo "go-ldflags=${LDFLAGS}" | tee -a "${GITHUB_OUTPUT}"
\ No newline at end of file
diff --git a/v1.4.7/.github/scripts/verify_docker b/v1.4.7/.github/scripts/verify_docker
new file mode 100755
index 0000000..6d016b2
--- /dev/null
+++ b/v1.4.7/.github/scripts/verify_docker
@@ -0,0 +1,47 @@
+#!/bin/bash
+# Copyright (c) HashiCorp, Inc.
+# SPDX-License-Identifier: MPL-2.0
+
+
+set -euo pipefail
+
+# verify_docker invokes the given Docker image with the argument `version` and inspects its output.
+# If its output doesn't match the version given, the script will exit 1 and report why it failed.
+# This is meant to be run as part of the build workflow to verify the built image meets some basic
+# criteria for validity.
+#
+# Because this is meant to be run as the `smoke_test` for the docker-build workflow, the script expects
+# the image name parameter to be provided by the `IMAGE_NAME` environment variable, rather than a
+# positional argument.
+
+function usage {
+  echo "IMAGE_NAME=<image uri> ./verify_docker <expect_version>"
+}
+
+function main {
+  local image_name="${IMAGE_NAME:-}"
+  local expect_version="${1:-}"
+  local got_version
+
+  if [[ -z "${image_name}" ]]; then
+    echo "ERROR: IMAGE_NAME is not set"
+    usage
+    exit 1
+  fi
+
+  if [[ -z "${expect_version}" ]]; then
+    echo "ERROR: expected version argument is required"
+    usage
+    exit 1
+  fi
+
+  got_version="$( awk '{print $2}' <(head -n1 <(docker run --rm "${image_name}" version)) )"
+  if [ "${got_version}" != "${expect_version}" ]; then
+    echo "Test FAILED"
+    echo "Got: ${got_version}, Want: ${expect_version}"
+    exit 1
+  fi
+  echo "Test PASSED"
+}
+
+main "$@"
diff --git a/v1.4.7/.github/workflows/build-Dockerfile b/v1.4.7/.github/workflows/build-Dockerfile
new file mode 100644
index 0000000..c0ea5b8
--- /dev/null
+++ b/v1.4.7/.github/workflows/build-Dockerfile
@@ -0,0 +1,41 @@
+# This Dockerfile is not intended for general use, but is rather used to
+# produce our "light" release packages as part of our official release
+# pipeline.
+#
+# If you want to test this locally you'll need to set the three arguments
+# to values realistic for what the hashicorp/actions-docker-build GitHub
+# action would set, and ensure that there's a suitable "terraform" executable
+# in the dist/linux/${TARGETARCH} directory.
+
+FROM docker.mirror.hashicorp.services/alpine:latest AS default
+
+# This is intended to be run from the hashicorp/actions-docker-build GitHub
+# action, which sets these appropriately based on context.
+ARG PRODUCT_VERSION=UNSPECIFIED
+ARG PRODUCT_REVISION=UNSPECIFIED
+ARG BIN_NAME=terraform
+
+# This argument is set by the Docker toolchain itself, to the name
+# of the CPU architecture we're building an image for.
+# Our caller should've extracted the corresponding "terraform" executable
+# into dist/linux/${TARGETARCH} for us to use.
+ARG TARGETARCH
+
+LABEL maintainer="HashiCorp Terraform Team <terraform@hashicorp.com>"
+
+# New standard version label.
+LABEL version=$PRODUCT_VERSION
+
+# Historical Terraform-specific label preserved for backward compatibility.
+LABEL "com.hashicorp.terraform.version"="${PRODUCT_VERSION}"
+
+RUN apk add --no-cache git openssh
+
+# The hashicorp/actions-docker-build GitHub Action extracts the appropriate
+# release package for our target architecture into the current working
+# directory before running "docker build", which we'll then copy into the
+# Docker image to make sure that we use an identical binary as all of the
+# other official release channels.
+COPY ["dist/linux/${TARGETARCH}/terraform", "/bin/terraform"]
+
+ENTRYPOINT ["/bin/terraform"]
diff --git a/v1.4.7/.github/workflows/build-terraform-oss.yml b/v1.4.7/.github/workflows/build-terraform-oss.yml
new file mode 100644
index 0000000..61fa022
--- /dev/null
+++ b/v1.4.7/.github/workflows/build-terraform-oss.yml
@@ -0,0 +1,101 @@
+---
+name: build_terraform
+
+# This workflow is intended to be called by the build workflow. The crt make
+# targets that are used automatically determine build metadata and handle
+# building and packaging Terraform.
+
+on:
+  workflow_call:
+    inputs:
+      cgo-enabled:
+        type: string
+        default: 0
+        required: true
+      goos:
+        required: true
+        type: string
+      goarch:
+        required: true
+        type: string
+      go-version:
+        type: string
+      package-name:
+        type: string
+        default: terraform
+      product-version:
+        type: string
+        required: true
+      ld-flags:
+        type: string
+        required: true
+      runson:
+        type: string
+        required: true
+
+jobs:
+  build:
+    runs-on: ${{ inputs.runson }}
+    name: Terraform ${{ inputs.goos }} ${{ inputs.goarch }} v${{ inputs.product-version }}
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-go@v3
+        with:
+          go-version: ${{ inputs.go-version }}
+      - name: Determine artifact basename
+        run: echo "ARTIFACT_BASENAME=${{ inputs.package-name }}_${{ inputs.product-version }}_${{ inputs.goos }}_${{ inputs.goarch }}.zip" >> $GITHUB_ENV
+      - name: Build Terraform
+        env:
+          GOOS: ${{ inputs.goos }}
+          GOARCH: ${{ inputs.goarch }}
+          GO_LDFLAGS: ${{ inputs.ld-flags }}
+          ACTIONSOS: ${{ inputs.runson }}
+          CGO_ENABLED: ${{ inputs.cgo-enabled }}
+        uses: hashicorp/actions-go-build@v0.1.7
+        with:
+          product_name: ${{ inputs.package-name }}
+          product_version: ${{ inputs.product-version }}
+          go_version: ${{ inputs.go-version }}
+          os: ${{ inputs.goos }}
+          arch: ${{ inputs.goarch }}
+          reproducible: report
+          instructions: |-
+            mkdir dist out
+            set -x
+            go build -ldflags "${{ inputs.ld-flags }}" -o dist/ .
+            zip -r -j out/${{ env.ARTIFACT_BASENAME }} dist/
+      - uses: actions/upload-artifact@v3
+        with:
+          name: ${{ env.ARTIFACT_BASENAME }}
+          path: out/${{ env.ARTIFACT_BASENAME }}
+          if-no-files-found: error
+      - if: ${{ inputs.goos == 'linux' }}
+        uses: hashicorp/actions-packaging-linux@v1
+        with:
+          name: "terraform"
+          description: "Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned."
+          arch: ${{ inputs.goarch }}
+          version: ${{ inputs.product-version }}
+          maintainer: "HashiCorp"
+          homepage: "https://terraform.io/"
+          license: "MPL-2.0"
+          binary: "dist/terraform"
+          deb_depends: "git"
+          rpm_depends: "git"
+      - if: ${{ inputs.goos == 'linux' }}
+        name: Determine package file names
+        run: |
+          echo "RPM_PACKAGE=$(basename out/*.rpm)" >> $GITHUB_ENV
+          echo "DEB_PACKAGE=$(basename out/*.deb)" >> $GITHUB_ENV
+      - if: ${{ inputs.goos == 'linux' }}
+        uses: actions/upload-artifact@v3
+        with:
+          name: ${{ env.RPM_PACKAGE }}
+          path: out/${{ env.RPM_PACKAGE }}
+          if-no-files-found: error
+      - if: ${{ inputs.goos == 'linux' }}
+        uses: actions/upload-artifact@v3
+        with:
+          name: ${{ env.DEB_PACKAGE }}
+          path: out/${{ env.DEB_PACKAGE }}
+          if-no-files-found: error
\ No newline at end of file
diff --git a/v1.4.7/.github/workflows/build.yml b/v1.4.7/.github/workflows/build.yml
new file mode 100644
index 0000000..f8a3be1
--- /dev/null
+++ b/v1.4.7/.github/workflows/build.yml
@@ -0,0 +1,321 @@
+name: build
+
+# If you want to test changes to this file before merging to a main branch,
+# push them up to a branch whose name has the prefix "build-workflow-dev/",
+# which is a special prefix that triggers this workflow even though it's not
+# actually a release branch.
+
+on:
+  workflow_dispatch:
+  push:
+    branches:
+      - main
+      - 'v[0-9]+.[0-9]+'
+      - releng/**
+    tags:
+      - 'v[0-9]+.[0-9]+.[0-9]+*'
+
+env:
+  PKG_NAME: "terraform"
+
+permissions:
+  contents: read
+  statuses: write
+
+jobs:
+  get-product-version:
+    name: "Determine intended Terraform version"
+    runs-on: ubuntu-latest
+    outputs:
+      product-version: ${{ steps.get-product-version.outputs.product-version }}
+      product-version-base: ${{ steps.get-product-version.outputs.base-product-version }}
+      product-version-pre: ${{ steps.get-product-version.outputs.prerelease-product-version }}
+      experiments: ${{ steps.get-ldflags.outputs.experiments }}
+      go-ldflags: ${{ steps.get-ldflags.outputs.go-ldflags }}
+      pkg-name: ${{ steps.get-pkg-name.outputs.pkg-name }}
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Get Package Name
+        id: get-pkg-name
+        run: |
+          pkg_name=${{ env.PKG_NAME }}
+          echo "pkg-name=${pkg_name}" | tee -a "${GITHUB_OUTPUT}"
+      - name: Decide version number
+        id: get-product-version
+        uses: hashicorp/actions-set-product-version@v1
+      - name: Determine experiments
+        id: get-ldflags
+        env:
+          RAW_VERSION: ${{ steps.get-product-version.outputs.product-version }}
+        shell: bash
+        run: .github/scripts/get_product_version.sh
+      - name: Report chosen version number
+        run: |
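+          # Fail the job if no version was determined, then surface the chosen
+          # version as a workflow annotation.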
+          [ -n "${{steps.get-product-version.outputs.product-version}}" ]
+          echo "::notice title=Terraform CLI Version::${{ steps.get-product-version.outputs.product-version }}"
+
+  get-go-version:
+    name: "Determine Go toolchain version"
+    runs-on: ubuntu-latest
+    outputs:
+      go-version: ${{ steps.get-go-version.outputs.version }}
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Determine Go version
+        id: get-go-version
+        uses: ./.github/actions/go-version
+
+  generate-metadata-file:
+    name: "Generate release metadata"
+    runs-on: ubuntu-latest
+    needs: get-product-version
+    outputs:
+      filepath: ${{ steps.generate-metadata-file.outputs.filepath }}
+
+    steps:
+      - uses: actions/checkout@v3
+      - name: Generate package metadata
+        id: generate-metadata-file
+        uses: hashicorp/actions-generate-metadata@v1
+        with:
+          version: ${{ needs.get-product-version.outputs.product-version }}
+          product: ${{ env.PKG_NAME }}
+
+      - uses: actions/upload-artifact@v2
+        with:
+          name: metadata.json
+          path: ${{ steps.generate-metadata-file.outputs.filepath }}
+
+  build:
+    name: Build for ${{ matrix.goos }}_${{ matrix.goarch }}
+    needs:
+      - get-product-version
+      - get-go-version
+    uses: ./.github/workflows/build-terraform-oss.yml
+    with:
+      goarch: ${{ matrix.goarch }}
+      goos: ${{ matrix.goos }}
+      go-version: ${{ needs.get-go-version.outputs.go-version }}
+      package-name: ${{ needs.get-product-version.outputs.pkg-name }}
+      product-version: ${{ needs.get-product-version.outputs.product-version }}
+      ld-flags: ${{ needs.get-product-version.outputs.go-ldflags }}
+      cgo-enabled: ${{ matrix.cgo-enabled }}
+      runson: ${{ matrix.runson }}
+    secrets: inherit
+    strategy:
+      matrix:
+        include:
+          - {goos: "freebsd", goarch: "386", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "freebsd", goarch: "amd64", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "freebsd", goarch: "arm", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "linux", goarch: "386", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "linux", goarch: "amd64", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "linux", goarch: "arm", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "linux", goarch: "arm64", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "openbsd", goarch: "386", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "openbsd", goarch: "amd64", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "solaris", goarch: "amd64", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "windows", goarch: "386", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "windows", goarch: "amd64", runson: "ubuntu-latest", cgo-enabled: "0"}
+          - {goos: "darwin", goarch: "amd64", runson: "macos-latest", cgo-enabled: "1"}
+          - {goos: "darwin", goarch: "arm64", runson: "macos-latest", cgo-enabled: "1"}
+      fail-fast: false
+
+  package-docker:
+    name: Build Docker image for linux_${{ matrix.arch }}
+    runs-on: ubuntu-latest
+    needs:
+      - get-product-version
+      - build
+    strategy:
+      matrix:
+        arch: ["amd64", "386", "arm", "arm64"]
+      fail-fast: false
+    env:
+      repo: "terraform"
+      version: ${{needs.get-product-version.outputs.product-version}}
+    steps:
+      - uses: actions/checkout@v3
+      - name: Build Docker images
+        uses: hashicorp/actions-docker-build@v1
+        with:
+          pkg_name: "terraform_${{env.version}}"
+          version: ${{env.version}}
+          bin_name: terraform
+          target: default
+          arch: ${{matrix.arch}}
+          dockerfile: .github/workflows/build-Dockerfile
+          smoke_test: .github/scripts/verify_docker v${{ env.version }}
+          tags: |
+            docker.io/hashicorp/${{env.repo}}:${{env.version}}
+            public.ecr.aws/hashicorp/${{env.repo}}:${{env.version}}
+
+  e2etest-build:
+    name: Build e2etest for ${{ matrix.goos }}_${{ matrix.goarch }}
+    runs-on: ubuntu-latest
+    outputs:
+      e2e-cache-key: ${{ steps.set-cache-values.outputs.e2e-cache-key }}
+      e2e-cache-path: ${{ steps.set-cache-values.outputs.e2e-cache-path }}
+    needs:
+      - get-product-version
+      - get-go-version
+    strategy:
+      matrix:
+        include:
+          - {goos: "darwin", goarch: "amd64"}
+          - {goos: "darwin", goarch: "arm64"}
+          - {goos: "windows", goarch: "amd64"}
+          - {goos: "windows", goarch: "386"}
+          - {goos: "linux", goarch: "386"}
+          - {goos: "linux", goarch: "amd64"}
+          - {goos: linux, goarch: "arm"}
+          - {goos: linux, goarch: "arm64"}
+      fail-fast: false
+
+    env:
+      build_script: ./internal/command/e2etest/make-archive.sh
+
+    steps:
+      - name: Set Cache Values
+        id: set-cache-values
+        run: |
+          cache_key=e2e-cache-${{ github.sha }}
+          cache_path=internal/command/e2etest/build
+          echo "e2e-cache-key=${cache_key}" | tee -a "${GITHUB_OUTPUT}"
+          echo "e2e-cache-path=${cache_path}" | tee -a "${GITHUB_OUTPUT}"
+      - uses: actions/checkout@v3
+
+      - name: Install Go toolchain
+        uses: actions/setup-go@v3
+        with:
+          go-version: ${{ needs.get-go-version.outputs.go-version }}
+
+      - name: Build test harness package
+        env:
+          GOOS: ${{ matrix.goos }}
+          GOARCH: ${{ matrix.goarch }}
+          GO_LDFLAGS: ${{ needs.get-product-version.outputs.go-ldflags }}
+        run: |
+          # NOTE: This script reacts to the GOOS, GOARCH, and GO_LDFLAGS
+          # environment variables defined above. The e2e test harness
+          # needs to know the version we're building for so it can verify
+          # that "terraform version" is returning that version number.
+          bash ./internal/command/e2etest/make-archive.sh
+
+      - name: Save test harness to cache
+        uses: actions/cache/save@v3
+        with:
+          path: ${{ steps.set-cache-values.outputs.e2e-cache-path }}
+          key: ${{ steps.set-cache-values.outputs.e2e-cache-key }}_${{ matrix.goos }}_${{ matrix.goarch }}
+
+  e2e-test:
+    name: Run e2e test for ${{ matrix.goos }}_${{ matrix.goarch }}
+    runs-on: ${{ matrix.runson }}
+    needs:
+      - get-product-version
+      - build
+      - e2etest-build
+    strategy:
+      matrix:
+        include:
+          - { runson: ubuntu-latest, goos: linux, goarch: "amd64" }
+          - { runson: ubuntu-latest, goos: linux, goarch: "386" }
+          - { runson: ubuntu-latest, goos: linux, goarch: "arm" }
+          - { runson: ubuntu-latest, goos: linux, goarch: "arm64" }
+          - { runson: macos-latest, goos: darwin, goarch: "amd64" }
+          - { runson: windows-latest, goos: windows, goarch: "amd64" }
+          - { runson: windows-latest, goos: windows, goarch: "386" }
+      fail-fast: false
+
+    env:
+      os: ${{ matrix.goos }}
+      arch: ${{ matrix.goarch }}
+      version: ${{needs.get-product-version.outputs.product-version}}
+
+    steps:
+      # NOTE: This intentionally _does not_ check out the source code
+      # for the commit/tag we're building, because by now we should
+      # have everything we need in the combination of CLI release package
+      # and e2etest package for this platform. (This helps ensure that we're
+      # really testing the release package and not inadvertently testing a
+      # fresh build from source.)
+      - name: Checkout repo
+        if: ${{ (matrix.goos == 'linux') || (matrix.goos == 'darwin') }}
+        uses: actions/checkout@v3
+      - name: "Restore cache"
+        uses: actions/cache/restore@v3
+        id: e2etestpkg
+        with:
+          path: ${{ needs.e2etest-build.outputs.e2e-cache-path }}
+          key: ${{ needs.e2etest-build.outputs.e2e-cache-key }}_${{ matrix.goos }}_${{ matrix.goarch }}
+          fail-on-cache-miss: true
+          enableCrossOsArchive: true
+      - name: "Download Terraform CLI package"
+        uses: actions/download-artifact@v2
+        id: clipkg
+        with:
+          name: terraform_${{env.version}}_${{ env.os }}_${{ env.arch }}.zip
+          path: .
+      - name: Extract packages
+        if: ${{ matrix.goos == 'windows' }}
+        run: |
+          unzip "${{ needs.e2etest-build.outputs.e2e-cache-path }}/terraform-e2etest_${{ env.os }}_${{ env.arch }}.zip"
+          unzip "./terraform_${{env.version}}_${{ env.os }}_${{ env.arch }}.zip"
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v1
+        if: ${{ contains(matrix.goarch, 'arm') }}
+        with:
+          platforms: all
+      - name: Run E2E Tests (Darwin & Linux)
+        id: get-product-version
+        shell: bash
+        if: ${{ (matrix.goos == 'linux') || (matrix.goos == 'darwin') }}
+        env:
+          e2e_cache_path: ${{ needs.e2etest-build.outputs.e2e-cache-path }}
+        run: .github/scripts/e2e_test_linux_darwin.sh
+      - name: Run E2E Tests (Windows)
+        if: ${{ matrix.goos == 'windows' }}
+        env:
+          TF_ACC: 1
+        shell: cmd
+        run: e2etest.exe -test.v
+
+
+  e2e-test-exec:
+    name: Run terraform-exec test for linux amd64
+    runs-on: ubuntu-latest
+    needs:
+      - get-product-version
+      - get-go-version
+      - build
+
+    env:
+      os: ${{ matrix.goos }}
+      arch: ${{ matrix.goarch }}
+      version: ${{needs.get-product-version.outputs.product-version}}
+
+    steps:
+      - name: Install Go toolchain
+        uses: actions/setup-go@v3
+        with:
+          go-version: ${{ needs.get-go-version.outputs.go-version }}
+      - name: Download Terraform CLI package
+        uses: actions/download-artifact@v2
+        id: clipkg
+        with:
+          name: terraform_${{ env.version }}_linux_amd64.zip
+          path: .
+      - name: Checkout terraform-exec repo
+        uses: actions/checkout@v3
+        with:
+          repository: hashicorp/terraform-exec
+          path: terraform-exec
+      - name: Run terraform-exec end-to-end tests
+        run: |
+          FULL_RELEASE_VERSION="${{ env.version }}"
+          unzip terraform_${FULL_RELEASE_VERSION}_linux_amd64.zip
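+          # Point terraform-exec's e2e test suite at the freshly unzipped CLI binary.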
+          export TFEXEC_E2ETEST_TERRAFORM_PATH="$(pwd)/terraform"
+          cd terraform-exec
+          go test -race -timeout=30m -v ./tfexec/internal/e2etest
diff --git a/v1.4.7/.github/workflows/checks.yml b/v1.4.7/.github/workflows/checks.yml
new file mode 100644
index 0000000..c275445
--- /dev/null
+++ b/v1.4.7/.github/workflows/checks.yml
@@ -0,0 +1,190 @@
+# This workflow is a collection of "quick checks" that should be reasonable
+# to run for any new commit to this repository in principle.
+#
+# The main purpose of this workflow is to represent checks that we want to
+# run prior to reviewing and merging a pull request. We should therefore aim
+# for these checks to complete in no more than a few minutes in the common
+# case.
+#
+# The build.yml workflow includes some additional checks we run only for
+# already-merged changes to release branches and tags, as a compromise to
+# keep the PR feedback relatively fast. The intent is that checks.yml should
+# catch most problems but that build.yml might occasionally be the one to catch
+# more esoteric situations, such as architecture-specific or OS-specific
+# misbehavior.
+
+name: Quick Checks
+
+on:
+  pull_request:
+  push:
+    branches:
+      - main
+      - 'v[0-9]+.[0-9]+'
+      - checks-workflow-dev/*
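+      # Pushing a branch with the checks-workflow-dev/ prefix triggers this
+      # workflow, which can be used to test changes to it before merging.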
+    tags:
+      - 'v[0-9]+.[0-9]+.[0-9]+*'
+
+# This workflow runs for not-yet-reviewed external contributions and so it
+# intentionally has no write access and only limited read access to the
+# repository.
+permissions:
+  contents: read
+
+jobs:
+  unit-tests:
+    name: "Unit Tests"
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: "Fetch source code"
+        uses: actions/checkout@v2
+
+      - name: Determine Go version
+        id: go
+        uses: ./.github/actions/go-version
+
+      - name: Install Go toolchain
+        uses: actions/setup-go@v2
+        with:
+          go-version: ${{ steps.go.outputs.version }}
+
+      # NOTE: This cache is shared so the following step must always be
+      # identical across the unit-tests, e2e-tests, and consistency-checks
+      # jobs, or else weird things could happen.
+      - name: Cache Go modules
+        uses: actions/cache@v3
+        with:
+          path: "~/go/pkg"
+          key: go-mod-${{ hashFiles('go.sum') }}
+          restore-keys: |
+            go-mod-
+
+      - name: "Unit tests"
+        run: |
+          go test ./...
+
+  race-tests:
+    name: "Race Tests"
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: "Fetch source code"
+        uses: actions/checkout@v2
+
+      - name: Determine Go version
+        id: go
+        uses: ./.github/actions/go-version
+
+      - name: Install Go toolchain
+        uses: actions/setup-go@v2
+        with:
+          go-version: ${{ steps.go.outputs.version }}
+
+      # NOTE: This cache is shared so the following step must always be
+      # identical across the unit-tests, e2e-tests, and consistency-checks
+      # jobs, or else weird things could happen.
+      - name: Cache Go modules
+        uses: actions/cache@v3
+        with:
+          path: "~/go/pkg"
+          key: go-mod-${{ hashFiles('go.sum') }}
+          restore-keys: |
+            go-mod-
+
+      # The race detector adds significant time to the unit tests, so only run
+      # it for select packages.
+      - name: "Race detector"
+        run: |
+          go test -race ./internal/terraform ./internal/command ./internal/states
+
+  e2e-tests:
+    # This is an intentionally-limited form of our E2E test run which only
+    # covers Terraform running on Linux. The build.yml workflow runs these
+    # tests across various other platforms in order to catch the rare exception
+    # that might leak through this.
+    name: "End-to-end Tests"
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: "Fetch source code"
+        uses: actions/checkout@v2
+
+      - name: Determine Go version
+        id: go
+        uses: ./.github/actions/go-version
+
+      - name: Install Go toolchain
+        uses: actions/setup-go@v2
+        with:
+          go-version: ${{ steps.go.outputs.version }}
+
+      # NOTE: This cache is shared so the following step must always be
+      # identical across the unit-tests, e2e-tests, and consistency-checks
+      # jobs, or else weird things could happen.
+      - name: Cache Go modules
+        uses: actions/cache@v3
+        with:
+          path: "~/go/pkg"
+          key: go-mod-${{ hashFiles('go.sum') }}
+          restore-keys: |
+            go-mod-
+
+      - name: "End-to-end tests"
+        run: |
+          TF_ACC=1 go test -v ./internal/command/e2etest
+
+  consistency-checks:
+    name: "Code Consistency Checks"
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: "Fetch source code"
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0 # We need to do comparisons against the main branch.
+
+      - name: Determine Go version
+        id: go
+        uses: ./.github/actions/go-version
+
+      - name: Install Go toolchain
+        uses: actions/setup-go@v2
+        with:
+          go-version: ${{ steps.go.outputs.version }}
+
+      # NOTE: This cache is shared so the following step must always be
+      # identical across the unit-tests, e2e-tests, and consistency-checks
+      # jobs, or else weird things could happen.
+      - name: Cache Go modules
+        uses: actions/cache@v3
+        with:
+          path: "~/go/pkg"
+          key: go-mod-${{ hashFiles('go.sum') }}
+          restore-keys: |
+            go-mod-
+
+      - name: "go.mod and go.sum consistency check"
+        run: |
+          go mod tidy
+          if [[ -n "$(git status --porcelain)" ]]; then
+            echo >&2 "ERROR: go.mod/go.sum are not up-to-date. Run 'go mod tidy' and then commit the updated files."
+            exit 1
+          fi
+
+      - name: Cache protobuf tools
+        uses: actions/cache@v3
+        with:
+          path: "tools/protobuf-compile/.workdir"
+          key: protobuf-tools-${{ hashFiles('tools/protobuf-compile/protobuf-compile.go') }}
+          restore-keys: |
+            protobuf-tools-
+
+      - name: "Code consistency checks"
+        run: |
+          make fmtcheck importscheck generate staticcheck exhaustive protobuf
+          if [[ -n "$(git status --porcelain)" ]]; then
+            echo >&2 "ERROR: Generated files are inconsistent. Run 'make generate' and 'make protobuf' locally and then commit the updated files."
+            git >&2 status --porcelain
+            exit 1
+          fi
diff --git a/v1.4.7/.github/workflows/crt-hook-equivalence-tests.yml b/v1.4.7/.github/workflows/crt-hook-equivalence-tests.yml
new file mode 100644
index 0000000..a4607cc
--- /dev/null
+++ b/v1.4.7/.github/workflows/crt-hook-equivalence-tests.yml
@@ -0,0 +1,45 @@
+name: crt-hook-equivalence-tests
+
+on:
+  repository_dispatch:
+    types:
+      - crt-hook-equivalence-tests::terraform::*
+
+permissions:
+  contents: write
+
+jobs:
+  parse-metadata:
+    name: "Parse metadata.json"
+    runs-on: ubuntu-latest
+    outputs:
+      version: ${{ steps.parse.outputs.version }}
+      target-branch: ${{ steps.parse.outputs.target-branch }}
+    steps:
+      - name: parse
+        id: parse
+        env:
+          METADATA_PAYLOAD: ${{ toJSON(github.event.client_payload.payload) }}
+        run: |
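+          # The dispatch payload is expected to include at least a version field,
+          # e.g. {"version":"1.4.7"} (shape shown here is illustrative).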
+          VERSION=$(echo "${METADATA_PAYLOAD}" | jq -r '.version')
+          TARGET_BRANCH=$(./.github/scripts/equivalence-test.sh get-target-branch "$VERSION")
+          
+          echo "target-branch=$TARGET_BRANCH" >> "GITHUB_OUTPUT"
+          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
+
+  run-equivalence-tests:
+    runs-on: ubuntu-latest
+    name: "Run equivalence tests"
+    needs:
+      - parse-metadata
+    steps:
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
+        with:
+          ref: ${{ needs.parse-metadata.outputs.target-branch }}
+      - uses: ./.github/actions/equivalence-test
+        with:
+          target-terraform-version: ${{ needs.parse-metadata.outputs.version }}
+          target-terraform-branch: ${{ needs.parse-metadata.outputs.target-branch }}
+          target-equivalence-test-version: 0.3.0
+          target-os: linux
+          target-arch: amd64
diff --git a/v1.4.7/.github/workflows/issue-comment-created.yml b/v1.4.7/.github/workflows/issue-comment-created.yml
new file mode 100644
index 0000000..b8c4d6b
--- /dev/null
+++ b/v1.4.7/.github/workflows/issue-comment-created.yml
@@ -0,0 +1,15 @@
+name: Issue Comment Created Triage
+
+on:
+  issue_comment:
+    types: [created]
+
+jobs:
+  issue_comment_triage:
+    runs-on: ubuntu-latest
+    steps:
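+      # A new comment may be the reply we were waiting for, so drop the triage
+      # labels and let the issue surface for re-triage.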
+      - uses: actions-ecosystem/action-remove-labels@v1
+        with:
+          labels: |
+            stale
+            waiting-reply
diff --git a/v1.4.7/.github/workflows/lock.yml b/v1.4.7/.github/workflows/lock.yml
new file mode 100644
index 0000000..ed67648
--- /dev/null
+++ b/v1.4.7/.github/workflows/lock.yml
@@ -0,0 +1,23 @@
+name: 'Lock Threads'
+
+on:
+  schedule:
+    - cron: '50 1 * * *'
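+    # i.e. once per day at 01:50 UTC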
+
+jobs:
+  lock:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: dessant/lock-threads@v2
+        with:
+          github-token: ${{ github.token }}
+          issue-lock-comment: >
+            I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
+
+            If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
+          issue-lock-inactive-days: '30'
+          pr-lock-comment: >
+            I'm going to lock this pull request because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active contributions.
+
+            If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
+          pr-lock-inactive-days: '30'
diff --git a/v1.4.7/.github/workflows/main.yml b/v1.4.7/.github/workflows/main.yml
new file mode 100644
index 0000000..08b438d
--- /dev/null
+++ b/v1.4.7/.github/workflows/main.yml
@@ -0,0 +1,21 @@
+---
+name: Backport Assistant Runner
+    
+on:
+  pull_request_target:
+    types:
+      - closed
+    
+jobs:
+  backport:
+    if: github.event.pull_request.merged
+    runs-on: ubuntu-latest
+    container: hashicorpdev/backport-assistant:0.2.1
+    steps:
+      - name: Run Backport Assistant
+        run: |
+          backport-assistant backport
+        env:
+          BACKPORT_LABEL_REGEXP: "(?P<target>\\d+\\.\\d+)-backport"
+          BACKPORT_TARGET_TEMPLATE: "v{{.target}}"
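+          # With this regexp and template, a merged PR labeled e.g. "1.4-backport"
+          # is backported to the "v1.4" branch.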
+          GITHUB_TOKEN: ${{ secrets.ELEVATED_GITHUB_TOKEN }}
diff --git a/v1.4.7/.github/workflows/manual-equivalence-tests.yml b/v1.4.7/.github/workflows/manual-equivalence-tests.yml
new file mode 100644
index 0000000..fb6b0d5
--- /dev/null
+++ b/v1.4.7/.github/workflows/manual-equivalence-tests.yml
@@ -0,0 +1,37 @@
+name: manual-equivalence-tests
+
+on:
+  workflow_dispatch:
+    inputs:
+      target-branch:
+        type: string
+        description: "Which branch should be updated?"
+        required: true
+      terraform-version:
+        type: string
+        description: "Terraform version to run against (no v prefix, eg. 1.4.4)."
+        required: true
+      equivalence-test-version:
+        type: string
+        description: 'Equivalence testing framework version to use (no v prefix, eg. 0.3.0).'
+        default: "0.3.0"
+        required: true
+
+permissions:
+  contents: write # We push updates to the equivalence tests back into the repository.
+
+jobs:
+  run-equivalence-tests:
+    name: "Run equivalence tests"
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # v3.3.0
+        with:
+          ref: ${{ inputs.target-branch }}
+      - uses: ./.github/actions/equivalence-test
+        with:
+          target-terraform-version: ${{ inputs.terraform-version }}
+          target-terraform-branch: ${{ inputs.target-branch }}
+          target-equivalence-test-version: ${{ inputs.equivalence-test-version }}
+          target-os: linux
+          target-arch: amd64
diff --git a/v1.4.7/.github/workflows/merged-pr.yml b/v1.4.7/.github/workflows/merged-pr.yml
new file mode 100644
index 0000000..df1249a
--- /dev/null
+++ b/v1.4.7/.github/workflows/merged-pr.yml
@@ -0,0 +1,24 @@
+name: Merged Pull Request
+permissions:
+  pull-requests: write
+
+# only trigger on pull request closed events
+on:
+  pull_request_target:
+    types: [ closed ]
+
+jobs:
+  merge_job:
+    # this job will only run if the PR has been merged
+    if: github.event.pull_request.merged == true
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/github-script@v5
+        with:
+          script: |
+            github.rest.issues.createComment({
+              issue_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              body: "Reminder for the merging maintainer: if this is a user-visible change, please update the changelog on the appropriate release branch."
+            })
diff --git a/v1.4.7/.gitignore b/v1.4.7/.gitignore
new file mode 100644
index 0000000..cc34a88
--- /dev/null
+++ b/v1.4.7/.gitignore
@@ -0,0 +1,27 @@
+*.dll
+*.exe
+.DS_Store
+bin/
+modules-dev/
+/pkg/
+website/.vagrant
+website/.bundle
+website/build
+website/node_modules
+.vagrant/
+*.backup
+*.bak
+*~
+.*.swp
+.idea
+*.iml
+*.test
+
+/terraform
+
+website/vendor
+vendor/
+
+# Coverage
+coverage.txt
diff --git a/v1.4.7/.go-version b/v1.4.7/.go-version
new file mode 100644
index 0000000..2a4feaf
--- /dev/null
+++ b/v1.4.7/.go-version
@@ -0,0 +1 @@
+1.19.6
diff --git a/v1.4.7/.release/ci.hcl b/v1.4.7/.release/ci.hcl
new file mode 100644
index 0000000..5f993bf
--- /dev/null
+++ b/v1.4.7/.release/ci.hcl
@@ -0,0 +1,166 @@
+schema = "1"
+
+project "terraform" {
+  // the team key is not used by CRT currently
+  team = "terraform"
+  slack {
+    notification_channel = "C011WJ112MD"
+  }
+  github {
+    organization = "hashicorp"
+    repository = "terraform"
+
+    release_branches = [
+      "main",
+      "release/**",
+      "v**.**",
+    ]
+  }
+}
+
+event "build" {
+  depends = ["merge"]
+  action "build" {
+    organization = "hashicorp"
+    repository = "terraform"
+    workflow = "build"
+  }
+}
+
+// Read more about what the `prepare` workflow does here:
+// https://hashicorp.atlassian.net/wiki/spaces/RELENG/pages/2489712686/Dec+7th+2022+-+Introducing+the+new+Prepare+workflow
+event "prepare" {
+  depends = ["build"]
+
+  action "prepare" {
+    organization = "hashicorp"
+    repository   = "crt-workflows-common"
+    workflow     = "prepare"
+    depends      = ["build"]
+  }
+
+  notification {
+    on = "fail"
+  }
+}
+
+## These are promotion and post-publish events
+## they should be added to the end of the file after the verify event stanza.
+
+event "trigger-staging" {
+// This event is dispatched by the bob trigger-promotion command
+// and is required - do not delete.
+}
+
+event "promote-staging" {
+  depends = ["trigger-staging"]
+  action "promote-staging" {
+    organization = "hashicorp"
+    repository = "crt-workflows-common"
+    workflow = "promote-staging"
+    config = "release-metadata.hcl"
+  }
+
+  notification {
+    on = "always"
+  }
+}
+
+event "promote-staging-docker" {
+  depends = ["promote-staging"]
+  action "promote-staging-docker" {
+    organization = "hashicorp"
+    repository = "crt-workflows-common"
+    workflow = "promote-staging-docker"
+  }
+
+  notification {
+    on = "always"
+  }
+}
+
+event "promote-staging-packaging" {
+  depends = ["promote-staging-docker"]
+  action "promote-staging-packaging" {
+    organization = "hashicorp"
+    repository = "crt-workflows-common"
+    workflow = "promote-staging-packaging"
+  }
+
+  notification {
+    on = "always"
+  }
+}
+
+event "trigger-production" {
+// This event is dispatched by the bob trigger-promotion command
+// and is required - do not delete.
+}
+
+event "promote-production" {
+  depends = ["trigger-production"]
+  action "promote-production" {
+    organization = "hashicorp"
+    repository = "crt-workflows-common"
+    workflow = "promote-production"
+  }
+
+  notification {
+    on = "always"
+  }
+}
+
+event "promote-production-docker" {
+  depends = ["promote-production"]
+  action "promote-production-docker" {
+    organization = "hashicorp"
+    repository = "crt-workflows-common"
+    workflow = "promote-production-docker"
+  }
+
+  notification {
+    on = "always"
+  }
+}
+
+event "promote-production-packaging" {
+  depends = ["promote-production-docker"]
+  action "promote-production-packaging" {
+    organization = "hashicorp"
+    repository = "crt-workflows-common"
+    workflow = "promote-production-packaging"
+  }
+
+  notification {
+    on = "always"
+  }
+}
+
+// commenting the ironbank update for now until it is all set up on the Ironbank side
+
+// event "update-ironbank" {
+//   depends = ["promote-production-packaging"]
+//   action "update-ironbank" {
+//     organization = "hashicorp"
+//     repository = "crt-workflows-common"
+//     workflow = "update-ironbank"
+//   }
+
+//   notification {
+//     on = "always"
+//   }
+// }
+
+event "crt-hook-tfc-upload" {
+  // this will need to be changed back to update-ironbank once the Ironbank setup is done
+  depends = ["promote-production-packaging"]
+  action "crt-hook-tfc-upload" {
+    organization = "hashicorp"
+    repository = "terraform-releases"
+    workflow = "crt-hook-tfc-upload"
+  }
+
+  notification {
+    on = "always"
+  }
+}
diff --git a/v1.4.7/.release/release-metadata.hcl b/v1.4.7/.release/release-metadata.hcl
new file mode 100644
index 0000000..5a0f95f
--- /dev/null
+++ b/v1.4.7/.release/release-metadata.hcl
@@ -0,0 +1,5 @@
+url_docker_registry_dockerhub = "https://hub.docker.com/r/hashicorp/terraform"
+url_docker_registry_ecr = "https://gallery.ecr.aws/hashicorp/terraform"
+url_license = "https://github.com/hashicorp/terraform/blob/main/LICENSE"
+url_project_website = "https://www.terraform.io"
+url_source_repository = "https://github.com/hashicorp/terraform"
\ No newline at end of file
diff --git a/v1.4.7/.release/security-scan.hcl b/v1.4.7/.release/security-scan.hcl
new file mode 100644
index 0000000..bb86c5c
--- /dev/null
+++ b/v1.4.7/.release/security-scan.hcl
@@ -0,0 +1,16 @@
+# Copyright (c) HashiCorp, Inc.
+# SPDX-License-Identifier: MPL-2.0
+
+container {
+  dependencies = false
+  alpine_secdb = true
+  secrets      = false
+}
+
+binary {
+  secrets      = true
+  go_modules   = true
+  osv          = false
+  oss_index    = true
+  nvd          = false
+}
\ No newline at end of file
diff --git a/v1.4.7/.tfdev b/v1.4.7/.tfdev
new file mode 100644
index 0000000..857b02d
--- /dev/null
+++ b/v1.4.7/.tfdev
@@ -0,0 +1,7 @@
+version_info {
+  version_var    = "github.com/hashicorp/terraform/version.Version"
+  prerelease_var = "github.com/hashicorp/terraform/version.Prerelease"
+}
+
+version_exec = false
+disable_provider_requirements = true
diff --git a/v1.4.7/BUGPROCESS.md b/v1.4.7/BUGPROCESS.md
new file mode 100644
index 0000000..faad74d
--- /dev/null
+++ b/v1.4.7/BUGPROCESS.md
@@ -0,0 +1,85 @@
+# Terraform Core GitHub Bug Triage & Labeling
+The Terraform Core team has adopted a more structured bug triage process than we previously used. Our goal is to respond to reports of issues quickly.
+
+When a bug report is filed, our goal is to either:
+1. Get it to a state where it is ready for engineering to fix it in an upcoming Terraform release, or 
+2. Close it and explain why, if we can't help
+
+## Process
+
+### 1. [Newly created issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Anew+label%3Abug+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+-label%3A%22waiting-response%22+-label%3Aexplained+) require initial filtering. 
+
+These are raw reports that need categorization and support in clarifying them. They need the following done:
+
+* label backends, provisioners, and providers so we can route work on codebases we don't support to the correct teams
+* point requests for help to the community forum and close the issue
+* close reports against old versions we no longer support
+* prompt users who have submitted obviously incomplete reproduction cases for additional information
+
+If an issue requires discussion with the user to get it out of this initial state, leave "new" on there and label it "waiting-response" until this phase of triage is done.
+
+Once this initial filtering has been done, remove the new label. If an issue subjectively looks very high-impact and likely to impact many users, assign it to the [appropriate milestone](https://github.com/hashicorp/terraform/milestones) to mark it as being urgent.
+
+### 2. Clarify [unreproduced issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3E2020-05-01+-label%3Abackend%2Fk8s+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Acreated-asc+)
+
+A core team member initially determines whether the issue is immediately reproducible. If they cannot readily reproduce it, they label it "waiting for reproduction" and correspond with the reporter to describe what is needed. When the issue is reproduced by a core team member, they label it "confirmed". 
+
+"confirmed" issues should have a clear reproduction case. Anyone who picks it up should be able to reproduce it readily without having to invent any details.
+
+Note that the link above excludes issues reported before May 2020; this is to avoid including issues that were reported prior to this new process being implemented. [Unreproduced issues reported before May 2020](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3C2020-05-01+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Areactions-%2B1-desc) will be triaged as capacity permits.
+
+
+### 3. Explain or fix [confirmed issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fk8s+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+)
+The next step for confirmed issues is to either:
+
+* explain why the behavior is expected, label the issue as "working as designed", and close it, or
+* locate the cause of the defect in the codebase. When the defect is located, and that description is posted on the issue, the issue is labeled "explained". In many cases, this step will get skipped if the fix is obvious, and engineers will jump forward and make a PR. 
+
+ [Confirmed crashes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Acrash+label%3Abug+-label%3Abackend%2Fk8s+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+) should generally be considered high impact
+
+### 4. The last step for [explained issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aexplained+no%3Amilestone+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+) is to make a PR to fix them. 
+
+Explained issues that are expected to be fixed in a future release should be assigned to a milestone.
+
+## GitHub Issue Labels
+label                    | description
+------------------------ | -----------
+new                      | new issue not yet triaged
+explained                | a Terraform Core team member has described the root cause of this issue in code
+waiting for reproduction | unable to reproduce issue without further information 
+not reproducible         | closed because a reproduction case could not be generated
+duplicate                | issue closed because another issue already tracks this problem
+confirmed                | a Terraform Core team member has reproduced this issue
+working as designed      | confirmed as reported and closed because the behavior is intended
+pending project          | issue is confirmed but will require a significant project to fix
+
+## Lack of response and unreproducible issues
+When bugs that have been [labeled waiting response](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fk8s+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+label%3Awaiting-response+-label%3Aexplained+sort%3Aupdated-asc+) or [labeled "waiting for reproduction"](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+label%3A%22waiting+for+reproduction%22+-label%3Aexplained+sort%3Aupdated-asc+) for more than 30 days, we'll use our best judgement to determine whether it's more helpful to close it or prompt the reporter again. If they again go without a response for 30 days, they can be closed with a polite message explaining why and inviting the person to submit the needed information or reproduction case in the future.
+
+The intent of this process is to fix the maximum number of bugs in Terraform as quickly as possible; un-actionable bug reports make it harder for Terraform Core team members and community contributors to find bugs they can actually work on.
+
+## Helpful GitHub Filters
+
+### Triage Process
+1. [Newly created issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Anew+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fk8s+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+-label%3A%22waiting-response%22+-label%3Aexplained+) require initial filtering.
+2. Clarify [unreproduced issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3E2020-05-01+-label%3Abackend%2Fk8s+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Acreated-asc+)
+3. Explain or fix [confirmed issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Aexplained+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+). Prioritize [confirmed crashes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Acrash+label%3Abug+-label%3Aexplained+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+).
+4. Fix [explained issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aexplained+no%3Amilestone+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+)
+
+### Other Backlog
+
+[Confirmed needs for documentation fixes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Adocumentation++label%3Aconfirmed+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+)
+
+[Confirmed bugs that will require significant projects to fix](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aconfirmed+label%3A%22pending+project%22+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+)
+
+### Milestone Use
+
+Milestones ending in .x indicate that issues assigned to that milestone are intended to be fixed during that release lifecycle. Milestones ending in .0 indicate issues that will be fixed in that major release. For example:
+
+[0.13.x Milestone](https://github.com/hashicorp/terraform/milestone/17). Issues in this milestone should be considered high-priority but do not block a patch release. All issues in this milestone should be resolved in a 0.13.x release before the 0.14.0 RC1 ships.
+
+[0.14.0 Milestone](https://github.com/hashicorp/terraform/milestone/18). All issues in this milestone must be fixed before 0.14.0 RC1 ships, and should ideally be fixed before 0.14.0 beta 1 ships.
+
+[0.14.x Milestone](https://github.com/hashicorp/terraform/milestone/20). Issues in this milestone are expected to be addressed at some point in the 0.14.x lifecycle, before 0.15.0. All issues in this milestone should be resolved in a 0.14.x release before the 0.15.0 RC1 ships.
+
+[0.15.0 Milestone](https://github.com/hashicorp/terraform/milestone/19). All issues in this milestone must be fixed before 0.15.0 RC1 ships, and should ideally be fixed before 0.15.0 beta 1 ships.
diff --git a/v1.4.7/CHANGELOG.md b/v1.4.7/CHANGELOG.md
new file mode 100644
index 0000000..2e6676a
--- /dev/null
+++ b/v1.4.7/CHANGELOG.md
@@ -0,0 +1,130 @@
+## 1.4.7 (September 13, 2023)
+
+BUG FIXES:
+
+* `terraform_remote_state`: fix incompatibility with states produced by Terraform 1.5 or later which include `check` block results. ([#33814](https://github.com/hashicorp/terraform/pull/33814))
+ 
+## 1.4.6 (April 26, 2023)
+
+BUG FIXES:
+
+* Fix bug when rendering plans that include null strings. ([#33029](https://github.com/hashicorp/terraform/issues/33029))
+* Fix bug when rendering plans that include unknown values in maps. ([#33029](https://github.com/hashicorp/terraform/issues/33029))
+* Fix bug where the plan would render twice when using older versions of TFE as a backend. ([#33018](https://github.com/hashicorp/terraform/issues/33018))
+* Fix bug where sensitive and unknown metadata was not being propagated to dynamic types while rendering plans. ([#33057](https://github.com/hashicorp/terraform/issues/33057))
+* Fix bug where sensitive metadata from the schema was not being included in the `terraform show -json` output. ([#33059](https://github.com/hashicorp/terraform/issues/33059))
+* Fix bug where computed attributes were not being rendered with the `# forces replacement` suffix. ([#33065](https://github.com/hashicorp/terraform/issues/33065))
+
+## 1.4.5 (April 12, 2023)
+
+* Revert change from [[#32892](https://github.com/hashicorp/terraform/issues/32892)] due to an upstream crash.
+* Fix planned destroy value which would cause `terraform_data` to fail when being replaced with `create_before_destroy` ([#32988](https://github.com/hashicorp/terraform/issues/32988))
+
+## 1.4.4 (March 30, 2023)
+
+Due to an incident while migrating build systems for the 1.4.3 release where 
+`CGO_ENABLED=0` was not set, we are rebuilding that version as 1.4.4 with the 
+flag set. No other changes have been made between 1.4.3 and 1.4.4.
+
+## 1.4.3 (March 30, 2023)
+
+BUG FIXES:
+* Prevent sensitive values in non-root module outputs from marking the entire output as sensitive ([#32891](https://github.com/hashicorp/terraform/issues/32891))
+* Fix the handling of planned data source objects when storing a failed plan ([#32876](https://github.com/hashicorp/terraform/issues/32876))
+* Don't fail during plan generation when targeting prevents resources with schema changes from performing a state upgrade ([#32900](https://github.com/hashicorp/terraform/issues/32900))
+* Skip planned changes in sensitive marks when the changed attribute is discarded by the provider ([#32892](https://github.com/hashicorp/terraform/issues/32892))
+
+## 1.4.2 (March 16, 2023)
+
+BUG FIXES:
+
+* Fix bug in which certain uses of `setproduct` caused Terraform to crash ([#32860](https://github.com/hashicorp/terraform/issues/32860))
+* Fix bug in which some provider plans were not being calculated correctly, leading to an "invalid plan" error ([#32860](https://github.com/hashicorp/terraform/issues/32860))
+
+## 1.4.1 (March 15, 2023)
+
+BUG FIXES:
+
+* Enables overriding modules that have the `depends_on` attribute set, while still preventing the `depends_on` attribute itself from being overridden. ([#32796](https://github.com/hashicorp/terraform/issues/32796))
+* `terraform providers mirror`: when a dependency lock file is present, mirror the resolved providers versions, not the latest available based on configuration. ([#32749](https://github.com/hashicorp/terraform/issues/32749))
+* Fixed module downloads from S3 URLs when using AWS IAM roles for service accounts (IRSA). ([#32700](https://github.com/hashicorp/terraform/issues/32700))
+* hcl: Fix a crash in Terraform when attempting to apply defaults into an incompatible type. ([#32775](https://github.com/hashicorp/terraform/issues/32775))
+* Prevent panic when creating a plan which errors before the planning process has begun. ([#32818](https://github.com/hashicorp/terraform/issues/32818))
+* Fix the plan renderer skipping the "no changes" messages when there are no-op outputs within the plan. ([#32820](https://github.com/hashicorp/terraform/issues/32820))
+* Prevent panic when rendering null nested primitive values in a state output. ([#32840](https://github.com/hashicorp/terraform/issues/32840))
+* Warn when an invalid path is specified in `TF_CLI_CONFIG_FILE` ([#32846](https://github.com/hashicorp/terraform/issues/32846))
+
+## 1.4.0 (March 08, 2023)
+
+UPGRADE NOTES:
+
+* config: The `textencodebase64` function when called with encoding "GB18030" will now encode the euro symbol € as the two-byte sequence `0xA2,0xE3`, as required by the GB18030 standard, before applying base64 encoding.
+* config: The `textencodebase64` function when called with encoding "GBK" or "CP936" will now encode the euro symbol € as the single byte `0x80` before applying base64 encoding. This matches the behavior of the Windows API when encoding to this Windows-specific character encoding.
+* `terraform init`: When interpreting the hostname portion of a provider source address or the address of a module in a module registry, Terraform will now use _non-transitional_ IDNA2008 mapping rules instead of the transitional mapping rules previously used.
+
+    This matches a change to [the WHATWG URL spec's rules for interpreting non-ASCII domain names](https://url.spec.whatwg.org/#concept-domain-to-ascii) which is being gradually adopted by web browsers. Terraform aims to follow the interpretation of hostnames used by web browsers for consistency. For some hostnames containing non-ASCII characters this may cause Terraform to now request a different "punycode" hostname when resolving.
+* `terraform init` will now ignore entries in the optional global provider cache directory unless they match a checksum already tracked in the current configuration's dependency lock file. This therefore avoids the long-standing problem that when installing a new provider for the first time from the cache we can't determine the full set of checksums to include in the lock file. Once the lock file has been updated to include a checksum covering the item in the global cache, Terraform will then use the cache entry for subsequent installation of the same provider package. There is an interim CLI configuration opt-out for those who rely on the previous incorrect behavior. ([#32129](https://github.com/hashicorp/terraform/issues/32129))
+* The Terraform plan renderer has been completely rewritten to aid with future Terraform Cloud integration. Users should not see any material change in the plan output between 1.3 and 1.4. If you notice any significant differences, or if Terraform fails to plan successfully due to rendering problems, please open a bug report issue.
+
+BUG FIXES:
+
+* The module installer will now record in its manifest a correct module source URL after normalization when the URL given as input contains both a query string portion and a subdirectory portion. Terraform itself doesn't currently make use of this information and so this is just a cosmetic fix to make the recorded metadata more correct. ([#31636](https://github.com/hashicorp/terraform/issues/31636))
+* config: The `yamldecode` function now correctly handles entirely-nil YAML documents. Previously it would incorrectly return an unknown value instead of a null value. It will now return a null value as documented. ([#32151](https://github.com/hashicorp/terraform/issues/32151))
+* Ensure correct ordering between data sources and the deletion of managed resource dependencies. ([#32209](https://github.com/hashicorp/terraform/issues/32209))
+* Fix Terraform creating objects that should not exist in variables that specify default attributes in optional objects. ([#32178](https://github.com/hashicorp/terraform/issues/32178))
+* Fix several Terraform crashes that are caused by HCL creating objects that should not exist in variables that specify default attributes in optional objects within collections. ([#32178](https://github.com/hashicorp/terraform/issues/32178))
+* Fix inconsistent behaviour in empty vs null collections. ([#32178](https://github.com/hashicorp/terraform/issues/32178))
+* `terraform workspace` now returns a non-zero exit code when given an invalid argument ([#31318](https://github.com/hashicorp/terraform/issues/31318))
+* Fixed an issue where Terraform would always plan changes when using a nested set attribute ([#32536](https://github.com/hashicorp/terraform/issues/32536))
+* Terraform can now better detect when complex optional+computed object attributes are removed from configuration ([#32551](https://github.com/hashicorp/terraform/issues/32551))
+* A new methodology for planning set elements can now better detect optional+computed changes within sets ([#32563](https://github.com/hashicorp/terraform/issues/32563))
+* Fix state lock acquisition and release messages when in `-json` mode; these messages are now written in JSON format ([#32451](https://github.com/hashicorp/terraform/issues/32451))
+* Fix a race condition where the Terraform CLI could check whether a run was confirmable before the run status had been updated, causing it to exit early.
+
+NEW FEATURES:
+
+* When showing the progress of a remote operation running in Terraform Cloud, Terraform CLI will include information about OPA policy evaluation ([#32303](https://github.com/hashicorp/terraform/issues/32303))
+
+ENHANCEMENTS:
+
+* `terraform plan` can now store a plan file even when encountering errors, which can later be inspected to help identify the source of the failures ([#32395](https://github.com/hashicorp/terraform/issues/32395))
+* `terraform_data` is a new builtin managed resource type, which can replace the use of `null_resource`, and can store data of any type ([#31757](https://github.com/hashicorp/terraform/issues/31757))
+* `terraform init` will now ignore entries in the optional global provider cache directory unless they match a checksum already tracked in the current configuration's dependency lock file. This therefore avoids the long-standing problem that when installing a new provider for the first time from the cache we can't determine the full set of checksums to include in the lock file. Once the lock file has been updated to include a checksum covering the item in the global cache, Terraform will then use the cache entry for subsequent installation of the same provider package. There is an interim CLI configuration opt-out for those who rely on the previous incorrect behavior. ([#32129](https://github.com/hashicorp/terraform/issues/32129))
+* Interactive input for sensitive variables is now masked in the UI ([#29520](https://github.com/hashicorp/terraform/issues/29520))
+* A new `-or-create` flag was added to `terraform workspace select`, to aid in creating workspaces in automated situations ([#31633](https://github.com/hashicorp/terraform/issues/31633))
+* A new command was added for exporting Terraform function signatures in machine-readable format: `terraform metadata functions -json` ([#32487](https://github.com/hashicorp/terraform/issues/32487))
+* The "Failed to install provider" error message now includes the reason a provider could not be installed. ([#31898](https://github.com/hashicorp/terraform/issues/31898))
+* backend/gcs: Add `kms_encryption_key` argument, to allow encryption of state files using Cloud KMS keys. ([#24967](https://github.com/hashicorp/terraform/issues/24967))
+* backend/gcs: Add `storage_custom_endpoint` argument, to allow communication with the backend via a Private Service Connect endpoint. ([#28856](https://github.com/hashicorp/terraform/issues/28856))
+* backend/gcs: Update documentation for usage of `gcs` with `terraform_remote_state` ([#32065](https://github.com/hashicorp/terraform/issues/32065))
+* backend/gcs: Update storage package to v1.28.0 ([#29656](https://github.com/hashicorp/terraform/issues/29656))
+* When removing a workspace from the `cloud` backend, `terraform workspace delete` will use Terraform Cloud's [Safe Delete](https://developer.hashicorp.com/terraform/cloud-docs/api-docs/workspaces#safe-delete-a-workspace) API if the `-force` flag is not provided. ([#31949](https://github.com/hashicorp/terraform/pull/31949))
+* backend/oss: More robustly handle endpoint retrieval error ([#32295](https://github.com/hashicorp/terraform/issues/32295))
+* local-exec provisioner: Added `quiet` argument. If `quiet` is set to `true`, Terraform will not print the entire command to stdout during plan. ([#32116](https://github.com/hashicorp/terraform/issues/32116))
+* backend/http: Add support for mTLS authentication. ([#31699](https://github.com/hashicorp/terraform/issues/31699))
+* cloud: Add support for using the [generic hostname](https://developer.hashicorp.com/terraform/cloud-docs/registry/using#generic-hostname-terraform-enterprise) localterraform.com in module and provider sources as a substitute for the currently configured cloud backend hostname. This enhancement was also applied to the remote backend.
+* `terraform show` will now print an explanation when called on a Terraform workspace with empty state detailing why no resources are shown. ([#32629](https://github.com/hashicorp/terraform/issues/32629))
+* backend/gcs: Added support for `GOOGLE_BACKEND_IMPERSONATE_SERVICE_ACCOUNT` env var to allow impersonating a different service account when `GOOGLE_IMPERSONATE_SERVICE_ACCOUNT` is configured for the GCP provider. ([#32557](https://github.com/hashicorp/terraform/issues/32557))
+* backend/cos: Add support for the `assume_role` authentication method with the `tencentcloud` provider. This can be configured via the Terraform config or environment variables.
+* backend/cos: Add support for the `security_token` authentication method with the `tencentcloud` provider. This can be configured via the Terraform config or environment variables.
+
+
+EXPERIMENTS:
+
+* Since its introduction, the `yamlencode` function's documentation carried a warning that it was experimental. This predated our more formalized idea of language experiments and so wasn't guarded by an explicit opt-in, but the intention was to allow for small adjustments to its behavior if we learned it was producing invalid YAML in some cases, given the relative complexity of the YAML specification.
+
+    From Terraform v1.4 onwards, `yamlencode` is no longer documented as experimental and is now subject to the Terraform v1.x Compatibility Promises. Its behavior is unchanged from v1.3, so no special action is required when upgrading.
+
+## Previous Releases
+
+For information on prior major and minor releases, see their changelogs:
+
+* [v1.3](https://github.com/hashicorp/terraform/blob/v1.3/CHANGELOG.md)
+* [v1.2](https://github.com/hashicorp/terraform/blob/v1.2/CHANGELOG.md)
+* [v1.1](https://github.com/hashicorp/terraform/blob/v1.1/CHANGELOG.md)
+* [v1.0](https://github.com/hashicorp/terraform/blob/v1.0/CHANGELOG.md)
+* [v0.15](https://github.com/hashicorp/terraform/blob/v0.15/CHANGELOG.md)
+* [v0.14](https://github.com/hashicorp/terraform/blob/v0.14/CHANGELOG.md)
+* [v0.13](https://github.com/hashicorp/terraform/blob/v0.13/CHANGELOG.md)
+* [v0.12](https://github.com/hashicorp/terraform/blob/v0.12/CHANGELOG.md)
+* [v0.11 and earlier](https://github.com/hashicorp/terraform/blob/v0.11/CHANGELOG.md)
diff --git a/v1.4.7/CODEOWNERS b/v1.4.7/CODEOWNERS
new file mode 100644
index 0000000..b02fd51
--- /dev/null
+++ b/v1.4.7/CODEOWNERS
@@ -0,0 +1,27 @@
+# Each line is a file pattern followed by one or more owners.
+# More on CODEOWNERS files: https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
+
+# Remote-state backend                  # Maintainer
+/internal/backend/remote-state/artifactory       Unmaintained
+/internal/backend/remote-state/azure             @hashicorp/terraform-azure
+/internal/backend/remote-state/consul            @hashicorp/consul @remilapeyre
+/internal/backend/remote-state/cos               @likexian
+/internal/backend/remote-state/etcdv2            Unmaintained
+/internal/backend/remote-state/etcdv3            Unmaintained
+/internal/backend/remote-state/gcs               @hashicorp/terraform-google @hashicorp/terraform-ecosystem-strategic
+/internal/backend/remote-state/http              @hashicorp/terraform-core
+/internal/backend/remote-state/manta             Unmaintained
+/internal/backend/remote-state/oss               @xiaozhu36
+/internal/backend/remote-state/pg                @remilapeyre
+/internal/backend/remote-state/s3                @hashicorp/terraform-aws
+/internal/backend/remote-state/swift             Unmaintained
+/internal/backend/remote-state/kubernetes        @jrhouston @alexsomesan
+
+# Provisioners
+builtin/provisioners/chef               Deprecated
+builtin/provisioners/file               @hashicorp/terraform-core
+builtin/provisioners/habitat            Deprecated
+builtin/provisioners/local-exec         @hashicorp/terraform-core
+builtin/provisioners/puppet             Deprecated
+builtin/provisioners/remote-exec        @hashicorp/terraform-core
+builtin/provisioners/salt-masterless    Deprecated
diff --git a/v1.4.7/Dockerfile b/v1.4.7/Dockerfile
new file mode 100644
index 0000000..1e1bb97
--- /dev/null
+++ b/v1.4.7/Dockerfile
@@ -0,0 +1,23 @@
+# This Dockerfile builds on golang:alpine by building Terraform from source
+# using the current working directory.
+#
+# This produces a docker image that contains a working Terraform binary along
+# with all of its source code. This is not what produces the official releases
+# in the "terraform" namespace on Dockerhub; those images include only
+# the officially-released binary from releases.hashicorp.com and are
+# built by the (closed-source) official release process.
+
+FROM docker.mirror.hashicorp.services/golang:alpine
+LABEL maintainer="HashiCorp Terraform Team <terraform@hashicorp.com>"
+
+RUN apk add --no-cache git bash openssh
+
+ENV TF_DEV=true
+ENV TF_RELEASE=1
+
+WORKDIR $GOPATH/src/github.com/hashicorp/terraform
+COPY . .
+RUN /bin/bash ./scripts/build.sh
+
+WORKDIR $GOPATH
+ENTRYPOINT ["terraform"]
diff --git a/v1.4.7/LICENSE b/v1.4.7/LICENSE
new file mode 100644
index 0000000..1409d6a
--- /dev/null
+++ b/v1.4.7/LICENSE
@@ -0,0 +1,356 @@
+Copyright (c) 2014 HashiCorp, Inc.
+
+Mozilla Public License, version 2.0
+
+1. Definitions
+
+1.1. “Contributor”
+
+     means each individual or legal entity that creates, contributes to the
+     creation of, or owns Covered Software.
+
+1.2. “Contributor Version”
+
+     means the combination of the Contributions of others (if any) used by a
+     Contributor and that particular Contributor’s Contribution.
+
+1.3. “Contribution”
+
+     means Covered Software of a particular Contributor.
+
+1.4. “Covered Software”
+
+     means Source Code Form to which the initial Contributor has attached the
+     notice in Exhibit A, the Executable Form of such Source Code Form, and
+     Modifications of such Source Code Form, in each case including portions
+     thereof.
+
+1.5. “Incompatible With Secondary Licenses”
+     means
+
+     a. that the initial Contributor has attached the notice described in
+        Exhibit B to the Covered Software; or
+
+     b. that the Covered Software was made available under the terms of version
+        1.1 or earlier of the License, but not also under the terms of a
+        Secondary License.
+
+1.6. “Executable Form”
+
+     means any form of the work other than Source Code Form.
+
+1.7. “Larger Work”
+
+     means a work that combines Covered Software with other material, in a separate
+     file or files, that is not Covered Software.
+
+1.8. “License”
+
+     means this document.
+
+1.9. “Licensable”
+
+     means having the right to grant, to the maximum extent possible, whether at the
+     time of the initial grant or subsequently, any and all of the rights conveyed by
+     this License.
+
+1.10. “Modifications”
+
+     means any of the following:
+
+     a. any file in Source Code Form that results from an addition to, deletion
+        from, or modification of the contents of Covered Software; or
+
+     b. any new file in Source Code Form that contains any Covered Software.
+
+1.11. “Patent Claims” of a Contributor
+
+      means any patent claim(s), including without limitation, method, process,
+      and apparatus claims, in any patent Licensable by such Contributor that
+      would be infringed, but for the grant of the License, by the making,
+      using, selling, offering for sale, having made, import, or transfer of
+      either its Contributions or its Contributor Version.
+
+1.12. “Secondary License”
+
+      means either the GNU General Public License, Version 2.0, the GNU Lesser
+      General Public License, Version 2.1, the GNU Affero General Public
+      License, Version 3.0, or any later versions of those licenses.
+
+1.13. “Source Code Form”
+
+      means the form of the work preferred for making modifications.
+
+1.14. “You” (or “Your”)
+
+      means an individual or a legal entity exercising rights under this
+      License. For legal entities, “You” includes any entity that controls, is
+      controlled by, or is under common control with You. For purposes of this
+      definition, “control” means (a) the power, direct or indirect, to cause
+      the direction or management of such entity, whether by contract or
+      otherwise, or (b) ownership of more than fifty percent (50%) of the
+      outstanding shares or beneficial ownership of such entity.
+
+
+2. License Grants and Conditions
+
+2.1. Grants
+
+     Each Contributor hereby grants You a world-wide, royalty-free,
+     non-exclusive license:
+
+     a. under intellectual property rights (other than patent or trademark)
+        Licensable by such Contributor to use, reproduce, make available,
+        modify, display, perform, distribute, and otherwise exploit its
+        Contributions, either on an unmodified basis, with Modifications, or as
+        part of a Larger Work; and
+
+     b. under Patent Claims of such Contributor to make, use, sell, offer for
+        sale, have made, import, and otherwise transfer either its Contributions
+        or its Contributor Version.
+
+2.2. Effective Date
+
+     The licenses granted in Section 2.1 with respect to any Contribution become
+     effective for each Contribution on the date the Contributor first distributes
+     such Contribution.
+
+2.3. Limitations on Grant Scope
+
+     The licenses granted in this Section 2 are the only rights granted under this
+     License. No additional rights or licenses will be implied from the distribution
+     or licensing of Covered Software under this License. Notwithstanding Section
+     2.1(b) above, no patent license is granted by a Contributor:
+
+     a. for any code that a Contributor has removed from Covered Software; or
+
+     b. for infringements caused by: (i) Your and any other third party’s
+        modifications of Covered Software, or (ii) the combination of its
+        Contributions with other software (except as part of its Contributor
+        Version); or
+
+     c. under Patent Claims infringed by Covered Software in the absence of its
+        Contributions.
+
+     This License does not grant any rights in the trademarks, service marks, or
+     logos of any Contributor (except as may be necessary to comply with the
+     notice requirements in Section 3.4).
+
+2.4. Subsequent Licenses
+
+     No Contributor makes additional grants as a result of Your choice to
+     distribute the Covered Software under a subsequent version of this License
+     (see Section 10.2) or under the terms of a Secondary License (if permitted
+     under the terms of Section 3.3).
+
+2.5. Representation
+
+     Each Contributor represents that the Contributor believes its Contributions
+     are its original creation(s) or it has sufficient rights to grant the
+     rights to its Contributions conveyed by this License.
+
+2.6. Fair Use
+
+     This License is not intended to limit any rights You have under applicable
+     copyright doctrines of fair use, fair dealing, or other equivalents.
+
+2.7. Conditions
+
+     Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
+     Section 2.1.
+
+
+3. Responsibilities
+
+3.1. Distribution of Source Form
+
+     All distribution of Covered Software in Source Code Form, including any
+     Modifications that You create or to which You contribute, must be under the
+     terms of this License. You must inform recipients that the Source Code Form
+     of the Covered Software is governed by the terms of this License, and how
+     they can obtain a copy of this License. You may not attempt to alter or
+     restrict the recipients’ rights in the Source Code Form.
+
+3.2. Distribution of Executable Form
+
+     If You distribute Covered Software in Executable Form then:
+
+     a. such Covered Software must also be made available in Source Code Form,
+        as described in Section 3.1, and You must inform recipients of the
+        Executable Form how they can obtain a copy of such Source Code Form by
+        reasonable means in a timely manner, at a charge no more than the cost
+        of distribution to the recipient; and
+
+     b. You may distribute such Executable Form under the terms of this License,
+        or sublicense it under different terms, provided that the license for
+        the Executable Form does not attempt to limit or alter the recipients’
+        rights in the Source Code Form under this License.
+
+3.3. Distribution of a Larger Work
+
+     You may create and distribute a Larger Work under terms of Your choice,
+     provided that You also comply with the requirements of this License for the
+     Covered Software. If the Larger Work is a combination of Covered Software
+     with a work governed by one or more Secondary Licenses, and the Covered
+     Software is not Incompatible With Secondary Licenses, this License permits
+     You to additionally distribute such Covered Software under the terms of
+     such Secondary License(s), so that the recipient of the Larger Work may, at
+     their option, further distribute the Covered Software under the terms of
+     either this License or such Secondary License(s).
+
+3.4. Notices
+
+     You may not remove or alter the substance of any license notices (including
+     copyright notices, patent notices, disclaimers of warranty, or limitations
+     of liability) contained within the Source Code Form of the Covered
+     Software, except that You may alter any license notices to the extent
+     required to remedy known factual inaccuracies.
+
+3.5. Application of Additional Terms
+
+     You may choose to offer, and to charge a fee for, warranty, support,
+     indemnity or liability obligations to one or more recipients of Covered
+     Software. However, You may do so only on Your own behalf, and not on behalf
+     of any Contributor. You must make it absolutely clear that any such
+     warranty, support, indemnity, or liability obligation is offered by You
+     alone, and You hereby agree to indemnify every Contributor for any
+     liability incurred by such Contributor as a result of warranty, support,
+     indemnity or liability terms You offer. You may include additional
+     disclaimers of warranty and limitations of liability specific to any
+     jurisdiction.
+
+4. Inability to Comply Due to Statute or Regulation
+
+   If it is impossible for You to comply with any of the terms of this License
+   with respect to some or all of the Covered Software due to statute, judicial
+   order, or regulation then You must: (a) comply with the terms of this License
+   to the maximum extent possible; and (b) describe the limitations and the code
+   they affect. Such description must be placed in a text file included with all
+   distributions of the Covered Software under this License. Except to the
+   extent prohibited by statute or regulation, such description must be
+   sufficiently detailed for a recipient of ordinary skill to be able to
+   understand it.
+
+5. Termination
+
+5.1. The rights granted under this License will terminate automatically if You
+     fail to comply with any of its terms. However, if You become compliant,
+     then the rights granted under this License from a particular Contributor
+     are reinstated (a) provisionally, unless and until such Contributor
+     explicitly and finally terminates Your grants, and (b) on an ongoing basis,
+     if such Contributor fails to notify You of the non-compliance by some
+     reasonable means prior to 60 days after You have come back into compliance.
+     Moreover, Your grants from a particular Contributor are reinstated on an
+     ongoing basis if such Contributor notifies You of the non-compliance by
+     some reasonable means, this is the first time You have received notice of
+     non-compliance with this License from such Contributor, and You become
+     compliant prior to 30 days after Your receipt of the notice.
+
+5.2. If You initiate litigation against any entity by asserting a patent
+     infringement claim (excluding declaratory judgment actions, counter-claims,
+     and cross-claims) alleging that a Contributor Version directly or
+     indirectly infringes any patent, then the rights granted to You by any and
+     all Contributors for the Covered Software under Section 2.1 of this License
+     shall terminate.
+
+5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
+     license agreements (excluding distributors and resellers) which have been
+     validly granted by You or Your distributors under this License prior to
+     termination shall survive termination.
+
+6. Disclaimer of Warranty
+
+   Covered Software is provided under this License on an “as is” basis, without
+   warranty of any kind, either expressed, implied, or statutory, including,
+   without limitation, warranties that the Covered Software is free of defects,
+   merchantable, fit for a particular purpose or non-infringing. The entire
+   risk as to the quality and performance of the Covered Software is with You.
+   Should any Covered Software prove defective in any respect, You (not any
+   Contributor) assume the cost of any necessary servicing, repair, or
+   correction. This disclaimer of warranty constitutes an essential part of this
+   License. No use of  any Covered Software is authorized under this License
+   except under this disclaimer.
+
+7. Limitation of Liability
+
+   Under no circumstances and under no legal theory, whether tort (including
+   negligence), contract, or otherwise, shall any Contributor, or anyone who
+   distributes Covered Software as permitted above, be liable to You for any
+   direct, indirect, special, incidental, or consequential damages of any
+   character including, without limitation, damages for lost profits, loss of
+   goodwill, work stoppage, computer failure or malfunction, or any and all
+   other commercial damages or losses, even if such party shall have been
+   informed of the possibility of such damages. This limitation of liability
+   shall not apply to liability for death or personal injury resulting from such
+   party’s negligence to the extent applicable law prohibits such limitation.
+   Some jurisdictions do not allow the exclusion or limitation of incidental or
+   consequential damages, so this exclusion and limitation may not apply to You.
+
+8. Litigation
+
+   Any litigation relating to this License may be brought only in the courts of
+   a jurisdiction where the defendant maintains its principal place of business
+   and such litigation shall be governed by laws of that jurisdiction, without
+   reference to its conflict-of-law provisions. Nothing in this Section shall
+   prevent a party’s ability to bring cross-claims or counter-claims.
+
+9. Miscellaneous
+
+   This License represents the complete agreement concerning the subject matter
+   hereof. If any provision of this License is held to be unenforceable, such
+   provision shall be reformed only to the extent necessary to make it
+   enforceable. Any law or regulation which provides that the language of a
+   contract shall be construed against the drafter shall not be used to construe
+   this License against a Contributor.
+
+
+10. Versions of the License
+
+10.1. New Versions
+
+      Mozilla Foundation is the license steward. Except as provided in Section
+      10.3, no one other than the license steward has the right to modify or
+      publish new versions of this License. Each version will be given a
+      distinguishing version number.
+
+10.2. Effect of New Versions
+
+      You may distribute the Covered Software under the terms of the version of
+      the License under which You originally received the Covered Software, or
+      under the terms of any subsequent version published by the license
+      steward.
+
+10.3. Modified Versions
+
+      If you create software not governed by this License, and you want to
+      create a new license for such software, you may create and use a modified
+      version of this License if you rename the license and remove any
+      references to the name of the license steward (except to note that such
+      modified license differs from this License).
+
+10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
+      If You choose to distribute Source Code Form that is Incompatible With
+      Secondary Licenses under the terms of this version of the License, the
+      notice described in Exhibit B of this License must be attached.
+
+Exhibit A - Source Code Form License Notice
+
+      This Source Code Form is subject to the
+      terms of the Mozilla Public License, v.
+      2.0. If a copy of the MPL was not
+      distributed with this file, You can
+      obtain one at
+      http://mozilla.org/MPL/2.0/.
+
+If it is not possible or desirable to put the notice in a particular file, then
+You may include the notice in a location (such as a LICENSE file in a relevant
+directory) where a recipient would be likely to look for such a notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - “Incompatible With Secondary Licenses” Notice
+
+      This Source Code Form is “Incompatible
+      With Secondary Licenses”, as defined by
+      the Mozilla Public License, v. 2.0.
+
diff --git a/v1.4.7/Makefile b/v1.4.7/Makefile
new file mode 100644
index 0000000..84a5dfa
--- /dev/null
+++ b/v1.4.7/Makefile
@@ -0,0 +1,46 @@
+# generate runs `go generate` to build the dynamically generated
+# source files, except the protobuf stubs which are built instead with
+# "make protobuf".
+generate:
+	go generate ./...
+
+# We separate the protobuf generation because most development tasks on
+# Terraform do not involve changing protobuf files and protoc is not a
+# go-gettable dependency and so getting it installed can be inconvenient.
+#
+# If you are working on changes to protobuf interfaces, run this Makefile
+# target to be sure to regenerate all of the protobuf stubs using the expected
+# versions of protoc and the protoc Go plugins.
+protobuf:
+	go run ./tools/protobuf-compile .
+
+fmtcheck:
+	"$(CURDIR)/scripts/gofmtcheck.sh"
+
+importscheck:
+	"$(CURDIR)/scripts/goimportscheck.sh"
+
+staticcheck:
+	"$(CURDIR)/scripts/staticcheck.sh"
+
+exhaustive:
+	"$(CURDIR)/scripts/exhaustive.sh"
+
+# Run this if working on the website locally to run in watch mode.
+website:
+	$(MAKE) -C website website
+
+# Use this if you have run `website/build-local` to use the locally built image.
+website/local:
+	$(MAKE) -C website website/local
+
+# Run this to generate a new local Docker image.
+website/build-local:
+	$(MAKE) -C website website/build-local
+
+# disallow any parallelism (-j) for Make. This is necessary since some
+# commands during the build process create temporary files that collide
+# under parallel conditions.
+.NOTPARALLEL:
+
+.PHONY: fmtcheck importscheck generate protobuf staticcheck website website/local website/build-local
\ No newline at end of file
diff --git a/v1.4.7/README.md b/v1.4.7/README.md
new file mode 100644
index 0000000..e8509e3
--- /dev/null
+++ b/v1.4.7/README.md
@@ -0,0 +1,48 @@
+# Terraform
+
+- Website: https://www.terraform.io
+- Forums: [HashiCorp Discuss](https://discuss.hashicorp.com/c/terraform-core)
+- Documentation: [https://www.terraform.io/docs/](https://www.terraform.io/docs/)
+- Tutorials: [HashiCorp's Learn Platform](https://learn.hashicorp.com/terraform)
+- Certification Exam: [HashiCorp Certified: Terraform Associate](https://www.hashicorp.com/certification/#hashicorp-certified-terraform-associate)
+
+<img alt="Terraform" src="https://www.datocms-assets.com/2885/1629941242-logo-terraform-main.svg" width="600px">
+
+Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
+
+The key features of Terraform are:
+
+- **Infrastructure as Code**: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
+
+- **Execution Plans**: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
+
+- **Resource Graph**: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
+
+- **Change Automation**: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
+
+For more information, refer to the [What is Terraform?](https://www.terraform.io/intro) page on the Terraform website.
+
+## Getting Started & Documentation
+
+Documentation is available on the [Terraform website](https://www.terraform.io):
+
+- [Introduction](https://www.terraform.io/intro)
+- [Documentation](https://www.terraform.io/docs)
+
+If you're new to Terraform and want to get started creating infrastructure, please check out our [Getting Started guides](https://learn.hashicorp.com/terraform#getting-started) on HashiCorp's learning platform. There are also [additional guides](https://learn.hashicorp.com/terraform#operations-and-development) to continue your learning.
+
+Show off your Terraform knowledge by passing a certification exam. Visit the [certification page](https://www.hashicorp.com/certification/) for information about exams and find [study materials](https://learn.hashicorp.com/terraform/certification/terraform-associate) on HashiCorp's learning platform.
+
+## Developing Terraform
+
+This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins, and Terraform can automatically download providers that are published on [the Terraform Registry](https://registry.terraform.io). HashiCorp develops some providers, and others are developed by other organizations. For more information, see [Extending Terraform](https://www.terraform.io/docs/extend/index.html).
+
+- To learn more about compiling Terraform and contributing suggested changes, refer to [the contributing guide](.github/CONTRIBUTING.md).
+
+- To learn more about how we handle bug reports, refer to the [bug triage guide](./BUGPROCESS.md).
+
+- To learn how to contribute to the Terraform documentation in this repository, refer to the [Terraform Documentation README](/website/README.md).
+
+## License
+
+[Mozilla Public License v2.0](https://github.com/hashicorp/terraform/blob/main/LICENSE)
diff --git a/v1.4.7/checkpoint.go b/v1.4.7/checkpoint.go
new file mode 100644
index 0000000..31cc29b
--- /dev/null
+++ b/v1.4.7/checkpoint.go
@@ -0,0 +1,82 @@
+package main
+
+import (
+	"fmt"
+	"log"
+	"path/filepath"
+
+	"github.com/hashicorp/go-checkpoint"
+	"github.com/hashicorp/terraform/internal/command"
+	"github.com/hashicorp/terraform/internal/command/cliconfig"
+)
+
+func init() {
+	checkpointResult = make(chan *checkpoint.CheckResponse, 1)
+}
+
+var checkpointResult chan *checkpoint.CheckResponse
+
+// runCheckpoint runs a HashiCorp Checkpoint request. You can read about
+// Checkpoint here: https://github.com/hashicorp/go-checkpoint.
+func runCheckpoint(c *cliconfig.Config) {
+	// If the user doesn't want checkpoint at all, then return.
+	if c.DisableCheckpoint {
+		log.Printf("[INFO] Checkpoint disabled. Not running.")
+		checkpointResult <- nil
+		return
+	}
+
+	configDir, err := cliconfig.ConfigDir()
+	if err != nil {
+		log.Printf("[ERR] Checkpoint setup error: %s", err)
+		checkpointResult <- nil
+		return
+	}
+
+	version := Version
+	if VersionPrerelease != "" {
+		version += fmt.Sprintf("-%s", VersionPrerelease)
+	}
+
+	signaturePath := filepath.Join(configDir, "checkpoint_signature")
+	if c.DisableCheckpointSignature {
+		log.Printf("[INFO] Checkpoint signature disabled")
+		signaturePath = ""
+	}
+
+	resp, err := checkpoint.Check(&checkpoint.CheckParams{
+		Product:       "terraform",
+		Version:       version,
+		SignatureFile: signaturePath,
+		CacheFile:     filepath.Join(configDir, "checkpoint_cache"),
+	})
+	if err != nil {
+		log.Printf("[ERR] Checkpoint error: %s", err)
+		resp = nil
+	}
+
+	checkpointResult <- resp
+}
+
+// commandVersionCheck implements command.VersionCheckFunc and is used
+// as the version checker.
+func commandVersionCheck() (command.VersionCheckInfo, error) {
+	// Wait for the result to come through
+	info := <-checkpointResult
+	if info == nil {
+		var zero command.VersionCheckInfo
+		return zero, nil
+	}
+
+	// Build the alerts that we may have received about our version
+	alerts := make([]string, len(info.Alerts))
+	for i, a := range info.Alerts {
+		alerts[i] = a.Message
+	}
+
+	return command.VersionCheckInfo{
+		Outdated: info.Outdated,
+		Latest:   info.CurrentVersion,
+		Alerts:   alerts,
+	}, nil
+}
diff --git a/v1.4.7/codecov.yml b/v1.4.7/codecov.yml
new file mode 100644
index 0000000..5aeb0a3
--- /dev/null
+++ b/v1.4.7/codecov.yml
@@ -0,0 +1,24 @@
+comment:
+  layout: "flags, files"
+  behavior: default
+  require_changes: true   # only comment on changes in coverage
+  require_base: yes       # [yes :: must have a base report to post]
+  require_head: yes       # [yes :: must have a head report to post]
+  branches:               # branch names that can post comment
+    - "main"
+
+coverage:
+  status:
+    project:
+      default:
+        informational: true
+        target: auto
+        threshold: "0.5%"
+    patch:
+      default:
+        informational: true
+        target: auto
+        threshold: "0.5%"
+
+github_checks:
+    annotations: false
diff --git a/v1.4.7/commands.go b/v1.4.7/commands.go
new file mode 100644
index 0000000..aadfc51
--- /dev/null
+++ b/v1.4.7/commands.go
@@ -0,0 +1,448 @@
+package main
+
+import (
+	"os"
+	"os/signal"
+
+	"github.com/mitchellh/cli"
+
+	"github.com/hashicorp/go-plugin"
+	svchost "github.com/hashicorp/terraform-svchost"
+	"github.com/hashicorp/terraform-svchost/auth"
+	"github.com/hashicorp/terraform-svchost/disco"
+	"github.com/hashicorp/terraform/internal/addrs"
+	"github.com/hashicorp/terraform/internal/command"
+	"github.com/hashicorp/terraform/internal/command/cliconfig"
+	"github.com/hashicorp/terraform/internal/command/views"
+	"github.com/hashicorp/terraform/internal/command/webbrowser"
+	"github.com/hashicorp/terraform/internal/getproviders"
+	pluginDiscovery "github.com/hashicorp/terraform/internal/plugin/discovery"
+	"github.com/hashicorp/terraform/internal/terminal"
+)
+
+// runningInAutomationEnvName gives the name of an environment variable that
+// can be set to any non-empty value in order to suppress certain messages
+// that assume that Terraform is being run from a command prompt.
+const runningInAutomationEnvName = "TF_IN_AUTOMATION"
+
+// Commands is the mapping of all the available Terraform commands.
+var Commands map[string]cli.CommandFactory
+
+// PrimaryCommands is an ordered sequence of the top-level commands (not
+// subcommands) that we emphasize at the top of our help output. This is
+// ordered so that we can show them in the typical workflow order, rather
+// than in alphabetical order. Anything not in this sequence or in the
+// HiddenCommands set appears under "all other commands".
+var PrimaryCommands []string
+
+// HiddenCommands is a set of top-level commands (not subcommands) that are
+// not advertised in the top-level help at all. This is typically because
+// they are either just stubs that return an error message about something
+// no longer being supported or backward-compatibility aliases for other
+// commands.
+//
+// No commands in the PrimaryCommands sequence should also appear in the
+// HiddenCommands set, because that would be rather silly.
+var HiddenCommands map[string]struct{}
+
+// Ui is the cli.Ui used for communicating to the outside world.
+var Ui cli.Ui
+
+func initCommands(
+	originalWorkingDir string,
+	streams *terminal.Streams,
+	config *cliconfig.Config,
+	services *disco.Disco,
+	providerSrc getproviders.Source,
+	providerDevOverrides map[addrs.Provider]getproviders.PackageLocalDir,
+	unmanagedProviders map[addrs.Provider]*plugin.ReattachConfig,
+) {
+	var inAutomation bool
+	if v := os.Getenv(runningInAutomationEnvName); v != "" {
+		inAutomation = true
+	}
+
+	for userHost, hostConfig := range config.Hosts {
+		host, err := svchost.ForComparison(userHost)
+		if err != nil {
+			// We expect the config was already validated by the time we get
+			// here, so we'll just ignore invalid hostnames.
+			continue
+		}
+		services.ForceHostServices(host, hostConfig.Services)
+	}
+
+	configDir, err := cliconfig.ConfigDir()
+	if err != nil {
+		configDir = "" // No config dir available (e.g. looking up a home directory failed)
+	}
+
+	wd := WorkingDir(originalWorkingDir, os.Getenv("TF_DATA_DIR"))
+
+	meta := command.Meta{
+		WorkingDir: wd,
+		Streams:    streams,
+		View:       views.NewView(streams).SetRunningInAutomation(inAutomation),
+
+		Color:            true,
+		GlobalPluginDirs: globalPluginDirs(),
+		Ui:               Ui,
+
+		Services:        services,
+		BrowserLauncher: webbrowser.NewNativeLauncher(),
+
+		RunningInAutomation: inAutomation,
+		CLIConfigDir:        configDir,
+		PluginCacheDir:      config.PluginCacheDir,
+
+		PluginCacheMayBreakDependencyLockFile: config.PluginCacheMayBreakDependencyLockFile,
+
+		ShutdownCh: makeShutdownCh(),
+
+		ProviderSource:       providerSrc,
+		ProviderDevOverrides: providerDevOverrides,
+		UnmanagedProviders:   unmanagedProviders,
+
+		AllowExperimentalFeatures: ExperimentsAllowed(),
+	}
+
+	// The command list is included in the terraform -help
+	// output, which is in turn included in the docs at
+	// website/docs/cli/commands/index.html.markdown; if you
+	// add, remove or reclassify commands then consider updating
+	// that to match.
+
+	Commands = map[string]cli.CommandFactory{
+		"apply": func() (cli.Command, error) {
+			return &command.ApplyCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"console": func() (cli.Command, error) {
+			return &command.ConsoleCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"destroy": func() (cli.Command, error) {
+			return &command.ApplyCommand{
+				Meta:    meta,
+				Destroy: true,
+			}, nil
+		},
+
+		"env": func() (cli.Command, error) {
+			return &command.WorkspaceCommand{
+				Meta:       meta,
+				LegacyName: true,
+			}, nil
+		},
+
+		"env list": func() (cli.Command, error) {
+			return &command.WorkspaceListCommand{
+				Meta:       meta,
+				LegacyName: true,
+			}, nil
+		},
+
+		"env select": func() (cli.Command, error) {
+			return &command.WorkspaceSelectCommand{
+				Meta:       meta,
+				LegacyName: true,
+			}, nil
+		},
+
+		"env new": func() (cli.Command, error) {
+			return &command.WorkspaceNewCommand{
+				Meta:       meta,
+				LegacyName: true,
+			}, nil
+		},
+
+		"env delete": func() (cli.Command, error) {
+			return &command.WorkspaceDeleteCommand{
+				Meta:       meta,
+				LegacyName: true,
+			}, nil
+		},
+
+		"fmt": func() (cli.Command, error) {
+			return &command.FmtCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"get": func() (cli.Command, error) {
+			return &command.GetCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"graph": func() (cli.Command, error) {
+			return &command.GraphCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"import": func() (cli.Command, error) {
+			return &command.ImportCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"init": func() (cli.Command, error) {
+			return &command.InitCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"login": func() (cli.Command, error) {
+			return &command.LoginCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"logout": func() (cli.Command, error) {
+			return &command.LogoutCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"metadata": func() (cli.Command, error) {
+			return &command.MetadataCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"metadata functions": func() (cli.Command, error) {
+			return &command.MetadataFunctionsCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"output": func() (cli.Command, error) {
+			return &command.OutputCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"plan": func() (cli.Command, error) {
+			return &command.PlanCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"providers": func() (cli.Command, error) {
+			return &command.ProvidersCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"providers lock": func() (cli.Command, error) {
+			return &command.ProvidersLockCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"providers mirror": func() (cli.Command, error) {
+			return &command.ProvidersMirrorCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"providers schema": func() (cli.Command, error) {
+			return &command.ProvidersSchemaCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"push": func() (cli.Command, error) {
+			return &command.PushCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"refresh": func() (cli.Command, error) {
+			return &command.RefreshCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"show": func() (cli.Command, error) {
+			return &command.ShowCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"taint": func() (cli.Command, error) {
+			return &command.TaintCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"test": func() (cli.Command, error) {
+			return &command.TestCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"validate": func() (cli.Command, error) {
+			return &command.ValidateCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"version": func() (cli.Command, error) {
+			return &command.VersionCommand{
+				Meta:              meta,
+				Version:           Version,
+				VersionPrerelease: VersionPrerelease,
+				Platform:          getproviders.CurrentPlatform,
+				CheckFunc:         commandVersionCheck,
+			}, nil
+		},
+
+		"untaint": func() (cli.Command, error) {
+			return &command.UntaintCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"workspace": func() (cli.Command, error) {
+			return &command.WorkspaceCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"workspace list": func() (cli.Command, error) {
+			return &command.WorkspaceListCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"workspace select": func() (cli.Command, error) {
+			return &command.WorkspaceSelectCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"workspace show": func() (cli.Command, error) {
+			return &command.WorkspaceShowCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"workspace new": func() (cli.Command, error) {
+			return &command.WorkspaceNewCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"workspace delete": func() (cli.Command, error) {
+			return &command.WorkspaceDeleteCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		//-----------------------------------------------------------
+		// Plumbing
+		//-----------------------------------------------------------
+
+		"force-unlock": func() (cli.Command, error) {
+			return &command.UnlockCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"state": func() (cli.Command, error) {
+			return &command.StateCommand{}, nil
+		},
+
+		"state list": func() (cli.Command, error) {
+			return &command.StateListCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"state rm": func() (cli.Command, error) {
+			return &command.StateRmCommand{
+				StateMeta: command.StateMeta{
+					Meta: meta,
+				},
+			}, nil
+		},
+
+		"state mv": func() (cli.Command, error) {
+			return &command.StateMvCommand{
+				StateMeta: command.StateMeta{
+					Meta: meta,
+				},
+			}, nil
+		},
+
+		"state pull": func() (cli.Command, error) {
+			return &command.StatePullCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"state push": func() (cli.Command, error) {
+			return &command.StatePushCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"state show": func() (cli.Command, error) {
+			return &command.StateShowCommand{
+				Meta: meta,
+			}, nil
+		},
+
+		"state replace-provider": func() (cli.Command, error) {
+			return &command.StateReplaceProviderCommand{
+				StateMeta: command.StateMeta{
+					Meta: meta,
+				},
+			}, nil
+		},
+	}
+
+	PrimaryCommands = []string{
+		"init",
+		"validate",
+		"plan",
+		"apply",
+		"destroy",
+	}
+
+	HiddenCommands = map[string]struct{}{
+		"env":             struct{}{},
+		"internal-plugin": struct{}{},
+		"push":            struct{}{},
+	}
+
+}
+
+// makeShutdownCh creates an interrupt listener and returns a channel.
+// A message will be sent on the channel for every interrupt received.
+func makeShutdownCh() <-chan struct{} {
+	resultCh := make(chan struct{})
+
+	signalCh := make(chan os.Signal, 4)
+	signal.Notify(signalCh, ignoreSignals...)
+	signal.Notify(signalCh, forwardSignals...)
+	go func() {
+		for {
+			<-signalCh
+			resultCh <- struct{}{}
+		}
+	}()
+
+	return resultCh
+}
+
+func credentialsSource(config *cliconfig.Config) (auth.CredentialsSource, error) {
+	helperPlugins := pluginDiscovery.FindPlugins("credentials", globalPluginDirs())
+	return config.CredentialsSource(helperPlugins)
+}
diff --git a/v1.4.7/docs/README.md b/v1.4.7/docs/README.md
new file mode 100644
index 0000000..1c0ea1c
--- /dev/null
+++ b/v1.4.7/docs/README.md
@@ -0,0 +1,40 @@
+# Terraform Core Codebase Documentation
+
+This directory contains some documentation about the Terraform Core codebase,
+aimed at readers who are interested in making code contributions.
+
+If you're looking for information on _using_ Terraform, please instead refer
+to [the main Terraform CLI documentation](https://www.terraform.io/docs/cli/index.html).
+
+## Terraform Core Architecture Documents
+
+* [Terraform Core Architecture Summary](./architecture.md): an overview of the
+  main components of Terraform Core and how they interact. This is the best
+  starting point if you are diving in to this codebase for the first time.
+
+* [Resource Instance Change Lifecycle](./resource-instance-change-lifecycle.md):
+  a description of the steps in validating, planning, and applying a change
+  to a resource instance, from the perspective of the provider plugin RPC
+  operations. This may be useful for understanding the various expectations
+  Terraform enforces about provider behavior, either if you intend to make
+  changes to those behaviors or if you are implementing a new Terraform plugin
+  SDK and so wish to conform to them.
+
+  (If you are planning to write a new provider using the _official_ SDK then
+  please refer to [the Extend documentation](https://www.terraform.io/docs/extend/index.html)
+  instead; it presents similar information from the perspective of the SDK
+  API, rather than the plugin wire protocol.)
+
+* [Plugin Protocol](./plugin-protocol/): gRPC/protobuf definitions for the
+  plugin wire protocol and information about its versioning strategy.
+
+  This documentation is for SDK developers, and is not necessary reading for
+  those implementing a provider using the official SDK.
+
+* [How Terraform Uses Unicode](./unicode.md): an overview of the various
+  features of Terraform that rely on Unicode and how to change those features
+  to adopt new versions of Unicode.
+
+## Contribution Guides
+
+* [Contributing to Terraform](../.github/CONTRIBUTING.md): a complete guide for those who want to contribute to this project.
diff --git a/v1.4.7/docs/architecture.md b/v1.4.7/docs/architecture.md
new file mode 100644
index 0000000..0c93b16
--- /dev/null
+++ b/v1.4.7/docs/architecture.md
@@ -0,0 +1,375 @@
+# Terraform Core Architecture Summary
+
+This document is a summary of the main components of Terraform Core and how
+data and requests flow between these components. It's intended as a primer
+to help navigate the codebase to dig into more details.
+
+We assume some familiarity with user-facing Terraform concepts like
+configuration, state, CLI workflow, etc. The Terraform website has
+documentation on these ideas.
+
+## Terraform Request Flow
+
+The following diagram shows an approximation of how a user command is
+executed in Terraform:
+
+![Terraform Architecture Diagram, described in text below](./images/architecture-overview.png)
+
+Each of the different subsystems (solid boxes) in this diagram is described
+in more detail in a corresponding section below.
+
+## CLI (`command` package)
+
+Each time a user runs the `terraform` program, aside from some initial
+bootstrapping in the root package (not shown in the diagram), execution
+transfers immediately into one of the "command" implementations in
+[the `command` package](https://pkg.go.dev/github.com/hashicorp/terraform/internal/command).
+The mapping between the user-facing command names and
+their corresponding `command` package types can be found in the `commands.go`
+file in the root of the repository.
+
+The full flow illustrated above does not actually apply to _all_ commands,
+but it applies to the main Terraform workflow commands `terraform plan` and
+`terraform apply`, along with a few others.
+
+For these commands, the role of the command implementation is to read and parse
+any command line arguments, command line options, and environment variables
+that are needed for the given command and use them to produce a
+[`backend.Operation`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/backend#Operation)
+object that describes an action to be taken.
+
+An _operation_ consists of:
+
+* The action to be taken (e.g. "plan", "apply").
+* The name of the [workspace](https://www.terraform.io/docs/state/workspaces.html)
+  where the action will be taken.
+* Root module input variables to use for the action.
+* For the "plan" operation, a path to the directory containing the configuration's root module.
+* For the "apply" operation, the plan to apply.
+* Various other less-common options/settings such as `-target` addresses, the
+"force" flag, etc.
+
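+As a rough illustration of the shape of such an operation, here is a minimal, hypothetical Go sketch. The struct below is a simplified stand-in invented for this document; the real `backend.Operation` type has more fields and different names.
+
+```go
+package main
+
+import "fmt"
+
+// operationSketch is an illustrative stand-in for the data an operation
+// carries; it is not the real backend.Operation definition.
+type operationSketch struct {
+	Type      string            // the action, e.g. "plan" or "apply"
+	Workspace string            // workspace the action runs in
+	ConfigDir string            // root module directory (for "plan")
+	PlanPath  string            // saved plan to apply (for "apply")
+	Variables map[string]string // root module input variables
+	Targets   []string          // optional -target addresses
+}
+
+func main() {
+	op := operationSketch{
+		Type:      "plan",
+		Workspace: "default",
+		ConfigDir: ".",
+		Variables: map[string]string{"region": "us-east-1"},
+	}
+	fmt.Printf("%s operation in workspace %q\n", op.Type, op.Workspace)
+}
+```
+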
+The operation is then passed to the currently-selected
+[backend](https://www.terraform.io/docs/backends/index.html). Each backend name
+corresponds to an implementation of
+[`backend.Backend`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/backend#Backend), using a
+mapping table in
+[the `backend/init` package](https://pkg.go.dev/github.com/hashicorp/terraform/internal/backend/init).
+
+Backends that are able to execute operations additionally implement
+[`backend.Enhanced`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/backend#Enhanced);
+the command-handling code calls `Operation` with the operation it has
+constructed, and then the backend is responsible for executing that action.
+
+Backends that execute operations, however, do so as an architectural implementation detail and not a
+general feature of backends. That is, the term 'backend' as a Terraform feature is used to refer to
+a plugin that determines where Terraform stores its state snapshots - only the default `local`
+backend and Terraform Cloud's backends (`remote`, `cloud`) perform operations.
+
+Thus, most backends do _not_ implement this interface, and so the `command` package wraps these
+backends in an instance of
+[`local.Local`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/backend/local#Local),
+causing the operation to be executed locally within the `terraform` process itself.
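+
+The following is a condensed, hedged sketch of that wrapping logic. The interface and constructor names (`backend.Enhanced`, `backendLocal.NewWithBackend`) are assumptions taken from the packages linked above; the real command-package code additionally handles cancellation, diagnostics, and UI output.
+
+```go
+package example
+
+import (
+	"context"
+
+	"github.com/hashicorp/terraform/internal/backend"
+	backendLocal "github.com/hashicorp/terraform/internal/backend/local"
+)
+
+// runOperation executes op against b, wrapping backends that cannot run
+// operations themselves so the work happens inside this process.
+func runOperation(ctx context.Context, b backend.Backend, op *backend.Operation) (*backend.RunningOperation, error) {
+	enhanced, ok := b.(backend.Enhanced)
+	if !ok {
+		// Most backends only store state snapshots; wrap them in the
+		// local backend so the operation executes locally.
+		enhanced = backendLocal.NewWithBackend(b)
+	}
+	return enhanced.Operation(ctx, op)
+}
+```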
+
+## Backends
+
+A _backend_ determines where Terraform should store its state snapshots.
+
+As described above, the `local` backend also executes operations on behalf of most other
+backends. It uses a _state manager_
+(either
+[`statemgr.Filesystem`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/states/statemgr#Filesystem) if the
+local backend is being used directly, or an implementation provided by whatever
+backend is being wrapped) to retrieve the current state for the workspace
+specified in the operation, then uses the _config loader_ to load and do
+initial processing/validation of the configuration specified in the
+operation. It then uses these, along with the other settings given in the
+operation, to construct a
+[`terraform.Context`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#Context),
+which is the main object that actually performs Terraform operations.
+
+The `local` backend finally calls an appropriate method on that context to
+begin execution of the relevant command, such as
+[`Plan`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#Context.Plan)
+or
+[`Apply`](), which in turn constructs a graph using a _graph builder_,
+described in a later section.
+
+## Configuration Loader
+
+The top-level configuration structure is represented by model types in
+[package `configs`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/configs).
+A whole configuration (the root module plus all of its descendent modules)
+is represented by
+[`configs.Config`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/configs#Config).
+
+The `configs` package contains some low-level functionality for constructing
+configuration objects, but the main entry point is in the sub-package
+[`configload`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/configs/configload),
+via
+[`configload.Loader`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/configs/configload#Loader).
+A loader deals with all of the details of installing child modules
+(during `terraform init`) and then locating those modules again when a
+configuration is loaded by a backend. It takes the path to a root module
+and recursively loads all of the child modules to produce a single
+[`configs.Config`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/configs#Config)
+representing the entire configuration.
+
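+A minimal, hypothetical sketch of how core code drives the loader follows. Because the package lives under `internal/`, it only compiles within the Terraform module itself, and the field and method names shown are assumptions based on the package documentation linked above.
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/hashicorp/terraform/internal/configs/configload"
+)
+
+func main() {
+	// ModulesDir is where `terraform init` installs child modules.
+	loader, err := configload.NewLoader(&configload.Config{
+		ModulesDir: ".terraform/modules",
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	// Recursively load the root module and all of its descendants.
+	cfg, diags := loader.LoadConfig(".")
+	if diags.HasErrors() {
+		log.Fatal(diags.Error())
+	}
+
+	log.Printf("root module declares %d managed resources", len(cfg.Module.ManagedResources))
+}
+```
+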
+Terraform expects configuration files written in the Terraform language, which
+is a DSL built on top of
+[HCL](https://github.com/hashicorp/hcl). Some parts of the configuration
+cannot be interpreted until we build and walk the graph, since they depend
+on the outcome of other parts of the configuration, and so these parts of
+the configuration remain represented as the low-level HCL types
+[`hcl.Body`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Body)
+and
+[`hcl.Expression`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Expression),
+allowing Terraform to interpret them at a more appropriate time.
+
+## State Manager
+
+A _state manager_ is responsible for storing and retrieving snapshots of the
+[Terraform state](https://www.terraform.io/docs/language/state/index.html)
+for a particular workspace. Each manager is an implementation of
+some combination of interfaces in
+[the `statemgr` package](https://pkg.go.dev/github.com/hashicorp/terraform/internal/states/statemgr),
+with most practical managers implementing the full set of operations
+described by
+[`statemgr.Full`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/states/statemgr#Full),
+which is what a _backend_ provides. The smaller interfaces exist primarily for use in
+other function signatures to be explicit about what actions the function might
+take on the state manager; there is little reason to write a state manager
+that does not implement all of `statemgr.Full`.
+
+The implementation
+[`statemgr.Filesystem`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/states/statemgr#Filesystem) is used
+by default (by the `local` backend) and is responsible for the familiar
+`terraform.tfstate` local file that most Terraform users start with, before
+they switch to [remote state](https://www.terraform.io/docs/language/state/remote.html).
+Other implementations of `statemgr.Full` are used to implement remote state.
+Each of these saves and retrieves state via a remote network service
+appropriate to the backend that creates it.
+
+A state manager accepts and returns a state snapshot as a
+[`states.State`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/states#State)
+object. The state manager is responsible for exactly how that object is
+serialized and stored, but all state managers at the time of writing use
+the same JSON serialization format, storing the resulting JSON bytes in some
+kind of arbitrary blob store.
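+
+To make the division of responsibilities concrete, the following is a much-simplified
+sketch of this pattern. It is not the real `statemgr` API: the types and the blob
+store here are invented, although the method names loosely mirror the real interfaces.
+
+```go
+package statedemo
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// State stands in for states.State: the in-memory snapshot model.
+type State struct {
+	Resources map[string]any `json:"resources"`
+}
+
+// BlobStore is whatever storage the backend provides (local file, object store, etc.).
+type BlobStore interface {
+	Put(key string, data []byte) error
+	Get(key string) ([]byte, error)
+}
+
+// BlobStateManager plays the role of a full state manager: it decides how
+// snapshots are serialized (JSON here) and where they are stored.
+type BlobStateManager struct {
+	Store BlobStore
+	Key   string
+	cur   *State
+}
+
+// RefreshState reads the latest stored snapshot into memory.
+func (m *BlobStateManager) RefreshState() error {
+	raw, err := m.Store.Get(m.Key)
+	if err != nil {
+		return fmt.Errorf("reading state snapshot: %w", err)
+	}
+	var s State
+	if err := json.Unmarshal(raw, &s); err != nil {
+		return fmt.Errorf("decoding state snapshot: %w", err)
+	}
+	m.cur = &s
+	return nil
+}
+
+// State returns the most recently read or written snapshot.
+func (m *BlobStateManager) State() *State { return m.cur }
+
+// WriteState replaces the in-memory snapshot without persisting it yet.
+func (m *BlobStateManager) WriteState(s *State) error {
+	m.cur = s
+	return nil
+}
+
+// PersistState serializes the current snapshot and stores it durably.
+func (m *BlobStateManager) PersistState() error {
+	raw, err := json.Marshal(m.cur)
+	if err != nil {
+		return err
+	}
+	return m.Store.Put(m.Key, raw)
+}
+```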
+
+## Graph Builder
+
+A _graph builder_ is called by a
+[`terraform.Context`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#Context)
+method (e.g. `Plan` or `Apply`) to produce the graph that will be used
+to represent the necessary steps for that operation and the dependency
+relationships between them.
+
+In most cases, the
+[vertices](https://en.wikipedia.org/wiki/Vertex_(graph_theory)) of Terraform's
+graphs each represent a specific object in the configuration, or something
+derived from those configuration objects. For example, each `resource` block
+in the configuration has one corresponding
+[`GraphNodeConfigResource`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#GraphNodeConfigResource)
+vertex representing it in the "plan" graph. (Terraform Core uses terminology
+inconsistently, describing graph _vertices_ also as graph _nodes_ in various
+places. These both describe the same concept.)
+
+The [edges](https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#edge)
+in the graph represent "must happen after" relationships. These define the
+order in which the vertices are evaluated, ensuring that e.g. one resource is
+created before another resource that depends on it.
+
+Each operation has its own graph builder, because the graph building process
+is different for each. For example, a "plan" operation needs a graph built
+directly from the configuration, but an "apply" operation instead builds its
+graph from the set of changes described in the plan that is being applied.
+
+The graph builders all work in terms of a sequence of _transforms_, which
+are implementations of
+[`terraform.GraphTransformer`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#GraphTransformer).
+Implementations of this interface just take a graph and mutate it in any
+way needed, and so the set of available transforms is quite varied. Some
+important examples include:
+
+* [`ConfigTransformer`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#ConfigTransformer),
+  which creates a graph vertex for each `resource` block in the configuration.
+
+* [`StateTransformer`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#StateTransformer),
+  which creates a graph vertex for each resource instance currently tracked
+  in the state.
+
+* [`ReferenceTransformer`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#ReferenceTransformer),
+  which analyses the configuration to find dependencies between resources and
+  other objects and creates any necessary "happens after" edges for these.
+
+* [`ProviderTransformer`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#ProviderTransformer),
+  which associates each resource or resource instance with exactly one
+  provider configuration (implementing
+  [the inheritance rules](https://www.terraform.io/docs/language/modules/develop/providers.html))
+  and then creates "happens after" edges to ensure that the providers are
+  initialized before taking any actions with the resources that belong to
+  them.
+
+There are many more different graph transforms, which can be discovered
+by reading the source code for the different graph builders. Each graph
+builder uses a different subset of these depending on the needs of the
+operation that is being performed.
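+
+The overall pattern is easy to express in miniature. The following sketch uses
+invented types rather than the real internal interfaces, but shows the shape of
+the builder/transformer relationship:
+
+```go
+package graphdemo
+
+// Graph is a toy stand-in for terraform.Graph.
+type Graph struct {
+	Vertices []string
+	Edges    map[string][]string // vertex -> vertices it must happen after
+}
+
+// GraphTransformer mutates the graph in some way: adding vertices, adding
+// edges, attaching extra information, and so on.
+type GraphTransformer interface {
+	Transform(g *Graph) error
+}
+
+// configTransformer loosely mirrors ConfigTransformer: one vertex per
+// resource block found in the configuration.
+type configTransformer struct {
+	resourceAddrs []string
+}
+
+func (t *configTransformer) Transform(g *Graph) error {
+	g.Vertices = append(g.Vertices, t.resourceAddrs...)
+	return nil
+}
+
+// buildGraph applies a fixed sequence of transforms to an initially empty
+// graph, which is essentially what each operation-specific graph builder does.
+func buildGraph(steps []GraphTransformer) (*Graph, error) {
+	g := &Graph{Edges: map[string][]string{}}
+	for _, step := range steps {
+		if err := step.Transform(g); err != nil {
+			return nil, err
+		}
+	}
+	return g, nil
+}
+```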
+
+The result of graph building is a
+[`terraform.Graph`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#Graph), which
+can then be processed using a _graph walker_.
+
+## Graph Walk
+
+The process of walking the graph visits each vertex of that graph in a way
+which respects the "happens after" edges in the graph. The walk algorithm
+itself is implemented in
+[the low-level `dag` package](https://pkg.go.dev/github.com/hashicorp/terraform/internal/dag#AcyclicGraph.Walk)
+(where "DAG" is short for [_Directed Acyclic Graph_](https://en.wikipedia.org/wiki/Directed_acyclic_graph)), in
+[`AcyclicGraph.Walk`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/dag#AcyclicGraph.Walk).
+However, the "interesting" Terraform walk functionality is implemented in
+[`terraform.ContextGraphWalker`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#ContextGraphWalker),
+which implements a small set of higher-level operations that are performed
+during the graph walk:
+
+* `EnterPath` is called once for each module in the configuration, taking a
+  module address and returning a
+  [`terraform.EvalContext`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#EvalContext)
+  that tracks objects within that module. `terraform.Context` is the _global_
+  context for the entire operation, while `terraform.EvalContext` is a
+  context for processing within a single module, and is the primary means
+  by which the namespaces in each module are kept separate.
+
+Each vertex in the graph is evaluated, in an order that guarantees that the
+"happens after" edges will be respected. If possible, the graph walk algorithm
+will evaluate multiple vertices concurrently. Vertex evaluation code must
+therefore make careful use of concurrency primitives such as mutexes in order
+to coordinate access to shared objects such as the `states.State` object.
+In most cases, we use the helper wrapper
+[`states.SyncState`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/states#SyncState)
+to safely implement concurrent reads and writes from the shared state.
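+
+The essence of the walk is easier to see in a toy version. This sketch is not
+the real `dag` package API; it just shows how a walk can evaluate independent
+vertices concurrently while still honoring "happens after" edges:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sync"
+)
+
+// Graph is a toy stand-in for dag.AcyclicGraph: deps maps each vertex to the
+// vertices it must happen after.
+type Graph struct {
+	deps map[string][]string
+}
+
+// Walk visits every vertex, never visiting a vertex before everything it
+// depends on has completed. It assumes the graph is acyclic; a cycle would
+// deadlock this toy implementation.
+func (g *Graph) Walk(visit func(v string)) {
+	done := make(map[string]chan struct{}, len(g.deps))
+	for v := range g.deps {
+		done[v] = make(chan struct{})
+	}
+	var wg sync.WaitGroup
+	for v := range g.deps {
+		wg.Add(1)
+		go func(v string) {
+			defer wg.Done()
+			for _, dep := range g.deps[v] {
+				<-done[dep] // block until the "happens after" dependency completes
+			}
+			visit(v)
+			close(done[v]) // unblock any vertices that depend on this one
+		}(v)
+	}
+	wg.Wait()
+}
+
+func main() {
+	g := &Graph{deps: map[string][]string{
+		"provider":             nil,
+		"aws_instance.example": {"provider"},
+		"aws_eip.example":      {"aws_instance.example"},
+	}}
+	g.Walk(func(v string) { fmt.Println("evaluated", v) })
+}
+```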
+
+## Vertex Evaluation
+
+The action taken for each vertex during the graph walk is called
+_execution_. Execution runs a sequence of arbitrary actions that make sense
+for a particular vertex type.
+
+For example, evaluation of a vertex representing a resource instance during
+a plan operation would include the following high-level steps:
+
+* Retrieve the resource's associated provider from the `EvalContext`. This
+  should already be initialized earlier by the provider's own graph vertex,
+  due to the "happens after" edge between the resource node and the provider
+  node.
+
+* Retrieve from the state the portion relevant to the specific resource
+  instance being evaluated.
+
+* Evaluate the attribute expressions given for the resource in configuration.
+  This often involves retrieving the state of _other_ resource instances so
+  that their values can be copied or transformed into the current instance's
+  attributes, which is coordinated by the `EvalContext`.
+
+* Pass the current instance state and the resource configuration to the
+  provider, asking the provider to produce an _instance diff_ representing the
+  differences between the state and the configuration.
+
+* Save the instance diff as part of the plan that is being constructed by
+  this operation.
+
+Each execution step for a vertex is an implementation of
+[`terraform.Execute`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#Execute).
+As with graph transforms, the behavior of these implementations varies widely:
+whereas graph transforms can take any action against the graph, an `Execute`
+implementation can take any action against the `EvalContext`.
+
+The implementation of `terraform.EvalContext` used in real processing
+(as opposed to testing) is
+[`terraform.BuiltinEvalContext`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#BuiltinEvalContext).
+It provides coordinated access to plugins, the current state, and the current
+plan via the `EvalContext` interface methods.
+
+In order to be executed, a vertex must implement
+[`terraform.GraphNodeExecutable`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#GraphNodeExecutable),
+which has a single `Execute` method. There are numerous `Execute`
+implementations with different behaviors, but some prominent examples are:
+
+* [`NodePlannableResourceInstance.Execute`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#NodePlannableResourceInstance.Execute), which handles the `plan` operation.
+
+* [`NodeApplyableResourceInstance.Execute`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#NodeApplyableResourceInstance.Execute), which handles the main `apply` operation.
+
+* [`NodeDestroyResourceInstance.Execute`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#NodeDestroyResourceInstance.Execute), which handles the main `destroy` operation.
+
+A vertex must complete successfully before the graph walk will begin evaluating
+any other vertices that depend on it via "happens after" edges. Evaluation can fail with one
+or more errors, in which case the graph walk is halted and the errors are
+returned to the user.
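+
+The shape of such an execution step can be sketched with invented types (the
+real implementations live in the internal `terraform` package and are far more
+detailed), mirroring the plan-time steps listed above:
+
+```go
+package plandemo
+
+// EvalContext is a toy stand-in for terraform.EvalContext.
+type EvalContext interface {
+	Provider(addr string) Provider
+	StateFor(addr string) map[string]any
+	EvaluateConfig(addr string) (map[string]any, error)
+	RecordPlannedChange(addr string, diff map[string]any)
+}
+
+// Provider is a toy stand-in for a configured provider plugin.
+type Provider interface {
+	PlanChange(prior, config map[string]any) (map[string]any, error)
+}
+
+// planResourceInstance follows the high-level steps described above.
+func planResourceInstance(ctx EvalContext, addr, providerAddr string) error {
+	p := ctx.Provider(providerAddr) // already initialized by the provider's own vertex
+	prior := ctx.StateFor(addr)     // the relevant portion of the prior state
+
+	config, err := ctx.EvaluateConfig(addr) // evaluate the configured expressions
+	if err != nil {
+		return err
+	}
+
+	diff, err := p.PlanChange(prior, config) // ask the provider for an instance diff
+	if err != nil {
+		return err
+	}
+
+	ctx.RecordPlannedChange(addr, diff) // save the diff into the plan being built
+	return nil
+}
+```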
+
+### Expression Evaluation
+
+An important part of vertex evaluation for most vertex types is evaluating
+any expressions in the configuration block associated with the vertex. This
+completes the processing of the portions of the configuration that were not
+processed by the configuration loader.
+
+The high-level process for expression evaluation is:
+
+1. Analyze the configuration expressions to see which other objects they refer
+  to. For example, the expression `aws_instance.example[1]` refers to one of
+  the instances created by a `resource "aws_instance" "example"` block in
+  configuration. This analysis is performed by
+  [`lang.References`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/lang#References),
+  or more often one of the helper wrappers around it:
+  [`lang.ReferencesInBlock`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/lang#ReferencesInBlock)
+  or
+  [`lang.ReferencesInExpr`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/lang#ReferencesInExpr)
+
+1. Retrieve from the state the data for the objects that are referred to and
+  create a lookup table of the values from these objects that the
+  HCL evaluation code can refer to.
+
+1. Prepare the table of built-in functions so that HCL evaluation can refer to
+  them.
+
+1. Ask HCL to evaluate each attribute's expression (a
+  [`hcl.Expression`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Expression)
+  object) against the data and function lookup tables.
+
+In practice, steps 2 through 4 are usually run all together using one
+of the methods on [`lang.Scope`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/lang#Scope);
+most commonly,
+[`lang.EvalBlock`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/lang#Scope.EvalBlock)
+or
+[`lang.EvalExpr`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/lang#Scope.EvalExpr).
+
+Expression evaluation produces a dynamic value represented as a
+[`cty.Value`](https://pkg.go.dev/github.com/zclconf/go-cty/cty#Value).
+This Go type represents values from the Terraform language and such values
+are eventually passed to provider plugins.
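+
+Outside of Terraform Core the same steps can be reproduced directly with the
+public HCL and cty libraries. The following is a self-contained sketch that
+hand-rolls the lookup table instead of using `lang.Scope`; the value given for
+`aws_instance.example` is invented purely for illustration:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/hclsyntax"
+	"github.com/zclconf/go-cty/cty"
+)
+
+func main() {
+	// An expression as it might appear in configuration.
+	src := `"Hello, ${aws_instance.example[1].tags.Name}"`
+
+	expr, diags := hclsyntax.ParseExpression([]byte(src), "example.tf", hcl.Pos{Line: 1, Column: 1})
+	if diags.HasErrors() {
+		panic(diags.Error())
+	}
+
+	// Step 2: a lookup table of values for the objects the expression refers
+	// to, which Terraform would normally populate from the state and the plan.
+	ctx := &hcl.EvalContext{
+		Variables: map[string]cty.Value{
+			"aws_instance": cty.ObjectVal(map[string]cty.Value{
+				"example": cty.TupleVal([]cty.Value{
+					cty.ObjectVal(map[string]cty.Value{
+						"tags": cty.ObjectVal(map[string]cty.Value{"Name": cty.StringVal("zero")}),
+					}),
+					cty.ObjectVal(map[string]cty.Value{
+						"tags": cty.ObjectVal(map[string]cty.Value{"Name": cty.StringVal("one")}),
+					}),
+				}),
+			}),
+		},
+		// Step 3 would also populate Functions with the built-in function table.
+	}
+
+	// Step 4: evaluate the expression against the lookup tables.
+	val, diags := expr.Value(ctx)
+	if diags.HasErrors() {
+		panic(diags.Error())
+	}
+	fmt.Println(val.AsString()) // Hello, one
+}
+```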
+
+### Sub-graphs
+
+Some vertices have a special additional behavior that happens after their
+evaluation steps are complete, where the vertex implementation is given
+the opportunity to build another separate graph which will be walked as part
+of the evaluation of the vertex.
+
+The main example of this is when a `resource` block has the `count` argument
+set. In that case, the plan graph initially contains one vertex for each
+`resource` block, but that graph then _dynamically expands_ to have a sub-graph
+containing one vertex for each instance requested by the count. That is, the
+sub-graph of `aws_instance.example` might contain vertices for
+`aws_instance.example[0]`, `aws_instance.example[1]`, etc. This is necessary
+because the `count` argument may refer to other objects whose values are not
+known when the main graph is constructed, but become known while evaluating
+other vertices in the main graph.
+
+This special behavior applies to vertex objects that implement
+[`terraform.GraphNodeDynamicExpandable`](https://pkg.go.dev/github.com/hashicorp/terraform/internal/terraform#GraphNodeDynamicExpandable).
+Such vertices have their own nested _graph builder_, _graph walk_,
+and _vertex evaluation_ steps, with the same behaviors as described in these
+sections for the main graph. The difference is in which graph transforms
+are used to construct the graph and in which evaluation steps apply to the
+nodes in that sub-graph.
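+
+The idea can be sketched with invented types (the real interface signature in
+the internal `terraform` package differs):
+
+```go
+package expanddemo
+
+import "fmt"
+
+// Graph is a toy stand-in for terraform.Graph.
+type Graph struct {
+	Vertices []string
+}
+
+// DynamicExpandable mirrors the idea behind GraphNodeDynamicExpandable:
+// once enough is known, the vertex returns a sub-graph to walk as part of
+// its own evaluation.
+type DynamicExpandable interface {
+	DynamicExpand(count int) *Graph
+}
+
+// resourceNode represents a whole `resource` block, e.g. aws_instance.example.
+type resourceNode struct {
+	Addr string
+}
+
+// DynamicExpand creates one vertex per requested instance, which is only
+// possible once the count value has been resolved during the main graph walk.
+func (n *resourceNode) DynamicExpand(count int) *Graph {
+	g := &Graph{}
+	for i := 0; i < count; i++ {
+		g.Vertices = append(g.Vertices, fmt.Sprintf("%s[%d]", n.Addr, i))
+	}
+	return g
+}
+```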
diff --git a/v1.4.7/docs/destroying.md b/v1.4.7/docs/destroying.md
new file mode 100644
index 0000000..9643e26
--- /dev/null
+++ b/v1.4.7/docs/destroying.md
@@ -0,0 +1,361 @@
+# Terraform Core Resource Destruction Notes
+
+This document intends to describe some of the details and complications
+involved in the destruction of resources. It covers the ordering defined for
+related create and destroy operations, as well as changes to the lifecycle
+ordering imposed by `create_before_destroy`. It is not intended to enumerate
+all possible combinations of dependency ordering, only to outline the basics
+and document some of the more complicated aspects of resource destruction.
+
+The graph diagrams here will continue to use the inverted graph structure used
+internally by Terraform, where edges represent dependencies rather than order
+of operations. 
+
+## Simple Resource Creation
+
+In order to describe resource destruction, we first need to create the
+resources and define their order. The order of creation is that which fulfills
+the dependencies for each resource. In this example, `A` has no dependencies,
+`B` depends on `A`, and `C` depends on `B` and therefore transitively on `A`.
+
+![Simple Resource Creation](./images/simple_create.png)
+<!--
+digraph create {
+    subgraph nodes {
+        rank=same;
+        a [label="A create"];
+        b [label="B create"];
+        c [label="C create"];
+        b -> c [dir=back];
+        a -> b [dir=back];
+    }
+}
+-->
+
+Order of operations:
+1. `A` is created
+1. `B` is created
+1. `C` is created
+
+## Resource Updates
+
+An existing resource may be updated with references to a newly created
+resource. The ordering here is exactly the same as one would expect for
+creation.
+
+![Simple Resource Updates](./images/simple_update.png)
+<!--
+digraph update {
+    subgraph nodes {
+        rank=same;
+        a [label="A create"];
+        b [label="B update"];
+        c [label="C update"];
+        b -> c [dir=back];
+        a -> b [dir=back];
+    }
+}
+-->
+
+Order of operations:
+1. `A` is created
+1. `B` is updated
+1. `C` is updated
+
+## Simple Resource Destruction
+
+The order for destroying resources is exactly the inverse of the order used to create them.
+This example shows the graph for the destruction of the same nodes defined
+above. While destroy nodes will not contain attribute references, we will
+continue to use the inverted edges showing dependencies for destroy, so the
+operational ordering is still opposite the flow of the arrows.
+
+![Simple Resource Destruction](./images/simple_destroy.png)
+<!--
+digraph destroy {
+    subgraph nodes {
+        rank=same;
+        a [label="A destroy"];
+        b [label="B destroy"];
+        c [label="C destroy"];
+        a -> b;
+        b -> c;
+    }
+}
+-->
+
+Order of operations:
+1. `C` is destroyed
+1. `B` is destroyed
+1. `A` is destroyed
+
+## Resource Replacement
+
+Resource replacement is the logical combination of the above scenarios. Here we
+will show the replacement steps involved when `B` depends on `A`.
+
+In this first example, we simultaneously replace both `A` and `B`. Here `B` is
+destroyed before `A`, then `A` is recreated before `B`.
+
+![Replace All](./images/replace_all.png)
+<!--
+digraph replacement {
+    subgraph create {
+        rank=same;
+        a [label="A create"];
+        b [label="B create"];
+        a -> b [dir=back];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+        b_d [label="B destroy"];
+        a_d -> b_d;
+    }
+
+    a -> a_d;
+    a -> b_d [style=dotted];
+    b -> a_d [style=dotted];
+    b -> b_d;
+}
+-->
+
+Order of operations:
+1. `B` is destroyed
+1. `A` is destroyed
+1. `A` is created
+1. `B` is created
+
+
+This second example replaces only `A`, while updating `B`. Resource `B` is only
+updated once `A` has been destroyed and recreated.
+
+![Replace Dependency](./images/replace_one.png)
+<!--
+digraph replacement {
+    subgraph create {
+        rank=same;
+        a [label="A create"];
+        b [label="B update"];
+        a -> b [dir=back];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+    }
+
+    a -> a_d;
+    b -> a_d [style=dotted];
+}
+-->
+
+Order of operations:
+1. `A` is destroyed
+1. `A` is created
+1. `B` is updated
+
+
+While the dependency edge from `B update` to `A destroy` isn't necessary in
+these examples, it is shown here as an implementation detail which will be
+mentioned later on.
+
+A final example is based on the replacement graph, starting with the above
+configuration where `B` depends on `A`. The graph is reduced to an update of
+`A` while only destroying `B`. The interesting feature here is the remaining
+dependency of `A update` on `B destroy`. We can derive this ordering of
+operations from the full replacement example above, by replacing `A create`
+with `A update` and removing the unused nodes.
+
+![Destroy then update](./images/destroy_then_update.png)
+<!--
+digraph destroy_then_update {
+    subgraph update {
+        rank=same;
+        a [label="A update"];
+    }
+    subgraph destroy {
+        rank=same;
+        b_d [label="B destroy"];
+    }
+
+    a -> b_d;
+}
+-->
+
+## Create Before Destroy
+
+Currently, the only user-controllable method for changing the ordering of
+create and destroy operations is with the `create_before_destroy` resource
+`lifecycle` attribute. This has the obvious effect of causing a resource to be
+created before it is destroyed when replacement is required, but has a couple
+of other effects we will detail here.
+
+Taking the previous replacement examples, we can change the behavior of `A` to
+be that of `create_before_destroy`.
+
+![Replace all, dependency is create_before_destroy](./images/replace_all_cbd_dep.png)
+<!--
+digraph replacement {
+    subgraph create {
+        rank=same;
+        a [label="A create"];
+        b [label="B create"];
+        a -> b [dir=back];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+        b_d [label="B destroy"];
+        a_d -> b_d;
+    }
+
+    a -> a_d [dir=back];
+    a -> b_d;
+    b -> a_d [dir=back];
+    b -> b_d;
+}
+-->
+
+
+Order of operations:
+1. `B` is destroyed
+1. `A` is created
+1. `B` is created
+1. `A` is destroyed
+
+Note that in this first example, the creation of `B` is inserted in between the
+creation of `A` and the destruction of `A`. This becomes more important in the
+update example below.
+
+
+![Replace dependency, dependency is create_before_destroy](./images/replace_dep_cbd_dep.png)
+<!--
+digraph replacement {
+    subgraph create {
+        rank=same;
+        a [label="A create"];
+        b [label="B update"];
+        a -> b [dir=back];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+    }
+
+    a -> a_d [dir=back, style=dotted];
+    b -> a_d [dir=back];
+}
+-->
+
+Order of operations:
+1. `A` is created
+1. `B` is updated
+1. `A` is destroyed
+
+Here we can see clearly how `B` is updated after the creation of `A` and before
+the destruction of the _deposed_ resource `A`. (The prior resource `A` is
+sometimes referred to as "deposed" before it is destroyed, to disambiguate it
+from the newly created `A`.) This ordering is important for resources that
+"register" other resources and require updating before the dependent resource
+can be destroyed.
+
+The transformation used to create these graphs is also where we use the extra
+edges mentioned above connecting `B` to `A destroy`. The algorithm to change a
+resource from the default ordering to `create_before_destroy` simply inverts
+any incoming edges from other resources, which automatically creates the
+necessary dependency ordering for dependent updates. This also ensures that
+reduced versions of this example still adhere to the same ordering rules, such
+as when the dependency is only being removed:
+
+![Update a destroyed create_before_destroy dependency](./images/update_destroy_cbd.png)
+<!--
+digraph update {
+    subgraph create {
+        rank=same;
+        b [label="B update"];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+    }
+
+    b -> a_d [dir=back];
+}
+-->
+
+Order of operations:
+1. `B` is updated
+1. `A` is destroyed
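+
+The inversion rule described above is small enough to write down directly. The
+sketch below uses invented types and the inverted-edge convention from these
+diagrams, where an edge records that one node must happen after another:
+
+```go
+package cbddemo
+
+// Edge records that From must happen after To.
+type Edge struct{ From, To string }
+
+// invertForCBD rewrites the edges for a resource switching to
+// create_before_destroy. createNode and destroyNode are e.g. "A create"
+// and "A destroy".
+func invertForCBD(edges []Edge, createNode, destroyNode string) []Edge {
+	out := make([]Edge, 0, len(edges))
+	for _, e := range edges {
+		switch {
+		case e.From == createNode && e.To == destroyNode:
+			// The resource's own ordering flips: destroy now happens after create.
+			out = append(out, Edge{From: destroyNode, To: createNode})
+		case e.To == destroyNode:
+			// An incoming edge from another resource is inverted, so the destroy
+			// now waits for that node instead of the other way around.
+			out = append(out, Edge{From: destroyNode, To: e.From})
+		default:
+			out = append(out, e)
+		}
+	}
+	return out
+}
+```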
+
+### Forced Create Before Destroy
+
+In the previous examples, only resource `A` was being used as if it were
+`create_before_destroy`. The minimal graphs used show that it works in
+isolation, but that is only when the `create_before_destroy` resource has no
+dependencies of its own. When a `create_before_destroy` resource depends on
+another resource, that dependency is "infected" by the `create_before_destroy`
+lifecycle attribute.
+
+This example demonstrates why forcing `create_before_destroy` is necessary. `B`
+has `create_before_destroy` while `A` does not. If we only invert the ordering
+for `B`, we can see that this results in a cycle.
+
+![Incorrect create_before_destroy replacement](./images/replace_cbd_incorrect.png)
+<!--
+digraph replacement {
+    subgraph create {
+        rank=same;
+        a [label="A create"];
+        b [label="B create"];
+        a -> b [dir=back];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+        b_d [label="B destroy"];
+        a_d -> b_d;
+    }
+
+    a -> a_d;
+    a -> b_d [style=dotted];
+    b -> a_d [style=dotted];
+    b -> b_d [dir=back];
+}
+-->
+
+In order to resolve these cycles, all resources that precede a resource
+with `create_before_destroy` must in turn be handled in the same manner.
+Reversing the incoming edges to `A destroy` resolves the problem:
+
+![Correct create_before_destroy replacement](./images/replace_all_cbd.png)
+<!--
+digraph replacement {
+    subgraph create {
+        rank=same;
+        a [label="A create"];
+        b [label="B create"];
+        a -> b [dir=back];
+    }
+    subgraph destroy {
+        rank=same;
+        a_d [label="A destroy"];
+        b_d [label="B destroy"];
+        a_d -> b_d;
+    }
+
+    a -> a_d [dir=back];
+    a -> b_d [dir=back, style=dotted];
+    b -> a_d [dir=back, style=dotted];
+    b -> b_d [dir=back];
+}
+-->
+
+Order of operations:
+1. `A` is created
+1. `B` is created
+1. `B` is destroyed
+1. `A` is destroyed
+
+This also demonstrates why `create_before_destroy` cannot be overridden when
+it is inherited: changing the behavior here isn't possible without removing
+the initial reason for `create_before_destroy`, since otherwise cycles are
+always introduced into the graph.
diff --git a/v1.4.7/docs/images/architecture-overview.png b/v1.4.7/docs/images/architecture-overview.png
new file mode 100644
index 0000000..40f2a04
--- /dev/null
+++ b/v1.4.7/docs/images/architecture-overview.png
Binary files differ
diff --git a/v1.4.7/docs/images/destroy_then_update.png b/v1.4.7/docs/images/destroy_then_update.png
new file mode 100644
index 0000000..f4f3d20
--- /dev/null
+++ b/v1.4.7/docs/images/destroy_then_update.png
Binary files differ
diff --git a/v1.4.7/docs/images/replace_all.png b/v1.4.7/docs/images/replace_all.png
new file mode 100644
index 0000000..54d5ad6
--- /dev/null
+++ b/v1.4.7/docs/images/replace_all.png
Binary files differ
diff --git a/v1.4.7/docs/images/replace_all_cbd.png b/v1.4.7/docs/images/replace_all_cbd.png
new file mode 100644
index 0000000..da72fe4
--- /dev/null
+++ b/v1.4.7/docs/images/replace_all_cbd.png
Binary files differ
diff --git a/v1.4.7/docs/images/replace_all_cbd_dep.png b/v1.4.7/docs/images/replace_all_cbd_dep.png
new file mode 100644
index 0000000..98bdbde
--- /dev/null
+++ b/v1.4.7/docs/images/replace_all_cbd_dep.png
Binary files differ
diff --git a/v1.4.7/docs/images/replace_cbd_incorrect.png b/v1.4.7/docs/images/replace_cbd_incorrect.png
new file mode 100644
index 0000000..72591d0
--- /dev/null
+++ b/v1.4.7/docs/images/replace_cbd_incorrect.png
Binary files differ
diff --git a/v1.4.7/docs/images/replace_dep_cbd_dep.png b/v1.4.7/docs/images/replace_dep_cbd_dep.png
new file mode 100644
index 0000000..35b7936
--- /dev/null
+++ b/v1.4.7/docs/images/replace_dep_cbd_dep.png
Binary files differ
diff --git a/v1.4.7/docs/images/replace_one.png b/v1.4.7/docs/images/replace_one.png
new file mode 100644
index 0000000..fe1aa1d
--- /dev/null
+++ b/v1.4.7/docs/images/replace_one.png
Binary files differ
diff --git a/v1.4.7/docs/images/resource-instance-change-lifecycle.png b/v1.4.7/docs/images/resource-instance-change-lifecycle.png
new file mode 100644
index 0000000..b6cf16e
--- /dev/null
+++ b/v1.4.7/docs/images/resource-instance-change-lifecycle.png
Binary files differ
diff --git a/v1.4.7/docs/images/simple_create.png b/v1.4.7/docs/images/simple_create.png
new file mode 100644
index 0000000..5c82954
--- /dev/null
+++ b/v1.4.7/docs/images/simple_create.png
Binary files differ
diff --git a/v1.4.7/docs/images/simple_destroy.png b/v1.4.7/docs/images/simple_destroy.png
new file mode 100644
index 0000000..be2e8fc
--- /dev/null
+++ b/v1.4.7/docs/images/simple_destroy.png
Binary files differ
diff --git a/v1.4.7/docs/images/simple_update.png b/v1.4.7/docs/images/simple_update.png
new file mode 100644
index 0000000..ada18b2
--- /dev/null
+++ b/v1.4.7/docs/images/simple_update.png
Binary files differ
diff --git a/v1.4.7/docs/images/update_destroy_cbd.png b/v1.4.7/docs/images/update_destroy_cbd.png
new file mode 100644
index 0000000..2ad04c9
--- /dev/null
+++ b/v1.4.7/docs/images/update_destroy_cbd.png
Binary files differ
diff --git a/v1.4.7/docs/maintainer-etiquette.md b/v1.4.7/docs/maintainer-etiquette.md
new file mode 100644
index 0000000..e273f22
--- /dev/null
+++ b/v1.4.7/docs/maintainer-etiquette.md
@@ -0,0 +1,95 @@
+# Maintainer's Etiquette
+
+Are you a core maintainer of Terraform? Great! Here's a few notes
+to help you get comfortable when working on the project.
+
+This documentation is somewhat outdated since it still includes provider-related
+information even though providers are now developed in their own separate
+codebases, but the general information is still valid.
+
+## Expectations
+
+We value the time you spend on the project and as such your maintainer status
+doesn't imply any obligations to do any specific work.
+
+### Your PRs
+
+These apply to all contributors, but maintainers should lead by example! :wink:
+
+ - for `provider/*` PRs it's useful to attach test results & advise on how to run the relevant tests
+ - for `bug` fixes it's useful to attach repro case, ideally in a form of a test
+
+### PRs/issues from others
+
+ - you're welcome to triage (attach labels to) other PRs and issues
+   - we generally use a 2-label system (= at least 2 labels per issue/PR) where one label is generic and the other is API-specific, e.g. `enhancement` & `provider/aws`
+
+## Merging
+
+ - you're free to review PRs from the community or other HC employees and give :+1: / :-1:
+ - if the PR submitter has push privileges (recognizable via `Collaborator`, `Member` or `Owner` badge) - we expect **the submitter** to merge their own PR after receiving a positive review from either HC employee or another maintainer. _Exceptions apply - see below._
+ - we prefer to use GitHub's interface or API to do this; just click the green button
+ - squash?
+   - squash when you think the commit history is irrelevant (will not be helpful for any readers in T+6months)
+ - Add the new PR to the **Changelog** if it may affect the user (almost any PR except test changes and docs updates)
+   - we prefer to use GitHub's web interface to modify the Changelog and use `[GH-12345]` to format the PR number. These will be turned into links as part of the release process. Breaking changes should always be documented separately.
+
+## Release process
+
+ - HC employees are responsible for cutting new releases
+ - The employee cutting the release will always notify all maintainers via the Slack channel before & after each release
+	so you can avoid merging PRs during the release process.
+
+## Exceptions
+
+Any PR that is significantly changing or even breaking user experience cross-providers should always get at least one :+1: from an HC employee prior to merge.
+
+It is generally advisable to leave PRs labelled as `core` for HC employees to review and merge.
+
+Examples include:
+ - adding/changing/removing a CLI (sub)command or a [flag](https://github.com/hashicorp/terraform/pull/12939)
+ - introducing a new feature like [Environments](https://github.com/hashicorp/terraform/pull/12182) or [Shadow Graph](https://github.com/hashicorp/terraform/pull/9334)
+ - changing config (HCL) like [adding support for lists](https://github.com/hashicorp/terraform/pull/6322)
+ - changing the [build process or test environment](https://github.com/hashicorp/terraform/pull/9355)
+
+## Breaking Changes
+
+ - we always try to avoid breaking changes where possible and/or defer them to the nearest major release
+   - [state migration](https://github.com/hashicorp/terraform/blob/2fe5976aec290f4b53f07534f4cde13f6d877a3f/helper/schema/resource.go#L33-L56) may help you avoid breaking changes, see [example](https://github.com/hashicorp/terraform/blob/351c6bed79abbb40e461d3f7d49fe4cf20bced41/builtin/providers/aws/resource_aws_route53_record_migrate.go)
+   - either way BCs should be clearly documented in a special section of the Changelog
+ - Any BC must always receive at least one :+1: from an HC employee prior to merge; two :+1:s are advisable
+
+### Examples of Breaking Changes
+
+  - https://github.com/hashicorp/terraform/pull/12396
+  - https://github.com/hashicorp/terraform/pull/13872
+  - https://github.com/hashicorp/terraform/pull/13752
+
+## Unsure?
+
+If you're unsure about anything, ask in the committer's Slack channel.
+
+## New Providers
+
+These will require :+1: and some extra effort from HC employee.
+
+We expect all acceptance tests to be as self-sustainable as possible
+to keep the bar for running any acceptance test low for anyone
+outside of HashiCorp or the core maintainers team.
+
+We expect any test to run **in parallel** alongside any other test (even the same test).
+To ensure this is possible, we need all tests to avoid sharing namespaces and to avoid static names that must be unique.
+On rare occasions this may require the use of mutexes in the resource code.
+
+### New Remote-API-based provider (e.g. AWS, Google Cloud, PagerDuty, Atlas)
+
+We will need some details about who to contact or where to register for a new account
+and generally we can't merge providers before ensuring we have a way to test them nightly,
+which usually involves setting up a new account and obtaining API credentials.
+
+### Local provider (e.g. MySQL, PostgreSQL, Kubernetes, Vault)
+
+We will need either Terraform configs that will set up the underlying test infrastructure
+(e.g. GKE cluster for Kubernetes) or Dockerfile(s) that will prepare a test environment (e.g. MySQL)
+and expose the endpoint for testing.
+
diff --git a/v1.4.7/docs/planning-behaviors.md b/v1.4.7/docs/planning-behaviors.md
new file mode 100644
index 0000000..ecb6fb3
--- /dev/null
+++ b/v1.4.7/docs/planning-behaviors.md
@@ -0,0 +1,294 @@
+# Planning Behaviors
+
+A key design tenet for Terraform is that any actions with externally-visible
+side-effects should be carried out via the standard process of creating a
+plan and then applying it. Any new features should typically fit within this
+model.
+
+There are also some historical exceptions to this rule, which we hope to
+supplement with plan-and-apply-based equivalents over time.
+
+This document describes the default planning behavior of Terraform in the
+absence of any special instructions, and also describes the three main
+design approaches we can choose from when modelling non-default behaviors that
+require additional information from outside of Terraform Core.
+
+This document focuses primarily on actions relating to _resource instances_,
+because that is Terraform's main concern. However, these design principles can
+potentially generalize to other externally-visible objects, if we can describe
+their behaviors in a way comparable to the resource instance behaviors.
+
+This is developer-oriented documentation rather than user-oriented
+documentation. See
+[the main Terraform documentation](https://www.terraform.io/docs) for
+information on existing planning behaviors and other behaviors as viewed from
+an end-user perspective.
+
+## Default Planning Behavior
+
+When given no explicit information to the contrary, Terraform Core will
+automatically propose taking the following actions in the appropriate
+situations:
+
+- **Create**, if either of the following is true:
+  - There is a `resource` block in the configuration that has no corresponding
+    managed resource in the prior state.
+  - There is a `resource` block in the configuration that is recorded in the
+    prior state but whose `count` or `for_each` argument (or lack thereof)
+    describes an instance key that is not tracked in the prior state.
+- **Delete**, if either of the following is true:
+  - There is a managed resource tracked in the prior state which has no
+    corresponding `resource` block in the configuration.
+  - There is a managed resource tracked in the prior state which has a
+    corresponding `resource` block in the configuration _but_ its `count`
+    or `for_each` argument (or lack thereof) lacks an instance key that is
+    tracked in the prior state.
+- **Update**, if there is a corresponding resource instance both declared in the
+  configuration (in a `resource` block) and recorded in the prior state
+  (unless it's marked as "tainted") but there are differences between the prior
+  state and the configuration which the corresponding provider doesn't
+  explicitly classify as just being normalization.
+- **Replace**, if there is a corresponding resource instance both declared in
+  the configuration (in a `resource` block) and recorded in the prior state
+  _marked as "tainted"_. The special "tainted" status means that the process
+  of creating the object failed partway through and so the existing object does
+  not necessarily match the configuration, so Terraform plans to replace it
+  in order to ensure that the resulting object is complete.
+- **Read**, if there is a `data` block in the configuration.
+  - If possible, Terraform will eagerly perform this action during the planning
+    phase, rather than waiting until the apply phase.
+  - If the configuration contains at least one unknown value, or if the
+    data resource directly depends on a managed resource that has any change
+    proposed elsewhere in the plan, Terraform will instead delay this action
+    to the apply phase so that it can react to the completion of modification
+    actions on other objects.
+- **No-op**, to explicitly represent that Terraform considered a particular
+  resource instance but concluded that no action was required.
+
+The **Replace** action described above is really a sort of "meta-action", which
+Terraform expands into separate **Create** and **Delete** operations. There are
+two possible orderings, and the first one is the default planning behavior
+unless overridden by a special planning behavior as described later. The
+two possible lowerings of **Replace** are:
+1. **Delete** then **Create**: first delete the existing object bound to an
+  instance, and then create a new object at the same address based on the
+  current configuration.
+2. **Create** then **Delete**: mark the existing object bound to an instance as
+  "deposed" (still exists but not current), create a new current object at the
+  same address based on the current configuration, and then delete the deposed
+  object.
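+
+To make the defaults above concrete, here is a rough decision sketch. It is an
+illustration only: data reads and the instance-key expansion for `count` and
+`for_each` are omitted, and the real logic in Terraform Core is spread across
+the plan graph rather than living in a single function.
+
+```go
+package plandefaults
+
+type Action string
+
+const (
+	Create  Action = "create"
+	Delete  Action = "delete"
+	Update  Action = "update"
+	Replace Action = "replace"
+	NoOp    Action = "no-op"
+)
+
+// defaultAction proposes the default action for one resource instance address,
+// given whether it is declared in configuration, tracked in the prior state,
+// marked as tainted, and whether there is a real (non-normalization) change.
+func defaultAction(inConfig, inState, tainted, differs bool) Action {
+	switch {
+	case inConfig && !inState:
+		return Create
+	case !inConfig && inState:
+		return Delete
+	case tainted:
+		return Replace // later lowered to Delete-then-Create or Create-then-Delete
+	case differs:
+		return Update
+	default:
+		return NoOp
+	}
+}
+```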
+
+## Special Planning Behaviors
+
+For the sake of this document, a "special" planning behavior is one where
+Terraform Core will select a different action than the defaults above,
+based on explicit instructions given either by a module author, an operator,
+or a provider.
+
+There are broadly three different design patterns for special planning
+behaviors, and so each "special" use-case will typically be met by one or more
+of the following depending on which stakeholder is activating the behavior:
+
+- [Configuration-driven Behaviors](#configuration-driven-behaviors) are
+  activated by additional annotations given in the source code of a module.
+
+    This design pattern is good for situations where the behavior relates to
+    a particular module and so should be activated for anyone using that
+    module. These behaviors are therefore specified by the module author, such
+    that any caller of the module will automatically benefit with no additional
+    work.
+- [Provider-driven Behaviors](#provider-driven-behaviors) are activated by
+  optional fields in a provider's response when asked to help plan one of the
+  default actions given above.
+
+    This design pattern is good for situations where the behavior relates to
+    the behavior of the remote system that a provider is wrapping, and so from
+    the perspective of a user of the provider the behavior should appear
+    "automatic".
+
+    Because these special behaviors are activated by values in the provider's
+    response to the planning request from Terraform Core, behaviors of this
+    sort will typically represent "tweaks" to or variants of the default
+    planning behaviors, rather than entirely different behaviors.
+- [Single-run Behaviors](#single-run-behaviors) are activated by explicitly
+  setting additional "plan options" when calling Terraform Core's plan
+  operation.
+
+    This design pattern is good for situations where the direct operator of
+    Terraform needs to do something exceptional or one-off, such as when the
+    configuration is correct but the real system has become degraded or damaged
+    in a way that Terraform cannot automatically understand.
+
+    However, this design pattern has the disadvantage that each new single-run
+    behavior type requires custom work in every wrapping UI or automaton around
+    Terraform Core, in order to provide the user of that wrapper some way
+    to directly activate the special option, or to offer an "escape hatch" to
+    use Terraform CLI directly and bypass the wrapping automation for a
+    particular change.
+
+We've also encountered use-cases that seem to call for a hybrid between these
+different patterns. For example, a configuration construct might cause Terraform
+Core to _invite_ a provider to activate a special behavior, but let the
+provider make the final call about whether to do it. Or conversely, a provider
+might advertise the possibility of a special behavior but require the user to
+specify something in the configuration to activate it. The above are just
+broad categories to help us think through potential designs; some problems
+will require more creative combinations of these patterns than others.
+
+### Configuration-driven Behaviors
+
+Within the space of configuration-driven behaviors, we've encountered two
+main sub-categories:
+- Resource-specific behaviors, whose effect is scoped to a particular resource.
+  The configuration for these often lives inside the `resource` or `data`
+  block that declares the resource.
+- Global behaviors, whose effect can span across more than one resource and
+  sometimes between resources in different modules. The configuration for
+  these often lives in a separate location in a module, such as a separate
+  top-level block which refers to other resources using the typical address
+  syntax.
+
+The following is a non-exhaustive list of existing examples of
+configuration-driven behaviors, selected to illustrate some different variations
+that might be useful inspiration for new designs:
+
+- The `ignore_changes` argument inside `resource` block `lifecycle` blocks
+  tells Terraform that if there is an existing object bound to a particular
+  resource instance address then Terraform should ignore the configured value
+  for a particular argument and use the corresponding value from the prior
+  state instead.
+
+    This can therefore potentially cause what would've been an **Update** to be
+    a **No-op** instead.
+- The `replace_triggered_by` argument inside `resource` block `lifecycle`
+  blocks can use a proposed change elsewhere in a module to force Terraform
+  to propose one of the two **Replace** variants for a particular resource.
+- The `create_before_destroy` argument inside `resource` block `lifecycle`
+  blocks only takes effect if a particular resource instance has a proposed
+  **Replace** action. If not set or set to `false`, Terraform will decompose
+  it to **Delete** then **Create**, but if set to `true` Terraform will use
+  the inverted ordering.
+
+    Because Terraform Core will never select a **Replace** action automatically
+    by itself, this is an example of a hybrid design where the config-driven
+    `create_before_destroy` combines with any other behavior (config-driven or
+    otherwise) that might cause **Replace** to customize exactly what that
+    **Replace** will mean.
+- Top-level `moved` blocks in a module activate a special behavior during the
+  planning phase, where Terraform will first try to change the bindings of
+  existing objects in the prior state to attach to new addresses before running
+  the normal planning process. This therefore allows a module author to
+  document certain kinds of refactoring so that Terraform can update the
+  state automatically once users upgrade to a new version of the module.
+
+    This special behavior is interesting because it doesn't _directly_ change
+    what actions Terraform will propose, but instead it adds an extra
+    preparation step before the typical planning process which changes the
+    addresses that the planning process will consider. It can therefore
+    _indirectly_ cause different proposed actions for affected resource
+    instances, such as transforming what by default might've been a **Delete**
+    of one instance and a **Create** of another into just a **No-op** or
+    **Update** of the second instance.
+
+    This one is an example of a "global behavior", because at minimum it
+    affects two resource instance addresses and, if working with whole resource
+    or whole module addresses, can potentially affect a large number of resource
+    instances all at once.
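+
+As a small illustration of the first of these examples, the value-level effect
+of `ignore_changes` can be sketched with the public cty library. This is a
+simplification: the real implementation works on full attribute paths rather
+than just top-level attribute names.
+
+```go
+package ignoredemo
+
+import "github.com/zclconf/go-cty/cty"
+
+// applyIgnoreChanges returns the configured value with any ignored top-level
+// attributes overridden by their values from the prior state, so a difference
+// in those attributes alone results in a No-op rather than an Update.
+func applyIgnoreChanges(prior, config cty.Value, ignore []string) cty.Value {
+	attrs := config.AsValueMap()
+	if attrs == nil {
+		attrs = map[string]cty.Value{}
+	}
+	priorAttrs := prior.AsValueMap()
+	for _, name := range ignore {
+		if v, ok := priorAttrs[name]; ok {
+			attrs[name] = v
+		}
+	}
+	return cty.ObjectVal(attrs)
+}
+```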
+
+### Provider-driven Behaviors
+
+Providers get an opportunity to activate some special behaviors for a particular
+resource instance when they respond to the `PlanResourceChange` function of
+the provider plugin protocol.
+
+When Terraform Core executes this RPC, it has already selected between
+**Create**, **Delete**, or **Update** actions for the particular resource
+instance, and so the special behaviors a provider may activate will typically
+serve as modifiers or tweaks to that base action, and will not allow
+the provider to select another base action altogether. The provider wire
+protocol does not talk about the action types explicitly, and instead only
+implies them via other content of the request and response, with Terraform Core
+making the final decision about how to react to that information.
+
+The following is a non-exhaustive list of existing examples of
+provider-driven behaviors, selected to illustrate some different variations
+that might be useful inspiration for new designs:
+
+- When the base action is **Update**, a provider may optionally return one or
+  more paths to attributes which have changes that the provider cannot
+  implement as an in-place update due to limitations of the remote system.
+
+    In that case, Terraform Core will replace the **Update** action with one of
+    the two **Replace** variants, which means that from the provider's
+    perspective the apply phase will really be two separate calls for the
+    decomposed **Create** and **Delete** actions (in either order), rather
+    than **Update** directly.
+- When the base action is **Update**, a provider may optionally return a
+  proposed new object where one or more of the arguments has its value set
+  to what was in the prior state rather than what was set in the configuration.
+  This represents any situation where a remote system supports multiple
+  different serializations of the same value that are all equivalent, and
+  so changing from one to another doesn't represent a real change in the
+  remote system.
+
+    If all of those taken together cause the new object to match the prior
+    state, Terraform Core will treat the update as a **No-op** instead.
+
+Of the three genres of special behaviors, provider-driven behaviors are the ones
+we've made the least use of historically, but they seem to offer many
+opportunities for future exploration. Provider-driven behaviors can often be
+ideal because their effects appear as if they are built in to Terraform so
+that "it just works", with Terraform automatically deciding and explaining what
+needs to happen and why, without any special effort on the user's part.
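+
+The way these responses feed back into the base action can be sketched with
+invented types (the real logic operates on the protobuf response messages from
+`PlanResourceChange` rather than a struct like this):
+
+```go
+package providerplan
+
+// planResponse is a stand-in for the relevant parts of a provider's
+// PlanResourceChange response.
+type planResponse struct {
+	PlannedMatchesPrior bool     // provider normalized the config back to the prior value
+	RequiresReplace     []string // attribute paths that cannot be changed in place
+}
+
+type Action string
+
+const (
+	Update  Action = "update"
+	Replace Action = "replace"
+	NoOp    Action = "no-op"
+)
+
+// finalizeUpdate adjusts a proposed Update based on the provider's response.
+func finalizeUpdate(resp planResponse) Action {
+	if len(resp.RequiresReplace) > 0 {
+		return Replace // later decomposed into Create and Delete in some order
+	}
+	if resp.PlannedMatchesPrior {
+		return NoOp
+	}
+	return Update
+}
+```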
+
+### Single-run Behaviors
+
+Terraform Core's "plan" operation takes a set of arguments that we collectively
+call "plan options", that can modify Terraform's planning behavior on a per-run
+basis without any configuration changes or special provider behaviors.
+
+As noted above, this particular genre of designs is the most burdensome to
+implement because any wrapping software that can ask Terraform Core to create
+a plan must ideally offer some way to set all of the available planning options,
+or else some part of Terraform's functionality won't be available to anyone
+using that wrapper.
+
+However, we've seen various situations where single-run behaviors really are the
+most appropriate way to handle a particular use-case, because the need for the
+behavior originates in some process happening outside of the scope of any
+particular Terraform module or provider.
+
+The following is a non-exhaustive list of existing examples of
+single-run behaviors, selected to illustrate some different variations
+that might be useful inspiration for new designs:
+
+- The "replace" planning option specifies zero or more resource instance
+  addresses.
+
+    For any resource instance specified, Terraform Core will transform any
+    **Update** or **No-op** action for that instance into one of the
+    **Replace** actions, thereby allowing an operator to respond to something
+    having become degraded in a way that Terraform and providers cannot
+    automatically detect and force Terraform to replace that object with
+    a new one that will hopefully function correctly.
+- The "refresh only" planning mode ("planning mode" is a single planning option
+  that selects between a few mutually-exclusive behaviors) forces Terraform
+  to treat every resource instance as **No-op**, regardless of what is bound
+  to that address in state or present in the configuration.
+
+## Legacy Operations
+
+Some of the legacy operations Terraform CLI offers that _aren't_ integrated
+with the plan and apply flow could be thought of as various degenerate kinds
+of single-run behaviors. Most don't offer any opportunity to preview an effect
+before applying it, but do meet a similar set of use-cases where an operator
+needs to take some action to respond to changes to the context Terraform is
+in rather than to the Terraform configuration itself.
+
+Most of these legacy operations could therefore most readily be translated to
+single-run behaviors, but before doing so it's worth researching whether people
+are using them as a workaround for missing configuration-driven and/or
+provider-driven behaviors. A particular legacy operation might be better
+replaced with a different sort of special behavior, or potentially by multiple
+different special behaviors of different genres if it's currently serving as
+a workaround for many different unmet needs.
diff --git a/v1.4.7/docs/plugin-protocol/README.md b/v1.4.7/docs/plugin-protocol/README.md
new file mode 100644
index 0000000..de92501
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/README.md
@@ -0,0 +1,213 @@
+# Terraform Plugin Protocol
+
+This directory contains documentation about the physical wire protocol that
+Terraform Core uses to communicate with provider plugins.
+
+Most providers are not written directly against this protocol. Instead, provider
+developers should use an SDK that implements this protocol and write the provider
+against the SDK's API.
+
+----
+
+**If you want to write a plugin for Terraform, please refer to
+[Extending Terraform](https://www.terraform.io/docs/extend/index.html) instead.**
+
+This documentation is for those who are developing _Terraform SDKs_, rather
+than those implementing plugins.
+
+----
+
+From Terraform v0.12.0 onwards, Terraform's plugin protocol is built on
+[gRPC](https://grpc.io/). This directory contains `.proto` definitions of
+different versions of Terraform's protocol.
+
+Only `.proto` files published as part of Terraform release tags are actually
+official protocol versions. If you are reading this directory on the `main`
+branch or any other development branch then it may contain protocol definitions
+that are not yet finalized and that may change before final release.
+
+## RPC Plugin Model
+
+Terraform plugins are normal executable programs that, when launched, expose
+gRPC services on a server accessed via the loopback interface. Terraform Core
+discovers and launches plugins, waits for a handshake to be printed on the
+plugin's `stdout`, and then connects to the indicated port number as a
+gRPC client.
+
+For this reason, we commonly refer to Terraform Core itself as the plugin
+"client" and the plugin program itself as the plugin "server". Both of these
+processes run locally, with the server process appearing as a child process
+of the client. Terraform Core controls the lifecycle of these server processes
+and will terminate them when they are no longer required.
+
+The startup and handshake protocol is not currently documented. We hope to
+document it here or to link to external documentation on it in future.
+
+## Versioning Strategy
+
+The Plugin Protocol uses a versioning strategy that aims to allow gradual
+enhancements to the protocol while retaining compatibility, but also to allow
+more significant breaking changes from time to time while allowing old and
+new plugins to be used together for some period.
+
+The versioning strategy described below was introduced with protocol version
+5.0 in Terraform v0.12. Prior versions of Terraform and prior protocol versions
+do not follow this strategy.
+
+The authoritative definition for each protocol version is in this directory
+as a Protocol Buffers (protobuf) service definition. The files follow the
+naming pattern `tfpluginX.Y.proto`, where X is the major version and Y
+is the minor version.
+
+### Major and minor versioning
+
+The minor version increases for each change introducing optional new
+functionality that can be ignored by implementations of prior versions. For
+example, if a new field were added to a response message, it could be a minor
+release as long as Terraform Core can provide some default behavior when that
+field is not populated.
+
+The major version increases for any significant change to the protocol where
+compatibility is broken. However, Terraform Core and an SDK may both choose
+to support multiple major versions at once: the plugin handshake includes a
+negotiation step where client and server can work together to select a
+mutually-supported major version.
+
+The major version number is encoded into the protobuf package name: major
+version 5 uses the package name `tfplugin5`, and one day major version 6
+will switch to `tfplugin6`. This change of name allows a plugin server to
+implement multiple major versions at once, by exporting multiple gRPC services.
+Minor version differences rely instead on feature-detection mechanisms, so they
+are not represented directly on the wire and exist primarily as a human
+communication tool to help us easily talk about which software supports which
+features.
+
+## Version compatibility for Core, SDK, and Providers
+
+A particular version of Terraform Core has both a minimum minor version it
+requires and a maximum major version that it supports. A particular version of
+Terraform Core may also be able to optionally use a newer minor version when
+available, but fall back on older behavior when that functionality is not
+available.
+
+Likewise, each provider plugin release is compatible with a set of versions.
+The compatible versions for a provider are a list of major and minor version
+pairs, such as "4.0", "5.2", which indicates that the provider supports the
+baseline features of major version 4 and supports major version 5 including
+the enhancements from both minor versions 1 and 2. This provider would
+therefore be compatible with a Terraform Core release that supports only
+protocol version 5.0, since major version 5 is supported and the optional
+5.1 and 5.2 enhancements will be ignored.
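+
+The compatibility rule itself is simple. The following sketch is illustrative
+only; in practice the negotiation happens during the plugin handshake and via
+registry metadata rather than through a function like this:
+
+```go
+package protonegotiate
+
+// negotiateMajor returns the highest protocol major version supported by both
+// the client (Terraform Core) and the server (the provider plugin), or ok=false
+// when there is no mutually-supported version.
+func negotiateMajor(clientMajors, serverMajors []int) (version int, ok bool) {
+	supported := make(map[int]bool, len(serverMajors))
+	for _, v := range serverMajors {
+		supported[v] = true
+	}
+	best := 0
+	for _, v := range clientMajors {
+		if supported[v] && v > best {
+			best = v
+		}
+	}
+	return best, best != 0
+}
+```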
+
+If Terraform Core and the plugin do not have at least one mutually-supported
+major version, Terraform Core will return an error from `terraform init`
+during plugin installation:
+
+```
+Provider "aws" v1.0.0 is not compatible with Terraform v0.12.0.
+
+Provider version v2.0.0 is the earliest compatible version.
+Select it with the following version constraint:
+
+    version = "~> 2.0.0"
+```
+
+```
+Provider "aws" v3.0.0 is not compatible with Terraform v0.12.0.
+Provider version v2.34.0 is the latest compatible version. Select 
+it with the following constraint:
+
+    version = "~> 2.34.0"
+
+Alternatively, upgrade to the latest version of Terraform for compatibility with newer provider releases.
+```
+
+The above messages are for plugins installed via `terraform init` from a
+Terraform registry, where the registry API allows Terraform Core to recognize
+the protocol compatibility for each provider release. For plugins that are
+installed manually to a local plugin directory, Terraform Core has no way to
+suggest specific versions to upgrade or downgrade to, and so the error message
+is more generic:
+
+```
+The installed version of provider "example" is not compatible with Terraform v0.12.0.
+
+This provider was loaded from:
+     /usr/local/bin/terraform-provider-example_v0.1.0
+```
+
+## Adding/removing major version support in SDK and Providers
+
+The set of supported major versions is decided by the SDK used by the plugin.
+Over time, SDKs will add support for new major versions and phase out support
+for older major versions.
+
+In doing so, the SDK developer passes those capabilities and constraints on to
+any provider using their SDK, and that will in turn affect the compatibility
+of the plugin in ways that affect its semver-based version numbering:
+
+- If an SDK upgrade adds support for a new provider protocol, that will usually
+  be considered a new feature and thus warrant a new minor version.
+- If an SDK upgrade removes support for an old provider protocol, that is
+  always a breaking change and thus requires a major release of the provider.
+
+For this reason, SDK developers must be clear in their release notes about
+the addition and removal of support for major versions.
+
+Terraform Core also makes an assumption about major version support when
+it produces actionable error messages for users about incompatibilities:
+a particular protocol major version is supported for a single consecutive
+range of provider releases, with no "gaps".
+
+## Using the protobuf specifications in an SDK
+
+If you wish to build an SDK for Terraform plugins, an early step will be to
+copy one or more `.proto` files from this directory into your own repository
+(depending on which protocol versions you intend to support) and use the
+`protoc` protocol buffers compiler (with gRPC extensions) to generate suitable
+RPC stubs and types for your target language.
+
+For example, if you happen to be targeting Python, you might generate the
+stubs using a command like this:
+
+```
+protoc --python_out=. --grpc_python_out=. tfplugin5.1.proto
+```
+
+You can find out more about the tool usage for each target language in
+[the gRPC Quick Start guides](https://grpc.io/docs/quickstart/).
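+
+If you are targeting Go instead, and assuming the `protoc-gen-go` and
+`protoc-gen-go-grpc` plugins are installed, the equivalent step might look
+like this:
+
+```
+protoc --go_out=. --go-grpc_out=. tfplugin5.3.proto
+```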
+
+The protobuf specification for a version is immutable after it has been
+included in at least one Terraform release. Any changes will be documented in
+a new `.proto` file establishing a new protocol version.
+
+The protocol buffer compiler will produce some sort of library object appropriate
+for the target language, which depending on the language might be called a
+module, or a package, or something else. We recommend including the protocol
+major version in your module or package name so that you can potentially
+support multiple versions concurrently in the future. For example, if you are
+targeting major version 5 you might call your package or module `tfplugin5`.
+
+To upgrade to a newer minor protocol version, copy the new `.proto` file
+from this directory into the same location as your previous version, delete
+the previous version, and then run the protocol buffers compiler again
+against the new `.proto` file. Because minor releases are backward-compatible,
+you can simply update your previous stubs in-place rather than creating a
+new set alongside.
+
+To support a new _major_ protocol version, create a new package or module
+and copy the relevant `.proto` file into it, creating a separate set of stubs
+that can in principle allow your SDK to support both major versions at the
+same time. We recommend supporting both the previous and current major versions
+together for a while across a major version upgrade so that users can avoid
+having to upgrade both Terraform Core and all of their providers at the same
+time, but you can delete the previous major version stubs once you remove
+support for that version.
+
+**Note:** Some of the `.proto` files contain statements about being updated
+in-place for minor versions. This reflects an earlier version management
+strategy which is no longer followed. The current process is to create a
+new file in this directory for each new minor version and consider all
+previously-tagged definitions as immutable. The outdated comments in those
+files are retained in order to keep the promise of immutability, even though
+they are now incorrect.
diff --git a/v1.4.7/docs/plugin-protocol/object-wire-format.md b/v1.4.7/docs/plugin-protocol/object-wire-format.md
new file mode 100644
index 0000000..5e1809c
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/object-wire-format.md
@@ -0,0 +1,210 @@
+# Wire Format for Terraform Objects and Associated Values
+
+The provider wire protocol (as of major version 5) includes a protobuf message
+type `DynamicValue` which Terraform uses to represent values from the Terraform
+Language type system, which result from evaluating the content of `resource`,
+`data`, and `provider` blocks, based on a schema defined by the corresponding
+provider.
+
+Because the structure of these values is determined at runtime, `DynamicValue`
+uses one of two possible dynamic serialization formats for the values
+themselves: MessagePack or JSON. Terraform most commonly uses MessagePack,
+because it offers a compact binary representation of a value. However, a server
+implementation of the provider protocol should fall back to JSON if the
+MessagePack field is not populated, in order to support both formats.
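+
+As a minimal sketch of that fallback rule, assuming Go stubs generated from
+one of the protocol's `.proto` files (the import path below is a placeholder,
+but the `Msgpack` and `Json` field names follow from the `DynamicValue`
+message definition), a server might select the encoding like this:
+
+```
+package provider
+
+import tfplugin5 "example.com/your-sdk/internal/tfplugin5" // generated stubs; path is an assumption
+
+// rawDynamicValue returns the serialized bytes carried by a DynamicValue,
+// preferring MessagePack and falling back to JSON only when the msgpack
+// field is not populated.
+func rawDynamicValue(v *tfplugin5.DynamicValue) (data []byte, isMsgPack bool) {
+	if len(v.Msgpack) > 0 {
+		return v.Msgpack, true
+	}
+	return v.Json, false
+}
+```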
+
+The remainder of this document describes how Terraform translates from its own
+type system into the type system of the two supported serialization formats.
+A server implementation of the Terraform provider protocol can use this
+information to decode `DynamicValue` values from incoming messages into
+whatever representation is convenient for the provider implementation.
+
+A server implementation must also be able to _produce_ `DynamicValue` messages
+as part of various response messages. When doing so, servers should always
+use MessagePack encoding, because Terraform does not consistently support
+JSON responses across all request types and all Terraform versions.
+
+Both the MessagePack and JSON serializations are driven by information the
+provider previously returned in a `Schema` message. Terraform will encode each
+value depending on the type constraint given for it in the corresponding schema,
+using the closest possible MessagePack or JSON type to the Terraform language
+type. Therefore a server implementation can decode a serialized value using a
+standard MessagePack or JSON library and assume it will conform to the
+serialization rules described below.
+
+## MessagePack Serialization Rules
+
+The MessagePack types referenced in this section are those defined in
+[The MessagePack type system specification](https://github.com/msgpack/msgpack/blob/master/spec.md#type-system).
+
+Note that MessagePack defines several possible serialization formats for each
+type, and Terraform may choose any of the formats defined for a given type.
+The exact serialization chosen for a given value may vary between Terraform
+versions, but the types given here are contractual.
+
+Conversely, server implementations that are _producing_ MessagePack-encoded
+values are free to use any of the valid serialization formats for a particular
+type. However, we recommend choosing the most compact format that can represent
+the value without a loss of range.
+
+### `Schema.Block` Mapping Rules for MessagePack
+
+To represent the content of a block as MessagePack, Terraform constructs a
+MessagePack map that contains one key-value pair per attribute and one
+key-value pair per distinct nested block described in the `Schema.Block` message.
+
+The key-value pairs representing attributes have values based on
+[the `Schema.Attribute` mapping rules](#Schema.Attribute-mapping-rules-for-messagepack).
+The key-value pairs representing nested block types have values based on
+[the `Schema.NestedBlock` mapping rules](#Schema.NestedBlock-mapping-rules-for-messagepack).
+
+### `Schema.Attribute` Mapping Rules for MessagePack
+
+The MessagePack serialization of an attribute value depends on the value of the
+`type` field of the corresponding `Schema.Attribute` message. The `type` field is
+a compact JSON serialization of a
+[Terraform type constraint](https://www.terraform.io/docs/configuration/types.html),
+which consists either of a single
+string value (for primitive types) or a two-element array giving a type kind
+and a type argument.
+
+The following table describes the type-specific mapping rules. Along with those
+type-specific rules there are two special rules that override the mappings
+in the table below, regardless of type:
+
+* A null value is represented as a MessagePack nil value.
+* An unknown value (that is, a placeholder for a value that will be decided
+  only during the apply operation) is represented as a
+  [MessagePack extension](https://github.com/msgpack/msgpack/blob/master/spec.md#extension-types)
+  value whose type identifier is zero and whose value is unspecified and
+  meaningless.
+
+| `type` Pattern | MessagePack Representation |
+|---|---|
+| `"string"` | A MessagePack string containing the Unicode characters from the string value serialized as normalized UTF-8. |
+| `"number"` | Either MessagePack integer, MessagePack float, or MessagePack string representing the number. If a number is represented as a string then the string contains a decimal representation of the number which may have a larger mantissa than can be represented by a 64-bit float. |
+| `"bool"` | A MessagePack boolean value corresponding to the value. |
+| `["list",T]` | A MessagePack array with the same number of elements as the list value, each of which is represented by the result of applying these same mapping rules to the nested type `T`. |
+| `["set",T]` | Identical in representation to `["list",T]`, but the order of elements is undefined because Terraform sets are unordered. |
+| `["map",T]` | A MessagePack map with one key-value pair per element of the map value, where the element key is serialized as the map key (always a MessagePack string) and the element value is represented by a value constructed by applying these same mapping rules to the nested type `T`. |
+| `["object",ATTRS]` | A MessagePack map with one key-value pair per attribute defined in the `ATTRS` object. The attribute name is serialized as the map key (always a MessagePack string) and the attribute value is represented by a value constructed by applying these same mapping rules to each attribute's own type. |
+| `["tuple",TYPES]` | A MessagePack array with one element per element described by the `TYPES` array. The element values are constructed by applying these same mapping rules to the corresponding element of `TYPES`. |
+| `"dynamic"` | A MessagePack array with exactly two elements. The first element is a MessagePack binary value containing a JSON-serialized type constraint in the same format described in this table. The second element is the result of applying these same mapping rules to the value with the type given in the first element. This special type constraint represents values whose types will be decided only at runtime. |
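+
+For Go-based implementations, one possible approach is the
+`github.com/zclconf/go-cty` library, which Terraform itself uses and which
+implements the mapping rules above. The following sketch (with a hard-coded
+example type constraint, and panics in place of real error handling) parses
+an attribute's `type` field and round-trips a conforming value through
+MessagePack:
+
+```
+package main
+
+import (
+	"fmt"
+
+	"github.com/zclconf/go-cty/cty"
+	ctyjson "github.com/zclconf/go-cty/cty/json"
+	ctymsgpack "github.com/zclconf/go-cty/cty/msgpack"
+)
+
+func main() {
+	// The "type" field of a Schema.Attribute: a compact JSON serialization
+	// of a Terraform type constraint, as described above.
+	typeJSON := []byte(`["object",{"name":"string","port":"number","tags":["map","string"]}]`)
+	ty, err := ctyjson.UnmarshalType(typeJSON)
+	if err != nil {
+		panic(err)
+	}
+
+	// A value conforming to that type; null and unknown values would follow
+	// the special rules described above instead.
+	val := cty.ObjectVal(map[string]cty.Value{
+		"name": cty.StringVal("example"),
+		"port": cty.NumberIntVal(8080),
+		"tags": cty.MapVal(map[string]cty.Value{"env": cty.StringVal("dev")}),
+	})
+
+	// Encode following the MessagePack rules in the table above...
+	buf, err := ctymsgpack.Marshal(val, ty)
+	if err != nil {
+		panic(err)
+	}
+
+	// ...and decode it again, as a provider would when handling a request.
+	got, err := ctymsgpack.Unmarshal(buf, ty)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(got.GetAttr("name").AsString()) // example
+}
+```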
+
+### `Schema.NestedBlock` Mapping Rules for MessagePack
+
+The MessagePack serialization of a collection of blocks of a particular type
+depends on the `nesting` field of the corresponding `Schema.NestedBlock` message.
+The `nesting` field is a value from the `Schema.NestingBlock.NestingMode`
+enumeration.
+
+All `nesting` values cause the individual blocks of a type to be represented
+by applying
+[the `Schema.Block` mapping rules](#Schema.Block-mapping-rules-for-messagepack)
+to the block's contents based on the `block` field, producing what we'll call
+a _block value_ in the table below.
+
+The `nesting` value then in turn defines how Terraform will collect all of the
+individual block values together to produce a single property value representing
+the nested block type. For all `nesting` values other than `MAP`, blocks may
+not have any labels. For the `nesting` value `MAP`, blocks must have exactly
+one label, which is a string we'll call a _block label_ in the table below.
+
+| `nesting` Value | MessagePack Representation |
+|---|---|
+| `SINGLE` | The block value of the single block of this type, or nil if there is no block of that type. |
+| `LIST` | A MessagePack array of all of the block values, preserving the order of definition of the blocks in the configuration. |
+| `SET` | A MessagePack array of all of the block values in no particular order. |
+| `MAP` | A MessagePack map with one key-value pair per block value, where the key is the block label and the value is the block value. |
+| `GROUP` | The same as with `SINGLE`, except that if there is no block of that type Terraform will synthesize a block value by pretending that all of the declared attributes are null and that there are zero blocks of each declared block type. |
+
+For the `LIST` and `SET` nesting modes, Terraform guarantees that the
+MessagePack array will have a number of elements between the `min_items` and
+`max_items` values given in the schema, _unless_ any of the block values contain
+nested unknown values. When unknown values are present, Terraform considers
+the value to be potentially incomplete and so Terraform defers validation of
+the number of blocks. For example, if the configuration includes a `dynamic`
+block whose `for_each` argument is unknown then the final number of blocks is
+not predictable until the apply phase.
+
+## JSON Serialization Rules
+
+The JSON serialization is a secondary representation for `DynamicValue`, with
+MessagePack preferred due to its ability to represent unknown values via an
+extension.
+
+The JSON encoding described in this section is also used for the `json` field
+of the `RawValue` message that forms part of an `UpgradeResourceState` request.
+However, in that case the data is serialized per the schema of the provider
+version that created it, which won't necessarily match the schema of the
+_current_ version of that provider.
+
+### `Schema.Block` Mapping Rules for JSON
+
+To represent the content of a block as JSON, Terraform constructs a
+JSON object that contains one property per attribute and one property per
+distinct nested block described in the `Schema.Block` message.
+
+The properties representing attributes have property values based on
+[the `Schema.Attribute` mapping rules](#Schema.Attribute-mapping-rules-for-json).
+The properties representing nested block types have property values based on
+[the `Schema.NestedBlock` mapping rules](#Schema.NestedBlock-mapping-rules-for-json).
+
+### `Schema.Attribute` Mapping Rules for JSON
+
+The JSON serialization of an attribute value depends on the value of the `type`
+field of the corresponding `Schema.Attribute` message. The `type` field is
+a compact JSON serialization of a
+[Terraform type constraint](https://www.terraform.io/docs/configuration/types.html),
+which consists either of a single
+string value (for primitive types) or a two-element array giving a type kind
+and a type argument.
+
+The following table describes the type-specific mapping rules. Along with those
+type-specific rules there is one special rule that overrides the rules in the
+table regardless of type:
+
+* A null value is always represented as JSON `null`.
+
+| `type` Pattern | JSON Representation |
+|---|---|
+| `"string"` | A JSON string containing the Unicode characters from the string value. |
+| `"number"` | A JSON number representing the number value. Terraform numbers are arbitrary-precision floating point, so the value may have a larger mantissa than can be represented by a 64-bit float. |
+| `"bool"` | Either JSON `true` or JSON `false`, depending on the boolean value. |
+| `["list",T]` | A JSON array with the same number of elements as the list value, each of which is represented by the result of applying these same mapping rules to the nested type `T`. |
+| `["set",T]` | Identical in representation to `["list",T]`, but the order of elements is undefined because Terraform sets are unordered. |
+| `["map",T]` | A JSON object with one property per element of the map value, where the element key is serialized as the property name string and the element value is represented by a property value constructed by applying these same mapping rules to the nested type `T`. |
+| `["object",ATTRS]` | A JSON object with one property per attribute defined in the `ATTRS` object. The attribute name is serialized as the property name string and the attribute value is represented by a property value constructed by applying these same mapping rules to each attribute's own type. |
+| `["tuple",TYPES]` | A JSON array with one element per element described by the `TYPES` array. The element values are constructed by applying these same mapping rules to the corresponding element of `TYPES`. |
+| `"dynamic"` | A JSON object with two properties: `"type"` specifying one of the `type` patterns described in this table in-band, giving the exact runtime type of the value, and `"value"` specifying the result of applying these same mapping rules to the table for the specified runtime type. This special type constraint represents values whose types will be decided only at runtime. |
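+
+As a short illustration of these rules, using the same `go-cty` library
+mentioned in the MessagePack section (error handling omitted for brevity),
+the JSON encoding of a small object value looks like this:
+
+```
+package main
+
+import (
+	"fmt"
+
+	"github.com/zclconf/go-cty/cty"
+	ctyjson "github.com/zclconf/go-cty/cty/json"
+)
+
+func main() {
+	// An attribute whose schema type is ["object",{"enabled":"bool","name":"string"}].
+	ty := cty.Object(map[string]cty.Type{
+		"enabled": cty.Bool,
+		"name":    cty.String,
+	})
+	val := cty.ObjectVal(map[string]cty.Value{
+		"enabled": cty.True,
+		"name":    cty.StringVal("example"),
+	})
+
+	out, _ := ctyjson.Marshal(val, ty)
+	fmt.Println(string(out)) // e.g. {"enabled":true,"name":"example"}
+
+	// A null value of any type is always encoded as JSON null, per the
+	// special rule above.
+	nullOut, _ := ctyjson.Marshal(cty.NullVal(ty), ty)
+	fmt.Println(string(nullOut)) // null
+}
+```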
+
+### `Schema.NestedBlock` Mapping Rules for JSON
+
+The JSON serialization of a collection of blocks of a particular type depends
+on the `nesting` field of the corresponding `Schema.NestedBlock` message.
+The `nesting` field is a value from the `Schema.NestingBlock.NestingMode`
+enumeration.
+
+All `nesting` values cause the individual blocks of a type to be represented
+by applying
+[the `Schema.Block` mapping rules](#Schema.Block-mapping-rules-for-json)
+to the block's contents based on the `block` field, producing what we'll call
+a _block value_ in the table below.
+
+The `nesting` value then in turn defines how Terraform will collect all of the
+individual block values together to produce a single property value representing
+the nested block type. For all `nesting` values other than `MAP`, blocks may
+not have any labels. For the `nesting` value `MAP`, blocks must have exactly
+one label, which is a string we'll call a _block label_ in the table below.
+
+| `nesting` Value | JSON Representation |
+|---|---|
+| `SINGLE` | The block value of the single block of this type, or `null` if there is no block of that type. |
+| `LIST` | A JSON array of all of the block values, preserving the order of definition of the blocks in the configuration. |
+| `SET` | A JSON array of all of the block values in no particular order. |
+| `MAP` | A JSON object with one property per block value, where the property name is the block label and the value is the block value. |
+| `GROUP` | The same as with `SINGLE`, except that if there is no block of that type Terraform will synthesize a block value by pretending that all of the declared attributes are null and that there are zero blocks of each declared block type. |
+
+For the `LIST` and `SET` nesting modes, Terraform guarantees that the JSON
+array will have a number of elements between the `min_items` and `max_items`
+values given in the schema.
diff --git a/v1.4.7/docs/plugin-protocol/releasing-new-version.md b/v1.4.7/docs/plugin-protocol/releasing-new-version.md
new file mode 100644
index 0000000..197a1a5
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/releasing-new-version.md
@@ -0,0 +1,53 @@
+# Releasing a New Version of the Protocol
+
+Terraform's plugin protocol is the contract between Terraform's plugins and
+Terraform, and as such releasing a new version requires some coordination
+between those pieces. This document is intended to be a checklist to consult
+when adding a new major version of the protocol (the X in X.Y) to ensure that
+everything that needs to be aware of the new version is updated.
+
+## New Protobuf File
+
+The protocol is defined in protobuf files that live in this directory of the
+hashicorp/terraform repository. Adding a new version of the protocol involves
+creating a new `.proto` file here. It is recommended that you copy the latest
+protocol file, and modify it accordingly.
+
+## New terraform-plugin-go Package
+
+The
+[hashicorp/terraform-plugin-go](https://github.com/hashicorp/terraform-plugin-go)
+repository serves as the foundation for Terraform's plugin ecosystem. It needs
+to know about the new major protocol version. Either open an issue in that repo
+to have the Plugin SDK team add the new package, or if you would like to
+contribute it yourself, open a PR. It is recommended that you copy the package
+for the latest protocol version and modify it accordingly.
+
+## Update the Registry's List of Allowed Versions
+
+The Terraform Registry validates the protocol versions a provider advertises
+support for when ingesting providers. Providers will not be able to advertise
+support for the new protocol version until it is added to that list.
+
+## Update Terraform's Version Constraints
+
+During `terraform init`, Terraform only downloads providers from the Registry
+that speak protocol versions it is compatible with. When adding support for a
+new protocol version, you must tell Terraform that it supports that version.
+Modify the `SupportedPluginProtocols` variable in hashicorp/terraform's
+`internal/getproviders/registry_client.go` file to include the new protocol.
+
+## Test Running a Provider With the Test Framework
+
+Use the provider test framework to test a provider written with the new
+protocol. This end-to-end test ensures that providers written with the new
+protocol work correctly with the test framework, especially in communicating
+the protocol version between the test framework and Terraform.
+
+## Test Retrieving and Running a Provider From the Registry
+
+Publish a provider, either to the public registry or to the staging registry,
+and test running `terraform init` and `terraform apply`, along with exercising
+any of the new functionality the protocol version introduces. This end-to-end
+test ensures that everything that must be updated before practitioners can use
+providers built with the new protocol has indeed been updated.
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin5.0.proto b/v1.4.7/docs/plugin-protocol/tfplugin5.0.proto
new file mode 100644
index 0000000..624ad2a
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin5.0.proto
@@ -0,0 +1,353 @@
+// Terraform Plugin RPC protocol version 5.0
+//
+// This file defines version 5.0 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will be updated in-place in the source Terraform repository for
+// any minor versions of protocol 5, but later minor versions will always be
+// backwards compatible. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+
+package tfplugin5;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message Stop {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+// Schema is the configuration schema for a Resource, Provider, or Provisioner.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed. 
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc PrepareProviderConfig(PrepareProviderConfig.Request) returns (PrepareProviderConfig.Response);
+    rpc ValidateResourceTypeConfig(ValidateResourceTypeConfig.Request) returns (ValidateResourceTypeConfig.Response);
+    rpc ValidateDataSourceConfig(ValidateDataSourceConfig.Request) returns (ValidateDataSourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc Configure(Configure.Request) returns (Configure.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+    }
+}
+
+message PrepareProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        DynamicValue prepared_config = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource.  Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // new_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in prior_state_raw.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceTypeConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataSourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message Configure {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5; 
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3; 
+        repeated Diagnostic diagnostics = 4;
+
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 5;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5; 
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2; 
+        repeated Diagnostic diagnostics = 3;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 4;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+service Provisioner {
+    rpc GetSchema(GetProvisionerSchema.Request) returns (GetProvisionerSchema.Response);
+    rpc ValidateProvisionerConfig(ValidateProvisionerConfig.Request) returns (ValidateProvisionerConfig.Response);
+    rpc ProvisionResource(ProvisionResource.Request) returns (stream ProvisionResource.Response);
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProvisionerSchema {
+    message Request {
+    }
+    message Response {
+        Schema provisioner = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateProvisionerConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ProvisionResource {
+    message Request {
+        DynamicValue config = 1;
+        DynamicValue connection = 2;
+    }
+    message Response {
+        string output  = 1;
+        repeated Diagnostic diagnostics = 2;
+    }   
+}
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin5.1.proto b/v1.4.7/docs/plugin-protocol/tfplugin5.1.proto
new file mode 100644
index 0000000..8f01ad9
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin5.1.proto
@@ -0,0 +1,353 @@
+// Terraform Plugin RPC protocol version 5.1
+//
+// This file defines version 5.1 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will be updated in-place in the source Terraform repository for
+// any minor versions of protocol 5, but later minor versions will always be
+// backwards compatible. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+
+package tfplugin5;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message Stop {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+// Schema is the configuration schema for a Resource, Provider, or Provisioner.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed. 
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc PrepareProviderConfig(PrepareProviderConfig.Request) returns (PrepareProviderConfig.Response);
+    rpc ValidateResourceTypeConfig(ValidateResourceTypeConfig.Request) returns (ValidateResourceTypeConfig.Response);
+    rpc ValidateDataSourceConfig(ValidateDataSourceConfig.Request) returns (ValidateDataSourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc Configure(Configure.Request) returns (Configure.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+    }
+}
+
+message PrepareProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        DynamicValue prepared_config = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource.  Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // new_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in prior_state_raw.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceTypeConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataSourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message Configure {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5; 
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3; 
+        repeated Diagnostic diagnostics = 4;
+
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 5;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5; 
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2; 
+        repeated Diagnostic diagnostics = 3;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 4;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+service Provisioner {
+    rpc GetSchema(GetProvisionerSchema.Request) returns (GetProvisionerSchema.Response);
+    rpc ValidateProvisionerConfig(ValidateProvisionerConfig.Request) returns (ValidateProvisionerConfig.Response);
+    rpc ProvisionResource(ProvisionResource.Request) returns (stream ProvisionResource.Response);
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProvisionerSchema {
+    message Request {
+    }
+    message Response {
+        Schema provisioner = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateProvisionerConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ProvisionResource {
+    message Request {
+        DynamicValue config = 1;
+        DynamicValue connection = 2;
+    }
+    message Response {
+        string output  = 1;
+        repeated Diagnostic diagnostics = 2;
+    }   
+}
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin5.2.proto b/v1.4.7/docs/plugin-protocol/tfplugin5.2.proto
new file mode 100644
index 0000000..1c29f03
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin5.2.proto
@@ -0,0 +1,369 @@
+// Terraform Plugin RPC protocol version 5.2
+//
+// This file defines version 5.2 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will not be updated. Any minor versions of protocol 5 to follow
+// should copy this file and modify the copy while maintaining backwards
+// compatibility. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+option go_package = "github.com/hashicorp/terraform/internal/tfplugin5";
+
+package tfplugin5;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message Stop {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+enum StringKind {
+    PLAIN = 0;
+    MARKDOWN = 1;
+}
+
+// Schema is the configuration schema for a Resource, Provider, or Provisioner.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+        string description = 4;
+        StringKind description_kind = 5;
+        bool deprecated = 6;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+        StringKind description_kind = 8;
+        bool deprecated = 9;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed. 
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc PrepareProviderConfig(PrepareProviderConfig.Request) returns (PrepareProviderConfig.Response);
+    rpc ValidateResourceTypeConfig(ValidateResourceTypeConfig.Request) returns (ValidateResourceTypeConfig.Response);
+    rpc ValidateDataSourceConfig(ValidateDataSourceConfig.Request) returns (ValidateDataSourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc Configure(Configure.Request) returns (Configure.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+        Schema provider_meta = 5;
+    }
+}
+
+message PrepareProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        DynamicValue prepared_config = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource.  Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // new_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in prior_state_raw.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceTypeConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataSourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message Configure {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+        DynamicValue provider_meta = 4;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5; 
+        DynamicValue provider_meta = 6;
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3; 
+        repeated Diagnostic diagnostics = 4;
+
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 5;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5; 
+        DynamicValue provider_meta = 6;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2; 
+        repeated Diagnostic diagnostics = 3;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 4;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+        DynamicValue provider_meta = 3;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+service Provisioner {
+    rpc GetSchema(GetProvisionerSchema.Request) returns (GetProvisionerSchema.Response);
+    rpc ValidateProvisionerConfig(ValidateProvisionerConfig.Request) returns (ValidateProvisionerConfig.Response);
+    rpc ProvisionResource(ProvisionResource.Request) returns (stream ProvisionResource.Response);
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProvisionerSchema {
+    message Request {
+    }
+    message Response {
+        Schema provisioner = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateProvisionerConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ProvisionResource {
+    message Request {
+        DynamicValue config = 1;
+        DynamicValue connection = 2;
+    }
+    message Response {
+        string output  = 1;
+        repeated Diagnostic diagnostics = 2;
+    }   
+}
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin5.3.proto b/v1.4.7/docs/plugin-protocol/tfplugin5.3.proto
new file mode 100644
index 0000000..0f98f04
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin5.3.proto
@@ -0,0 +1,398 @@
+// Terraform Plugin RPC protocol version 5.3
+//
+// This file defines version 5.3 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will not be updated. Any minor versions of protocol 5 to follow
+// should copy this file and modify the copy while maintaining backwards
+// compatibility. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+option go_package = "github.com/hashicorp/terraform/internal/tfplugin5";
+
+package tfplugin5;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message Stop {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+enum StringKind {
+    PLAIN = 0;
+    MARKDOWN = 1;
+}
+
+// Schema is the configuration schema for a Resource, Provider, or Provisioner.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+        string description = 4;
+        StringKind description_kind = 5;
+        bool deprecated = 6;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+        StringKind description_kind = 8;
+        bool deprecated = 9;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed.
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc PrepareProviderConfig(PrepareProviderConfig.Request) returns (PrepareProviderConfig.Response);
+    rpc ValidateResourceTypeConfig(ValidateResourceTypeConfig.Request) returns (ValidateResourceTypeConfig.Response);
+    rpc ValidateDataSourceConfig(ValidateDataSourceConfig.Request) returns (ValidateDataSourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc Configure(Configure.Request) returns (Configure.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+        Schema provider_meta = 5;
+        ServerCapabilities server_capabilities = 6;
+    }
+
+
+    // ServerCapabilities allows providers to communicate extra information
+    // regarding supported protocol features. This is used to indicate
+    // availability of certain forward-compatible changes which may be optional
+    // in a major protocol version, but cannot be tested for directly.
+    message ServerCapabilities {
+        // The plan_destroy capability signals that a provider expects a call
+        // to PlanResourceChange when a resource is going to be destroyed.
+        bool plan_destroy = 1;
+    }
+}
+
+message PrepareProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        DynamicValue prepared_config = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    // Request is the message that is sent to the provider during the
+    // UpgradeResourceState RPC.
+    //
+    // This message intentionally does not include configuration data as any
+    // configuration-based or configuration-conditional changes should occur
+    // during the PlanResourceChange RPC. Additionally, the configuration is
+    // not guaranteed to exist (in the case of resource destruction), be wholly
+    // known, nor match the given prior state, which could lead to unexpected
+    // provider behaviors for practitioners.
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource. Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // upgraded_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in raw_state.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceTypeConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataSourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message Configure {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    // Request is the message that is sent to the provider during the
+    // ReadResource RPC.
+    //
+    // This message intentionally does not include configuration data as any
+    // configuration-based or configuration-conditional changes should occur
+    // during the PlanResourceChange RPC. Additionally, the configuration is
+    // not guaranteed to be wholly known nor match the given prior state, which
+    // could lead to unexpected provider behaviors for practitioners.
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+        DynamicValue provider_meta = 4;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3;
+        repeated Diagnostic diagnostics = 4;
+
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 5;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2;
+        repeated Diagnostic diagnostics = 3;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 4;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+        DynamicValue provider_meta = 3;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+service Provisioner {
+    rpc GetSchema(GetProvisionerSchema.Request) returns (GetProvisionerSchema.Response);
+    rpc ValidateProvisionerConfig(ValidateProvisionerConfig.Request) returns (ValidateProvisionerConfig.Response);
+    rpc ProvisionResource(ProvisionResource.Request) returns (stream ProvisionResource.Response);
+    rpc Stop(Stop.Request) returns (Stop.Response);
+}
+
+message GetProvisionerSchema {
+    message Request {
+    }
+    message Response {
+        Schema provisioner = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateProvisionerConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ProvisionResource {
+    message Request {
+        DynamicValue config = 1;
+        DynamicValue connection = 2;
+    }
+    message Response {
+        string output = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
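As the header of tfplugin5.3.proto above says, plugin developers are expected to copy a released proto file into their own codebase and generate stubs with protoc. One possible way to wire that into a Go module is sketched below; the package layout and the assumption that protoc plus the protoc-gen-go and protoc-gen-go-grpc plugins are installed are illustrative, not requirements of the protocol itself.

```go
// Package tfplugin5 holds the message types and gRPC stubs generated from a
// copy of the released protocol definition, assumed here to be saved next to
// this file as tfplugin5.3.proto.
package tfplugin5

// Running `go generate ./...` regenerates the stubs. This assumes protoc and
// the protoc-gen-go / protoc-gen-go-grpc plugins are available on PATH.
//go:generate protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative tfplugin5.3.proto
```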
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin6.0.proto b/v1.4.7/docs/plugin-protocol/tfplugin6.0.proto
new file mode 100644
index 0000000..4d8dc06
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin6.0.proto
@@ -0,0 +1,321 @@
+// Terraform Plugin RPC protocol version 6.0
+//
+// This file defines version 6.0 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will not be updated. Any minor versions of protocol 6 to follow
+// should copy this file and modify the copy while maintaining backwards
+// compatibility. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+option go_package = "github.com/hashicorp/terraform/internal/tfplugin6";
+
+package tfplugin6;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message StopProvider {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+enum StringKind {
+    PLAIN = 0;
+    MARKDOWN = 1;
+}
+
+// Schema is the configuration schema for a Resource or Provider.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+        string description = 4;
+        StringKind description_kind = 5;
+        bool deprecated = 6;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        Object nested_type = 10;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+        StringKind description_kind = 8;
+        bool deprecated = 9;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    message Object {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+        }
+
+        repeated Attribute attributes = 1;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed. 
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetProviderSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc ValidateProviderConfig(ValidateProviderConfig.Request) returns (ValidateProviderConfig.Response);
+    rpc ValidateResourceConfig(ValidateResourceConfig.Request) returns (ValidateResourceConfig.Response);
+    rpc ValidateDataResourceConfig(ValidateDataResourceConfig.Request) returns (ValidateDataResourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc ConfigureProvider(ConfigureProvider.Request) returns (ConfigureProvider.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc StopProvider(StopProvider.Request) returns (StopProvider.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+        Schema provider_meta = 5;
+    }
+}
+
+message ValidateProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource. Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // upgraded_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in raw_state.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ConfigureProvider {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+        DynamicValue provider_meta = 4;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5; 
+        DynamicValue provider_meta = 6;
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3; 
+        repeated Diagnostic diagnostics = 4;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5; 
+        DynamicValue provider_meta = 6;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2; 
+        repeated Diagnostic diagnostics = 3;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+        DynamicValue provider_meta = 3;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
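Protocol 6 renames the shutdown RPC to StopProvider. A minimal, hedged sketch of a provider server handling it is shown below, assuming stubs generated by protoc-gen-go-grpc under their standard naming conventions; the import path and the `providerServer` type are hypothetical.

```go
package provider

import (
	"context"

	tfplugin6 "example.com/myplugin/internal/tfplugin6" // hypothetical path to generated stubs
)

// providerServer embeds the generated UnimplementedProviderServer so that
// RPCs not overridden here still satisfy the Provider service interface.
type providerServer struct {
	tfplugin6.UnimplementedProviderServer
}

// StopProvider asks the provider to shut down gracefully; leaving Error empty
// in the response signals that the request was accepted.
func (s *providerServer) StopProvider(ctx context.Context, req *tfplugin6.StopProvider_Request) (*tfplugin6.StopProvider_Response, error) {
	// Cancel any in-flight work here before returning.
	return &tfplugin6.StopProvider_Response{}, nil
}
```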
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin6.1.proto b/v1.4.7/docs/plugin-protocol/tfplugin6.1.proto
new file mode 100644
index 0000000..3f6dead
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin6.1.proto
@@ -0,0 +1,324 @@
+// Terraform Plugin RPC protocol version 6.1
+//
+// This file defines version 6.1 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will not be updated. Any minor versions of protocol 6 to follow
+// should copy this file and modify the copy while maintaining backwards
+// compatibility. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+option go_package = "github.com/hashicorp/terraform/internal/tfplugin6";
+
+package tfplugin6;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message StopProvider {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+enum StringKind {
+    PLAIN = 0;
+    MARKDOWN = 1;
+}
+
+// Schema is the configuration schema for a Resource or Provider.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+        string description = 4;
+        StringKind description_kind = 5;
+        bool deprecated = 6;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        Object nested_type = 10;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+        StringKind description_kind = 8;
+        bool deprecated = 9;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    message Object {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+        }
+
+        repeated Attribute attributes = 1;
+        NestingMode nesting = 3;
+
+        // MinItems and MaxItems were never used in the protocol, and have no
+        // effect on validation.
+        int64 min_items = 4 [deprecated = true];
+        int64 max_items = 5 [deprecated = true];
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed.
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetProviderSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc ValidateProviderConfig(ValidateProviderConfig.Request) returns (ValidateProviderConfig.Response);
+    rpc ValidateResourceConfig(ValidateResourceConfig.Request) returns (ValidateResourceConfig.Response);
+    rpc ValidateDataResourceConfig(ValidateDataResourceConfig.Request) returns (ValidateDataResourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc ConfigureProvider(ConfigureProvider.Request) returns (ConfigureProvider.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc StopProvider(StopProvider.Request) returns (StopProvider.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+        Schema provider_meta = 5;
+    }
+}
+
+message ValidateProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource. Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // upgraded_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in raw_state.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ConfigureProvider {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+        DynamicValue provider_meta = 4;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3;
+        repeated Diagnostic diagnostics = 4;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2;
+        repeated Diagnostic diagnostics = 3;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+        DynamicValue provider_meta = 3;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
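Diagnostics returned by any of these RPCs can point at the offending configuration value through an AttributePath. The sketch below builds one in Go, assuming message types generated by protoc-gen-go (the oneof selector wrappers such as `AttributePath_Step_AttributeName` follow its standard naming); the attribute names and import path are made up for illustration.

```go
package provider

import (
	tfplugin6 "example.com/myplugin/internal/tfplugin6" // hypothetical path to generated stubs
)

// invalidPortDiagnostic reports an error at network_interface[0].port.
func invalidPortDiagnostic() *tfplugin6.Diagnostic {
	return &tfplugin6.Diagnostic{
		Severity: tfplugin6.Diagnostic_ERROR,
		Summary:  "Invalid port number",
		Detail:   "The port must be between 1 and 65535.",
		Attribute: &tfplugin6.AttributePath{
			Steps: []*tfplugin6.AttributePath_Step{
				{Selector: &tfplugin6.AttributePath_Step_AttributeName{AttributeName: "network_interface"}},
				{Selector: &tfplugin6.AttributePath_Step_ElementKeyInt{ElementKeyInt: 0}},
				{Selector: &tfplugin6.AttributePath_Step_AttributeName{AttributeName: "port"}},
			},
		},
	}
}
```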
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin6.2.proto b/v1.4.7/docs/plugin-protocol/tfplugin6.2.proto
new file mode 100644
index 0000000..da5e58e
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin6.2.proto
@@ -0,0 +1,350 @@
+// Terraform Plugin RPC protocol version 6.2
+//
+// This file defines version 6.2 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will not be updated. Any minor versions of protocol 6 to follow
+// should copy this file and modify the copy while maintaining backwards
+// compatibility. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+option go_package = "github.com/hashicorp/terraform/internal/tfplugin6";
+
+package tfplugin6;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message StopProvider {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+enum StringKind {
+    PLAIN = 0;
+    MARKDOWN = 1;
+}
+
+// Schema is the configuration schema for a Resource or Provider.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+        string description = 4;
+        StringKind description_kind = 5;
+        bool deprecated = 6;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        Object nested_type = 10;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+        StringKind description_kind = 8;
+        bool deprecated = 9;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    message Object {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+        }
+
+        repeated Attribute attributes = 1;
+        NestingMode nesting = 3;
+
+        // MinItems and MaxItems were never used in the protocol, and have no
+        // effect on validation.
+        int64 min_items = 4 [deprecated = true];
+        int64 max_items = 5 [deprecated = true];
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed.
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetProviderSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc ValidateProviderConfig(ValidateProviderConfig.Request) returns (ValidateProviderConfig.Response);
+    rpc ValidateResourceConfig(ValidateResourceConfig.Request) returns (ValidateResourceConfig.Response);
+    rpc ValidateDataResourceConfig(ValidateDataResourceConfig.Request) returns (ValidateDataResourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc ConfigureProvider(ConfigureProvider.Request) returns (ConfigureProvider.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc StopProvider(StopProvider.Request) returns (StopProvider.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+        Schema provider_meta = 5;
+    }
+}
+
+message ValidateProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource. Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // upgraded_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in raw_state.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ConfigureProvider {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+        DynamicValue provider_meta = 4;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3;
+        repeated Diagnostic diagnostics = 4;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 5;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2;
+        repeated Diagnostic diagnostics = 3;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 4;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+        DynamicValue provider_meta = 3;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
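UpgradeResourceState hands the provider a RawState that is either the current JSON encoding or the legacy flatmap of strings, and the provider must interpret it against its own older schema. A rough Go sketch of telling the two apart follows; it is a simplification (a real provider would decode into the types of its prior schema rather than a generic map), and the import path is hypothetical.

```go
package provider

import (
	"encoding/json"

	tfplugin6 "example.com/myplugin/internal/tfplugin6" // hypothetical path to generated stubs
)

// decodeRawState distinguishes the two RawState encodings: JSON bytes for
// current state, or the legacy flatmap of string keys and values.
func decodeRawState(rs *tfplugin6.RawState) (map[string]interface{}, error) {
	if len(rs.Json) > 0 {
		var obj map[string]interface{}
		if err := json.Unmarshal(rs.Json, &obj); err != nil {
			return nil, err
		}
		return obj, nil
	}
	// Legacy flatmap: flattened attribute paths mapped to string values.
	obj := make(map[string]interface{}, len(rs.Flatmap))
	for k, v := range rs.Flatmap {
		obj[k] = v
	}
	return obj, nil
}
```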
diff --git a/v1.4.7/docs/plugin-protocol/tfplugin6.3.proto b/v1.4.7/docs/plugin-protocol/tfplugin6.3.proto
new file mode 100644
index 0000000..e3fa9d1
--- /dev/null
+++ b/v1.4.7/docs/plugin-protocol/tfplugin6.3.proto
@@ -0,0 +1,379 @@
+// Terraform Plugin RPC protocol version 6.3
+//
+// This file defines version 6.3 of the RPC protocol. To implement a plugin
+// against this protocol, copy this definition into your own codebase and
+// use protoc to generate stubs for your target language.
+//
+// This file will not be updated. Any minor versions of protocol 6 to follow
+// should copy this file and modify the copy while maintaining backwards
+// compatibility. Breaking changes, if any are required, will come
+// in a subsequent major version with its own separate proto definition.
+//
+// Note that only the proto files included in a release tag of Terraform are
+// official protocol releases. Proto files taken from other commits may include
+// incomplete changes or features that did not make it into a final release.
+// In all reasonable cases, plugin developers should take the proto file from
+// the tag of the most recent release of Terraform, and not from the main
+// branch or any other development branch.
+//
+syntax = "proto3";
+option go_package = "github.com/hashicorp/terraform/internal/tfplugin6";
+
+package tfplugin6;
+
+// DynamicValue is an opaque encoding of terraform data, with the field name
+// indicating the encoding scheme used.
+message DynamicValue {
+    bytes msgpack = 1;
+    bytes json = 2;
+}
+
+message Diagnostic {
+    enum Severity {
+        INVALID = 0;
+        ERROR = 1;
+        WARNING = 2;
+    }
+    Severity severity = 1;
+    string summary = 2;
+    string detail = 3;
+    AttributePath attribute = 4;
+}
+
+message AttributePath {
+    message Step {
+        oneof selector {
+            // Set "attribute_name" to represent looking up an attribute
+            // in the current object value.
+            string attribute_name = 1;
+            // Set "element_key_*" to represent looking up an element in
+            // an indexable collection type.
+            string element_key_string = 2;
+            int64 element_key_int = 3;
+        }
+    }
+    repeated Step steps = 1;
+}
+
+message StopProvider {
+    message Request {
+    }
+    message Response {
+        string Error = 1;
+    }
+}
+
+// RawState holds the stored state for a resource to be upgraded by the
+// provider. It can be in one of two formats, the current json encoded format
+// in bytes, or the legacy flatmap format as a map of strings.
+message RawState {
+    bytes json = 1;
+    map<string, string> flatmap = 2;
+}
+
+enum StringKind {
+    PLAIN = 0;
+    MARKDOWN = 1;
+}
+
+// Schema is the configuration schema for a Resource or Provider.
+message Schema {
+    message Block {
+        int64 version = 1;
+        repeated Attribute attributes = 2;
+        repeated NestedBlock block_types = 3;
+        string description = 4;
+        StringKind description_kind = 5;
+        bool deprecated = 6;
+    }
+
+    message Attribute {
+        string name = 1;
+        bytes type = 2;
+        Object nested_type = 10;
+        string description = 3;
+        bool required = 4;
+        bool optional = 5;
+        bool computed = 6;
+        bool sensitive = 7;
+        StringKind description_kind = 8;
+        bool deprecated = 9;
+    }
+
+    message NestedBlock {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+            GROUP = 5;
+        }
+
+        string type_name = 1;
+        Block block = 2;
+        NestingMode nesting = 3;
+        int64 min_items = 4;
+        int64 max_items = 5;
+    }
+
+    message Object {
+        enum NestingMode {
+            INVALID = 0;
+            SINGLE = 1;
+            LIST = 2;
+            SET = 3;
+            MAP = 4;
+        }
+
+        repeated Attribute attributes = 1;
+        NestingMode nesting = 3;
+
+        // MinItems and MaxItems were never used in the protocol, and have no
+        // effect on validation.
+        int64 min_items = 4 [deprecated = true];
+        int64 max_items = 5 [deprecated = true];
+    }
+
+    // The version of the schema.
+    // Schemas are versioned, so that providers can upgrade a saved resource
+    // state when the schema is changed.
+    int64 version = 1;
+
+    // Block is the top level configuration block for this schema.
+    Block block = 2;
+}
+
+service Provider {
+    //////// Information about what a provider supports/expects
+    rpc GetProviderSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response);
+    rpc ValidateProviderConfig(ValidateProviderConfig.Request) returns (ValidateProviderConfig.Response);
+    rpc ValidateResourceConfig(ValidateResourceConfig.Request) returns (ValidateResourceConfig.Response);
+    rpc ValidateDataResourceConfig(ValidateDataResourceConfig.Request) returns (ValidateDataResourceConfig.Response);
+    rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response);
+
+    //////// One-time initialization, called before other functions below
+    rpc ConfigureProvider(ConfigureProvider.Request) returns (ConfigureProvider.Response);
+
+    //////// Managed Resource Lifecycle
+    rpc ReadResource(ReadResource.Request) returns (ReadResource.Response);
+    rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response);
+    rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response);
+    rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response);
+
+    rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response);
+
+    //////// Graceful Shutdown
+    rpc StopProvider(StopProvider.Request) returns (StopProvider.Response);
+}
+
+message GetProviderSchema {
+    message Request {
+    }
+    message Response {
+        Schema provider = 1;
+        map<string, Schema> resource_schemas = 2;
+        map<string, Schema> data_source_schemas = 3;
+        repeated Diagnostic diagnostics = 4;
+        Schema provider_meta = 5;
+        ServerCapabilities server_capabilities = 6;
+    }
+
+
+    // ServerCapabilities allows providers to communicate extra information
+    // regarding supported protocol features. This is used to indicate
+    // availability of certain forward-compatible changes which may be optional
+    // in a major protocol version, but cannot be tested for directly.
+    message ServerCapabilities {
+        // The plan_destroy capability signals that a provider expects a call
+        // to PlanResourceChange when a resource is going to be destroyed.
+        bool plan_destroy = 1;
+    }
+}
+
+message ValidateProviderConfig {
+    message Request {
+        DynamicValue config = 1;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message UpgradeResourceState {
+    // Request is the message that is sent to the provider during the
+    // UpgradeResourceState RPC.
+    //
+    // This message intentionally does not include configuration data as any
+    // configuration-based or configuration-conditional changes should occur
+    // during the PlanResourceChange RPC. Additionally, the configuration is
+    // not guaranteed to exist (in the case of resource destruction), be wholly
+    // known, nor match the given prior state, which could lead to unexpected
+    // provider behaviors for practitioners.
+    message Request {
+        string type_name = 1;
+
+        // version is the schema_version number recorded in the state file
+        int64 version = 2;
+
+        // raw_state is the raw state as stored for the resource. Core does
+        // not have access to the schema of prior_version, so it's the
+        // provider's responsibility to interpret this value using the
+        // appropriate older schema. The raw_state will be the json encoded
+        // state, or a legacy flat-mapped format.
+        RawState raw_state = 3;
+    }
+    message Response {
+        // upgraded_state is a msgpack-encoded data structure that, when interpreted with
+        // the _current_ schema for this resource type, is functionally equivalent to
+        // that which was given in raw_state.
+        DynamicValue upgraded_state = 1;
+
+        // diagnostics describes any errors encountered during migration that could not
+        // be safely resolved, and warnings about any possibly-risky assumptions made
+        // in the upgrade process.
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ValidateResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ValidateDataResourceConfig {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ConfigureProvider {
+    message Request {
+        string terraform_version = 1;
+        DynamicValue config = 2;
+    }
+    message Response {
+        repeated Diagnostic diagnostics = 1;
+    }
+}
+
+message ReadResource {
+    // Request is the message that is sent to the provider during the
+    // ReadResource RPC.
+    //
+    // This message intentionally does not include configuration data as any
+    // configuration-based or configuration-conditional changes should occur
+    // during the PlanResourceChange RPC. Additionally, the configuration is
+    // not guaranteed to be wholly known nor match the given prior state, which
+    // could lead to unexpected provider behaviors for practitioners.
+    message Request {
+        string type_name = 1;
+        DynamicValue current_state = 2;
+        bytes private = 3;
+        DynamicValue provider_meta = 4;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        repeated Diagnostic diagnostics = 2;
+        bytes private = 3;
+    }
+}
+
+message PlanResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue proposed_new_state = 3;
+        DynamicValue config = 4;
+        bytes prior_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+
+    message Response {
+        DynamicValue planned_state = 1;
+        repeated AttributePath requires_replace = 2;
+        bytes planned_private = 3;
+        repeated Diagnostic diagnostics = 4;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 5;
+    }
+}
+
+message ApplyResourceChange {
+    message Request {
+        string type_name = 1;
+        DynamicValue prior_state = 2;
+        DynamicValue planned_state = 3;
+        DynamicValue config = 4;
+        bytes planned_private = 5;
+        DynamicValue provider_meta = 6;
+    }
+    message Response {
+        DynamicValue new_state = 1;
+        bytes private = 2;
+        repeated Diagnostic diagnostics = 3;
+
+        // This may be set only by the helper/schema "SDK" in the main Terraform
+        // repository, to request that Terraform Core >=0.12 permit additional
+        // inconsistencies that can result from the legacy SDK type system
+        // and its imprecise mapping to the >=0.12 type system.
+        // The change in behavior implied by this flag makes sense only for the
+        // specific details of the legacy SDK type system, and is not a general
+        // mechanism to avoid proper type handling in providers.
+        //
+        //     ====              DO NOT USE THIS              ====
+        //     ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ====
+        //     ====              DO NOT USE THIS              ====
+        bool legacy_type_system = 4;
+    }
+}
+
+message ImportResourceState {
+    message Request {
+        string type_name = 1;
+        string id = 2;
+    }
+
+    message ImportedResource {
+        string type_name = 1;
+        DynamicValue state = 2;
+        bytes private = 3;
+    }
+
+    message Response {
+        repeated ImportedResource imported_resources = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
+
+message ReadDataSource {
+    message Request {
+        string type_name = 1;
+        DynamicValue config = 2;
+        DynamicValue provider_meta = 3;
+    }
+    message Response {
+        DynamicValue state = 1;
+        repeated Diagnostic diagnostics = 2;
+    }
+}
diff --git a/v1.4.7/docs/resource-instance-change-lifecycle.md b/v1.4.7/docs/resource-instance-change-lifecycle.md
new file mode 100644
index 0000000..0849bea
--- /dev/null
+++ b/v1.4.7/docs/resource-instance-change-lifecycle.md
@@ -0,0 +1,370 @@
+# Terraform Resource Instance Change Lifecycle
+
+This document describes the relationships between the different operations
+called on a Terraform Provider to handle a change to a resource instance.
+
+![](https://user-images.githubusercontent.com/20180/172506401-777597dc-3e6e-411d-9580-b192fd34adba.png)
+
+Each of the resource instance operations both consumes and produces objects
+that conform to the schema of the selected resource type.
+
+The overall goal of this process is to take a **Configuration** and a
+**Previous Run State**, merge them together using resource-type-specific
+planning logic to produce a **Planned State**, and then change the remote
+system to match that planned state before finally producing the **New State**
+that will be saved in order to become the **Previous Run State** for the next
+operation.
+
+The various object values used in different parts of this process are:
+
+* **Configuration**: Represents the values the user wrote in the configuration,
+  after any automatic type conversions to match the resource type schema.
+
+    Any attributes not defined by the user appear as null in the configuration
+    object. If an argument value is derived from an unknown result of another
+    resource instance, its value in the configuration object could also be
+    unknown.
+
+* **Prior State**: The provider's representation of the current state of the
+  remote object at the time of the most recent read.
+
+* **Proposed New State**: Terraform Core uses some built-in logic to perform
+  an initial basic merger of the **Configuration** and the **Prior State**
+  which a provider may use as a starting point for its planning operation.
+
+    The built-in logic primarily deals with the expected behavior for attributes
+    marked in the schema as both "optional" _and_ "computed", which means that
+    the user may either set it or may leave it unset to allow the provider
+    to choose a value instead.
+
+    Terraform Core therefore constructs the proposed new state by taking the
+    attribute value from Configuration if it is non-null, and then using the
+    Prior State as a fallback otherwise, thereby helping a provider to
+    preserve its previously-chosen value for the attribute where appropriate.
+    (A minimal sketch of this rule follows the list below.)
+
+* **Initial Planned State** and **Final Planned State** are both descriptions
+  of what the associated remote object ought to look like after completing
+  the planned action.
+
+    There will often be parts of the object that the provider isn't yet able to
+    predict, either because they will be decided by the remote system during
+    the apply step or because they are derived from configuration values from
+    other resource instances that are themselves not yet known. The provider
+    must mark these by including unknown values in the state objects.
+
+    The distinction between the _Initial_ and _Final_ planned states is that
+    the initial one is created during Terraform Core's planning phase based
+    on a possibly-incomplete configuration, whereas the final one is created
+    during the apply step once all of the dependencies have already been
+    updated and so the configuration should then be wholly known.
+
+* **New State** is a representation of the result of whatever modifications
+  were made to the remote system by the provider during the apply step.
+
+    The new state must always be wholly known, because it represents the
+    actual state of the system, rather than a hypothetical future state.
+
+* **Previous Run State** is the same object as the **New State** from
+  the previous run of Terraform. This is exactly what the provider most
+  recently returned, and so it will not take into account any changes that
+  may have been made outside of Terraform in the meantime, and it may conform
+  to an earlier version of the resource type schema and therefore be
+  incompatible with the _current_ schema.
+
+* **Upgraded State** is derived from **Previous Run State** by using some
+  provider-specified logic to upgrade the existing data to the latest schema.
+  However, it still represents the remote system as it was at the end of the
+  last run, and so still doesn't take into account any changes that may have
+  been made outside of Terraform.
+
+* The **Import ID** and **Import Stub State** are both details of the special
+  process of importing pre-existing objects into a Terraform state, and so
+  we'll wait to discuss those in a later section on importing.
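+
+The merging rule described under **Proposed New State** can be shown as a
+minimal sketch. This is not Terraform Core's real implementation: attribute
+values are represented as a flat Go map and `nil` stands in for a null value.
+
+```go
+package main
+
+import "fmt"
+
+// proposedNewState sketches the built-in merge of Configuration and Prior
+// State: a non-null configuration value wins, and otherwise the prior state
+// value is used as a fallback so that a previously-chosen value for an
+// optional+computed attribute is preserved.
+func proposedNewState(config, prior map[string]any) map[string]any {
+    proposed := make(map[string]any, len(config))
+    for name, configVal := range config {
+        if configVal != nil {
+            proposed[name] = configVal // the user set this attribute
+            continue
+        }
+        proposed[name] = prior[name] // null in config: fall back to prior state
+    }
+    return proposed
+}
+
+func main() {
+    config := map[string]any{"name": "web", "zone": nil}
+    prior := map[string]any{"name": "web", "zone": "us-east-1a"}
+    fmt.Println(proposedNewState(config, prior)) // map[name:web zone:us-east-1a]
+}
+```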
+
+
+## Provider Protocol API Functions
+
+The following sections describe the three provider API functions that are
+called to plan and apply a change, including the expectations Terraform Core
+enforces for each.
+
+For historical reasons, the original Terraform SDK is exempt from error
+messages produced when certain assumptions are violated, but violating them
+will often cause downstream errors nonetheless, because Terraform's workflow
+depends on these contracts being met.
+
+The following section uses the word "attribute" to refer to the named
+attributes described in the resource type schema. A schema may also include
+nested blocks, which contain their _own_ set of attributes; the constraints
+apply recursively to these nested attributes too.
+
+The following are the function names used in provider protocol version 6.
+Protocol version 5 has the same set of operations but uses some
+marginally-different names for them, because we used protocol version 6 as an
+opportunity to tidy up some names that had been awkward before.
+
+### ValidateResourceConfig
+
+`ValidateResourceConfig` takes the **Configuration** object alone, and
+may return error or warning diagnostics in response to its attribute values.
+
+`ValidateResourceConfig` is the provider's opportunity to apply custom
+validation rules to the schema, allowing for constraints that could not be
+expressed via schema alone.
+
+In principle a provider can make any rule it wants here, although in practice
+providers should typically avoid reporting errors for values that are unknown.
+Terraform Core will call this function multiple times at different phases
+of evaluation, and guarantees to _eventually_ call with a wholly-known
+configuration so that the provider will have an opportunity to belatedly catch
+problems related to values that are initially unknown during planning.
+
+If a provider intends to choose a default value for a particular
+optional+computed attribute when left as null in the configuration, the
+provider _must_ tolerate that attribute being unknown in the configuration in
+order to get an opportunity to choose the default value during the later
+plan or apply phase.
+
+The validation step does not produce a new object itself and so it cannot
+modify the user's supplied configuration.
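+
+The "tolerate unknown values" guidance above can be illustrated with a small
+sketch. It does not use any real Terraform SDK types: the `unknown` sentinel
+and the port rule are hypothetical, standing in for whatever unknown-value
+representation and validation rule a real provider would use.
+
+```go
+package main
+
+import "fmt"
+
+// unknown is a hypothetical sentinel for a value that is not yet known
+// during planning.
+type unknown struct{}
+
+// validatePort sketches a custom rule that schema alone cannot express:
+// the port must be within the valid range. Crucially it reports nothing for
+// unknown or unset values, deferring the check until a later call with a
+// wholly-known configuration.
+func validatePort(port any) error {
+    switch v := port.(type) {
+    case unknown, nil:
+        return nil // don't report errors for unknown or unset values
+    case int:
+        if v < 1 || v > 65535 {
+            return fmt.Errorf("port must be between 1 and 65535, got %d", v)
+        }
+        return nil
+    default:
+        return fmt.Errorf("port must be a number")
+    }
+}
+
+func main() {
+    fmt.Println(validatePort(unknown{})) // <nil>: unknown is tolerated
+    fmt.Println(validatePort(70000))     // error: out of range
+}
+```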
+
+### PlanResourceChange
+
+The purpose of `PlanResourceChange` is to predict the approximate effect of
+a subsequent apply operation, allowing Terraform to render the plan for the
+user and to propagate the predictable subset of results downstream through
+expressions in the configuration.
+
+This operation can base its decision on any combination of **Configuration**,
+**Prior State**, and **Proposed New State**, as long as its result fits the
+following constraints:
+
+* Any attribute that was non-null in the configuration must either preserve
+  the exact configuration value or return the corresponding attribute value
+  from the prior state. (Do the latter if you determine that the change is not
+  functionally significant, such as if the value is a JSON string that has
+  changed only in the positioning of whitespace.)
+
+* Any attribute that is marked as computed in the schema _and_ is null in the
+  configuration may be set by the provider to any arbitrary value of the
+  expected type.
+
+* If a computed attribute has any _known_ value in the planned new state, the
+  provider will be required to ensure that it is unchanged in the new state
+  returned by `ApplyResourceChange`, or return an error explaining why it
+  changed. Set an attribute to an unknown value to indicate that its final
+  result will be determined during `ApplyResourceChange`.
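+
+A minimal sketch of a plan step that follows these constraints. The types are
+simplified placeholders (flat maps, `nil` for null, and a hypothetical
+`unknownVal` marker), not Terraform's real protocol types:
+
+```go
+package main
+
+import "fmt"
+
+// unknownVal is a hypothetical marker for "to be decided during apply".
+type unknownVal struct{}
+
+// planResourceChange sketches the constraints above: non-null configuration
+// values are preserved exactly, and computed attributes that are null in the
+// configuration either keep their previously-chosen value or are planned as
+// unknown so that their final values are decided during ApplyResourceChange.
+func planResourceChange(config, prior map[string]any, computed map[string]bool) map[string]any {
+    planned := make(map[string]any, len(config))
+    for name, configVal := range config {
+        switch {
+        case configVal != nil:
+            // Non-null in configuration: preserve the exact configured value.
+            planned[name] = configVal
+        case computed[name]:
+            if priorVal, ok := prior[name]; ok && priorVal != nil {
+                // Keep a previously-chosen value for an optional+computed attribute.
+                planned[name] = priorVal
+            } else {
+                // Not decided yet: plan an unknown value to resolve at apply time.
+                planned[name] = unknownVal{}
+            }
+        default:
+            planned[name] = nil
+        }
+    }
+    return planned
+}
+
+func main() {
+    // A create: there is no prior state yet, so the computed "id" is planned
+    // as unknown and will be decided during ApplyResourceChange.
+    config := map[string]any{"name": "web", "id": nil}
+    fmt.Println(planResourceChange(config, map[string]any{}, map[string]bool{"id": true}))
+}
+```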
+
+`PlanResourceChange` is actually called twice per run for each resource type.
+
+The first call is during the planning phase, before Terraform prints out a
+diff to the user for confirmation. Because no changes at all have been applied
+at that point, the given **Configuration** may contain unknown values as
+placeholders for the results of expressions that derive from unknown values
+of other resource instances. The result of this initial call is the
+**Initial Planned State**.
+
+If the user accepts the plan, Terraform will call `PlanResourceChange` a
+second time during the apply step, and that call is guaranteed to have a
+wholly-known **Configuration** with any values from upstream dependencies
+taken into account already. The result of this second call is the
+**Final Planned State**.
+
+Terraform Core compares the final with the initial planned state, enforcing
+the following additional constraints along with those listed above:
+
+* Any attribute that had a known value in the **Initial Planned State** must
+  have an identical value in the **Final Planned State**.
+
+* Any attribute that had an unknown value in the **Initial Planned State** may
+  either remain unknown in the second _or_ take on any known value that
+  conforms to the unknown value's type constraint.
+
+The **Final Planned State** is what passes to `ApplyResourceChange`, as
+described in the following section.
+
+### ApplyResourceChange
+
+The `ApplyResourceChange` function is responsible for making calls into the
+remote system to make remote objects match the **Final Planned State**. During
+that operation, the provider should decide on final values for any attributes
+that were left unknown in the **Final Planned State**, and thus produce the
+**New State** object.
+
+`ApplyResourceChange` also receives the **Prior State** so that it can use it
+to potentially implement more "surgical" changes to particular parts of
+the remote objects by detecting portions that are unchanged, in cases where the
+remote API supports partial-update operations.
+
+The **New State** object returned from the provider must meet the following
+constraints:
+
+* Any attribute that had a known value in the **Final Planned State** must have
+  an identical value in the new state. In particular, if the remote API
+  returned a different serialization of the same value then the provider must
+  preserve the form the user wrote in the configuration, and _must not_ return
+  the normalized form produced by the provider.
+
+* Any attribute that had an unknown value in the **Final Planned State** must
+  take on a known value whose type conforms to the type constraint of the
+  unknown value. No unknown values are permitted in the **New State**.
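+
+A hedged sketch of an apply step that satisfies these constraints, again using
+simplified placeholder types rather than the real protocol messages;
+`remoteResult` stands in for whatever values the remote API returned:
+
+```go
+package main
+
+import (
+    "errors"
+    "fmt"
+)
+
+// unknownVal is a hypothetical marker for values left unknown in the plan.
+type unknownVal struct{}
+
+// applyResourceChange carries every known planned value through unchanged and
+// replaces every unknown value with a concrete result, so that the returned
+// new state contains no unknown values at all.
+func applyResourceChange(planned, remoteResult map[string]any) (map[string]any, error) {
+    newState := make(map[string]any, len(planned))
+    for name, plannedVal := range planned {
+        if _, isUnknown := plannedVal.(unknownVal); !isUnknown {
+            // Known in the final planned state: must be identical in the new state.
+            newState[name] = plannedVal
+            continue
+        }
+        resolved, ok := remoteResult[name]
+        if !ok {
+            return nil, errors.New(name + ": unknown value was never resolved")
+        }
+        newState[name] = resolved
+    }
+    return newState, nil
+}
+
+func main() {
+    planned := map[string]any{"name": "web", "id": unknownVal{}}
+    remote := map[string]any{"id": "i-456"} // hypothetical result of the create call
+    fmt.Println(applyResourceChange(planned, remote))
+}
+```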
+
+After calling `ApplyResourceChange` for each resource instance in the plan,
+and dealing with any other bookkeeping to return the results to the user,
+a single Terraform run is complete. Terraform Core saves the **New State**
+in a state snapshot for the entire configuration, so it'll be preserved for
+use on the next run.
+
+When the user subsequently runs Terraform again, the **New State** becomes
+the **Previous Run State** verbatim, and passes into `UpgradeResourceState`.
+
+### UpgradeResourceState
+
+Because the state values for a particular resource instance persist in a
+saved state snapshot from one run to the next, Terraform Core must deal with
+the possibility that the user has upgraded to a newer version of the provider
+since the last run, and that the new provider version has an incompatible
+schema for the relevant resource type.
+
+Terraform Core therefore begins by calling `UpgradeResourceState` and passing
+the **Previous Run State** in a _raw_ form, which in current protocol versions
+is the raw JSON data structure as it was stored in the state snapshot. Terraform
+Core doesn't have access to the previous schema versions for a provider's
+resource types, so the provider itself must handle the data decoding in this
+upgrade function.
+
+The provider can then use whatever logic is appropriate to update the shape
+of the data to conform to the current schema for the resource type. Although
+Terraform Core has no way to enforce it, a provider should only change the
+shape of the data structure and should _not_ change the meaning of the data.
+In particular, it should not try to update the state data to capture any
+changes made to the corresponding remote object outside of Terraform.
+
+This function then returns the **Upgraded State**, which captures the same
+information as the **Previous Run State** but does so in a way that conforms
+to the current version of the resource type schema, which therefore allows
+Terraform Core to interact with the data fully for subsequent steps.
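+
+The general shape of such an upgrade can be sketched as follows. The attribute
+names and the old and new schema shapes here are entirely hypothetical; the
+point is only that the data is decoded, reshaped, and re-encoded without
+changing its meaning:
+
+```go
+package main
+
+import (
+    "encoding/json"
+    "fmt"
+    "strings"
+)
+
+// oldStateV0 is a hypothetical earlier shape of the state, where the address
+// was stored as a single "host:port" string.
+type oldStateV0 struct {
+    Addr string `json:"addr"`
+}
+
+// stateV1 is the hypothetical current shape, which splits the address into
+// separate host and port attributes.
+type stateV1 struct {
+    Host string `json:"host"`
+    Port string `json:"port"`
+}
+
+// upgradeResourceState reshapes the raw previous-run state to the current
+// schema, changing only the shape of the data and not its meaning.
+func upgradeResourceState(raw []byte) ([]byte, error) {
+    var old oldStateV0
+    if err := json.Unmarshal(raw, &old); err != nil {
+        return nil, err
+    }
+    host, port, _ := strings.Cut(old.Addr, ":")
+    return json.Marshal(stateV1{Host: host, Port: port})
+}
+
+func main() {
+    upgraded, err := upgradeResourceState([]byte(`{"addr":"example.com:443"}`))
+    fmt.Println(string(upgraded), err)
+}
+```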
+
+### ReadResource
+
+Although Terraform typically expects to have exclusive control over any remote
+object that is bound to a resource instance, in practice users may make changes
+to those objects outside of Terraform, causing Terraform's records of the
+object to become stale.
+
+The `ReadResource` function asks the provider to make a best effort to detect
+any such external changes and describe them so that Terraform Core can use
+an up-to-date **Prior State** as the input to the next `PlanResourceChange`
+call.
+
+This is always a best effort operation because there are various reasons why
+a provider might not be able to detect certain changes. For example:
+* Some remote objects have write-only attributes, which means that there is
+  no way to determine what value is currently stored in the remote system.
+* There may be new features of the underlying API which the current provider
+  version doesn't know how to ask about.
+
+Terraform Core expects a provider to carefully distinguish between the
+following two situations for each attribute:
+* **Normalization**: the remote API has returned some data in a different form
+  than was recorded in the **Previous Run State**, but the meaning is unchanged.
+
+    In this case, the provider should return the exact value from the
+    **Previous Run State**, thereby preserving the value as it was written by
+    the user in the configuration and thus avoiding unwanted cascading changes
+    elsewhere in the configuration.
+* **Drift**: the remote API returned data that is materially different from
+  what was recorded in the **Previous Run State**, meaning that the remote
+  system's behavior no longer matches what the configuration previously
+  requested.
+
+    In this case, the provider should return the value from the remote system,
+    thereby discarding the value from the **Previous Run State**. When a
+    provider does this, Terraform _may_ report it to the user as a change
+    made outside of Terraform, if Terraform Core determined that the detected
+    change was a possible cause of another planned action for a downstream
+    resource instance.
+
+This operation returns the **Prior State** to use for the next call to
+`PlanResourceChange`, thus completing the circle and beginning this process
+over again.
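+
+The normalization-versus-drift decision can be sketched for a single attribute
+as follows. Comparing decoded JSON documents is just one hypothetical way a
+provider might decide that two serializations mean the same thing:
+
+```go
+package main
+
+import (
+    "encoding/json"
+    "fmt"
+    "reflect"
+)
+
+// refreshAttr decides what to record after reading the remote system: if the
+// remote value differs only in serialization (normalization), keep the
+// previous-run value so the user's configured form is preserved; if it
+// differs in meaning (drift), record the remote value instead.
+func refreshAttr(previousRun, remote string) string {
+    var prevDoc, remoteDoc any
+    if json.Unmarshal([]byte(previousRun), &prevDoc) == nil &&
+        json.Unmarshal([]byte(remote), &remoteDoc) == nil &&
+        reflect.DeepEqual(prevDoc, remoteDoc) {
+        return previousRun // normalization only: keep the stored form
+    }
+    return remote // drift: report what the remote system actually has
+}
+
+func main() {
+    // Same JSON meaning, different whitespace: the previous-run value is kept.
+    fmt.Println(refreshAttr(`{"a": 1}`, `{"a":1}`))
+    // Different meaning: the remote value wins and may be reported as drift.
+    fmt.Println(refreshAttr(`{"a": 1}`, `{"a":2}`))
+}
+```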
+
+## Handling of Nested Blocks in Configuration
+
+Nested blocks are a configuration-only construct and so the number of blocks
+cannot be changed on the fly during planning or during apply: each block
+represented in the configuration must have a corresponding nested object in
+the planned new state and new state, or Terraform Core will raise an error.
+
+If a provider wishes to report about new instances of the sub-object type
+represented by nested blocks that are created implicitly during the apply
+operation -- for example, if a compute instance gets a default network
+interface created when none are explicitly specified -- this must be done via
+separate "computed" attributes alongside the nested blocks. This could be list
+or map of objects that includes a mixture of the objects described by the
+nested blocks in the configuration and any additional objects created implicitly
+by the remote system.
+
+Provider protocol version 6 introduced the idea of structural-typed
+attributes, which combine attribute-style syntax with nested-block-style
+interpretation. Providers that use structural-typed attributes must follow
+the same rules as for a nested block type of the same nesting mode.
+
+## Import Behavior
+
+The main resource instance change lifecycle is concerned with objects whose
+entire lifecycle is driven through Terraform, including the initial creation
+of the object.
+
+As an aid to those who are adopting Terraform as a replacement for existing
+processes or software, Terraform also supports adopting pre-existing objects
+to bring them under Terraform's management without needing to recreate them
+first.
+
+When using this facility, the user provides the address of the resource
+instance they wish to bind the existing object to, and a string representation
+of the identifier of the existing object to be imported in a syntax defined
+by the provider on a per-resource-type basis, which we'll call the
+**Import ID**.
+
+The import process trades the user's **Import ID** for a special
+**Import Stub State**, which behaves as a placeholder for the
+**Previous Run State**, as if a previous Terraform run had created the
+object.
+
+### ImportResourceState
+
+The `ImportResourceState` operation takes the user's given **Import ID** and
+uses it to verify that the given object exists and, if so, to retrieve enough
+data about it to produce the **Import Stub State**.
+
+Terraform Core will always pass the returned **Import Stub State** to the
+normal `ReadResource` operation after `ImportResourceState` returns it, so
+in practice the provider may populate only the minimal subset of attributes
+that `ReadResource` will need to do its work, letting the normal function
+deal with populating the rest of the data to match what is currently set in
+the remote system.
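+
+A minimal sketch of that approach, using simplified placeholder types rather
+than the real protocol messages; the `exists` lookup is a hypothetical stand-in
+for whatever remote API call the provider would make to verify the object:
+
+```go
+package main
+
+import (
+    "errors"
+    "fmt"
+)
+
+// importedResource is a simplified stand-in for the protocol's
+// ImportedResource message: a type name plus a flat attribute map.
+type importedResource struct {
+    TypeName string
+    State    map[string]any
+}
+
+// importResourceState verifies the given Import ID and returns an Import Stub
+// State containing only the ID; the follow-up ReadResource call is expected
+// to populate the rest of the attributes from the remote system.
+func importResourceState(typeName, id string, exists func(string) bool) (importedResource, error) {
+    if !exists(id) {
+        return importedResource{}, errors.New("no object found with ID " + id)
+    }
+    return importedResource{
+        TypeName: typeName,
+        State:    map[string]any{"id": id},
+    }, nil
+}
+
+func main() {
+    exists := func(id string) bool { return id == "i-123" } // hypothetical lookup
+    fmt.Println(importResourceState("example_instance", "i-123", exists))
+}
+```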
+
+For the same reasons that `ReadResource` is only a _best effort_ at detecting
+changes outside of Terraform, a provider may not be able to fully support
+importing for all resource types. In that case, the provider developer must
+choose between the following options:
+
+* Perform only a partial import: the provider may choose to leave certain
+  attributes set to `null` in the **Prior State** after both
+  `ImportResourceState` and the subsequent `ReadResource` have completed.
+
+    In this case, the user can provide the missing value in the configuration
+    and thus cause the next `PlanResourceChange` to plan to update that value
+    to match the configuration. The provider's `PlanResourceChange` function
+    must be ready to deal with the attribute being `null` in the
+    **Prior State** and handle that appropriately.
+* Return an error explaining why importing isn't possible.
+
+    This is a last resort because of course it will then leave the user unable
+    to bring the existing object under Terraform's management. However, if a
+    particular object's design doesn't suit importing then it can be a better
+    user experience to be clear and honest that the user must replace the object
+    as part of adopting Terraform, rather than to perform an import that will
+    leave the object in a situation where Terraform cannot meaningfully manage
+    it.
diff --git a/v1.4.7/docs/unicode.md b/v1.4.7/docs/unicode.md
new file mode 100644
index 0000000..efcb442
--- /dev/null
+++ b/v1.4.7/docs/unicode.md
@@ -0,0 +1,142 @@
+# How Terraform Uses Unicode
+
+The Terraform language uses the Unicode standards as the basis of various
+different features. The Unicode Consortium publishes new versions of those
+standards periodically, and we aim to adopt those new versions in new
+minor releases of Terraform in order to support additional characters added
+in those new versions.
+
+Unfortunately, because those features are implemented using a number of
+external libraries, adopting a new version of Unicode is not as simple as
+just updating a version number somewhere. This document aims to describe the
+various steps required to adopt a new version of Unicode in Terraform.
+
+We typically aim to be consistent across all of these dependencies as to which
+major version of Unicode we currently conform to. The usual initial driver
+for a Unicode upgrade is switching to a new version of the Go runtime library
+which itself uses a new version of Unicode, because Go itself does not provide
+any way to select Unicode versions independently from Go versions. Therefore
+we typically upgrade to a new Unicode version only in conjunction with
+upgrading to a new Go version.
+
+## Unicode tables in the Go standard library
+
+Several Terraform language features are implemented in terms of functions in
+[the Go `strings` package](https://pkg.go.dev/strings),
+[the Go `unicode` package](https://pkg.go.dev/unicode), and other supporting
+packages in the Go standard library.
+
+The Go team maintains the Go standard library features to support a particular
+Unicode version for each Go version. The specific Unicode version for a
+particular Go version is available in
+[`unicode.Version`](https://pkg.go.dev/unicode#Version).
+
+We adopt a new version of Go by editing the `.go-version` file in the root
+of this repository. Although it's typically possible to build Terraform with
+other versions of Go, that file documents the version we intend to use for
+official releases and thus the primary version we use for development and
+testing. Adopting a new Go version typically also implies other behavior
+changes inherited from the Go standard library, so it's important to review the
+relevant version changelog(s) to note any behavior changes we'll need to pass
+on to our own users via the Terraform changelog.
+
+The other subsystems described below should always be set up to match
+`unicode.Version`. In some cases those libraries automatically try to align
+themselves with `unicode.Version` and generate an error if they cannot, but
+that isn't true of all of them.
+
+## Unicode Identifier Rules in HCL
+
+_Identifier and Pattern Syntax_ (TR31) is a Unicode standards annex which
+describes a set of rules for tokenizing "identifiers", such as variable names
+in a programming language.
+
+HCL uses a superset of that specification for its own identifier tokenization
+rules, and so it includes some code derived from the TR31 data tables that
+describe which characters belong to the "ID_Start" and "ID_Continue" classes.
+
+Since Terraform is the primary user of HCL, it's typically Terraform's adoption
+of a new Unicode version which drives HCL to adopt one. To update the Unicode
+tables to a new version:
+* Edit `hclsyntax/generate.go`'s line which runs `unicode2ragel.rb` to specify
+  the URL of the `DerivedCoreProperties.txt` data file for the intended Unicode
+  version.
+* Run `go generate ./hclsyntax` to run the generation code to update both
+  `unicode_derived.rl` and, indirectly, `scan_tokens.go`. (You will need both
+  a Ruby interpreter and the Ragel state machine compiler on your system in
+  order to complete this step.)
+* Run all the tests to check for regressions: `go test ./...`
+* If all looks good, commit all of the changes and open a PR to HCL.
+* Once that PR is merged and released, update Terraform to use the new version
+  of HCL.
+
+## Unicode Text Segmentation
+
+_Text Segmentation_ (TR29) is a Unicode standards annex which describes
+algorithms for breaking strings into smaller units such as sentences, words,
+and grapheme clusters.
+
+Several Terraform language features make use of the _grapheme cluster_
+algorithm in particular, because it provides a practical definition of
+individual visible characters, taking into account combining sequences such
+as Latin letters with separate diacritics or Emoji characters with gender
+presentation and skin tone modifiers.
+
+The text segmentation algorithms rely on supplementary data tables that are
+not part of the core set encoded in the Go standard library's `unicode`
+packages, and so instead we rely on the third-party module
+[`github.com/apparentlymart/go-textseg`](http://pkg.go.dev/github.com/apparentlymart/go-textseg)
+to provide those tables and a Go implementation of the grapheme cluster
+segmentation algorithm in terms of the tables.
+
+The `go-textseg` library is designed to allow calling programs to potentially
+support multiple Unicode versions at once, by offering a separate module major
+version for each Unicode major version. For example, the full module path for
+the Unicode 13 implementation is `github.com/apparentlymart/go-textseg/v13`.
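+
+As a rough example, counting grapheme clusters with that module looks
+something like the following sketch (assuming the Unicode 13 major version of
+the module):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    "github.com/apparentlymart/go-textseg/v13/textseg"
+)
+
+func main() {
+    // "e" followed by a combining acute accent is two code points but a
+    // single grapheme cluster, which is what features like length() and
+    // substr() are expected to count.
+    s := "e\u0301"
+    n, err := textseg.TokenCount([]byte(s), textseg.ScanGraphemeClusters)
+    if err != nil {
+        panic(err)
+    }
+    fmt.Printf("%d code points, %d grapheme cluster(s)\n", len([]rune(s)), n)
+}
+```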
+
+If that external library doesn't yet have support for the Unicode version we
+intend to adopt then we'll first need to open a pull request to contribute
+support for that new version. The details of how to do this will unfortunately vary
+depending on how significantly the Text Segmentation annex has changed since
+the most recently-supported Unicode version, but in many cases it can be
+just a matter of editing that library's `make_tables.go`, `make_test_tables.go`,
+and `generate.go` files to point to the URLs where the Unicode Consortium
+published new tables and then running `go generate` to rebuild the files derived
+from those data sources. As long as the new Unicode version has only changed
+the data tables and not also changed the algorithm, often no further changes
+are needed.
+
+Once a new Unicode version is included, the maintainer of that library will
+typically publish a new major version that we can depend on. Two different
+codebases included in Terraform both depend directly on the `go-textseg` module
+for parts of their functionality:
+
+* [`hashicorp/hcl`](https://github.com/hashicorp/hcl) uses text
+  segmentation as part of producing visual column offsets in source ranges
+  returned by the tokenizer and parser. Terraform in turn uses that library
+  for the underlying syntax of the Terraform language, and so it passes on
+  those source ranges to the end-user as part of diagnostic messages.
+* The third-party module [`github.com/zclconf/go-cty`](https://github.com/zclconf/go-cty)
+  provides several of the Terraform language's built-in functions, including
+  functions like `substr` and `length` which need to count grapheme clusters
+  as part of their implementation.
+
+As part of upgrading Terraform's Unicode support we therefore typically also
+open pull requests against these other codebases, and then adopt the new
+versions that result. Terraform work often drives the adoption of new Unicode
+versions in those codebases, with other dependencies following along when they
+next upgrade.
+
+At the time of writing Terraform itself doesn't _directly_ depend on
+`go-textseg`, and so there are no specific changes required in this Terraform
+codebase aside from the `go.sum` file update that always follows from
+changes to transitive dependencies.
+
+The `go-textseg` library does have a separate "auto-version" mechanism which
+selects an appropriate module version based on the current Go language version,
+but neither HCL nor cty uses it. The auto-version package will not compile for
+any Go version that doesn't have a corresponding Unicode version explicitly
+recorded in that repository, and that would be too harsh a constraint for
+libraries like HCL which have many callers, many of which don't care strongly
+about Unicode support and may wish to upgrade Go before the text segmentation
+library has been updated.
diff --git a/v1.4.7/experiments.go b/v1.4.7/experiments.go
new file mode 100644
index 0000000..f28d27e
--- /dev/null
+++ b/v1.4.7/experiments.go
@@ -0,0 +1,24 @@
+package main
+
+// experimentsAllowed can be set to any non-empty string using Go linker
+// arguments in order to enable the use of experimental features for a
+// particular Terraform build:
+//
+//	go install -ldflags="-X 'main.experimentsAllowed=yes'"
+//
+// By default this variable is initialized as empty, in which case
+// experimental features are not available.
+//
+// The Terraform release process should arrange for this variable to be
+// set for alpha releases and development snapshots, but _not_ for
+// betas, release candidates, or final releases.
+//
+// (NOTE: Some experimental features predate the rule that experiments
+// are available only for alpha/dev builds, and so intentionally do not
+// make use of this setting to avoid retracting a previously-documented
+// open experiment.)
+var experimentsAllowed string
+
+func ExperimentsAllowed() bool {
+	return experimentsAllowed != ""
+}
diff --git a/v1.4.7/go.mod b/v1.4.7/go.mod
new file mode 100644
index 0000000..121421a
--- /dev/null
+++ b/v1.4.7/go.mod
@@ -0,0 +1,195 @@
+module github.com/hashicorp/terraform
+
+require (
+	cloud.google.com/go/kms v1.6.0
+	cloud.google.com/go/storage v1.28.0
+	github.com/Azure/azure-sdk-for-go v59.2.0+incompatible
+	github.com/Azure/go-autorest/autorest v0.11.24
+	github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2
+	github.com/agext/levenshtein v1.2.3
+	github.com/aliyun/alibaba-cloud-sdk-go v1.61.1501
+	github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70
+	github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible
+	github.com/apparentlymart/go-cidr v1.1.0
+	github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0
+	github.com/apparentlymart/go-shquot v0.0.1
+	github.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13
+	github.com/apparentlymart/go-versions v1.0.1
+	github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2
+	github.com/aws/aws-sdk-go v1.44.122
+	github.com/bgentry/speakeasy v0.1.0
+	github.com/bmatcuk/doublestar v1.1.5
+	github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e
+	github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
+	github.com/davecgh/go-spew v1.1.1
+	github.com/dylanmei/winrmtest v0.0.0-20210303004826-fbc9ae56efb6
+	github.com/go-test/deep v1.0.3
+	github.com/golang/mock v1.6.0
+	github.com/google/go-cmp v0.5.9
+	github.com/google/uuid v1.3.0
+	github.com/hashicorp/aws-sdk-go-base v0.7.1
+	github.com/hashicorp/consul/api v1.9.1
+	github.com/hashicorp/consul/sdk v0.8.0
+	github.com/hashicorp/errwrap v1.1.0
+	github.com/hashicorp/go-azure-helpers v0.43.0
+	github.com/hashicorp/go-checkpoint v0.5.0
+	github.com/hashicorp/go-cleanhttp v0.5.2
+	github.com/hashicorp/go-getter v1.7.0
+	github.com/hashicorp/go-hclog v0.15.0
+	github.com/hashicorp/go-multierror v1.1.1
+	github.com/hashicorp/go-plugin v1.4.3
+	github.com/hashicorp/go-retryablehttp v0.7.2
+	github.com/hashicorp/go-tfe v1.21.0
+	github.com/hashicorp/go-uuid v1.0.3
+	github.com/hashicorp/go-version v1.6.0
+	github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f
+	github.com/hashicorp/hcl/v2 v2.16.2
+	github.com/hashicorp/jsonapi v0.0.0-20210826224640-ee7dae0fb22d
+	github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2
+	github.com/hashicorp/terraform-registry-address v0.0.0-20220623143253-7d51757b572c
+	github.com/hashicorp/terraform-svchost v0.1.0
+	github.com/jmespath/go-jmespath v0.4.0
+	github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0
+	github.com/lib/pq v1.10.3
+	github.com/manicminer/hamilton v0.44.0
+	github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88
+	github.com/mattn/go-isatty v0.0.16
+	github.com/mattn/go-shellwords v1.0.4
+	github.com/mitchellh/cli v1.1.5
+	github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
+	github.com/mitchellh/copystructure v1.2.0
+	github.com/mitchellh/go-homedir v1.1.0
+	github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb
+	github.com/mitchellh/go-wordwrap v1.0.1
+	github.com/mitchellh/gox v1.0.1
+	github.com/mitchellh/mapstructure v1.1.2
+	github.com/mitchellh/reflectwalk v1.0.2
+	github.com/nishanths/exhaustive v0.7.11
+	github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db
+	github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23
+	github.com/pkg/errors v0.9.1
+	github.com/posener/complete v1.2.3
+	github.com/spf13/afero v1.2.2
+	github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.588
+	github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts v1.0.588
+	github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233
+	github.com/tencentyun/cos-go-sdk-v5 v0.7.29
+	github.com/tombuildsstuff/giovanni v0.15.1
+	github.com/xanzy/ssh-agent v0.3.1
+	github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557
+	github.com/zclconf/go-cty v1.12.1
+	github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b
+	github.com/zclconf/go-cty-yaml v1.0.3
+	golang.org/x/crypto v0.1.0
+	golang.org/x/mod v0.8.0
+	golang.org/x/net v0.6.0
+	golang.org/x/oauth2 v0.4.0
+	golang.org/x/sys v0.5.0
+	golang.org/x/term v0.5.0
+	golang.org/x/text v0.8.0
+	golang.org/x/tools v0.6.0
+	golang.org/x/tools/cmd/cover v0.1.0-deprecated
+	google.golang.org/api v0.102.0
+	google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c
+	google.golang.org/grpc v1.50.1
+	google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0
+	google.golang.org/protobuf v1.28.1
+	honnef.co/go/tools v0.3.0
+	k8s.io/api v0.23.4
+	k8s.io/apimachinery v0.23.4
+	k8s.io/client-go v0.23.4
+	k8s.io/utils v0.0.0-20211116205334-6203023598ed
+)
+
+require (
+	cloud.google.com/go v0.105.0 // indirect
+	cloud.google.com/go/compute v1.12.1 // indirect
+	cloud.google.com/go/compute/metadata v0.2.1 // indirect
+	cloud.google.com/go/iam v0.6.0 // indirect
+	github.com/Azure/go-autorest v14.2.0+incompatible // indirect
+	github.com/Azure/go-autorest/autorest/adal v0.9.18 // indirect
+	github.com/Azure/go-autorest/autorest/azure/cli v0.4.4 // indirect
+	github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
+	github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
+	github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
+	github.com/Azure/go-autorest/logger v0.2.1 // indirect
+	github.com/Azure/go-autorest/tracing v0.6.0 // indirect
+	github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c // indirect
+	github.com/BurntSushi/toml v0.4.1 // indirect
+	github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d // indirect
+	github.com/Masterminds/goutils v1.1.1 // indirect
+	github.com/Masterminds/semver/v3 v3.1.1 // indirect
+	github.com/Masterminds/sprig/v3 v3.2.2 // indirect
+	github.com/Microsoft/go-winio v0.5.0 // indirect
+	github.com/antchfx/xmlquery v1.3.5 // indirect
+	github.com/antchfx/xpath v1.1.10 // indirect
+	github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
+	github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da // indirect
+	github.com/armon/go-radix v1.0.0 // indirect
+	github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect
+	github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
+	github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d // indirect
+	github.com/creack/pty v1.1.18 // indirect
+	github.com/dimchansky/utfbom v1.1.1 // indirect
+	github.com/dylanmei/iso8601 v0.1.0 // indirect
+	github.com/fatih/color v1.13.0 // indirect
+	github.com/go-logr/logr v1.2.0 // indirect
+	github.com/gofrs/uuid v4.0.0+incompatible // indirect
+	github.com/gogo/protobuf v1.3.2 // indirect
+	github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
+	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
+	github.com/golang/protobuf v1.5.2 // indirect
+	github.com/google/go-querystring v1.1.0 // indirect
+	github.com/google/gofuzz v1.1.0 // indirect
+	github.com/googleapis/enterprise-certificate-proxy v0.2.0 // indirect
+	github.com/googleapis/gax-go/v2 v2.6.0 // indirect
+	github.com/googleapis/gnostic v0.5.5 // indirect
+	github.com/hashicorp/go-immutable-radix v1.0.0 // indirect
+	github.com/hashicorp/go-msgpack v0.5.4 // indirect
+	github.com/hashicorp/go-rootcerts v1.0.2 // indirect
+	github.com/hashicorp/go-safetemp v1.0.0 // indirect
+	github.com/hashicorp/go-slug v0.11.0 // indirect
+	github.com/hashicorp/golang-lru v0.5.1 // indirect
+	github.com/hashicorp/serf v0.9.5 // indirect
+	github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect
+	github.com/huandu/xstrings v1.3.3 // indirect
+	github.com/imdario/mergo v0.3.13 // indirect
+	github.com/json-iterator/go v1.1.12 // indirect
+	github.com/klauspost/compress v1.15.11 // indirect
+	github.com/manicminer/hamilton-autorest v0.2.0 // indirect
+	github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786 // indirect
+	github.com/mattn/go-colorable v0.1.13 // indirect
+	github.com/mitchellh/go-testing-interface v1.14.1 // indirect
+	github.com/mitchellh/iochan v1.0.0 // indirect
+	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
+	github.com/modern-go/reflect2 v1.0.2 // indirect
+	github.com/mozillazg/go-httpheader v0.3.0 // indirect
+	github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect
+	github.com/oklog/run v1.0.0 // indirect
+	github.com/satori/go.uuid v1.2.0 // indirect
+	github.com/sergi/go-diff v1.2.0 // indirect
+	github.com/shopspring/decimal v1.3.1 // indirect
+	github.com/spf13/cast v1.5.0 // indirect
+	github.com/spf13/pflag v1.0.5 // indirect
+	github.com/stretchr/objx v0.5.0 // indirect
+	github.com/ulikunitz/xz v0.5.10 // indirect
+	github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect
+	github.com/vmihailenco/tagparser v0.1.1 // indirect
+	go.opencensus.io v0.23.0 // indirect
+	golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e // indirect
+	golang.org/x/time v0.3.0 // indirect
+	golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
+	google.golang.org/appengine v1.6.7 // indirect
+	gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
+	gopkg.in/inf.v0 v0.9.1 // indirect
+	gopkg.in/ini.v1 v1.66.2 // indirect
+	gopkg.in/yaml.v2 v2.4.0 // indirect
+	gopkg.in/yaml.v3 v3.0.1 // indirect
+	k8s.io/klog/v2 v2.30.0 // indirect
+	k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
+	sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 // indirect
+	sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
+	sigs.k8s.io/yaml v1.2.0 // indirect
+)
+
+go 1.18
diff --git a/v1.4.7/go.sum b/v1.4.7/go.sum
new file mode 100644
index 0000000..5a3a394
--- /dev/null
+++ b/v1.4.7/go.sum
@@ -0,0 +1,1447 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
+cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
+cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
+cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
+cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
+cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
+cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
+cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
+cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
+cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
+cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
+cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
+cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
+cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
+cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
+cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
+cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
+cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
+cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
+cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
+cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
+cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
+cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc=
+cloud.google.com/go v0.102.1/go.mod h1:XZ77E9qnTEnrgEOvr4xzfdX5TRo7fB4T2F4O6+34hIU=
+cloud.google.com/go v0.104.0/go.mod h1:OO6xxXdJyvuJPcEPBLN9BJPD+jep5G1+2U5B5gkRYtA=
+cloud.google.com/go v0.105.0 h1:DNtEKRBAAzeS4KyIory52wWHuClNaXJ5x1F7xa4q+5Y=
+cloud.google.com/go v0.105.0/go.mod h1:PrLgOJNe5nfE9UMxKxgXj4mD3voiP+YQ6gdt6KMFOKM=
+cloud.google.com/go/aiplatform v1.22.0/go.mod h1:ig5Nct50bZlzV6NvKaTwmplLLddFx0YReh9WfTO5jKw=
+cloud.google.com/go/aiplatform v1.24.0/go.mod h1:67UUvRBKG6GTayHKV8DBv2RtR1t93YRu5B1P3x99mYY=
+cloud.google.com/go/analytics v0.11.0/go.mod h1:DjEWCu41bVbYcKyvlws9Er60YE4a//bK6mnhWvQeFNI=
+cloud.google.com/go/analytics v0.12.0/go.mod h1:gkfj9h6XRf9+TS4bmuhPEShsh3hH8PAZzm/41OOhQd4=
+cloud.google.com/go/area120 v0.5.0/go.mod h1:DE/n4mp+iqVyvxHN41Vf1CR602GiHQjFPusMFW6bGR4=
+cloud.google.com/go/area120 v0.6.0/go.mod h1:39yFJqWVgm0UZqWTOdqkLhjoC7uFfgXRC8g/ZegeAh0=
+cloud.google.com/go/artifactregistry v1.6.0/go.mod h1:IYt0oBPSAGYj/kprzsBjZ/4LnG/zOcHyFHjWPCi6SAQ=
+cloud.google.com/go/artifactregistry v1.7.0/go.mod h1:mqTOFOnGZx8EtSqK/ZWcsm/4U8B77rbcLP6ruDU2Ixk=
+cloud.google.com/go/asset v1.5.0/go.mod h1:5mfs8UvcM5wHhqtSv8J1CtxxaQq3AdBxxQi2jGW/K4o=
+cloud.google.com/go/asset v1.7.0/go.mod h1:YbENsRK4+xTiL+Ofoj5Ckf+O17kJtgp3Y3nn4uzZz5s=
+cloud.google.com/go/asset v1.8.0/go.mod h1:mUNGKhiqIdbr8X7KNayoYvyc4HbbFO9URsjbytpUaW0=
+cloud.google.com/go/assuredworkloads v1.5.0/go.mod h1:n8HOZ6pff6re5KYfBXcFvSViQjDwxFkAkmUFffJRbbY=
+cloud.google.com/go/assuredworkloads v1.6.0/go.mod h1:yo2YOk37Yc89Rsd5QMVECvjaMKymF9OP+QXWlKXUkXw=
+cloud.google.com/go/assuredworkloads v1.7.0/go.mod h1:z/736/oNmtGAyU47reJgGN+KVoYoxeLBoj4XkKYscNI=
+cloud.google.com/go/automl v1.5.0/go.mod h1:34EjfoFGMZ5sgJ9EoLsRtdPSNZLcfflJR39VbVNS2M0=
+cloud.google.com/go/automl v1.6.0/go.mod h1:ugf8a6Fx+zP0D59WLhqgTDsQI9w07o64uf/Is3Nh5p8=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
+cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
+cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
+cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
+cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
+cloud.google.com/go/bigquery v1.42.0/go.mod h1:8dRTJxhtG+vwBKzE5OseQn/hiydoQN3EedCaOdYmxRA=
+cloud.google.com/go/billing v1.4.0/go.mod h1:g9IdKBEFlItS8bTtlrZdVLWSSdSyFUZKXNS02zKMOZY=
+cloud.google.com/go/billing v1.5.0/go.mod h1:mztb1tBc3QekhjSgmpf/CV4LzWXLzCArwpLmP2Gm88s=
+cloud.google.com/go/binaryauthorization v1.1.0/go.mod h1:xwnoWu3Y84jbuHa0zd526MJYmtnVXn0syOjaJgy4+dM=
+cloud.google.com/go/binaryauthorization v1.2.0/go.mod h1:86WKkJHtRcv5ViNABtYMhhNWRrD1Vpi//uKEy7aYEfI=
+cloud.google.com/go/cloudtasks v1.5.0/go.mod h1:fD92REy1x5woxkKEkLdvavGnPJGEn8Uic9nWuLzqCpY=
+cloud.google.com/go/cloudtasks v1.6.0/go.mod h1:C6Io+sxuke9/KNRkbQpihnW93SWDU3uXt92nu85HkYI=
+cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
+cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
+cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
+cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s=
+cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU=
+cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U=
+cloud.google.com/go/compute v1.10.0/go.mod h1:ER5CLbMxl90o2jtNbGSbtfOpQKR0t15FOtRsugnLrlU=
+cloud.google.com/go/compute v1.12.1 h1:gKVJMEyqV5c/UnpzjjQbo3Rjvvqpr9B1DFSbJC4OXr0=
+cloud.google.com/go/compute v1.12.1/go.mod h1:e8yNOBcBONZU1vJKCvCoDw/4JQsA0dpM4x/6PIIOocU=
+cloud.google.com/go/compute/metadata v0.2.1 h1:efOwf5ymceDhK6PKMnnrTHP4pppY5L22mle96M1yP48=
+cloud.google.com/go/compute/metadata v0.2.1/go.mod h1:jgHgmJd2RKBGzXqF5LR2EZMGxBkeanZ9wwa75XHJgOM=
+cloud.google.com/go/containeranalysis v0.5.1/go.mod h1:1D92jd8gRR/c0fGMlymRgxWD3Qw9C1ff6/T7mLgVL8I=
+cloud.google.com/go/containeranalysis v0.6.0/go.mod h1:HEJoiEIu+lEXM+k7+qLCci0h33lX3ZqoYFdmPcoO7s4=
+cloud.google.com/go/datacatalog v1.3.0/go.mod h1:g9svFY6tuR+j+hrTw3J2dNcmI0dzmSiyOzm8kpLq0a0=
+cloud.google.com/go/datacatalog v1.5.0/go.mod h1:M7GPLNQeLfWqeIm3iuiruhPzkt65+Bx8dAKvScX8jvs=
+cloud.google.com/go/datacatalog v1.6.0/go.mod h1:+aEyF8JKg+uXcIdAmmaMUmZ3q1b/lKLtXCmXdnc0lbc=
+cloud.google.com/go/dataflow v0.6.0/go.mod h1:9QwV89cGoxjjSR9/r7eFDqqjtvbKxAK2BaYU6PVk9UM=
+cloud.google.com/go/dataflow v0.7.0/go.mod h1:PX526vb4ijFMesO1o202EaUmouZKBpjHsTlCtB4parQ=
+cloud.google.com/go/dataform v0.3.0/go.mod h1:cj8uNliRlHpa6L3yVhDOBrUXH+BPAO1+KFMQQNSThKo=
+cloud.google.com/go/dataform v0.4.0/go.mod h1:fwV6Y4Ty2yIFL89huYlEkwUPtS7YZinZbzzj5S9FzCE=
+cloud.google.com/go/datalabeling v0.5.0/go.mod h1:TGcJ0G2NzcsXSE/97yWjIZO0bXj0KbVlINXMG9ud42I=
+cloud.google.com/go/datalabeling v0.6.0/go.mod h1:WqdISuk/+WIGeMkpw/1q7bK/tFEZxsrFJOJdY2bXvTQ=
+cloud.google.com/go/dataqna v0.5.0/go.mod h1:90Hyk596ft3zUQ8NkFfvICSIfHFh1Bc7C4cK3vbhkeo=
+cloud.google.com/go/dataqna v0.6.0/go.mod h1:1lqNpM7rqNLVgWBJyk5NF6Uen2PHym0jtVJonplVsDA=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
+cloud.google.com/go/datastream v1.2.0/go.mod h1:i/uTP8/fZwgATHS/XFu0TcNUhuA0twZxxQ3EyCUQMwo=
+cloud.google.com/go/datastream v1.3.0/go.mod h1:cqlOX8xlyYF/uxhiKn6Hbv6WjwPPuI9W2M9SAXwaLLQ=
+cloud.google.com/go/dialogflow v1.15.0/go.mod h1:HbHDWs33WOGJgn6rfzBW1Kv807BE3O1+xGbn59zZWI4=
+cloud.google.com/go/dialogflow v1.16.1/go.mod h1:po6LlzGfK+smoSmTBnbkIZY2w8ffjz/RcGSS+sh1el0=
+cloud.google.com/go/dialogflow v1.17.0/go.mod h1:YNP09C/kXA1aZdBgC/VtXX74G/TKn7XVCcVumTflA+8=
+cloud.google.com/go/documentai v1.7.0/go.mod h1:lJvftZB5NRiFSX4moiye1SMxHx0Bc3x1+p9e/RfXYiU=
+cloud.google.com/go/documentai v1.8.0/go.mod h1:xGHNEB7CtsnySCNrCFdCyyMz44RhFEEX2Q7UD0c5IhU=
+cloud.google.com/go/domains v0.6.0/go.mod h1:T9Rz3GasrpYk6mEGHh4rymIhjlnIuB4ofT1wTxDeT4Y=
+cloud.google.com/go/domains v0.7.0/go.mod h1:PtZeqS1xjnXuRPKE/88Iru/LdfoRyEHYA9nFQf4UKpg=
+cloud.google.com/go/edgecontainer v0.1.0/go.mod h1:WgkZ9tp10bFxqO8BLPqv2LlfmQF1X8lZqwW4r1BTajk=
+cloud.google.com/go/edgecontainer v0.2.0/go.mod h1:RTmLijy+lGpQ7BXuTDa4C4ssxyXT34NIuHIgKuP4s5w=
+cloud.google.com/go/functions v1.6.0/go.mod h1:3H1UA3qiIPRWD7PeZKLvHZ9SaQhR26XIJcC0A5GbvAk=
+cloud.google.com/go/functions v1.7.0/go.mod h1:+d+QBcWM+RsrgZfV9xo6KfA1GlzJfxcfZcRPEhDDfzg=
+cloud.google.com/go/gaming v1.5.0/go.mod h1:ol7rGcxP/qHTRQE/RO4bxkXq+Fix0j6D4LFPzYTIrDM=
+cloud.google.com/go/gaming v1.6.0/go.mod h1:YMU1GEvA39Qt3zWGyAVA9bpYz/yAhTvaQ1t2sK4KPUA=
+cloud.google.com/go/gkeconnect v0.5.0/go.mod h1:c5lsNAg5EwAy7fkqX/+goqFsU1Da/jQFqArp+wGNr/o=
+cloud.google.com/go/gkeconnect v0.6.0/go.mod h1:Mln67KyU/sHJEBY8kFZ0xTeyPtzbq9StAVvEULYK16A=
+cloud.google.com/go/gkehub v0.9.0/go.mod h1:WYHN6WG8w9bXU0hqNxt8rm5uxnk8IH+lPY9J2TV7BK0=
+cloud.google.com/go/gkehub v0.10.0/go.mod h1:UIPwxI0DsrpsVoWpLB0stwKCP+WFVG9+y977wO+hBH0=
+cloud.google.com/go/grafeas v0.2.0/go.mod h1:KhxgtF2hb0P191HlY5besjYm6MqTSTj3LSI+M+ByZHc=
+cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY=
+cloud.google.com/go/iam v0.5.0/go.mod h1:wPU9Vt0P4UmCux7mqtRu6jcpPAb74cP1fh50J3QpkUc=
+cloud.google.com/go/iam v0.6.0 h1:nsqQC88kT5Iwlm4MeNGTpfMWddp6NB/UOLFTH6m1QfQ=
+cloud.google.com/go/iam v0.6.0/go.mod h1:+1AH33ueBne5MzYccyMHtEKqLE4/kJOibtffMHDMFMc=
+cloud.google.com/go/kms v1.6.0 h1:OWRZzrPmOZUzurjI2FBGtgY2mB1WaJkqhw6oIwSj0Yg=
+cloud.google.com/go/kms v1.6.0/go.mod h1:Jjy850yySiasBUDi6KFUwUv2n1+o7QZFyuUJg6OgjA0=
+cloud.google.com/go/language v1.4.0/go.mod h1:F9dRpNFQmJbkaop6g0JhSBXCNlO90e1KWx5iDdxbWic=
+cloud.google.com/go/language v1.6.0/go.mod h1:6dJ8t3B+lUYfStgls25GusK04NLh3eDLQnWM3mdEbhI=
+cloud.google.com/go/lifesciences v0.5.0/go.mod h1:3oIKy8ycWGPUyZDR/8RNnTOYevhaMLqh5vLUXs9zvT8=
+cloud.google.com/go/lifesciences v0.6.0/go.mod h1:ddj6tSX/7BOnhxCSd3ZcETvtNr8NZ6t/iPhY2Tyfu08=
+cloud.google.com/go/longrunning v0.1.1 h1:y50CXG4j0+qvEukslYFBCrzaXX0qpFbBzc3PchSu/LE=
+cloud.google.com/go/mediatranslation v0.5.0/go.mod h1:jGPUhGTybqsPQn91pNXw0xVHfuJ3leR1wj37oU3y1f4=
+cloud.google.com/go/mediatranslation v0.6.0/go.mod h1:hHdBCTYNigsBxshbznuIMFNe5QXEowAuNmmC7h8pu5w=
+cloud.google.com/go/memcache v1.4.0/go.mod h1:rTOfiGZtJX1AaFUrOgsMHX5kAzaTQ8azHiuDoTPzNsE=
+cloud.google.com/go/memcache v1.5.0/go.mod h1:dk3fCK7dVo0cUU2c36jKb4VqKPS22BTkf81Xq617aWM=
+cloud.google.com/go/metastore v1.5.0/go.mod h1:2ZNrDcQwghfdtCwJ33nM0+GrBGlVuh8rakL3vdPY3XY=
+cloud.google.com/go/metastore v1.6.0/go.mod h1:6cyQTls8CWXzk45G55x57DVQ9gWg7RiH65+YgPsNh9s=
+cloud.google.com/go/networkconnectivity v1.4.0/go.mod h1:nOl7YL8odKyAOtzNX73/M5/mGZgqqMeryi6UPZTk/rA=
+cloud.google.com/go/networkconnectivity v1.5.0/go.mod h1:3GzqJx7uhtlM3kln0+x5wyFvuVH1pIBJjhCpjzSt75o=
+cloud.google.com/go/networksecurity v0.5.0/go.mod h1:xS6fOCoqpVC5zx15Z/MqkfDwH4+m/61A3ODiDV1xmiQ=
+cloud.google.com/go/networksecurity v0.6.0/go.mod h1:Q5fjhTr9WMI5mbpRYEbiexTzROf7ZbDzvzCrNl14nyU=
+cloud.google.com/go/notebooks v1.2.0/go.mod h1:9+wtppMfVPUeJ8fIWPOq1UnATHISkGXGqTkxeieQ6UY=
+cloud.google.com/go/notebooks v1.3.0/go.mod h1:bFR5lj07DtCPC7YAAJ//vHskFBxA5JzYlH68kXVdk34=
+cloud.google.com/go/osconfig v1.7.0/go.mod h1:oVHeCeZELfJP7XLxcBGTMBvRO+1nQ5tFG9VQTmYS2Fs=
+cloud.google.com/go/osconfig v1.8.0/go.mod h1:EQqZLu5w5XA7eKizepumcvWx+m8mJUhEwiPqWiZeEdg=
+cloud.google.com/go/oslogin v1.4.0/go.mod h1:YdgMXWRaElXz/lDk1Na6Fh5orF7gvmJ0FGLIs9LId4E=
+cloud.google.com/go/oslogin v1.5.0/go.mod h1:D260Qj11W2qx/HVF29zBg+0fd6YCSjSqLUkY/qEenQU=
+cloud.google.com/go/phishingprotection v0.5.0/go.mod h1:Y3HZknsK9bc9dMi+oE8Bim0lczMU6hrX0UpADuMefr0=
+cloud.google.com/go/phishingprotection v0.6.0/go.mod h1:9Y3LBLgy0kDTcYET8ZH3bq/7qni15yVUoAxiFxnlSUA=
+cloud.google.com/go/privatecatalog v0.5.0/go.mod h1:XgosMUvvPyxDjAVNDYxJ7wBW8//hLDDYmnsNcMGq1K0=
+cloud.google.com/go/privatecatalog v0.6.0/go.mod h1:i/fbkZR0hLN29eEWiiwue8Pb+GforiEIBnV9yrRUOKI=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
+cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
+cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
+cloud.google.com/go/recaptchaenterprise v1.3.1/go.mod h1:OdD+q+y4XGeAlxRaMn1Y7/GveP6zmq76byL6tjPE7d4=
+cloud.google.com/go/recaptchaenterprise/v2 v2.1.0/go.mod h1:w9yVqajwroDNTfGuhmOjPDN//rZGySaf6PtFVcSCa7o=
+cloud.google.com/go/recaptchaenterprise/v2 v2.2.0/go.mod h1:/Zu5jisWGeERrd5HnlS3EUGb/D335f9k51B/FVil0jk=
+cloud.google.com/go/recaptchaenterprise/v2 v2.3.0/go.mod h1:O9LwGCjrhGHBQET5CA7dd5NwwNQUErSgEDit1DLNTdo=
+cloud.google.com/go/recommendationengine v0.5.0/go.mod h1:E5756pJcVFeVgaQv3WNpImkFP8a+RptV6dDLGPILjvg=
+cloud.google.com/go/recommendationengine v0.6.0/go.mod h1:08mq2umu9oIqc7tDy8sx+MNJdLG0fUi3vaSVbztHgJ4=
+cloud.google.com/go/recommender v1.5.0/go.mod h1:jdoeiBIVrJe9gQjwd759ecLJbxCDED4A6p+mqoqDvTg=
+cloud.google.com/go/recommender v1.6.0/go.mod h1:+yETpm25mcoiECKh9DEScGzIRyDKpZ0cEhWGo+8bo+c=
+cloud.google.com/go/redis v1.7.0/go.mod h1:V3x5Jq1jzUcg+UNsRvdmsfuFnit1cfe3Z/PGyq/lm4Y=
+cloud.google.com/go/redis v1.8.0/go.mod h1:Fm2szCDavWzBk2cDKxrkmWBqoCiL1+Ctwq7EyqBCA/A=
+cloud.google.com/go/retail v1.8.0/go.mod h1:QblKS8waDmNUhghY2TI9O3JLlFk8jybHeV4BF19FrE4=
+cloud.google.com/go/retail v1.9.0/go.mod h1:g6jb6mKuCS1QKnH/dpu7isX253absFl6iE92nHwlBUY=
+cloud.google.com/go/scheduler v1.4.0/go.mod h1:drcJBmxF3aqZJRhmkHQ9b3uSSpQoltBPGPxGAWROx6s=
+cloud.google.com/go/scheduler v1.5.0/go.mod h1:ri073ym49NW3AfT6DZi21vLZrG07GXr5p3H1KxN5QlI=
+cloud.google.com/go/secretmanager v1.6.0/go.mod h1:awVa/OXF6IiyaU1wQ34inzQNc4ISIDIrId8qE5QGgKA=
+cloud.google.com/go/security v1.5.0/go.mod h1:lgxGdyOKKjHL4YG3/YwIL2zLqMFCKs0UbQwgyZmfJl4=
+cloud.google.com/go/security v1.7.0/go.mod h1:mZklORHl6Bg7CNnnjLH//0UlAlaXqiG7Lb9PsPXLfD0=
+cloud.google.com/go/security v1.8.0/go.mod h1:hAQOwgmaHhztFhiQ41CjDODdWP0+AE1B3sX4OFlq+GU=
+cloud.google.com/go/securitycenter v1.13.0/go.mod h1:cv5qNAqjY84FCN6Y9z28WlkKXyWsgLO832YiWwkCWcU=
+cloud.google.com/go/securitycenter v1.14.0/go.mod h1:gZLAhtyKv85n52XYWt6RmeBdydyxfPeTrpToDPw4Auc=
+cloud.google.com/go/servicedirectory v1.4.0/go.mod h1:gH1MUaZCgtP7qQiI+F+A+OpeKF/HQWgtAddhTbhL2bs=
+cloud.google.com/go/servicedirectory v1.5.0/go.mod h1:QMKFL0NUySbpZJ1UZs3oFAmdvVxhhxB6eJ/Vlp73dfg=
+cloud.google.com/go/speech v1.6.0/go.mod h1:79tcr4FHCimOp56lwC01xnt/WPJZc4v3gzyT7FoBkCM=
+cloud.google.com/go/speech v1.7.0/go.mod h1:KptqL+BAQIhMsj1kOP2la5DSEEerPDuOP/2mmkhHhZQ=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
+cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
+cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
+cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
+cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
+cloud.google.com/go/storage v1.23.0/go.mod h1:vOEEDNFnciUMhBeT6hsJIn3ieU5cFRmzeLgDvXzfIXc=
+cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi80i/iqR2s=
+cloud.google.com/go/storage v1.28.0 h1:DLrIZ6xkeZX6K70fU/boWx5INJumt6f+nwwWSHXzzGY=
+cloud.google.com/go/storage v1.28.0/go.mod h1:qlgZML35PXA3zoEnIkiPLY4/TOkUleufRlu6qmcf7sI=
+cloud.google.com/go/talent v1.1.0/go.mod h1:Vl4pt9jiHKvOgF9KoZo6Kob9oV4lwd/ZD5Cto54zDRw=
+cloud.google.com/go/talent v1.2.0/go.mod h1:MoNF9bhFQbiJ6eFD3uSsg0uBALw4n4gaCaEjBw9zo8g=
+cloud.google.com/go/videointelligence v1.6.0/go.mod h1:w0DIDlVRKtwPCn/C4iwZIJdvC69yInhW0cfi+p546uU=
+cloud.google.com/go/videointelligence v1.7.0/go.mod h1:k8pI/1wAhjznARtVT9U1llUaFNPh7muw8QyOUpavru4=
+cloud.google.com/go/vision v1.2.0/go.mod h1:SmNwgObm5DpFBme2xpyOyasvBc1aPdjvMk2bBk0tKD0=
+cloud.google.com/go/vision/v2 v2.2.0/go.mod h1:uCdV4PpN1S0jyCyq8sIM42v2Y6zOLkZs+4R9LrGYwFo=
+cloud.google.com/go/vision/v2 v2.3.0/go.mod h1:UO61abBx9QRMFkNBbf1D8B1LXdS2cGiiCRx0vSpZoUo=
+cloud.google.com/go/webrisk v1.4.0/go.mod h1:Hn8X6Zr+ziE2aNd8SliSDWpEnSS1u4R9+xXZmFiHmGE=
+cloud.google.com/go/webrisk v1.5.0/go.mod h1:iPG6fr52Tv7sGk0H6qUFzmL3HHZev1htXuWDEEsqMTg=
+cloud.google.com/go/workflows v1.6.0/go.mod h1:6t9F5h/unJz41YqfBmqSASJSXccBLtD1Vwf+KmJENM0=
+cloud.google.com/go/workflows v1.7.0/go.mod h1:JhSrZuVZWuiDfKEFxU0/F1PQjmpnpcoISEXH2bcHC3M=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/Azure/azure-sdk-for-go v45.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v47.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v59.2.0+incompatible h1:mbxiZy1K820hQ+dI+YIO/+a0wQDYqOu18BAGe4lXjVk=
+github.com/Azure/azure-sdk-for-go v59.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
+github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-autorest/autorest v0.11.3/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
+github.com/Azure/go-autorest/autorest v0.11.10/go.mod h1:eipySxLmqSyC5s5k1CLupqet0PSENBEDP93LQ9a8QYw=
+github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
+github.com/Azure/go-autorest/autorest v0.11.24 h1:1fIGgHKqVm54KIPT+q8Zmd1QlVsmHqeUGso5qm2BqqE=
+github.com/Azure/go-autorest/autorest v0.11.24/go.mod h1:G6kyRlFnTuSbEYkQGawPfsCswgme4iYf6rfSKUDzbCc=
+github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
+github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
+github.com/Azure/go-autorest/autorest/adal v0.9.14/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
+github.com/Azure/go-autorest/autorest/adal v0.9.18 h1:kLnPsRjzZZUF3K5REu/Kc+qMQrvuza2bwSnNdhmzLfQ=
+github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.0/go.mod h1:JljT387FplPzBA31vUcvsetLKF3pec5bdAxjVU4kI2s=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.2/go.mod h1:7qkJkT+j6b+hIpzMOwPChJhTqS8VbsqqgULzMNRugoM=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.4 h1:iuooz5cZL6VRcO7DVSFYxRcouqn6bFVE/e77Wts50Zk=
+github.com/Azure/go-autorest/autorest/azure/cli v0.4.4/go.mod h1:yAQ2b6eP/CmLPnmLvxtT1ALIY3OR1oFcCqVBi8vHiTc=
+github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
+github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
+github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/to v0.4.0 h1:oXVqrxakqqV1UZdSazDOPOLvOIz+XA683u8EctwboHk=
+github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=
+github.com/Azure/go-autorest/autorest/validation v0.3.0/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=
+github.com/Azure/go-autorest/autorest/validation v0.3.1 h1:AgyqjAd94fwNAoTjl/WQXg4VvFeRFpO+UhNyRXqF1ac=
+github.com/Azure/go-autorest/autorest/validation v0.3.1/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=
+github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
+github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
+github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
+github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
+github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
+github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
+github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c h1:/IBSNwUN8+eKzUzbJPqhK839ygXJ82sde8x3ogr6R28=
+github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/BurntSushi/toml v0.4.1 h1:GaI7EiDXDRfa8VshkTj7Fym7ha+y8/XxIgD2okUIjLw=
+github.com/BurntSushi/toml v0.4.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
+github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4=
+github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d h1:W1diKnDQkXxNDhghdBSbQ4LI/E1aJNTwpqPp3KtlB8w=
+github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4=
+github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
+github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
+github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
+github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
+github.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
+github.com/Masterminds/sprig/v3 v3.2.2 h1:17jRggJu518dr3QaafizSXOjKYp94wKfABxUmyxvxX8=
+github.com/Masterminds/sprig/v3 v3.2.2/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
+github.com/Microsoft/go-winio v0.5.0 h1:Elr9Wn+sGKPlkaBvwu4mTrxtmOp3F3yV9qhaHbXGjwU=
+github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
+github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
+github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
+github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
+github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
+github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM=
+github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
+github.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
+github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
+github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
+github.com/aliyun/alibaba-cloud-sdk-go v1.61.1501 h1:Ij3S0pNUMgHlhx3Ew8g9RNrt59EKhHYdMODGtFXJfSc=
+github.com/aliyun/alibaba-cloud-sdk-go v1.61.1501/go.mod h1:RcDobYh8k5VP6TNybz9m++gL3ijVI5wueVr0EM10VsU=
+github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70 h1:FrF4uxA24DF3ARNXVbUin3wa5fDLaB1Cy8mKks/LRz4=
+github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
+github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible h1:ABQ7FF+IxSFHDMOTtjCfmMDMHiCq6EsAoCV/9sFinaM=
+github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible/go.mod h1:LDQHRZylxvcg8H7wBIDfvO5g/cy4/sz1iucBlc2l3Jw=
+github.com/antchfx/xmlquery v1.3.5 h1:I7TuBRqsnfFuL11ruavGm911Awx9IqSdiU6W/ztSmVw=
+github.com/antchfx/xmlquery v1.3.5/go.mod h1:64w0Xesg2sTaawIdNqMB+7qaW/bSqkQm+ssPaCMWNnc=
+github.com/antchfx/xpath v1.1.10 h1:cJ0pOvEdN/WvYXxvRrzQH9x5QWKpzHacYO8qzCcDYAg=
+github.com/antchfx/xpath v1.1.10/go.mod h1:Yee4kTMuNiPYJ7nSNorELQMr1J33uOpXDMByNYhvtNk=
+github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
+github.com/apparentlymart/go-cidr v1.1.0 h1:2mAhrMoF+nhXqxTzSZMUzDHkLjmIHC+Zzn4tdgBZjnU=
+github.com/apparentlymart/go-cidr v1.1.0/go.mod h1:EBcsNrHc3zQeuaeCeCtQruQm+n9/YjEn/vI25Lg7Gwc=
+github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM=
+github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0 h1:MzVXffFUye+ZcSR6opIgz9Co7WcDx6ZcY+RjfFHoA0I=
+github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM=
+github.com/apparentlymart/go-shquot v0.0.1 h1:MGV8lwxF4zw75lN7e0MGs7o6AFYn7L6AZaExUpLh0Mo=
+github.com/apparentlymart/go-shquot v0.0.1/go.mod h1:lw58XsE5IgUXZ9h0cxnypdx31p9mPFIVEQ9P3c7MlrU=
+github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk=
+github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw=
+github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo=
+github.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13 h1:JtuelWqyixKApmXm3qghhZ7O96P6NKpyrlSIe8Rwnhw=
+github.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13/go.mod h1:7kfpUbyCdGJ9fDRCp3fopPQi5+cKNHgTE4ZuNrO71Cw=
+github.com/apparentlymart/go-versions v1.0.1 h1:ECIpSn0adcYNsBfSRwdDdz9fWlL+S/6EUd9+irwkBgU=
+github.com/apparentlymart/go-versions v1.0.1/go.mod h1:YF5j7IQtrOAOnsGkniupEA5bfCjzd7i14yu0shZavyM=
+github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
+github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2 h1:7Ip0wMmLHLRJdrloDxZfhMm0xrLXZS8+COSu2bXmEQs=
+github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
+github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da h1:8GUt8eRujhVEGZFFEjBj46YV4rDjvGrNxb0KMWYkL2I=
+github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
+github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
+github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI=
+github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
+github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
+github.com/aws/aws-sdk-go v1.31.9/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
+github.com/aws/aws-sdk-go v1.44.122 h1:p6mw01WBaNpbdP2xrisz5tIkcNwzj/HysobNoaAHjgo=
+github.com/aws/aws-sdk-go v1.44.122/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
+github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f h1:ZNv7On9kyUzm7fvRZumSyy/IUiSC7AzL0I1jKKtwooA=
+github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc=
+github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d h1:xDfNPAt8lFiC1UJrqV3uuy861HCTo708pDMbjHHdCas=
+github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d/go.mod h1:6QX/PXZ00z/TKoufEY6K/a0k6AhaJrQKdFe6OfVXsa4=
+github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY=
+github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/bmatcuk/doublestar v1.1.5 h1:2bNwBOmhyFEFcoB3tGvTD5xanq+4kyOZlB8wFYbMjkk=
+github.com/bmatcuk/doublestar v1.1.5/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE=
+github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
+github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/cheggaaa/pb v1.0.27/go.mod h1:pQciLPpbU0oxA0h+VJYYLxO+XeDQb5pZijXscXHm81s=
+github.com/chzyer/logex v1.1.10 h1:Swpa1K6QvQznwJRcfTfQJmTE72DqScAa40E+fbHEXEE=
+github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
+github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e h1:fY5BOSpyZCqRo5OhCuC+XN+r/bBCmeuuJtjz+bCNIf8=
+github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
+github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1 h1:q763qf9huN11kDQavWsoZXJNW3xEE4JJyHa5Q25/sd8=
+github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
+github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
+github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d h1:t5Wuyh53qYyg9eqn4BbnlIT+vmhyww0TatL+zT3uWgI=
+github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
+github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
+github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
+github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
+github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi/U=
+github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE=
+github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
+github.com/dylanmei/iso8601 v0.1.0 h1:812NGQDBcqquTfH5Yeo7lwR0nzx/cKdsmf3qMjPURUI=
+github.com/dylanmei/iso8601 v0.1.0/go.mod h1:w9KhXSgIyROl1DefbMYIE7UVSIvELTbMrCfx+QkYnoQ=
+github.com/dylanmei/winrmtest v0.0.0-20210303004826-fbc9ae56efb6 h1:zWydSUQBJApHwpQ4guHi+mGyQN/8yN6xbKWdDtL3ZNM=
+github.com/dylanmei/winrmtest v0.0.0-20210303004826-fbc9ae56efb6/go.mod h1:6BLLhzn1VEiJ4veuAGhINBTrBlV889Wd+aU4auxKOww=
+github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
+github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
+github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
+github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
+github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
+github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
+github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
+github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
+github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
+github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/frankban/quicktest v1.14.3 h1:FJKSZTDHjyhriyC81FLQ0LY93eSai0ZyR/ZIkd3ZUKE=
+github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
+github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
+github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
+github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
+github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
+github.com/go-logr/logr v1.2.0 h1:QK40JKJyMdUDz+h+xvCsru/bJhvG0UxvePV0ufL/AcE=
+github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
+github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
+github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
+github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
+github.com/go-test/deep v1.0.1/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
+github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
+github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
+github.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
+github.com/gofrs/uuid v4.0.0+incompatible h1:1SD/1F5pU8p29ybwgQSwpQk+mwdRrXCYuPhW6m+TnJw=
+github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/goji/httpauth v0.0.0-20160601135302-2da839ab0f4d/go.mod h1:nnjvkQ9ptGaCkuDUx6wNykzzlUixGxvkme+H/lnzb+A=
+github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt/v4 v4.2.0 h1:besgBTC8w8HjP6NzQdxwKH9Z5oQMZ24ThTrHp3cZ8eU=
+github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
+github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
+github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
+github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
+github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
+github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
+github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
+github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
+github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
+github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
+github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
+github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
+github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
+github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
+github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
+github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
+github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
+github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
+github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
+github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
+github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
+github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
+github.com/googleapis/enterprise-certificate-proxy v0.2.0 h1:y8Yozv7SZtlU//QXbezB6QkpuE6jMD2/gfzk4AftXjs=
+github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg=
+github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
+github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
+github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM=
+github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM=
+github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c=
+github.com/googleapis/gax-go/v2 v2.5.1/go.mod h1:h6B0KMMFNtI2ddbGJn3T3ZbwkeT6yqEF02fYlzkUCyo=
+github.com/googleapis/gax-go/v2 v2.6.0 h1:SXk3ABtQYDT/OH8jAyvEOQ58mgawq5C4o/4/89qN2ZU=
+github.com/googleapis/gax-go/v2 v2.6.0/go.mod h1:1mjbznJAPHFpesgE5ucqfYEscaz5kMdcIDwU/6+DDoY=
+github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
+github.com/googleapis/gnostic v0.5.5 h1:9fHAtK0uDfpveeqqo1hkEZJcFvYXAiCN3UutL8F9xHw=
+github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
+github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
+github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
+github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
+github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
+github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
+github.com/hashicorp/aws-sdk-go-base v0.7.1 h1:7s/aR3hFn74tYPVihzDyZe7y/+BorN70rr9ZvpV3j3o=
+github.com/hashicorp/aws-sdk-go-base v0.7.1/go.mod h1:2fRjWDv3jJBeN6mVWFHV6hFTNeFBx2gpDLQaZNxUVAY=
+github.com/hashicorp/consul/api v1.9.1 h1:SngrdG2L62qqLsUz85qcPhFZ78rPf8tcD5qjMgs6MME=
+github.com/hashicorp/consul/api v1.9.1/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
+github.com/hashicorp/consul/sdk v0.8.0 h1:OJtKBtEjboEZvG6AOUdh4Z1Zbyu0WcxQ0qatRrZHTVU=
+github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
+github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-azure-helpers v0.12.0/go.mod h1:Zc3v4DNeX6PDdy7NljlYpnrdac1++qNW0I4U+ofGwpg=
+github.com/hashicorp/go-azure-helpers v0.43.0 h1:larj4ZgwO3hKzA9xIOTXRW4NBpI6F3K8wpig8eikNOw=
+github.com/hashicorp/go-azure-helpers v0.43.0/go.mod h1:ofh+59GPB8g/lWI08711STfrIPSPOlXQkuMc8rovpBk=
+github.com/hashicorp/go-checkpoint v0.5.0 h1:MFYpPZCnQqQTE18jFwSII6eUQrD/oxMFp3mlgcqk5mU=
+github.com/hashicorp/go-checkpoint v0.5.0/go.mod h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg=
+github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
+github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
+github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
+github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
+github.com/hashicorp/go-getter v1.7.0 h1:bzrYP+qu/gMrL1au7/aDvkoOVGUJpeKBgbqRHACAFDY=
+github.com/hashicorp/go-getter v1.7.0/go.mod h1:W7TalhMmbPmsSMdNjD0ZskARur/9GJ17cfHTRtXV744=
+github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
+github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-hclog v0.14.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-hclog v0.15.0 h1:qMuK0wxsoW4D0ddCCYwPSTm4KQv1X1ke3WmPWZ0Mvsk=
+github.com/hashicorp/go-hclog v0.15.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-immutable-radix v1.0.0 h1:AKDB1HM5PWEA7i4nhcpwOrO2byshxBjXVn/J/3+z5/0=
+github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
+github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-msgpack v0.5.4 h1:SFT72YqIkOcLdWJUYcriVX7hbrZpwc/f7h8aW2NUqrA=
+github.com/hashicorp/go-msgpack v0.5.4/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
+github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
+github.com/hashicorp/go-plugin v1.4.3 h1:DXmvivbWD5qdiBts9TpBC7BYL1Aia5sxbRgQB+v6UZM=
+github.com/hashicorp/go-plugin v1.4.3/go.mod h1:5fGEH17QVwTTcR0zV7yhDPLLmFX9YSZ38b18Udy6vYQ=
+github.com/hashicorp/go-retryablehttp v0.7.0/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER/9wtLZZ8meHqQvEYWY=
+github.com/hashicorp/go-retryablehttp v0.7.2 h1:AcYqCvkpalPnPF2pn0KamgwamS42TqUDDYFRKq/RAd0=
+github.com/hashicorp/go-retryablehttp v0.7.2/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
+github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
+github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
+github.com/hashicorp/go-safetemp v1.0.0 h1:2HR189eFNrjHQyENnQMMpCiBAsRxzbTMIgBhEyExpmo=
+github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I=
+github.com/hashicorp/go-slug v0.11.0 h1:l7cHWiBk8cnnskjheloW9h8PwXhihvwXbQiiFw2KqkY=
+github.com/hashicorp/go-slug v0.11.0/go.mod h1:Ib+IWBYfEfJGI1ZyXMGNbu2BU+aa3Dzu41RKLH301v4=
+github.com/hashicorp/go-sockaddr v1.0.0 h1:GeH6tui99pF4NJgfnhp+L6+FfobzVW3Ah46sLo0ICXs=
+github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
+github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
+github.com/hashicorp/go-tfe v1.21.0 h1:sTZXf/MaC/iQ8HxKwYSL0xJSEVDwY+h4ngh/+na8vdk=
+github.com/hashicorp/go-tfe v1.21.0/go.mod h1:jedlLiHHiDeBKKpON4aIpTdsKbc2OaVbklEPI7XEHiY=
+github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
+github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-version v1.0.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
+github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
+github.com/hashicorp/go-version v1.3.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
+github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek=
+github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
+github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f h1:UdxlrJz4JOnY8W+DbLISwf2B8WXEolNRA8BGCwI9jws=
+github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
+github.com/hashicorp/hcl/v2 v2.0.0/go.mod h1:oVVDG71tEinNGYCxinCYadcmKU9bglqW9pV3txagJ90=
+github.com/hashicorp/hcl/v2 v2.16.2 h1:mpkHZh/Tv+xet3sy3F9Ld4FyI2tUpWe9x3XtPx9f1a0=
+github.com/hashicorp/hcl/v2 v2.16.2/go.mod h1:JRmR89jycNkrrqnMmvPDMd56n1rQJ2Q6KocSLCMCXng=
+github.com/hashicorp/jsonapi v0.0.0-20210826224640-ee7dae0fb22d h1:9ARUJJ1VVynB176G1HCwleORqCaXm/Vx0uUi0dL26I0=
+github.com/hashicorp/jsonapi v0.0.0-20210826224640-ee7dae0fb22d/go.mod h1:Yog5+CPEM3c99L1CL2CFCYoSzgWm5vTU58idbRUaLik=
+github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
+github.com/hashicorp/mdns v1.0.1/go.mod h1:4gW7WsVCke5TE7EPeYliwHlRUyBtfCwuFwuMg2DmyNY=
+github.com/hashicorp/memberlist v0.2.2 h1:5+RffWKwqJ71YPu9mWsF7ZOscZmwfasdA8kbdC7AO2g=
+github.com/hashicorp/memberlist v0.2.2/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
+github.com/hashicorp/serf v0.9.5 h1:EBWvyu9tcRszt3Bxp3KNssBMP1KuHWyO51lz9+786iM=
+github.com/hashicorp/serf v0.9.5/go.mod h1:UWDWwZeL5cuWDJdl0C6wrvrUwEqtQ4ZKBKKENpqIUyk=
+github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2 h1:l+bLFvHjqtgNQwWxwrFX9PemGAAO2P1AGZM7zlMNvCs=
+github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2/go.mod h1:Z0Nnk4+3Cy89smEbrq+sl1bxc9198gIP4I7wcQF6Kqs=
+github.com/hashicorp/terraform-registry-address v0.0.0-20220623143253-7d51757b572c h1:D8aRO6+mTqHfLsK/BC3j5OAoogv1WLRWzY1AaTo3rBg=
+github.com/hashicorp/terraform-registry-address v0.0.0-20220623143253-7d51757b572c/go.mod h1:Wn3Na71knbXc1G8Lh+yu/dQWWJeFQEpDeJMtWMtlmNI=
+github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg=
+github.com/hashicorp/terraform-svchost v0.1.0 h1:0+RcgZdZYNd81Vw7tu62g9JiLLvbOigp7QtyNh6CjXk=
+github.com/hashicorp/terraform-svchost v0.1.0/go.mod h1:ut8JaH0vumgdCfJaihdcZULqkAwHdQNwNH7taIDdsZM=
+github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
+github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ=
+github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
+github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/huandu/xstrings v1.3.3 h1:/Gcsuc1x8JVbJ9/rlye4xZnVAbEkGauT8lbebqcQws4=
+github.com/huandu/xstrings v1.3.3/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
+github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
+github.com/jhump/protoreflect v1.6.0 h1:h5jfMVslIg6l29nsMs0D8Wj17RDVdNYti0vDN/PZZoE=
+github.com/jhump/protoreflect v1.6.0/go.mod h1:eaTn3RZAmMBcV0fifFvlm6VHNz3wSkYyXYWUh7ymB74=
+github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.3.0/go.mod h1:9QtRXoHjLGCJ5IBSaohpXITPlowMeeYCZ7fLUTSywik=
+github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
+github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
+github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
+github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
+github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 h1:iQTw/8FWTuc7uiaSepXwyf3o52HaUYcV+Tu66S3F5GA=
+github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.15.11 h1:Lcadnb3RKGin4FYM/orgq0qde+nc15E5Cbqg4B9Sx9c=
+github.com/klauspost/compress v1.15.11/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
+github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
+github.com/lib/pq v1.10.3 h1:v9QZf2Sn6AmjXtQeFpdoq/eaNtYP6IN+7lcrygsIAtg=
+github.com/lib/pq v1.10.3/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
+github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
+github.com/manicminer/hamilton v0.43.0/go.mod h1:lbVyngC+/nCWuDp8UhC6Bw+bh7jcP/E+YwqzHTmzemk=
+github.com/manicminer/hamilton v0.44.0 h1:mLb4Vxbt2dsAvOpaB7xd/5D8LaTTX6ACwVP4TmW8qwE=
+github.com/manicminer/hamilton v0.44.0/go.mod h1:lbVyngC+/nCWuDp8UhC6Bw+bh7jcP/E+YwqzHTmzemk=
+github.com/manicminer/hamilton-autorest v0.2.0 h1:dDL+t2DrQza0EfNYINYCvXISeNwVqzgVAQh+CH/19ZU=
+github.com/manicminer/hamilton-autorest v0.2.0/go.mod h1:NselDpNTImEmOc/fa41kPg6YhDt/6S95ejWbTGZ6tlg=
+github.com/masterzen/simplexml v0.0.0-20160608183007-4572e39b1ab9/go.mod h1:kCEbxUJlNDEBNbdQMkPSp6yaKcRXVI6f4ddk8Riv4bc=
+github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786 h1:2ZKn+w/BJeL43sCxI2jhPLRv73oVVOjEKZjKkflyqxg=
+github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786/go.mod h1:kCEbxUJlNDEBNbdQMkPSp6yaKcRXVI6f4ddk8Riv4bc=
+github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88 h1:cxuVcCvCLD9yYDbRCWw0jSgh1oT6P6mv3aJDKK5o7X4=
+github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88/go.mod h1:a2HXwefeat3evJHxFXSayvRHpYEPJYtErl4uIzfaUqY=
+github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
+github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
+github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
+github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
+github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
+github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
+github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
+github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
+github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
+github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
+github.com/mattn/go-isatty v0.0.16 h1:bq3VjFmv/sOjHtdEhmkEV4x1AJtvUvOJ2PFAZ5+peKQ=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
+github.com/mattn/go-shellwords v1.0.4 h1:xmZZyxuP+bYKAKkA9ABYXVNJ+G/Wf3R8d8vAP3LDJJk=
+github.com/mattn/go-shellwords v1.0.4/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
+github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
+github.com/miekg/dns v1.1.26 h1:gPxPSwALAeHJSjarOs00QjVdV9QoBvc1D2ujQUr5BzU=
+github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
+github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
+github.com/mitchellh/cli v1.1.5 h1:OxRIeJXpAMztws/XHlN2vu6imG5Dpq+j61AzAX5fLng=
+github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4=
+github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
+github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
+github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
+github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
+github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
+github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
+github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb h1:GRiLv4rgyqjqzxbhJke65IYUf4NCOOvrPOJbV/sPxkM=
+github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb/go.mod h1:OaY7UOoTkkrX3wRwjpYRKafIkkyeD0UtweSHAWWiqQM=
+github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
+github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
+github.com/mitchellh/go-testing-interface v1.14.1 h1:jrgshOhYAUVNMAJiKbEu7EqAwgJJ2JqpQmpLJOu07cU=
+github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8=
+github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
+github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
+github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
+github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
+github.com/mitchellh/gox v1.0.1 h1:x0jD3dcHk9a9xPSDN6YEL4xL6Qz0dvNYm8yZqui5chI=
+github.com/mitchellh/gox v1.0.1/go.mod h1:ED6BioOGXMswlXa2zxfh/xdd5QhwYliBFn9V18Ap4z4=
+github.com/mitchellh/iochan v1.0.0 h1:C+X3KsSTLFVBr/tK1eYN/vs4rJcvsiLU338UhYPJWeY=
+github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
+github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
+github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
+github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
+github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
+github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
+github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
+github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
+github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60=
+github.com/mozillazg/go-httpheader v0.3.0 h1:3brX5z8HTH+0RrNA1362Rc3HsaxyWEKtGY45YrhuINM=
+github.com/mozillazg/go-httpheader v0.3.0/go.mod h1:PuT8h0pw6efvp8ZeUec1Rs7dwjK08bt6gKSReGMqtdA=
+github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
+github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
+github.com/nishanths/exhaustive v0.7.11 h1:xV/WU3Vdwh5BUH4N06JNUznb6d5zhRPOnlgCrpNYNKA=
+github.com/nishanths/exhaustive v0.7.11/go.mod h1:gX+MP7DWMKJmNa1HfMozK+u04hQd3na9i0hyqf3/dOI=
+github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d h1:VhgPp6v9qf9Agr/56bj7Y/xa04UccTW04VP0Qed4vnQ=
+github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d/go.mod h1:YUTz3bUH2ZwIWBy3CJBeOBEugqcmXREj14T+iG/4k4U=
+github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
+github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
+github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw=
+github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
+github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
+github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
+github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
+github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
+github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE=
+github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
+github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db h1:9uViuKtx1jrlXLBW/pMnhOfzn3iSEdLase/But/IZRU=
+github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db/go.mod h1:f6Izs6JvFTdnRbziASagjZ2vmf55NSIkC/weStxCHqk=
+github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c h1:Lgl0gzECD8GnQ5QCWA8o6BtfL6mDH5rQgM4/fX3avOs=
+github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
+github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
+github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23 h1:dofHuld+js7eKSemxqTVIo8yRlpRw+H1SdpzZxWruBc=
+github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ=
+github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
+github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo=
+github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
+github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBOAvL+k=
+github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
+github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
+github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
+github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
+github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
+github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
+github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
+github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
+github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
+github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8=
+github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
+github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
+github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
+github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w=
+github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155UU=
+github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.194/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.588 h1:DYtBXB7sVc3EOW5horg8j55cLZynhsLYhHrvQ/jXKKM=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.588/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/kms v1.0.194/go.mod h1:yrBKWhChnDqNz1xuXdSbWXG56XawEq0G5j1lg4VwBD4=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts v1.0.588 h1:PlkFOALQZ9BLUyX8EalATUQD5xEn1Sz34C+Rw5VSpvk=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts v1.0.588/go.mod h1:vPvXNb+zBZVJfZCIKWcYxLpGzgScKKgiPUArobWZ+nU=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233 h1:5Tbi+jyZ2MojC6GK8V6hchwtnkP2IuENUTqSisbYOlA=
+github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233/go.mod h1:sX14+NSvMjOhNFaMtP2aDy6Bss8PyFXij21gpY6+DAs=
+github.com/tencentyun/cos-go-sdk-v5 v0.7.29 h1:uwRBzc70Wgtc5iQQCowqecfRT0OpCXUOZzodZHOOEDs=
+github.com/tencentyun/cos-go-sdk-v5 v0.7.29/go.mod h1:4E4+bQ2gBVJcgEC9Cufwylio4mXOct2iu05WjgEBx1o=
+github.com/tombuildsstuff/giovanni v0.15.1 h1:CVRaLOJ7C/eercCrKIsarfJ4SZoGMdBL9Q2deFDUXco=
+github.com/tombuildsstuff/giovanni v0.15.1/go.mod h1:0TZugJPEtqzPlMpuJHYfXY6Dq2uLPrXf98D2XQSxNbA=
+github.com/ulikunitz/xz v0.5.10 h1:t92gobL9l3HE202wg3rlk19F6X+JOxl9BBrCCMYEYd8=
+github.com/ulikunitz/xz v0.5.10/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
+github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
+github.com/vmihailenco/msgpack/v4 v4.3.12 h1:07s4sz9IReOgdikxLTKNbBdqDMLsjPKXwvCazn8G65U=
+github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4=
+github.com/vmihailenco/tagparser v0.1.1 h1:quXMXlA39OCbd2wAdTsGDlK9RkOk6Wuw+x37wVyIuWY=
+github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI=
+github.com/xanzy/ssh-agent v0.3.1 h1:AmzO1SSWxw73zxFZPRwaMN1MohDw8UyHnmuxyceTEGo=
+github.com/xanzy/ssh-agent v0.3.1/go.mod h1:QIE4lCeL7nkC25x+yA3LBIYfwCc1TFziCtG7cBAac6w=
+github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 h1:Jpn2j6wHkC9wJv5iMfJhKqrZJx3TahFx+7sbZ7zQdxs=
+github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
+github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s=
+github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
+github.com/zclconf/go-cty v1.12.1 h1:PcupnljUm9EIvbgSHQnHhUr3fO6oFmkOrvs2BAFNXXY=
+github.com/zclconf/go-cty v1.12.1/go.mod h1:s9IfD1LK5ccNMSWCVFCE2rJfHiZgi7JijgeWIMfhLvA=
+github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b h1:FosyBZYxY34Wul7O/MSKey3txpPYyCqVO5ZyceuQJEI=
+github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8=
+github.com/zclconf/go-cty-yaml v1.0.3 h1:og/eOQ7lvA/WWhHGFETVWNduJM7Rjsv2RRpx1sdFMLc=
+github.com/zclconf/go-cty-yaml v1.0.3/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs=
+go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
+go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
+go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
+go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
+golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190222235706-ffb98f73852f/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
+golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
+golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
+golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e h1:qyrTQ++p1afMkO4DPEeLGq/3oTsdlvdH4vqZUBWzUKM=
+golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
+golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
+golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8=
+golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
+golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
+golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220617184016-355a448f1bc9/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
+golang.org/x/net v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
+golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
+golang.org/x/net v0.6.0 h1:L4ZwwTvKW9gr0ZMS1yrHD9GZhIuVjOBBnaKH+SPQK0Q=
+golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
+golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
+golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
+golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
+golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
+golang.org/x/oauth2 v0.0.0-20220822191816-0ebed06d0094/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
+golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
+golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
+golang.org/x/oauth2 v0.1.0/go.mod h1:G9FE4dLTsbXUu90h/Pf85g4w1D+SSAgR+q46nJZ8M4A=
+golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
+golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
+golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190509141414-a5b02f93d862/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220624220833-87e55d714810/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
+golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68=
+golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
+golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
+golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
+golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
+golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
+golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM=
+golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
+golang.org/x/tools/cmd/cover v0.1.0-deprecated h1:Rwy+mWYz6loAF+LnG1jHG/JWMHRMMC2/1XX3Ejkx9lA=
+golang.org/x/tools/cmd/cover v0.1.0-deprecated/go.mod h1:hMDiIvlpN1NoVgmjLjUJE9tMHyxHjFX7RuQ+rW12mSA=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
+golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
+golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 h1:H2TDz8ibqkAF6YGhCdN3jS9O0/s90v0rJh3X/OLHEUk=
+golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
+google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
+google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
+google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
+google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
+google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
+google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
+google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
+google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
+google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
+google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
+google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
+google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
+google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
+google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
+google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
+google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g=
+google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA=
+google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8=
+google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs=
+google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
+google.golang.org/api v0.77.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
+google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
+google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
+google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o=
+google.golang.org/api v0.85.0/go.mod h1:AqZf8Ep9uZ2pyTvgL+x0D3Zt0eoT9b5E8fmzfu6FO2g=
+google.golang.org/api v0.90.0/go.mod h1:+Sem1dnrKlrXMR/X0bPnMWyluQe4RsNoYfmNLhOIkzw=
+google.golang.org/api v0.93.0/go.mod h1:+Sem1dnrKlrXMR/X0bPnMWyluQe4RsNoYfmNLhOIkzw=
+google.golang.org/api v0.95.0/go.mod h1:eADj+UBuxkh5zlrSntJghuNeg8HwQ1w5lTKkuqaETEI=
+google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.97.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.98.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.100.0/go.mod h1:ZE3Z2+ZOr87Rx7dqFsdRQkRBk36kDtp/h+QpHbB7a70=
+google.golang.org/api v0.102.0 h1:JxJl2qQ85fRMPNvlZY/enexbxpCjLwGhZUtgfGeQ51I=
+google.golang.org/api v0.102.0/go.mod h1:3VFl6/fzoA+qNuS1N1/VfXY4LjoXN/wzeIp7TweWwGo=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
+google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/genproto v0.0.0-20170818010345-ee236bd376b0/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
+google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
+google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210329143202-679c6ae281ee/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
+google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
+google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
+google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
+google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
+google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
+google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
+google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
+google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
+google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
+google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
+google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
+google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
+google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
+google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
+google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
+google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
+google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
+google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
+google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220624142145-8cd45d7dbd1f/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220628213854-d9e0b6570c03/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220722212130-b98a9ff5e252/go.mod h1:GkXuJDJ6aQ7lnJcRF+SJVgFdQhypqgl3LB1C9vabdRE=
+google.golang.org/genproto v0.0.0-20220801145646-83ce21fca29f/go.mod h1:iHe1svFLAZg9VWz891+QbRMwUv9O/1Ww+/mngYeThbc=
+google.golang.org/genproto v0.0.0-20220815135757-37a418bb8959/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220817144833-d7fd3f11b9b1/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220822174746-9e6da59bd2fc/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220829144015-23454907ede3/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220829175752-36a9c930ecbf/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220913154956-18f8339a66a5/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220914142337-ca0e39ece12f/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220915135415-7fd63a7952de/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220916172020-2692e8806bfa/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220919141832-68c03719ef51/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006/go.mod h1:ht8XFiar2npT/g4vkk7O0WYS1sHOHbdujxbEp7CJWbw=
+google.golang.org/genproto v0.0.0-20220926165614-551eb538f295/go.mod h1:woMGP53BroOrRY3xTxlbr8Y3eB/nzAvvFM83q7kG2OI=
+google.golang.org/genproto v0.0.0-20220926220553-6981cbe3cfce/go.mod h1:woMGP53BroOrRY3xTxlbr8Y3eB/nzAvvFM83q7kG2OI=
+google.golang.org/genproto v0.0.0-20221010155953-15ba04fc1c0e/go.mod h1:3526vdqwhZAwq4wsRUaVG555sVgsNmIjRtO7t/JH29U=
+google.golang.org/genproto v0.0.0-20221014173430-6e2ab493f96b/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM=
+google.golang.org/genproto v0.0.0-20221014213838-99cd37c6964a/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM=
+google.golang.org/genproto v0.0.0-20221025140454-527a21cfbd71/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s=
+google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c h1:QgY/XxIAIeccR+Ca/rDdKubLIU9rcJ3xfy1DC/Wd2Oo=
+google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c/go.mod h1:CGI5F/G+E5bKwmfYo09AXuVN4dD894kIKUFmVbP2/Fo=
+google.golang.org/grpc v1.8.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
+google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
+google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
+google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
+google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
+google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
+google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
+google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
+google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
+google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
+google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
+google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ=
+google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
+google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
+google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
+google.golang.org/grpc v1.48.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
+google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
+google.golang.org/grpc v1.50.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
+google.golang.org/grpc v1.50.1 h1:DS/BukOZWp8s6p4Dt/tOaJaTQyPyOoCcrjroHuCeLzY=
+google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
+google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 h1:M1YKkFIboKNieVO5DLUEVzQfGwJD30Nv2jfUgzb5UcE=
+google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
+google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
+google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
+google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+gopkg.in/cheggaaa/pb.v1 v1.0.27/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
+gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
+gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
+gopkg.in/ini.v1 v1.66.2 h1:XfR1dOYubytKy4Shzc2LHrrGhU0lDCfDGG1yLPmpgsI=
+gopkg.in/ini.v1 v1.66.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
+gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
+gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+honnef.co/go/tools v0.3.0 h1:2LdYUZ7CIxnYgskbUZfY7FPggmqnh6shBqfWa8Tn3XU=
+honnef.co/go/tools v0.3.0/go.mod h1:vlRD9XErLMGT+mDuofSr0mMMquscM/1nQqtRSsh6m70=
+k8s.io/api v0.23.4 h1:85gnfXQOWbJa1SiWGpE9EEtHs0UVvDyIsSMpEtl2D4E=
+k8s.io/api v0.23.4/go.mod h1:i77F4JfyNNrhOjZF7OwwNJS5Y1S9dpwvb9iYRYRczfI=
+k8s.io/apimachinery v0.23.4 h1:fhnuMd/xUL3Cjfl64j5ULKZ1/J9n8NuQEgNL+WXWfdM=
+k8s.io/apimachinery v0.23.4/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM=
+k8s.io/client-go v0.23.4 h1:YVWvPeerA2gpUudLelvsolzH7c2sFoXXR5wM/sWqNFU=
+k8s.io/client-go v0.23.4/go.mod h1:PKnIL4pqLuvYUK1WU7RLTMYKPiIh7MYShLshtRY9cj0=
+k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
+k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
+k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.30.0 h1:bUO6drIvCIsvZ/XFgfxoGFQU/a4Qkh0iAlvUR7vlHJw=
+k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
+k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 h1:E3J9oCLlaobFUqsjG9DfKbP2BmgwBL2p7pn0A3dG9W4=
+k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
+k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20211116205334-6203023598ed h1:ck1fRPWPJWsMd8ZRFsWc6mh/zHp5fZ/shhbrgPUxDAE=
+k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
+rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
+rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
+sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 h1:fD1pz4yfdADVNfFmcP2aBEtudwUQ1AlLnRBALr33v3s=
+sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
+sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
+sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
+sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
+sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
diff --git a/v1.4.7/help.go b/v1.4.7/help.go
new file mode 100644
index 0000000..9e38f85
--- /dev/null
+++ b/v1.4.7/help.go
@@ -0,0 +1,94 @@
+package main
+
+import (
+	"bytes"
+	"fmt"
+	"log"
+	"sort"
+	"strings"
+
+	"github.com/mitchellh/cli"
+)
+
+// helpFunc is a cli.HelpFunc that can be used to output the CLI help text for Terraform.
+func helpFunc(commands map[string]cli.CommandFactory) string {
+	// Determine the maximum key length, and classify based on type
+	var otherCommands []string
+	maxKeyLen := 0
+
+	for key := range commands {
+		if _, ok := HiddenCommands[key]; ok {
+			// We don't consider hidden commands when deciding the
+			// maximum command length.
+			continue
+		}
+
+		if len(key) > maxKeyLen {
+			maxKeyLen = len(key)
+		}
+
+		isOther := true
+		for _, candidate := range PrimaryCommands {
+			if candidate == key {
+				isOther = false
+				break
+			}
+		}
+		if isOther {
+			otherCommands = append(otherCommands, key)
+		}
+	}
+	sort.Strings(otherCommands)
+
+	// The output produced by this is included in the docs at
+	// website/source/docs/cli/commands/index.html.markdown; if you
+	// change this then consider updating that to match.
+	helpText := fmt.Sprintf(`
+Usage: terraform [global options] <subcommand> [args]
+
+The available commands for execution are listed below.
+The primary workflow commands are given first, followed by
+less common or more advanced commands.
+
+Main commands:
+%s
+All other commands:
+%s
+Global options (use these before the subcommand, if any):
+  -chdir=DIR    Switch to a different working directory before executing the
+                given subcommand.
+  -help         Show this help output, or the help for a specified subcommand.
+  -version      An alias for the "version" subcommand.
+`, listCommands(commands, PrimaryCommands, maxKeyLen), listCommands(commands, otherCommands, maxKeyLen))
+
+	return strings.TrimSpace(helpText)
+}
+
+// listCommands just lists the commands in the map with the
+// given maximum key length.
+func listCommands(allCommands map[string]cli.CommandFactory, order []string, maxKeyLen int) string {
+	var buf bytes.Buffer
+
+	for _, key := range order {
+		commandFunc, ok := allCommands[key]
+		if !ok {
+			// This suggests an inconsistency in the command table definitions
+			// in commands.go .
+			panic("command not found: " + key)
+		}
+
+		command, err := commandFunc()
+		if err != nil {
+			// This would be really weird since there's no good reason for
+			// any of our command factories to fail.
+			log.Printf("[ERR] cli: Command '%s' failed to load: %s",
+				key, err)
+			continue
+		}
+
+		key = fmt.Sprintf("%s%s", key, strings.Repeat(" ", maxKeyLen-len(key)))
+		buf.WriteString(fmt.Sprintf("  %s  %s\n", key, command.Synopsis()))
+	}
+
+	return buf.String()
+}
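
A minimal sketch of how `listCommands` above lays out its output, assuming it sat in the same `main` package as `help.go`. The `fakeCommand` type, the command names, and the synopses are invented for illustration and are not part of this changeset.

```go
// Sketch only: fakeCommand and the commands below are hypothetical.
package main

import (
	"fmt"

	"github.com/mitchellh/cli"
)

type fakeCommand struct{ synopsis string }

func (c fakeCommand) Help() string          { return c.synopsis }
func (c fakeCommand) Run(args []string) int { return 0 }
func (c fakeCommand) Synopsis() string      { return c.synopsis }

func demoListCommands() {
	commands := map[string]cli.CommandFactory{
		"plan":  func() (cli.Command, error) { return fakeCommand{"Show proposed changes"}, nil },
		"apply": func() (cli.Command, error) { return fakeCommand{"Apply proposed changes"}, nil },
	}
	// Each name is padded to the widest visible key (here "apply", 5 runes),
	// so the synopses line up in a single column.
	fmt.Print(listCommands(commands, []string{"apply", "plan"}, 5))
	//   apply  Apply proposed changes
	//   plan   Show proposed changes
}
```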
diff --git a/v1.4.7/internal/addrs/check.go b/v1.4.7/internal/addrs/check.go
new file mode 100644
index 0000000..430b50c
--- /dev/null
+++ b/v1.4.7/internal/addrs/check.go
@@ -0,0 +1,251 @@
+package addrs
+
+import (
+	"fmt"
+
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/hclsyntax"
+	"github.com/hashicorp/terraform/internal/tfdiags"
+)
+
+// Check is the address of a check rule within a checkable object.
+//
+// This represents the check rule globally within a configuration, and is used
+// during graph evaluation to identify a condition result object to update with
+// the result of check rule evaluation.
+//
+// The check address is not distinct from resource traversals, and check rule
+// values are not intended to be available to the language, so the address is
+// not Referenceable.
+//
+// Note also that the check address is only relevant within the scope of a run,
+// as reordering check blocks between runs will result in their addresses
+// changing. Check is therefore for internal use only and should not be exposed
+// in durable artifacts such as state snapshots.
+type Check struct {
+	Container Checkable
+	Type      CheckType
+	Index     int
+}
+
+func NewCheck(container Checkable, typ CheckType, index int) Check {
+	return Check{
+		Container: container,
+		Type:      typ,
+		Index:     index,
+	}
+}
+
+func (c Check) String() string {
+	container := c.Container.String()
+	switch c.Type {
+	case ResourcePrecondition:
+		return fmt.Sprintf("%s.precondition[%d]", container, c.Index)
+	case ResourcePostcondition:
+		return fmt.Sprintf("%s.postcondition[%d]", container, c.Index)
+	case OutputPrecondition:
+		return fmt.Sprintf("%s.precondition[%d]", container, c.Index)
+	default:
+		// This should not happen
+		return fmt.Sprintf("%s.condition[%d]", container, c.Index)
+	}
+}
+
+func (c Check) UniqueKey() UniqueKey {
+	return checkKey{
+		ContainerKey: c.Container.UniqueKey(),
+		Type:         c.Type,
+		Index:        c.Index,
+	}
+}
+
+type checkKey struct {
+	ContainerKey UniqueKey
+	Type         CheckType
+	Index        int
+}
+
+func (k checkKey) uniqueKeySigil() {}
+
+// CheckType describes a category of check. We use this only to establish
+// uniqueness for Check values, and do not expose this concept of "check types"
+// (which is subject to change in future) in any durable artifacts such as
+// state snapshots.
+//
+// (See [CheckableKind] for an enumeration that we _do_ use externally, to
+// describe the type of object being checked rather than the type of the check
+// itself.)
+type CheckType int
+
+//go:generate go run golang.org/x/tools/cmd/stringer -type=CheckType check.go
+
+const (
+	InvalidCondition      CheckType = 0
+	ResourcePrecondition  CheckType = 1
+	ResourcePostcondition CheckType = 2
+	OutputPrecondition    CheckType = 3
+)
+
+// Description returns a human-readable description of the check type. This is
+// presented in the user interface through a diagnostic summary.
+func (c CheckType) Description() string {
+	switch c {
+	case ResourcePrecondition:
+		return "Resource precondition"
+	case ResourcePostcondition:
+		return "Resource postcondition"
+	case OutputPrecondition:
+		return "Module output value precondition"
+	default:
+		// This should not happen
+		return "Condition"
+	}
+}
+
+// Checkable is an interface implemented by all address types that can contain
+// condition blocks.
+type Checkable interface {
+	UniqueKeyer
+
+	checkableSigil()
+
+	// Check returns the address of an individual check rule of a specified
+	// type and index within this checkable container.
+	Check(CheckType, int) Check
+
+	// ConfigCheckable returns the address of the configuration construct that
+	// this Checkable belongs to.
+	//
+	// Checkable objects can potentially be dynamically declared during a
+	// plan operation using constructs like resource for_each, and so
+	// ConfigCheckable gives us a way to talk about the static containers
+	// those dynamic objects belong to, in case we wish to group together
+	// dynamic checkable objects into their static checkable for reporting
+	// purposes.
+	ConfigCheckable() ConfigCheckable
+
+	CheckableKind() CheckableKind
+	String() string
+}
+
+var (
+	_ Checkable = AbsResourceInstance{}
+	_ Checkable = AbsOutputValue{}
+)
+
+// CheckableKind describes the different kinds of checkable objects.
+type CheckableKind rune
+
+//go:generate go run golang.org/x/tools/cmd/stringer -type=CheckableKind check.go
+
+const (
+	CheckableKindInvalid CheckableKind = 0
+	CheckableResource    CheckableKind = 'R'
+	CheckableOutputValue CheckableKind = 'O'
+)
+
+// ConfigCheckable is an interface implemented by address types that represent
+// configuration constructs that can have Checkable addresses associated with
+// them.
+//
+// This address type therefore in a sense represents a container for zero or
+// more checkable objects all declared by the same configuration construct,
+// so that we can talk about these groups of checkable objects before we're
+// ready to decide how many checkable objects belong to each one.
+type ConfigCheckable interface {
+	UniqueKeyer
+
+	configCheckableSigil()
+
+	CheckableKind() CheckableKind
+	String() string
+}
+
+var (
+	_ ConfigCheckable = ConfigResource{}
+	_ ConfigCheckable = ConfigOutputValue{}
+)
+
+// ParseCheckableStr attempts to parse the given string as a Checkable address
+// of the given kind.
+//
+// This should be the opposite of Checkable.String for any Checkable address
+// type, as long as "kind" is set to the value returned by the address's
+// CheckableKind method.
+//
+// We do not typically expect users to write out checkable addresses as input,
+// but we use them as part of some of our wire formats for persisting check
+// results between runs.
+func ParseCheckableStr(kind CheckableKind, src string) (Checkable, tfdiags.Diagnostics) {
+	var diags tfdiags.Diagnostics
+
+	traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(src), "", hcl.InitialPos)
+	diags = diags.Append(parseDiags)
+	if parseDiags.HasErrors() {
+		return nil, diags
+	}
+
+	path, remain, diags := parseModuleInstancePrefix(traversal)
+	if diags.HasErrors() {
+		return nil, diags
+	}
+
+	if remain.IsRelative() {
+		// (relative means that there's either nothing left or what's next isn't an identifier)
+		diags = diags.Append(&hcl.Diagnostic{
+			Severity: hcl.DiagError,
+			Summary:  "Invalid checkable address",
+			Detail:   "Module path must be followed by either a resource instance address or an output value address.",
+			Subject:  remain.SourceRange().Ptr(),
+		})
+		return nil, diags
+	}
+
+	// We use "kind" to disambiguate here because unfortunately we've
+	// historically never reserved "output" as a possible resource type name
+	// and so it is in principle possible -- albeit unlikely -- that there
+	// might be a resource whose type is literally "output".
+	switch kind {
+	case CheckableResource:
+		riAddr, moreDiags := parseResourceInstanceUnderModule(path, remain)
+		diags = diags.Append(moreDiags)
+		if diags.HasErrors() {
+			return nil, diags
+		}
+		return riAddr, diags
+
+	case CheckableOutputValue:
+		if len(remain) != 2 {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid checkable address",
+				Detail:   "Output address must have only one attribute part after the keyword 'output', giving the name of the output value.",
+				Subject:  remain.SourceRange().Ptr(),
+			})
+			return nil, diags
+		}
+		if remain.RootName() != "output" {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid checkable address",
+				Detail:   "Output address must follow the module address with the keyword 'output'.",
+				Subject:  remain.SourceRange().Ptr(),
+			})
+			return nil, diags
+		}
+		if step, ok := remain[1].(hcl.TraverseAttr); !ok {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid checkable address",
+				Detail:   "Output address must have only one attribute part after the keyword 'output', giving the name of the output value.",
+				Subject:  remain.SourceRange().Ptr(),
+			})
+			return nil, diags
+		} else {
+			return OutputValue{Name: step.Name}.Absolute(path), diags
+		}
+
+	default:
+		panic(fmt.Sprintf("unsupported CheckableKind %s", kind))
+	}
+}
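
A small, hedged illustration of the round trip that `ParseCheckableStr` is meant to support, assuming it lived inside the `addrs` package (it relies on unexported helpers used by this file, and `fmt` is already imported there). The resource address and index are invented.

```go
// Sketch only: the address below is hypothetical.
func demoCheckAddr() {
	obj, diags := ParseCheckableStr(CheckableResource, `aws_instance.web[0]`)
	if diags.HasErrors() {
		panic(diags.Err())
	}
	check := NewCheck(obj, ResourcePostcondition, 1)
	fmt.Println(check.String())                  // aws_instance.web[0].postcondition[1]
	fmt.Println(check.Container.CheckableKind()) // CheckableResource
}
```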
diff --git a/v1.4.7/internal/addrs/checkablekind_string.go b/v1.4.7/internal/addrs/checkablekind_string.go
new file mode 100644
index 0000000..8987cd0
--- /dev/null
+++ b/v1.4.7/internal/addrs/checkablekind_string.go
@@ -0,0 +1,33 @@
+// Code generated by "stringer -type=CheckableKind check.go"; DO NOT EDIT.
+
+package addrs
+
+import "strconv"
+
+func _() {
+	// An "invalid array index" compiler error signifies that the constant values have changed.
+	// Re-run the stringer command to generate them again.
+	var x [1]struct{}
+	_ = x[CheckableKindInvalid-0]
+	_ = x[CheckableResource-82]
+	_ = x[CheckableOutputValue-79]
+}
+
+const (
+	_CheckableKind_name_0 = "CheckableKindInvalid"
+	_CheckableKind_name_1 = "CheckableOutputValue"
+	_CheckableKind_name_2 = "CheckableResource"
+)
+
+func (i CheckableKind) String() string {
+	switch {
+	case i == 0:
+		return _CheckableKind_name_0
+	case i == 79:
+		return _CheckableKind_name_1
+	case i == 82:
+		return _CheckableKind_name_2
+	default:
+		return "CheckableKind(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+}
diff --git a/v1.4.7/internal/addrs/checktype_string.go b/v1.4.7/internal/addrs/checktype_string.go
new file mode 100644
index 0000000..8c2fceb
--- /dev/null
+++ b/v1.4.7/internal/addrs/checktype_string.go
@@ -0,0 +1,26 @@
+// Code generated by "stringer -type=CheckType check.go"; DO NOT EDIT.
+
+package addrs
+
+import "strconv"
+
+func _() {
+	// An "invalid array index" compiler error signifies that the constant values have changed.
+	// Re-run the stringer command to generate them again.
+	var x [1]struct{}
+	_ = x[InvalidCondition-0]
+	_ = x[ResourcePrecondition-1]
+	_ = x[ResourcePostcondition-2]
+	_ = x[OutputPrecondition-3]
+}
+
+const _CheckType_name = "InvalidConditionResourcePreconditionResourcePostconditionOutputPrecondition"
+
+var _CheckType_index = [...]uint8{0, 16, 36, 57, 75}
+
+func (i CheckType) String() string {
+	if i < 0 || i >= CheckType(len(_CheckType_index)-1) {
+		return "CheckType(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+	return _CheckType_name[_CheckType_index[i]:_CheckType_index[i+1]]
+}
diff --git a/v1.4.7/internal/addrs/count_attr.go b/v1.4.7/internal/addrs/count_attr.go
new file mode 100644
index 0000000..0be5c02
--- /dev/null
+++ b/v1.4.7/internal/addrs/count_attr.go
@@ -0,0 +1,18 @@
+package addrs
+
+// CountAttr is the address of an attribute of the "count" object in
+// the interpolation scope, like "count.index".
+type CountAttr struct {
+	referenceable
+	Name string
+}
+
+func (ca CountAttr) String() string {
+	return "count." + ca.Name
+}
+
+func (ca CountAttr) UniqueKey() UniqueKey {
+	return ca // A CountAttr is its own UniqueKey
+}
+
+func (ca CountAttr) uniqueKeySigil() {}
diff --git a/v1.4.7/internal/addrs/doc.go b/v1.4.7/internal/addrs/doc.go
new file mode 100644
index 0000000..4609331
--- /dev/null
+++ b/v1.4.7/internal/addrs/doc.go
@@ -0,0 +1,17 @@
+// Package addrs contains types that represent "addresses", which are
+// references to specific objects within a Terraform configuration or
+// state.
+//
+// All addresses have string representations based on HCL traversal syntax
+// which should be used in the user-interface, and also in-memory
+// representations that can be used internally.
+//
+// For object types that exist within Terraform modules a pair of types is
+// used. The "local" part of the address is represented by a type, and then
+// an absolute path to that object in the context of its module is represented
+// by a type of the same name with an "Abs" prefix added, for "absolute".
+//
+// All types within this package should be treated as immutable, even if this
+// is not enforced by the Go compiler. It is always an implementation error
+// to modify an address object in-place after it is initially constructed.
+package addrs
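
To make the "local" vs. "Abs" pairing the package comment describes concrete, here is a hedged sketch using types added elsewhere in this change (`InputVariable`, `ModuleInstance`); it assumes it runs inside the `addrs` package, and the module and variable names are invented.

```go
// Sketch only: names are hypothetical.
func demoAddrPairing() {
	v := InputVariable{Name: "region"}     // "local" address within a module
	m := ModuleInstance{{Name: "network"}} // a dynamic module path
	fmt.Println(v.String())                // var.region
	fmt.Println(v.Absolute(m).String())    // module.network.var.region
}
```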
diff --git a/v1.4.7/internal/addrs/for_each_attr.go b/v1.4.7/internal/addrs/for_each_attr.go
new file mode 100644
index 0000000..6b0c060
--- /dev/null
+++ b/v1.4.7/internal/addrs/for_each_attr.go
@@ -0,0 +1,18 @@
+package addrs
+
+// ForEachAttr is the address of an attribute referencing the current "for_each" object in
+// the interpolation scope, addressed using the "each" keyword, e.g. "each.key" and "each.value".
+type ForEachAttr struct {
+	referenceable
+	Name string
+}
+
+func (f ForEachAttr) String() string {
+	return "each." + f.Name
+}
+
+func (f ForEachAttr) UniqueKey() UniqueKey {
+	return f // A ForEachAttr is its own UniqueKey
+}
+
+func (f ForEachAttr) uniqueKeySigil() {}
diff --git a/v1.4.7/internal/addrs/input_variable.go b/v1.4.7/internal/addrs/input_variable.go
new file mode 100644
index 0000000..e85743b
--- /dev/null
+++ b/v1.4.7/internal/addrs/input_variable.go
@@ -0,0 +1,56 @@
+package addrs
+
+import (
+	"fmt"
+)
+
+// InputVariable is the address of an input variable.
+type InputVariable struct {
+	referenceable
+	Name string
+}
+
+func (v InputVariable) String() string {
+	return "var." + v.Name
+}
+
+func (v InputVariable) UniqueKey() UniqueKey {
+	return v // An InputVariable is its own UniqueKey
+}
+
+func (v InputVariable) uniqueKeySigil() {}
+
+// Absolute converts the receiver into an absolute address within the given
+// module instance.
+func (v InputVariable) Absolute(m ModuleInstance) AbsInputVariableInstance {
+	return AbsInputVariableInstance{
+		Module:   m,
+		Variable: v,
+	}
+}
+
+// AbsInputVariableInstance is the address of an input variable within a
+// particular module instance.
+type AbsInputVariableInstance struct {
+	Module   ModuleInstance
+	Variable InputVariable
+}
+
+// InputVariable returns the absolute address of the input variable of the
+// given name inside the receiving module instance.
+func (m ModuleInstance) InputVariable(name string) AbsInputVariableInstance {
+	return AbsInputVariableInstance{
+		Module: m,
+		Variable: InputVariable{
+			Name: name,
+		},
+	}
+}
+
+func (v AbsInputVariableInstance) String() string {
+	if len(v.Module) == 0 {
+		return v.Variable.String()
+	}
+
+	return fmt.Sprintf("%s.%s", v.Module.String(), v.Variable.String())
+}
diff --git a/v1.4.7/internal/addrs/instance_key.go b/v1.4.7/internal/addrs/instance_key.go
new file mode 100644
index 0000000..2d46bfc
--- /dev/null
+++ b/v1.4.7/internal/addrs/instance_key.go
@@ -0,0 +1,191 @@
+package addrs
+
+import (
+	"fmt"
+	"strings"
+	"unicode"
+
+	"github.com/zclconf/go-cty/cty"
+	"github.com/zclconf/go-cty/cty/gocty"
+)
+
+// InstanceKey represents the key of an instance within an object that
+// contains multiple instances due to using "count" or "for_each" arguments
+// in configuration.
+//
+// IntKey and StringKey are the two implementations of this type. No other
+// implementations are allowed. The single instance of an object that _isn't_
+// using "count" or "for_each" is represented by NoKey, which is a nil
+// InstanceKey.
+type InstanceKey interface {
+	instanceKeySigil()
+	String() string
+
+	// Value returns the cty.Value of the appropriate type for the InstanceKey
+	// value.
+	Value() cty.Value
+}
+
+// ParseInstanceKey returns the instance key corresponding to the given value,
+// which must be known and non-null.
+//
+// If an unknown or null value is provided then this function will panic. This
+// function is intended to deal with the values that would naturally be found
+// in a hcl.TraverseIndex, which (when parsed from source, at least) can never
+// contain unknown or null values.
+func ParseInstanceKey(key cty.Value) (InstanceKey, error) {
+	switch key.Type() {
+	case cty.String:
+		return StringKey(key.AsString()), nil
+	case cty.Number:
+		var idx int
+		err := gocty.FromCtyValue(key, &idx)
+		return IntKey(idx), err
+	default:
+		return NoKey, fmt.Errorf("either a string or an integer is required")
+	}
+}
+
+// NoKey represents the absence of an InstanceKey, for the single instance
+// of a configuration object that does not use "count" or "for_each" at all.
+var NoKey InstanceKey
+
+// IntKey is the InstanceKey representation for integer indices, as
+// used when the "count" argument is specified or if for_each is used with
+// a sequence type.
+type IntKey int
+
+func (k IntKey) instanceKeySigil() {
+}
+
+func (k IntKey) String() string {
+	return fmt.Sprintf("[%d]", int(k))
+}
+
+func (k IntKey) Value() cty.Value {
+	return cty.NumberIntVal(int64(k))
+}
+
+// StringKey is the InstanceKey representation for string indices, as
+// used when the "for_each" argument is specified with a map or object type.
+type StringKey string
+
+func (k StringKey) instanceKeySigil() {
+}
+
+func (k StringKey) String() string {
+	// We use HCL's quoting syntax here so that we can in principle parse
+	// an address constructed by this package as if it were an HCL
+	// traversal, even if the string contains HCL's own metacharacters.
+	return fmt.Sprintf("[%s]", toHCLQuotedString(string(k)))
+}
+
+func (k StringKey) Value() cty.Value {
+	return cty.StringVal(string(k))
+}
+
+// InstanceKeyLess returns true if the first given instance key i should sort
+// before the second key j, and false otherwise.
+func InstanceKeyLess(i, j InstanceKey) bool {
+	iTy := instanceKeyType(i)
+	jTy := instanceKeyType(j)
+
+	switch {
+	case i == j:
+		return false
+	case i == NoKey:
+		return true
+	case j == NoKey:
+		return false
+	case iTy != jTy:
+		// The ordering here is arbitrary except that we want NoKeyType
+		// to sort before the others, so we'll just use the enum values
+		// of InstanceKeyType here (where NoKey is zero, sorting before
+		// any other).
+		return uint32(iTy) < uint32(jTy)
+	case iTy == IntKeyType:
+		return int(i.(IntKey)) < int(j.(IntKey))
+	case iTy == StringKeyType:
+		return string(i.(StringKey)) < string(j.(StringKey))
+	default:
+		// Shouldn't be possible to get down here in practice, since the
+		// above is exhaustive.
+		return false
+	}
+}
+
+func instanceKeyType(k InstanceKey) InstanceKeyType {
+	if _, ok := k.(StringKey); ok {
+		return StringKeyType
+	}
+	if _, ok := k.(IntKey); ok {
+		return IntKeyType
+	}
+	return NoKeyType
+}
+
+// InstanceKeyType represents the different types of instance key that are
+// supported. Usually it is sufficient to simply type-assert an InstanceKey
+// value to either IntKey or StringKey, but this type and its values can be
+// used to represent the types themselves, rather than specific values
+// of those types.
+type InstanceKeyType rune
+
+const (
+	NoKeyType     InstanceKeyType = 0
+	IntKeyType    InstanceKeyType = 'I'
+	StringKeyType InstanceKeyType = 'S'
+)
+
+// toHCLQuotedString is a helper which formats the given string in a way that
+// HCL's expression parser would treat as a quoted string template.
+//
+// This includes:
+//   - Adding quote marks at the start and the end.
+//   - Using backslash escapes as needed for characters that cannot be represented directly.
+//   - Escaping anything that would be treated as a template interpolation or control sequence.
+func toHCLQuotedString(s string) string {
+	// This is an adaptation of a similar function inside the hclwrite package,
+	// inlined here because hclwrite's version generates HCL tokens but we
+	// only need normal strings.
+	if len(s) == 0 {
+		return `""`
+	}
+	var buf strings.Builder
+	buf.WriteByte('"')
+	for i, r := range s {
+		switch r {
+		case '\n':
+			buf.WriteString(`\n`)
+		case '\r':
+			buf.WriteString(`\r`)
+		case '\t':
+			buf.WriteString(`\t`)
+		case '"':
+			buf.WriteString(`\"`)
+		case '\\':
+			buf.WriteString(`\\`)
+		case '$', '%':
+			buf.WriteRune(r)
+			remain := s[i+1:]
+			if len(remain) > 0 && remain[0] == '{' {
+				// Double up our template introducer symbol to escape it.
+				buf.WriteRune(r)
+			}
+		default:
+			if !unicode.IsPrint(r) {
+				var fmted string
+				if r < 65536 {
+					fmted = fmt.Sprintf("\\u%04x", r)
+				} else {
+					fmted = fmt.Sprintf("\\U%08x", r)
+				}
+				buf.WriteString(fmted)
+			} else {
+				buf.WriteRune(r)
+			}
+		}
+	}
+	buf.WriteByte('"')
+	return buf.String()
+}
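
A hedged sketch of how instance keys render, including the template-escaping behaviour of `toHCLQuotedString`. It assumes it sits in the `addrs` package (where `fmt` and `cty` are already imported); the key values are invented.

```go
// Sketch only: key values are hypothetical.
func demoInstanceKeys() {
	fmt.Println(IntKey(3).String())              // [3]
	fmt.Println(StringKey("eu-west-1").String()) // ["eu-west-1"]
	fmt.Println(StringKey("${oops}").String())   // ["$${oops}"]; interpolation escaped

	k, err := ParseInstanceKey(cty.NumberIntVal(7))
	if err != nil {
		panic(err)
	}
	fmt.Println(k.String()) // [7]
}
```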
diff --git a/v1.4.7/internal/addrs/instance_key_test.go b/v1.4.7/internal/addrs/instance_key_test.go
new file mode 100644
index 0000000..0d12888
--- /dev/null
+++ b/v1.4.7/internal/addrs/instance_key_test.go
@@ -0,0 +1,75 @@
+package addrs
+
+import (
+	"fmt"
+	"testing"
+)
+
+func TestInstanceKeyString(t *testing.T) {
+	tests := []struct {
+		Key  InstanceKey
+		Want string
+	}{
+		{
+			IntKey(0),
+			`[0]`,
+		},
+		{
+			IntKey(5),
+			`[5]`,
+		},
+		{
+			StringKey(""),
+			`[""]`,
+		},
+		{
+			StringKey("hi"),
+			`["hi"]`,
+		},
+		{
+			StringKey("0"),
+			`["0"]`, // intentionally distinct from IntKey(0)
+		},
+		{
+			// Quotes must be escaped
+			StringKey(`"`),
+			`["\""]`,
+		},
+		{
+			// Escape sequences must themselves be escaped
+			StringKey(`\r\n`),
+			`["\\r\\n"]`,
+		},
+		{
+			// Template interpolation sequences "${" must be escaped.
+			StringKey(`${hello}`),
+			`["$${hello}"]`,
+		},
+		{
+			// Template control sequences "%{" must be escaped.
+			StringKey(`%{ for something in something }%{ endfor }`),
+			`["%%{ for something in something }%%{ endfor }"]`,
+		},
+		{
+			// Dollar signs that aren't followed by { are not interpolation sequences
+			StringKey(`$hello`),
+			`["$hello"]`,
+		},
+		{
+			// Percent signs that aren't followed by { are not control sequences
+			StringKey(`%hello`),
+			`["%hello"]`,
+		},
+	}
+
+	for _, test := range tests {
+		testName := fmt.Sprintf("%#v", test.Key)
+		t.Run(testName, func(t *testing.T) {
+			got := test.Key.String()
+			want := test.Want
+			if got != want {
+				t.Errorf("wrong result\nreceiver: %s\ngot:      %s\nwant:     %s", testName, got, want)
+			}
+		})
+	}
+}
diff --git a/v1.4.7/internal/addrs/local_value.go b/v1.4.7/internal/addrs/local_value.go
new file mode 100644
index 0000000..6017650
--- /dev/null
+++ b/v1.4.7/internal/addrs/local_value.go
@@ -0,0 +1,54 @@
+package addrs
+
+import (
+	"fmt"
+)
+
+// LocalValue is the address of a local value.
+type LocalValue struct {
+	referenceable
+	Name string
+}
+
+func (v LocalValue) String() string {
+	return "local." + v.Name
+}
+
+func (v LocalValue) UniqueKey() UniqueKey {
+	return v // A LocalValue is its own UniqueKey
+}
+
+func (v LocalValue) uniqueKeySigil() {}
+
+// Absolute converts the receiver into an absolute address within the given
+// module instance.
+func (v LocalValue) Absolute(m ModuleInstance) AbsLocalValue {
+	return AbsLocalValue{
+		Module:     m,
+		LocalValue: v,
+	}
+}
+
+// AbsLocalValue is the absolute address of a local value within a module instance.
+type AbsLocalValue struct {
+	Module     ModuleInstance
+	LocalValue LocalValue
+}
+
+// LocalValue returns the absolute address of a local value of the given
+// name within the receiving module instance.
+func (m ModuleInstance) LocalValue(name string) AbsLocalValue {
+	return AbsLocalValue{
+		Module: m,
+		LocalValue: LocalValue{
+			Name: name,
+		},
+	}
+}
+
+func (v AbsLocalValue) String() string {
+	if len(v.Module) == 0 {
+		return v.LocalValue.String()
+	}
+	return fmt.Sprintf("%s.%s", v.Module.String(), v.LocalValue.String())
+}
diff --git a/v1.4.7/internal/addrs/map.go b/v1.4.7/internal/addrs/map.go
new file mode 100644
index 0000000..87b1aae
--- /dev/null
+++ b/v1.4.7/internal/addrs/map.go
@@ -0,0 +1,128 @@
+package addrs
+
+// Map represents a mapping whose keys are address types that implement
+// UniqueKeyer.
+//
+// Since not all address types are comparable in the Go language sense, this
+// type cannot work with the typical Go map access syntax, and so instead has
+// a method-based syntax. Use this type only for situations where the key
+// type isn't guaranteed to always be a valid key for a standard Go map.
+type Map[K UniqueKeyer, V any] struct {
+	// Elems is the internal data structure of the map.
+	//
+	// This is exported to allow for comparisons during tests and other similar
+	// careful read operations, but callers MUST NOT modify this map directly.
+	// Use only the methods of Map to modify the contents of this structure,
+	// to ensure that it remains correct and consistent.
+	Elems map[UniqueKey]MapElem[K, V]
+}
+
+type MapElem[K UniqueKeyer, V any] struct {
+	Key   K
+	Value V
+}
+
+func MakeMap[K UniqueKeyer, V any](initialElems ...MapElem[K, V]) Map[K, V] {
+	inner := make(map[UniqueKey]MapElem[K, V], len(initialElems))
+	ret := Map[K, V]{inner}
+	for _, elem := range initialElems {
+		ret.Put(elem.Key, elem.Value)
+	}
+	return ret
+}
+
+func MakeMapElem[K UniqueKeyer, V any](key K, value V) MapElem[K, V] {
+	return MapElem[K, V]{key, value}
+}
+
+// Put inserts a new element into the map, or replaces an existing element
+// which has an equivalent key.
+func (m Map[K, V]) Put(key K, value V) {
+	realKey := key.UniqueKey()
+	m.Elems[realKey] = MapElem[K, V]{key, value}
+}
+
+// PutElement is like Put but takes the key and value from the given MapElement
+// structure instead of as individual arguments.
+func (m Map[K, V]) PutElement(elem MapElem[K, V]) {
+	m.Put(elem.Key, elem.Value)
+}
+
+// Remove deletes the element with the given key from the map, or does nothing
+// if there is no such element.
+func (m Map[K, V]) Remove(key K) {
+	realKey := key.UniqueKey()
+	delete(m.Elems, realKey)
+}
+
+// Get returns the value of the element with the given key, or the zero value
+// of V if there is no such element.
+func (m Map[K, V]) Get(key K) V {
+	realKey := key.UniqueKey()
+	return m.Elems[realKey].Value
+}
+
+// GetOk is like Get but additionally returns a flag for whether there was an
+// element with the given key present in the map.
+func (m Map[K, V]) GetOk(key K) (V, bool) {
+	realKey := key.UniqueKey()
+	elem, ok := m.Elems[realKey]
+	return elem.Value, ok
+}
+
+// Has returns true if and only if there is an element in the map which has the
+// given key.
+func (m Map[K, V]) Has(key K) bool {
+	realKey := key.UniqueKey()
+	_, ok := m.Elems[realKey]
+	return ok
+}
+
+// Len returns the number of elements in the map.
+func (m Map[K, V]) Len() int {
+	return len(m.Elems)
+}
+
+// Elements returns a slice containing a snapshot of the current elements of
+// the map, in an unpredictable order.
+func (m Map[K, V]) Elements() []MapElem[K, V] {
+	if len(m.Elems) == 0 {
+		return nil
+	}
+	ret := make([]MapElem[K, V], 0, len(m.Elems))
+	for _, elem := range m.Elems {
+		ret = append(ret, elem)
+	}
+	return ret
+}
+
+// Keys returns a Set[K] containing a snapshot of the current keys of elements
+// of the map.
+func (m Map[K, V]) Keys() Set[K] {
+	if len(m.Elems) == 0 {
+		return nil
+	}
+	ret := make(Set[K], len(m.Elems))
+
+	// We mess with the internals of Set here, rather than going through its
+	// public interface, because that means we can avoid re-calling UniqueKey
+	// on all of the elements when we know that our own Put method would have
+	// already done the same thing.
+	for realKey, elem := range m.Elems {
+		ret[realKey] = elem.Key
+	}
+	return ret
+}
+
+// Values returns a slice containing a snapshot of the current values of
+// elements of the map, in an unpredictable order.
+func (m Map[K, V]) Values() []V {
+	if len(m.Elems) == 0 {
+		return nil
+	}
+	ret := make([]V, 0, len(m.Elems))
+	for _, elem := range m.Elems {
+		ret = append(ret, elem.Value)
+	}
+	return ret
+}
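
The test file that follows exercises `Map` thoroughly; as a quick, hedged taster, this is roughly how the method-based access reads in practice (assumed inside the `addrs` package; the stored values are invented).

```go
// Sketch only: the stored values are hypothetical.
func demoAddrsMap() {
	m := MakeMap(
		MakeMapElem[Referenceable](InputVariable{Name: "name"}, 1),
	)
	m.Put(LocalValue{Name: "greeting"}, 2)

	if v, ok := m.GetOk(LocalValue{Name: "greeting"}); ok {
		fmt.Println(v) // 2
	}
	fmt.Println(m.Has(CountAttr{Name: "index"})) // false
	fmt.Println(m.Len())                         // 2
}
```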
diff --git a/v1.4.7/internal/addrs/map_test.go b/v1.4.7/internal/addrs/map_test.go
new file mode 100644
index 0000000..e5a84f0
--- /dev/null
+++ b/v1.4.7/internal/addrs/map_test.go
@@ -0,0 +1,83 @@
+package addrs
+
+import (
+	"testing"
+)
+
+func TestMap(t *testing.T) {
+	variableName := InputVariable{Name: "name"}
+	localHello := LocalValue{Name: "hello"}
+	pathModule := PathAttr{Name: "module"}
+	moduleBeep := ModuleCall{Name: "beep"}
+	eachKey := ForEachAttr{Name: "key"} // intentionally not in the map
+
+	m := MakeMap(
+		MakeMapElem[Referenceable](variableName, "Aisling"),
+	)
+
+	m.Put(localHello, "hello")
+	m.Put(pathModule, "boop")
+	m.Put(moduleBeep, "unrealistic")
+
+	keySet := m.Keys()
+	if want := variableName; !m.Has(want) {
+		t.Errorf("map does not include %s", want)
+	}
+	if want := variableName; !keySet.Has(want) {
+		t.Errorf("key set does not include %s", want)
+	}
+	if want := localHello; !m.Has(want) {
+		t.Errorf("map does not include %s", want)
+	}
+	if want := localHello; !keySet.Has(want) {
+		t.Errorf("key set does not include %s", want)
+	}
+	if want := pathModule; !keySet.Has(want) {
+		t.Errorf("key set does not include %s", want)
+	}
+	if want := moduleBeep; !keySet.Has(want) {
+		t.Errorf("key set does not include %s", want)
+	}
+	if doNotWant := eachKey; m.Has(doNotWant) {
+		t.Errorf("map includes rogue element %s", doNotWant)
+			// in commands.go.
+	if doNotWant := eachKey; keySet.Has(doNotWant) {
+		t.Errorf("key set includes rogue element %s", doNotWant)
+	}
+
+	if got, want := m.Get(variableName), "Aisling"; got != want {
+		t.Errorf("unexpected value %q for %s; want %q", got, variableName, want)
+	}
+	if got, want := m.Get(localHello), "hello"; got != want {
+		t.Errorf("unexpected value %q for %s; want %q", got, localHello, want)
+	}
+	if got, want := m.Get(pathModule), "boop"; got != want {
+		t.Errorf("unexpected value %q for %s; want %q", got, pathModule, want)
+	}
+	if got, want := m.Get(moduleBeep), "unrealistic"; got != want {
+		t.Errorf("unexpected value %q for %s; want %q", got, moduleBeep, want)
+	}
+	if got, want := m.Get(eachKey), ""; got != want {
+		// eachKey isn't in the map, so Get returns the zero value of string
+		t.Errorf("unexpected value %q for %s; want %q", got, eachKey, want)
+	}
+
+	if v, ok := m.GetOk(variableName); v != "Aisling" || !ok {
+		t.Errorf("GetOk for %q returned incorrect result (%q, %#v)", variableName, v, ok)
+	}
+	if v, ok := m.GetOk(eachKey); v != "" || ok {
+		t.Errorf("GetOk for %q returned incorrect result (%q, %#v)", eachKey, v, ok)
+	}
+
+	m.Remove(moduleBeep)
+	if doNotWant := moduleBeep; m.Has(doNotWant) {
+		t.Errorf("map still includes %s after removing it", doNotWant)
+	}
+	if want := moduleBeep; !keySet.Has(want) {
+		t.Errorf("key set no longer includes %s after removing it from the map; key set is supposed to be a snapshot at the time of call", want)
+	}
+	keySet = m.Keys()
+	if doNotWant := moduleBeep; keySet.Has(doNotWant) {
+		t.Errorf("key set still includes %s after a second call after removing it from the map", doNotWant)
+	}
+}
diff --git a/v1.4.7/internal/addrs/module.go b/v1.4.7/internal/addrs/module.go
new file mode 100644
index 0000000..83a5cfd
--- /dev/null
+++ b/v1.4.7/internal/addrs/module.go
@@ -0,0 +1,167 @@
+package addrs
+
+import (
+	"strings"
+)
+
+// Module is an address for a module call within configuration. This is
+// the static counterpart of ModuleInstance, representing a traversal through
+// the static module call tree in configuration and does not take into account
+// the potentially-multiple instances of a module that might be created by
+// "count" and "for_each" arguments within those calls.
+//
+// This type should be used only in very specialized cases when working with
+// the static module call tree. Type ModuleInstance is appropriate in more cases.
+//
+// Although Module is a slice, it should be treated as immutable after creation.
+type Module []string
+
+// RootModule is the module address representing the root of the static module
+// call tree, which is also the zero value of Module.
+//
+// Note that this is not the root of the dynamic module tree, which is instead
+// represented by RootModuleInstance.
+var RootModule Module
+
+// IsRoot returns true if the receiver is the address of the root module,
+// or false otherwise.
+func (m Module) IsRoot() bool {
+	return len(m) == 0
+}
+
+func (m Module) String() string {
+	if len(m) == 0 {
+		return ""
+	}
+	// Calculate necessary space.
+	l := 0
+	for _, step := range m {
+		l += len(step)
+	}
+	buf := strings.Builder{}
+	// 8 is len(".module.") which separates entries.
+	buf.Grow(l + len(m)*8)
+	sep := ""
+	for _, step := range m {
+		buf.WriteString(sep)
+		buf.WriteString("module.")
+		buf.WriteString(step)
+		sep = "."
+	}
+	return buf.String()
+}
+
+func (m Module) Equal(other Module) bool {
+	if len(m) != len(other) {
+		return false
+	}
+	for i := range m {
+		if m[i] != other[i] {
+			return false
+		}
+	}
+	return true
+}
+
+func (m Module) targetableSigil() {
+	// Module is targetable
+}
+
+// TargetContains implements Targetable for Module by returning true if the given other
+// address either matches the receiver, is a sub-module-instance of the
+// receiver, or is a targetable absolute address within a module that
+// is contained within the receiver.
+func (m Module) TargetContains(other Targetable) bool {
+	switch to := other.(type) {
+
+	case Module:
+		if len(to) < len(m) {
+			// Can't be contained if the path is shorter
+			return false
+		}
+		// Other is contained if its steps match for the length of our own path.
+		for i, ourStep := range m {
+			otherStep := to[i]
+			if ourStep != otherStep {
+				return false
+			}
+		}
+		// If we fall out here then the prefix matched, so it's contained.
+		return true
+
+	case ModuleInstance:
+		return m.TargetContains(to.Module())
+
+	case ConfigResource:
+		return m.TargetContains(to.Module)
+
+	case AbsResource:
+		return m.TargetContains(to.Module)
+
+	case AbsResourceInstance:
+		return m.TargetContains(to.Module)
+
+	default:
+		return false
+	}
+}
+
+func (m Module) AddrType() TargetableAddrType {
+	return ModuleAddrType
+}
+
+// Child returns the address of a child call in the receiver, identified by the
+// given name.
+func (m Module) Child(name string) Module {
+	ret := make(Module, 0, len(m)+1)
+	ret = append(ret, m...)
+	return append(ret, name)
+}
+
+// Parent returns the address of the parent module of the receiver, or the
+// receiver itself if there is no parent (if it's the root module address).
+func (m Module) Parent() Module {
+	if len(m) == 0 {
+		return m
+	}
+	return m[:len(m)-1]
+}
+
+// Call returns the module call address that corresponds to the given module
+// instance, along with the address of the module that contains it.
+//
+// There is no call for the root module, so this method will panic if called
+// on the root module address.
+//
+// In practice, this just turns the last element of the receiver into a
+// ModuleCall and then returns a slice of the receiver that excludes that
+// last part. This is just a convenience for situations where a call address
+// is required, such as when dealing with *Reference and Referenceable values.
+func (m Module) Call() (Module, ModuleCall) {
+	if len(m) == 0 {
+		panic("cannot produce ModuleCall for root module")
+	}
+
+	caller, callName := m[:len(m)-1], m[len(m)-1]
+	return caller, ModuleCall{
+		Name: callName,
+	}
+}
+
+// Ancestors returns a slice containing the receiver and all of its ancestor
+// modules, all the way up to (and including) the root module.  The result is
+// ordered by depth, with the root module always first.
+//
+// Since the result always includes the root module, a caller may choose to
+// ignore it by slicing the result with [1:].
+func (m Module) Ancestors() []Module {
+	ret := make([]Module, 0, len(m)+1)
+	for i := 0; i <= len(m); i++ {
+		ret = append(ret, m[:i])
+	}
+	return ret
+}
+
+func (m Module) configMoveableSigil() {
+	// ModuleInstance is moveable
+}
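
A hedged sketch of static Module addresses, assumed to run inside the `addrs` package; the module names are invented. It shows Child, Parent, Ancestors, and containment.

```go
// Sketch only: module names are hypothetical.
func demoStaticModuleAddrs() {
	m := RootModule.Child("network").Child("subnets")
	fmt.Println(m.String())                   // module.network.module.subnets
	fmt.Println(m.Parent().String())          // module.network
	fmt.Println(len(m.Ancestors()))           // 3 (root, network, network.subnets)
	fmt.Println(RootModule.TargetContains(m)) // true
	fmt.Println(m.TargetContains(RootModule)) // false: a shorter path cannot be contained
}
```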
diff --git a/v1.4.7/internal/addrs/module_call.go b/v1.4.7/internal/addrs/module_call.go
new file mode 100644
index 0000000..709b1e3
--- /dev/null
+++ b/v1.4.7/internal/addrs/module_call.go
@@ -0,0 +1,192 @@
+package addrs
+
+import (
+	"fmt"
+)
+
+// ModuleCall is the address of a call from the current module to a child
+// module.
+type ModuleCall struct {
+	referenceable
+	Name string
+}
+
+func (c ModuleCall) String() string {
+	return "module." + c.Name
+}
+
+func (c ModuleCall) UniqueKey() UniqueKey {
+	return c // A ModuleCall is its own UniqueKey
+}
+
+func (c ModuleCall) uniqueKeySigil() {}
+
+// Instance returns the address of an instance of the receiver identified by
+// the given key.
+func (c ModuleCall) Instance(key InstanceKey) ModuleCallInstance {
+	return ModuleCallInstance{
+		Call: c,
+		Key:  key,
+	}
+}
+
+func (c ModuleCall) Absolute(moduleAddr ModuleInstance) AbsModuleCall {
+	return AbsModuleCall{
+		Module: moduleAddr,
+		Call:   c,
+	}
+}
+
+func (c ModuleCall) Equal(other ModuleCall) bool {
+	return c.Name == other.Name
+}
+
+// AbsModuleCall is the address of a "module" block relative to the root
+// of the configuration.
+//
+// This is similar to ModuleInstance alone, but specifically represents
+// the module block itself rather than any one of the instances that
+// module block declares.
+type AbsModuleCall struct {
+	Module ModuleInstance
+	Call   ModuleCall
+}
+
+func (c AbsModuleCall) absMoveableSigil() {
+	// AbsModuleCall is "moveable".
+}
+
+func (c AbsModuleCall) String() string {
+	if len(c.Module) == 0 {
+		return "module." + c.Call.Name
+
+	}
+	return fmt.Sprintf("%s.module.%s", c.Module, c.Call.Name)
+}
+
+func (c AbsModuleCall) Instance(key InstanceKey) ModuleInstance {
+	ret := make(ModuleInstance, len(c.Module), len(c.Module)+1)
+	copy(ret, c.Module)
+	ret = append(ret, ModuleInstanceStep{
+		Name:        c.Call.Name,
+		InstanceKey: key,
+	})
+	return ret
+}
+
+func (c AbsModuleCall) Equal(other AbsModuleCall) bool {
+	return c.Module.Equal(other.Module) && c.Call.Equal(other.Call)
+}
+
+type absModuleCallInstanceKey string
+
+func (c AbsModuleCall) UniqueKey() UniqueKey {
+	return absModuleCallInstanceKey(c.String())
+}
+
+func (mk absModuleCallInstanceKey) uniqueKeySigil() {}
+
+// ModuleCallInstance is the address of one instance of a module created from
+// a module call, which might create multiple instances using "count" or
+// "for_each" arguments.
+//
+// There is no "Abs" version of ModuleCallInstance because an absolute module
+// path is represented by ModuleInstance.
+type ModuleCallInstance struct {
+	referenceable
+	Call ModuleCall
+	Key  InstanceKey
+}
+
+func (c ModuleCallInstance) String() string {
+	if c.Key == NoKey {
+		return c.Call.String()
+	}
+	return fmt.Sprintf("module.%s%s", c.Call.Name, c.Key)
+}
+
+func (c ModuleCallInstance) UniqueKey() UniqueKey {
+	return c // A ModuleCallInstance is its own UniqueKey
+}
+
+func (c ModuleCallInstance) uniqueKeySigil() {}
+
+func (c ModuleCallInstance) Absolute(moduleAddr ModuleInstance) ModuleInstance {
+	ret := make(ModuleInstance, len(moduleAddr), len(moduleAddr)+1)
+	copy(ret, moduleAddr)
+	ret = append(ret, ModuleInstanceStep{
+		Name:        c.Call.Name,
+		InstanceKey: c.Key,
+	})
+	return ret
+}
+
+// ModuleInstance returns the address of the module instance that corresponds
+// to the receiving call instance when resolved in the given calling module.
+// In other words, it returns the child module instance that the receiving
+// call instance creates.
+func (c ModuleCallInstance) ModuleInstance(caller ModuleInstance) ModuleInstance {
+	return caller.Child(c.Call.Name, c.Key)
+}
+
+// Output returns the absolute address of an output of the receiver identified by its
+// name.
+func (c ModuleCallInstance) Output(name string) ModuleCallInstanceOutput {
+	return ModuleCallInstanceOutput{
+		Call: c,
+		Name: name,
+	}
+}
+
+// ModuleCallOutput is the address of a named output and its associated
+// ModuleCall, which may expand into multiple module instances.
+type ModuleCallOutput struct {
+	referenceable
+	Call ModuleCall
+	Name string
+}
+
+func (m ModuleCallOutput) String() string {
+	return fmt.Sprintf("%s.%s", m.Call.String(), m.Name)
+}
+
+func (m ModuleCallOutput) UniqueKey() UniqueKey {
+	return m // A ModuleCallOutput is its own UniqueKey
+}
+
+func (m ModuleCallOutput) uniqueKeySigil() {}
+
+// ModuleCallInstanceOutput is the address of a particular named output produced by
+// an instance of a module call.
+type ModuleCallInstanceOutput struct {
+	referenceable
+	Call ModuleCallInstance
+	Name string
+}
+
+// ModuleCallOutput returns the referenceable ModuleCallOutput for this
+// particular instance.
+func (co ModuleCallInstanceOutput) ModuleCallOutput() ModuleCallOutput {
+	return ModuleCallOutput{
+		Call: co.Call.Call,
+		Name: co.Name,
+	}
+}
+
+func (co ModuleCallInstanceOutput) String() string {
+	return fmt.Sprintf("%s.%s", co.Call.String(), co.Name)
+}
+
+func (co ModuleCallInstanceOutput) UniqueKey() UniqueKey {
+	return co // A ModuleCallInstanceOutput is its own UniqueKey
+}
+
+func (co ModuleCallInstanceOutput) uniqueKeySigil() {}
+
+// AbsOutputValue returns the absolute output value address that corresponds
+// to the receiving module call output address, once resolved in the given
+// calling module.
+func (co ModuleCallInstanceOutput) AbsOutputValue(caller ModuleInstance) AbsOutputValue {
+	moduleAddr := co.Call.ModuleInstance(caller)
+	return moduleAddr.OutputValue(co.Name)
+}
diff --git a/v1.4.7/internal/addrs/module_instance.go b/v1.4.7/internal/addrs/module_instance.go
new file mode 100644
index 0000000..f197dc1
--- /dev/null
+++ b/v1.4.7/internal/addrs/module_instance.go
@@ -0,0 +1,543 @@
+package addrs
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/hashicorp/hcl/v2"
+	"github.com/hashicorp/hcl/v2/hclsyntax"
+	"github.com/zclconf/go-cty/cty"
+	"github.com/zclconf/go-cty/cty/gocty"
+
+	"github.com/hashicorp/terraform/internal/tfdiags"
+)
+
+// ModuleInstance is an address for a particular module instance within the
+// dynamic module tree. This is an extension of the static traversals
+// represented by type Module that deals with the possibility of a single
+// module call producing multiple instances via the "count" and "for_each"
+// arguments.
+//
+// Although ModuleInstance is a slice, it should be treated as immutable after
+// creation.
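+//
+// For illustration, the user-facing address module.foo["a"].module.bar[1]
+// corresponds to the value:
+//
+//	ModuleInstance{
+//		{Name: "foo", InstanceKey: StringKey("a")},
+//		{Name: "bar", InstanceKey: IntKey(1)},
+//	}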
+type ModuleInstance []ModuleInstanceStep
+
+var (
+	_ Targetable = ModuleInstance(nil)
+)
+
+func ParseModuleInstance(traversal hcl.Traversal) (ModuleInstance, tfdiags.Diagnostics) {
+	mi, remain, diags := parseModuleInstancePrefix(traversal)
+	if len(remain) != 0 {
+		if len(remain) == len(traversal) {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid module instance address",
+				Detail:   "A module instance address must begin with \"module.\".",
+				Subject:  remain.SourceRange().Ptr(),
+			})
+		} else {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid module instance address",
+				Detail:   "The module instance address is followed by additional invalid content.",
+				Subject:  remain.SourceRange().Ptr(),
+			})
+		}
+	}
+	return mi, diags
+}
+
+// ParseModuleInstanceStr is a helper wrapper around ParseModuleInstance
+// that takes a string and parses it with the HCL native syntax traversal parser
+// before interpreting it.
+//
+// This should be used only in specialized situations since it will cause the
+// created references to not have any meaningful source location information.
+// If a reference string is coming from a source that should be identified in
+// error messages then the caller should instead parse it directly using a
+// suitable function from the HCL API and pass the traversal itself to
+// ParseModuleInstance.
+//
+// Error diagnostics are returned if either the parsing fails or the analysis
+// of the traversal fails. There is no way for the caller to distinguish the
+// two kinds of diagnostics programmatically. If error diagnostics are returned
+// then the returned address is invalid.
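+//
+// A minimal illustrative call, with the usual diagnostics check:
+//
+//	addr, diags := ParseModuleInstanceStr(`module.foo["a"].module.bar`)
+//	if diags.HasErrors() {
+//		// report diags and stop
+//	}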
+func ParseModuleInstanceStr(str string) (ModuleInstance, tfdiags.Diagnostics) {
+	var diags tfdiags.Diagnostics
+
+	traversal, parseDiags := hclsyntax.ParseTraversalAbs([]byte(str), "", hcl.Pos{Line: 1, Column: 1})
+	diags = diags.Append(parseDiags)
+	if parseDiags.HasErrors() {
+		return nil, diags
+	}
+
+	addr, addrDiags := ParseModuleInstance(traversal)
+	diags = diags.Append(addrDiags)
+	return addr, diags
+}
+
+func parseModuleInstancePrefix(traversal hcl.Traversal) (ModuleInstance, hcl.Traversal, tfdiags.Diagnostics) {
+	remain := traversal
+	var mi ModuleInstance
+	var diags tfdiags.Diagnostics
+
+LOOP:
+	for len(remain) > 0 {
+		var next string
+		switch tt := remain[0].(type) {
+		case hcl.TraverseRoot:
+			next = tt.Name
+		case hcl.TraverseAttr:
+			next = tt.Name
+		default:
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid address operator",
+				Detail:   "Module address prefix must be followed by dot and then a name.",
+				Subject:  remain[0].SourceRange().Ptr(),
+			})
+			break LOOP
+		}
+
+		if next != "module" {
+			break
+		}
+
+		kwRange := remain[0].SourceRange()
+		remain = remain[1:]
+		// If we have the prefix "module" then we should be followed by a
+		// module call name, as an attribute, and then optionally an index step
+		// giving the instance key.
+		if len(remain) == 0 {
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid address operator",
+				Detail:   "Prefix \"module.\" must be followed by a module name.",
+				Subject:  &kwRange,
+			})
+			break
+		}
+
+		var moduleName string
+		switch tt := remain[0].(type) {
+		case hcl.TraverseAttr:
+			moduleName = tt.Name
+		default:
+			diags = diags.Append(&hcl.Diagnostic{
+				Severity: hcl.DiagError,
+				Summary:  "Invalid address operator",
+				Detail:   "Prefix \"module.\" must be followed by a module name.",
+				Subject:  remain[0].SourceRange().Ptr(),
+			})
+			break LOOP
+		}
+		remain = remain[1:]
+		step := ModuleInstanceStep{
+			Name: moduleName,
+		}
+
+		if len(remain) > 0 {
+			if idx, ok := remain[0].(hcl.TraverseIndex); ok {
+				remain = remain[1:]
+
+				switch idx.Key.Type() {
+				case cty.String:
+					step.InstanceKey = StringKey(idx.Key.AsString())
+				case cty.Number:
+					var idxInt int
+					err := gocty.FromCtyValue(idx.Key, &idxInt)
+					if err == nil {
+						step.InstanceKey = IntKey(idxInt)
+					} else {
+						diags = diags.Append(&hcl.Diagnostic{
+							Severity: hcl.DiagError,
+							Summary:  "Invalid address operator",
+							Detail:   fmt.Sprintf("Invalid module index: %s.", err),
+							Subject:  idx.SourceRange().Ptr(),
+						})
+					}
+				default:
+					// Should never happen, because no other types are allowed in traversal indices.
+					diags = diags.Append(&hcl.Diagnostic{
+						Severity: hcl.DiagError,
+						Summary:  "Invalid address operator",
+						Detail:   "Invalid module key: must be either a string or an integer.",
+						Subject:  idx.SourceRange().Ptr(),
+					})
+				}
+			}
+		}
+
+		mi = append(mi, step)
+	}
+
+	var retRemain hcl.Traversal
+	if len(remain) > 0 {
+		retRemain = make(hcl.Traversal, len(remain))
+		copy(retRemain, remain)
+		// The first element here might be either a TraverseRoot or a
+		// TraverseAttr, depending on whether we had a module address on the
+		// front. To make life easier for callers, we'll normalize to always
+		// start with a TraverseRoot.
+		if tt, ok := retRemain[0].(hcl.TraverseAttr); ok {
+			retRemain[0] = hcl.TraverseRoot{
+				Name:     tt.Name,
+				SrcRange: tt.SrcRange,
+			}
+		}
+	}
+
+	return mi, retRemain, diags
+}
+
+// UnkeyedInstanceShim is a shim method for converting a Module address to the
+// equivalent ModuleInstance address that assumes that no modules have
+// keyed instances.
+//
+// This is a temporary allowance for the fact that Terraform does not presently
+// support "count" and "for_each" on modules, and thus graph building code that
+// derives graph nodes from configuration must just assume unkeyed modules
+// in order to construct the graph. At a later time when "count" and "for_each"
+// support is added for modules, all callers of this method will need to be
+// reworked to allow for keyed module instances.
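+//
+// For example (illustrative names):
+//
+//	Module{"network", "subnets"}.UnkeyedInstanceShim().String()
+//	// "module.network.module.subnets"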
+func (m Module) UnkeyedInstanceShim() ModuleInstance {
+	path := make(ModuleInstance, len(m))
+	for i, name := range m {
+		path[i] = ModuleInstanceStep{Name: name}
+	}
+	return path
+}
+
+// ModuleInstanceStep is a single traversal step through the dynamic module
+// tree. It is used only as part of ModuleInstance.
+type ModuleInstanceStep struct {
+	Name        string
+	InstanceKey InstanceKey
+}
+
+// RootModuleInstance is the module instance address representing the root
+// module, which is also the zero value of ModuleInstance.
+var RootModuleInstance ModuleInstance
+
+// IsRoot returns true if the receiver is the address of the root module instance,
+// or false otherwise.
+func (m ModuleInstance) IsRoot() bool {
+	return len(m) == 0
+}
+
+// Child returns the address of a child module instance of the receiver,
+// identified by the given name and key.
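+//
+// For instance (illustrative):
+//
+//	RootModuleInstance.Child("vpc", IntKey(0)).String() // "module.vpc[0]"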
+func (m ModuleInstance) Child(name string, key InstanceKey) ModuleInstance {
+	ret := make(ModuleInstance, 0, len(m)+1)
+	ret = append(ret, m...)
+	return append(ret, ModuleInstanceStep{
+		Name:        name,
+		InstanceKey: key,
+	})
+}
+
+// ChildCall returns the address of a module call within the receiver,
+// identified by the given name.
+func (m ModuleInstance) ChildCall(name string) AbsModuleCall {
+	return AbsModuleCall{
+		Module: m,
+		Call:   ModuleCall{Name: name},
+	}
+}
+
+// Parent returns the address of the parent module instance of the receiver, or
+// the receiver itself if there is no parent (if it's the root module address).
+func (m ModuleInstance) Parent() ModuleInstance {
+	if len(m) == 0 {
+		return m
+	}
+	return m[:len(m)-1]
+}
+
+// String returns a string representation of the receiver, in the format used
+// within e.g. user-provided resource addresses.
+//
+// The address of the root module has the empty string as its representation.
+func (m ModuleInstance) String() string {
+	if len(m) == 0 {
+		return ""
+	}
+	// Calculate minimal necessary space (no instance keys).
+	l := 0
+	for _, step := range m {
+		l += len(step.Name)
+	}
+	buf := strings.Builder{}
+	// 8 is len(".module.") which separates entries.
+	buf.Grow(l + len(m)*8)
+	sep := ""
+	for _, step := range m {
+		buf.WriteString(sep)
+		buf.WriteString("module.")
+		buf.WriteString(step.Name)
+		if step.InstanceKey != NoKey {
+			buf.WriteString(step.InstanceKey.String())
+		}
+		sep = "."
+	}
+	return buf.String()
+}
+
+type moduleInstanceKey string
+
+func (m ModuleInstance) UniqueKey() UniqueKey {
+	return moduleInstanceKey(m.String())
+}
+
+func (mk moduleInstanceKey) uniqueKeySigil() {}
+
+// Equal returns true if the receiver and the given other value
+// contain the exact same parts.
+func (m ModuleInstance) Equal(o ModuleInstance) bool {
+	if len(m) != len(o) {
+		return false
+	}
+
+	for i := range m {
+		if m[i] != o[i] {
+			return false
+		}
+	}
+	return true
+}
+
+// Less returns true if the receiver should sort before the given other value
+// in a sorted list of addresses.
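+//
+// Illustratively:
+//
+//	module.zeta sorts before module.alpha.module.beta  (shorter path first)
+//	module.app[1] sorts before module.app[2]            (then name, then key)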
+func (m ModuleInstance) Less(o ModuleInstance) bool {
+	if len(m) != len(o) {
+		// Shorter path sorts first.
+		return len(m) < len(o)
+	}
+
+	for i := range m {
+		mS, oS := m[i], o[i]
+		switch {
+		case mS.Name != oS.Name:
+			return mS.Name < oS.Name
+		case mS.InstanceKey != oS.InstanceKey:
+			return InstanceKeyLess(mS.InstanceKey, oS.InstanceKey)
+		}
+	}
+
+	return false
+}
+
+// Ancestors returns a slice containing the receiver and all of its ancestor
+// module instances, all the way up to (and including) the root module.
+// The result is ordered by depth, with the root module always first.
+//
+// Since the result always includes the root module, a caller may choose to
+// ignore it by slicing the result with [1:].
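+//
+// For example (illustrative), for module.a.module.b["x"] the result is:
+//
+//	[<root>, module.a, module.a.module.b["x"]]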
+func (m ModuleInstance) Ancestors() []ModuleInstance {
+	ret := make([]ModuleInstance, 0, len(m)+1)
+	for i := 0; i <= len(m); i++ {
+		ret = append(ret, m[:i])
+	}
+	return ret
+}
+
+// IsAncestor returns true if the receiver is an ancestor of the given
+// other value.
+func (m ModuleInstance) IsAncestor(o ModuleInstance) bool {
+	// A longer or equal-sized path means the receiver cannot
+	// be an ancestor of the given module instance.
+	if len(m) >= len(o) {
+		return false
+	}
+
+	for i, ms := range m {
+		if ms.Name != o[i].Name {
+			return false
+		}
+		if ms.InstanceKey != NoKey && ms.InstanceKey != o[i].InstanceKey {
+			return false
+		}
+	}
+
+	return true
+}
+
+// Call returns the module call address that corresponds to the given module
+// instance, along with the address of the module instance that contains it.
+//
+// There is no call for the root module, so this method will panic if called
+// on the root module address.
+//
+// A single module call can produce potentially many module instances, so the
+// result discards any instance key that might be present on the last step
+// of the instance. To retain the instance key, use CallInstance instead.
+//
+// In practice, this just turns the last element of the receiver into a
+// ModuleCall and then returns a slice of the receiver that excludes that
+// last part. This is just a convenience for situations where a call address
+// is required, such as when dealing with *Reference and Referencable values.
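+//
+// Illustratively, module.a.module.b[1] yields the pair
+// (module.a, ModuleCall{Name: "b"}), discarding the [1] instance key.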
+func (m ModuleInstance) Call() (ModuleInstance, ModuleCall) {
+	if len(m) == 0 {
+		panic("cannot produce ModuleCall for root module")
+	}
+
+	inst, lastStep := m[:len(m)-1], m[len(m)-1]
+	return inst, ModuleCall{
+		Name: lastStep.Name,
+	}
+}
+
+// CallInstance returns the module call instance address that corresponds to
+// the given module instance, along with the address of the module instance
+// that contains it.
+//
+// There is no call for the root module, so this method will panic if called
+// on the root module address.
+//
+// In practice, this just turns the last element of the receiver into a
+// ModuleCallInstance and then returns a slice of the receiver that excludes
+// that last part. This is just a convenience for situations where a call
+// address is required, such as when dealing with *Reference and Referencable
+// values.
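+//
+// Illustratively, module.a.module.b[1] yields the pair
+// (module.a, ModuleCallInstance{Call: ModuleCall{Name: "b"}, Key: IntKey(1)}).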
+func (m ModuleInstance) CallInstance() (ModuleInstance, ModuleCallInstance) {
+	if len(m) == 0 {
+		panic("cannot produce ModuleCallInstance for root module")
+	}
+
+	inst, lastStep := m[:len(m)-1], m[len(m)-1]
+	return inst, ModuleCallInstance{
+		Call: ModuleCall{
+			Name: lastStep.Name,
+		},
+		Key: lastStep.InstanceKey,
+	}
+}
+
+// TargetContains implements Targetable by returning true if the given other
+// address either matches the receiver, is a sub-module-instance of the
+// receiver, or is a targetable absolute address within a module that
+// is contained within the receiver.
+func (m ModuleInstance) TargetContains(other Targetable) bool {
+	switch to := other.(type) {
+	case Module:
+		if len(to) < len(m) {
+			// Can't be contained if the path is shorter
+			return false
+		}
+		// Other is contained if its steps match for the length of our own path.
+		for i, ourStep := range m {
+			otherStep := to[i]
+
+			// We can't contain an entire module if we have a specific instance
+			// key. The case of NoKey is OK because this address is either
+			// meant to address an unexpanded module, or a single instance of
+			// that module, and both of those are covered in full by the
+			// Module address.
+			if ourStep.InstanceKey != NoKey {
+				return false
+			}
+
+			if ourStep.Name != otherStep {
+				return false
+			}
+		}
+		// If we fall out here then the prefix matched, so it's contained.
+		return true
+
+	case ModuleInstance:
+		if len(to) < len(m) {
+			return false
+		}
+		for i, ourStep := range m {
+			otherStep := to[i]
+
+			if ourStep.Name != otherStep.Name {
+				return false
+			}
+
+			// If this is our last step then note that, because all targets are
+			// parsed as instances, this may be a ModuleInstance intended to be
+			// used as a Module.
+			if i == len(m)-1 {
+				if ourStep.InstanceKey == NoKey {
+					// If the other step is a keyed instance then we contain that
+					// step; if it isn't then it's an exact match. Either way the
+					// answer is true.
+					return true
+				}
+			}
+
+			if ourStep.InstanceKey != otherStep.InstanceKey {
+				return false
+			}
+
+		}
+		return true
+
+	case ConfigResource:
+		return m.TargetContains(to.Module)
+
+	case AbsResource:
+		return m.TargetContains(to.Module)
+
+	case AbsResourceInstance:
+		return m.TargetContains(to.Module)
+
+	default:
+		return false
+	}
+}
+
+// Module returns the address of the module that this instance is an instance
+// of.
+func (m ModuleInstance) Module() Module {
+	if len(m) == 0 {
+		return nil
+	}
+	ret := make(Module, len(m))
+	for i, step := range m {
+		ret[i] = step.Name
+	}
+	return ret
+}
+
+func (m ModuleInstance) AddrType() TargetableAddrType {
+	return ModuleInstanceAddrType
+}
+
+func (m ModuleInstance) targetableSigil() {
+	// ModuleInstance is targetable
+}
+
+func (m ModuleInstance) absMoveableSigil() {
+	// ModuleInstance is moveable
+}
+
+// IsDeclaredByCall returns true if the receiver is an instance of the given
+// AbsModuleCall.
+func (m ModuleInstance) IsDeclaredByCall(other AbsModuleCall) bool {
+	// Compare len(m) to len(other.Module)+1 because the final module instance
+	// step in other is stored in the AbsModuleCall.Call.
+	if len(m) > len(other.Module)+1 || len(m) == 0 && len(other.Module) == 0 {
+		return false
+	}
+
+	// Verify that the other's ModuleInstance matches the receiver.
+	inst, lastStep := other.Module, other.Call
+	for i := range inst {
+		if inst[i] != m[i] {
+			return false
+		}
+	}
+
+	// Now compare the final step of the receiver with the other Call, where
+	// only the name needs to match.
+	return lastStep.Name == m[len(m)-1].Name
+}
+
+func (s ModuleInstanceStep) String() string {
+	if s.InstanceKey != NoKey {
+		return s.Name + s.InstanceKey.String()
+	}
+	return s.Name
+}
diff --git a/v1.4.7/internal/addrs/module_instance_test.go b/v1.4.7/internal/addrs/module_instance_test.go
new file mode 100644
index 0000000..393bcd5
--- /dev/null
+++ b/v1.4.7/internal/addrs/module_instance_test.go
@@ -0,0 +1,170 @@
+package addrs
+
+import (
+	"fmt"
+	"testing"
+)
+
+func TestModuleInstanceEqual_true(t *testing.T) {
+	addrs := []string{
+		"module.foo",
+		"module.foo.module.bar",
+		"module.foo[1].module.bar",
+		`module.foo["a"].module.bar["b"]`,
+		`module.foo["a"].module.bar.module.baz[3]`,
+	}
+	for _, m := range addrs {
+		t.Run(m, func(t *testing.T) {
+			addr, diags := ParseModuleInstanceStr(m)
+			if len(diags) > 0 {
+				t.Fatalf("unexpected diags: %s", diags.Err())
+			}
+			if !addr.Equal(addr) {
+				t.Fatalf("expected %#v to be equal to itself", addr)
+			}
+		})
+	}
+}
+
+func TestModuleInstanceEqual_false(t *testing.T) {
+	testCases := []struct {
+		left  string
+		right string
+	}{
+		{
+			"module.foo",
+			"module.bar",
+		},
+		{
+			"module.foo",
+			"module.foo.module.bar",
+		},
+		{
+			"module.foo[1]",
+			"module.bar[1]",
+		},
+		{
+			`module.foo[1]`,
+			`module.foo["1"]`,
+		},
+		{
+			"module.foo.module.bar",
+			"module.foo[1].module.bar",
+		},
+		{
+			`module.foo.module.bar`,
+			`module.foo["a"].module.bar`,
+		},
+	}
+	for _, tc := range testCases {
+		t.Run(fmt.Sprintf("%s = %s", tc.left, tc.right), func(t *testing.T) {
+			left, diags := ParseModuleInstanceStr(tc.left)
+			if len(diags) > 0 {
+				t.Fatalf("unexpected diags parsing %s: %s", tc.left, diags.Err())
+			}
+			right, diags := ParseModuleInstanceStr(tc.right)
+			if len(diags) > 0 {
+				t.Fatalf("unexpected diags parsing %s: %s", tc.right, diags.Err())
+			}
+
+			if left.Equal(right) {
+				t.Fatalf("expected %#v not to be equal to %#v", left, right)
+			}
+
+			if right.Equal(left) {
+				t.Fatalf("expected %#v not to be equal to %#v", right, left)
+			}
+		})
+	}
+}
+
+func BenchmarkStringShort(b *testing.B) {
+	addr, _ := ParseModuleInstanceStr(`module.foo`)
+	for n := 0; n < b.N; n++ {
+		addr.String()
+	}
+}
+
+func BenchmarkStringLong(b *testing.B) {
+	addr, _ := ParseModuleInstanceStr(`module.southamerica-brazil-region.module.user-regional-desktops.module.user-name`)
+	for n := 0; n < b.N; n++ {
+		addr.String()
+	}
+}
+
+func TestModuleInstance_IsDeclaredByCall(t *testing.T) {
+	tests := []struct {
+		instance ModuleInstance
+		call     AbsModuleCall
+		want     bool
+	}{
+		{
+			ModuleInstance{},
+			AbsModuleCall{},
+			false,
+		},
+		{
+			mustParseModuleInstanceStr("module.child"),
+			AbsModuleCall{},
+			false,
+		},
+		{
+			ModuleInstance{},
+			AbsModuleCall{
+				RootModuleInstance,
+				ModuleCall{Name: "child"},
+			},
+			false,
+		},
+		{
+			mustParseModuleInstanceStr("module.child"),
+			AbsModuleCall{ // module.child
+				RootModuleInstance,
+				ModuleCall{Name: "child"},
+			},
+			true,
+		},
+		{
+			mustParseModuleInstanceStr(`module.child`),
+			AbsModuleCall{ // module.kinder.module.child
+				mustParseModuleInstanceStr("module.kinder"),
+				ModuleCall{Name: "child"},
+			},
+			false,
+		},
+		{
+			mustParseModuleInstanceStr("module.kinder"),
+			// module.kinder.module.child contains module.kinder, but is not itself an instance of module.kinder
+			AbsModuleCall{
+				mustParseModuleInstanceStr("module.kinder"),
+				ModuleCall{Name: "child"},
+			},
+			false,
+		},
+		{
+			mustParseModuleInstanceStr("module.child"),
+			AbsModuleCall{
+				mustParseModuleInstanceStr(`module.kinder["a"]`),
+				ModuleCall{Name: "kinder"},
+			},
+			false,
+		},
+	}
+
+	for _, test := range tests {
+		t.Run(fmt.Sprintf("%q.IsDeclaredByCall(%q)", test.instance, test.call.String()), func(t *testing.T) {
+			got := test.instance.IsDeclaredByCall(test.call)
+			if got != test.want {
+				t.Fatal("wrong result")
+			}
+		})
+	}
+}
+
+func mustParseModuleInstanceStr(str string) ModuleInstance {
+	mi, diags := ParseModuleInstanceStr(str)
+	if diags.HasErrors() {
+		panic(diags.ErrWithWarnings())
+	}
+	return mi
+}
diff --git a/v1.4.7/internal/addrs/module_package.go b/v1.4.7/internal/addrs/module_package.go
new file mode 100644
index 0000000..e1c82e3
--- /dev/null
+++ b/v1.4.7/internal/addrs/module_package.go
@@ -0,0 +1,46 @@
+package addrs
+
+import (
+	tfaddr "github.com/hashicorp/terraform-registry-address"
+)
+
+// A ModulePackage represents a physical location where Terraform can retrieve
+// a module package, which is an archive, repository, or other similar
+// container which delivers the source code for one or more Terraform modules.
+//
+// A ModulePackage is a string in go-getter's address syntax. By convention,
+// we use ModulePackage-typed values only for the result of successfully
+// running the go-getter "detectors", which produces an address string which
+// includes an explicit installation method prefix along with an address
+// string in the format expected by that installation method.
+//
+// Note that although the "detector" phase of go-getter does do some simple
+// normalization in certain cases, it isn't generally possible to compare
+// two ModulePackage values to decide if they refer to the same package. Two
+// equal ModulePackage values represent the same package, but there might be
+// other non-equal ModulePackage values that also refer to that package, and
+// there is no reliable way to determine that.
+//
+// Don't convert a user-provided string directly to ModulePackage. Instead,
+// use ParseModuleSource with a remote module address and then access the
+// ModulePackage value from the result, making sure to also handle the
+// selected subdirectory if any. You should convert directly to ModulePackage
+// only for a string that is hard-coded into the program (e.g. in a unit test)
+// where you've ensured that it's already in the expected syntax.
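+//
+// A plausible (purely illustrative) value after go-getter detection:
+//
+//	git::https://example.com/network-modules.git?ref=v1.2.0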
+type ModulePackage string
+
+func (p ModulePackage) String() string {
+	return string(p)
+}
+
+// A ModuleRegistryPackage is an extra indirection over a ModulePackage where
+// we use a module registry to translate a more symbolic address (and
+// associated version constraint given out of band) into a physical source
+// location.
+//
+// ModuleRegistryPackage is distinct from ModulePackage because they have
+// disjoint use-cases: registry package addresses are only used to query a
+// registry in order to find a real module package address. These being
+// distinct is intended to help future maintainers more easily follow the
+// series of steps in the module installer, with the help of the type checker.
+type ModuleRegistryPackage = tfaddr.ModulePackage
diff --git a/v1.4.7/internal/addrs/module_source.go b/v1.4.7/internal/addrs/module_source.go
new file mode 100644
index 0000000..82000db
--- /dev/null
+++ b/v1.4.7/internal/addrs/module_source.go
@@ -0,0 +1,365 @@
+package addrs
+
+import (
+	"fmt"
+	"path"
+	"strings"
+
+	tfaddr "github.com/hashicorp/terraform-registry-address"
+	"github.com/hashicorp/terraform/internal/getmodules"
+)
+
+// ModuleSource is the general type for all three of the possible module source
+// address types. The concrete implementations of this are ModuleSourceLocal,
+// ModuleSourceRegistry, and ModuleSourceRemote.
+type ModuleSource interface {
+	// String returns a full representation of the address, including any
+	// additional components that are typically implied by omission in
+	// user-written addresses.
+	//
+	// We typically use this longer representation in error messages, in case
+	// the inclusion of normally-omitted components is helpful in debugging
+	// unexpected behavior.
+	String() string
+
+	// ForDisplay is similar to String but instead returns a representation of
+	// the idiomatic way to write the address in configuration, omitting
+	// components that are commonly just implied in addresses written by
+	// users.
+	//
+	// We typically use this shorter representation in informational messages,
+	// such as the note that we're about to start downloading a package.
+	ForDisplay() string
+
+	moduleSource()
+}
+
+var _ ModuleSource = ModuleSourceLocal("")
+var _ ModuleSource = ModuleSourceRegistry{}
+var _ ModuleSource = ModuleSourceRemote{}
+
+var moduleSourceLocalPrefixes = []string{
+	"./",
+	"../",
+	".\\",
+	"..\\",
+}
+
+// ParseModuleSource parses a module source address as given in the "source"
+// argument inside a "module" block in the configuration.
+//
+// For historical reasons this syntax is a bit overloaded, supporting three
+// different address types:
+//   - Local paths starting with either ./ or ../, which are special because
+//     Terraform considers them to belong to the same "package" as the caller.
+//   - Module registry addresses, given as either NAMESPACE/NAME/SYSTEM or
+//     HOST/NAMESPACE/NAME/SYSTEM, in which case the remote registry serves
+//     as an indirection over the third address type that follows.
+//   - Various URL-like and other heuristically-recognized strings which
+//     we currently delegate to the external library go-getter.
+//
+// There is some ambiguity between the module registry addresses and go-getter's
+// very liberal heuristics and so this particular function will typically treat
+// an invalid registry address as some other sort of remote source address
+// rather than returning an error. If you know that you're expecting a
+// registry address in particular, use ParseModuleSourceRegistry instead, which
+// can therefore expose more detailed error messages about registry address
+// parsing in particular.
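+//
+// Illustrative inputs and the address types they would produce:
+//
+//	./modules/vpc                 ModuleSourceLocal
+//	hashicorp/consul/aws          ModuleSourceRegistry
+//	github.com/hashicorp/example  ModuleSourceRemote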
+func ParseModuleSource(raw string) (ModuleSource, error) {
+	if isModuleSourceLocal(raw) {
+		localAddr, err := parseModuleSourceLocal(raw)
+		if err != nil {
+			// This is to make sure we really return a nil ModuleSource in
+			// this case, rather than an interface containing the zero
+			// value of ModuleSourceLocal.
+			return nil, err
+		}
+		return localAddr, nil
+	}
+
+	// For historical reasons, whether an address is a registry
+	// address is defined only by whether it can be successfully
+	// parsed as one, and anything else must fall through to be
+	// parsed as a direct remote source, where go-getter might
+	// then recognize it as a filesystem path. This is odd
+	// but matches behavior we've had since Terraform v0.10, which
+	// existing modules may be relying on.
+	// (Notice that this means that there's never any path where
+	// the registry source parse error gets returned to the caller,
+	// which is annoying but has been true for many releases
+	// without it posing a serious problem in practice.)
+	if ret, err := ParseModuleSourceRegistry(raw); err == nil {
+		return ret, nil
+	}
+
+	// If we get down here then we treat everything else as a
+	// remote address. In practice there's very little input that
+	// go-getter considers invalid, so even invalid
+	// nonsense will probably be interpreted as _something_ here
+	// and then fail during installation instead. We can't
+	// really improve this situation for historical reasons.
+	remoteAddr, err := parseModuleSourceRemote(raw)
+	if err != nil {
+		// This is to make sure we really return a nil ModuleSource in
+		// this case, rather than an interface containing the zero
+		// value of ModuleSourceRemote.
+		return nil, err
+	}
+	return remoteAddr, nil
+}
+
+// ModuleSourceLocal is a ModuleSource representing a local path reference
+// from the caller's directory to the callee's directory within the same
+// module package.
+//
+// A "module package" here means a set of modules distributed together in
+// the same archive, repository, or similar. That's a significant distinction
+// because we always download and cache entire module packages at once,
+// and then create relative references within the same directory in order
+// to ensure all modules in the package are looking at a consistent filesystem
+// layout. We also assume that modules within a package are maintained together,
+// which means that cross-cutting maintenance across all of them would be
+// possible.
+//
+// The actual value of a ModuleSourceLocal is a normalized relative path using
+// forward slashes, even on operating systems that have other conventions,
+// because we're representing traversal within the logical filesystem
+// represented by the containing package, not actually within the physical
+// filesystem we unpacked the package into. We should typically not construct
+// ModuleSourceLocal values directly, except in tests where we can ensure
+// the value meets our assumptions. Use ParseModuleSource instead if the
+// input string is not hard-coded in the program.
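+//
+// Illustratively, the raw source strings ".\modules\vpc" and "./modules//vpc/"
+// would both normalize to the ModuleSourceLocal value "./modules/vpc".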
+type ModuleSourceLocal string
+
+func parseModuleSourceLocal(raw string) (ModuleSourceLocal, error) {
+	// As long as we have a suitable prefix (detected by ParseModuleSource)
+	// there is no failure case for local paths: we just use the "path"
+	// package's cleaning logic to remove any redundant "./" and "../"
+	// sequences and any duplicate slashes and accept whatever that
+	// produces.
+
+	// Although using backslashes (Windows-style) is non-idiomatic, we do
+	// allow it and just normalize it away, so the rest of Terraform will
+	// only see the forward-slash form.
+	if strings.Contains(raw, `\`) {
+		// Note: We use string replacement rather than filepath.ToSlash
+		// here because the filepath package behavior varies by current
+		// platform, but we want to interpret configured paths the same
+		// across all platforms: these are virtual paths within a module
+		// package, not physical filesystem paths.
+		raw = strings.ReplaceAll(raw, `\`, "/")
+	}
+
+	// Note that we could have blocked using "//" in a path here to avoid
+	// confusion with the subdir syntax in remote addresses, but historically
+	// we just treated that as a single slash, and so we continue to do that
+	// now for compatibility. Clean strips those
+	// out and reduces them to just a single slash.
+	clean := path.Clean(raw)
+
+	// However, we do need to keep a single "./" on the front if it isn't
+	// a "../" path, or else it would be ambiguous with the registry address
+	// syntax.
+	if !strings.HasPrefix(clean, "../") {
+		clean = "./" + clean
+	}
+
+	return ModuleSourceLocal(clean), nil
+}
+
+func isModuleSourceLocal(raw string) bool {
+	for _, prefix := range moduleSourceLocalPrefixes {
+		if strings.HasPrefix(raw, prefix) {
+			return true
+		}
+	}
+	return false
+}
+
+func (s ModuleSourceLocal) moduleSource() {}
+
+func (s ModuleSourceLocal) String() string {
+	// We assume that our underlying string was already normalized at
+	// construction, so we just return it verbatim.
+	return string(s)
+}
+
+func (s ModuleSourceLocal) ForDisplay() string {
+	return string(s)
+}
+
+// ModuleSourceRegistry is a ModuleSource representing a module listed in a
+// Terraform module registry.
+//
+// A registry source isn't a direct source location but rather an indirection
+// over a ModuleSourceRemote. The job of a registry is to translate the
+// combination of a ModuleSourceRegistry and a module version number into
+// a concrete ModuleSourceRemote that Terraform will then download and
+// install.
+type ModuleSourceRegistry tfaddr.Module
+
+// DefaultModuleRegistryHost is the hostname used for registry-based module
+// source addresses that do not have an explicit hostname.
+const DefaultModuleRegistryHost = tfaddr.DefaultModuleRegistryHost
+
+// ParseModuleSourceRegistry is a variant of ParseModuleSource which only
+// accepts module registry addresses, and will reject any other address type.
+//
+// Use this instead of ParseModuleSource if you know from some other surrounding
+// context that an address is intended to be a registry address rather than
+// some other address type, which will then allow for better error reporting
+// due to the additional information about user intent.
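+//
+// Illustratively, parsing "hashicorp/consul/aws//modules/consul-cluster"
+// would produce an address whose String form is
+// "registry.terraform.io/hashicorp/consul/aws//modules/consul-cluster".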
+func ParseModuleSourceRegistry(raw string) (ModuleSource, error) {
+	// Before we delegate to the "real" function we'll just make sure this
+	// doesn't look like a local source address, so we can return a better
+	// error message for that situation.
+	if isModuleSourceLocal(raw) {
+		return ModuleSourceRegistry{}, fmt.Errorf("can't use local directory %q as a module registry address", raw)
+	}
+
+	src, err := tfaddr.ParseModuleSource(raw)
+	if err != nil {
+		return nil, err
+	}
+	return ModuleSourceRegistry{
+		Package: src.Package,
+		Subdir:  src.Subdir,
+	}, nil
+}
+
+func (s ModuleSourceRegistry) moduleSource() {}
+
+func (s ModuleSourceRegistry) String() string {
+	if s.Subdir != "" {
+		return s.Package.String() + "//" + s.Subdir
+	}
+	return s.Package.String()
+}
+
+func (s ModuleSourceRegistry) ForDisplay() string {
+	if s.Subdir != "" {
+		return s.Package.ForDisplay() + "//" + s.Subdir
+	}
+	return s.Package.ForDisplay()
+}
+
+// ModuleSourceRemote is a ModuleSource representing a remote location from
+// which we can retrieve a module package.
+//
+// A ModuleSourceRemote can optionally include a "subdirectory" path, which
+// means that it's selecting a sub-directory of the given package to use as
+// the entry point into the package.
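+//
+// An illustrative remote source with a subdirectory:
+//
+//	git::https://example.com/network-modules.git//vpc
+//
+// where Package is "git::https://example.com/network-modules.git" and
+// Subdir is "vpc".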
+type ModuleSourceRemote struct {
+	// Package is the address of the remote package that the requested
+	// module belongs to.
+	Package ModulePackage
+
+	// If Subdir is non-empty then it represents a sub-directory within the
+	// remote package which will serve as the entry-point for the package.
+	//
+	// Subdir uses a normalized forward-slash-based path syntax within the
+	// virtual filesystem represented by the final package. It will never
+	// include `../` or `./` sequences.
+	Subdir string
+}
+
+f