Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

terraform - Conditionally remove a block

Following the de-facto standard way of conditionally adding and removing blocks (1, 2, 3), I am having difficulty generating a plan when the block must be removed.

I have the following tf config. Note the dynamic block:

provider "kubernetes" {}

variable "secret" {
  type = string
}

resource "kubernetes_deployment" "sample-deployment" {
  metadata {
    name = "sample-deployment"

    labels = {
      app = "api"
    }
  }

  spec {
    selector {
      match_labels = {
        app = "sample"
      }
    }

    template {
      metadata {
        labels = {
          app = "sample"
        }
      }

      spec {
        dynamic "image_pull_secrets" {
          for_each = compact([var.secret])

          content {
            name = var.secret
          }
        }

        container {
          name  = "httpenv"
          image = "jpetazzo/httpenv:latest"
        }
      }
    }
  }
}
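
For reference, the idiom this config relies on: `compact` removes empty strings from a list, so `for_each` receives either zero or one elements, and the block is generated only when `var.secret` is non-empty. A minimal sketch of the same pattern in isolation (using `image_pull_secrets.value`, the per-iteration value, which is equivalent to `var.secret` in a single-element iteration):

```hcl
# compact() drops empty strings from a list of strings:
#   compact([""])          => []             -> zero blocks generated
#   compact(["my-secret"]) => ["my-secret"]  -> one block generated

dynamic "image_pull_secrets" {
  for_each = compact([var.secret])

  content {
    # image_pull_secrets.value is the current for_each element;
    # with a single-element list it equals var.secret.
    name = image_pull_secrets.value
  }
}
```

An equivalent spelling of the same condition is `for_each = var.secret == "" ? [] : [var.secret]`.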

Then I run 3 commands, one after another:

  1. Initially create the resource:

     terraform apply -var secret=

    The Deployment is created, and image_pull_secrets is not in the diff.

  2. Set the secret and update the resource:

     terraform apply -var secret=my-secret

    The diff for the update contains:

     + image_pull_secrets {
         + name = "my-secret"
       }
  3. Remove the secret and update the resource again:

     terraform apply -var secret=

    The output is blank:

     Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
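
As a sanity check at this step, the proposed diff can be inspected without applying it, using standard Terraform 0.12 commands (a debugging sketch, not part of the original repro):

```
# Render the plan for the "secret removed" case and save it without applying.
terraform plan -var secret= -out=tfplan

# Show the saved plan in human-readable form; if the provider is dropping the
# block's removal, image_pull_secrets will be absent from this output as well.
terraform show tfplan
```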

Clearly, I'm missing something, as otherwise I would imagine this issue would have been brought up by now.

What am I missing?

The version of terraform I'm using is v0.12.16.

Update.

  1. After running:

     env TF_LOG=TRACE TF_LOG_PATH=logs.txt terraform apply -var secret=

    I noticed this in logs.txt:

     2019/12/01 12:17:34 [WARN] Provider "kubernetes" produced an invalid plan for kubernetes_deployment.sample-deployment, but we are tolerating it because it is using the legacy plugin SDK. The following problems may be the cause of any confusing errors from downstream operations:
       - .spec[0].template[0].spec[0].image_pull_secrets: block count in plan (1) disagrees with count in config (0)
       - .spec[0].template[0].spec[0].container[0].resources: block count in plan (1) disagrees with count in config (0)
       - .spec[0].strategy: block count in plan (1) disagrees with count in config (0)

    Could this be related to the issue I'm facing?

  2. It looks like the parts of the message that mention block are coming from terraform code.

    So the issue I'm seeing must not be strictly related to the kubernetes provider.

    Or is it?

  Asked by gmile, translated from Stack Overflow.


1 Answer

Waiting for an expert to reply.
