Terraform and Oracle Cloud Infrastructure

Introduction

This is the "Terraform and Oracle Cloud Infrastructure" lab from Oracle Cloud Hands-on Labs Level 200.
The documentation says it takes 30 minutes; it actually took me 3 hours.

Preparation

Create a cloud instance using the Oracle Cloud Developer Image, which comes with terraform, the OCI CLI, and other tools preinstalled. The OCI CLI is only used in the final Kubernetes lab.

Download the lab setup script:

$ curl -L -o tfsetup.sh https://raw.githubusercontent.com/jamalarif/oci/master/scripts/tflabsetup/tfsetup.sh

This script installs the OCI CLI and Terraform, and creates the working directory and configuration file templates. Since I wanted to configure the OCI CLI myself, and much of the software was already installed, I only ran the following commands from the script:

sudo yum -y install terraform python-oci-cli bzip2 cpio zip unzip dos2unix dialog curl jq git golang iputils wget screen tmux byobu elinks kubectl

mkdir -p terraformtest

command cat >~/terraformtest/provider.tf <<'EOF'
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "compartment_ocid" {}
variable "ssh_public_key" {}
variable "region" {}

provider "oci" {
  tenancy_ocid         = "${var.tenancy_ocid}"
  user_ocid            = "${var.user_ocid}"
  fingerprint          = "${var.fingerprint}"
  private_key_path     = "${var.private_key_path}"
  region               = "${var.region}"
  disable_auto_retries = "true"
}

data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

output "ADprint" {
  value = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
}
EOF

command cat>~/terraformtest/env-vars <<'EOF'
export TF_VAR_tenancy_ocid=<tenancy_OCID>
export TF_VAR_user_ocid=<api.user_OCID>
export TF_VAR_compartment_ocid=<Demo_Compartment_OCID>

export TF_VAR_fingerprint=$(cat ~/.oci/oci_api_key_fingerprint)

export TF_VAR_private_key_path=~/.oci/oci_api_key.pem

export TF_VAR_ssh_public_key=$(cat ~/.ssh/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat ~/.ssh/id_rsa)

export TF_VAR_region=us-ashburn-1
EOF
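
Note the quoted 'EOF' delimiter in both heredocs above: it keeps $(...) and $VAR literal when the file is written, so the command substitutions only run later, when env-vars is sourced. A minimal demonstration of the difference:

```shell
name=world
# Quoted delimiter: the body is written literally, $name is NOT expanded.
cat > /tmp/quoted.txt <<'EOF'
hello $name
EOF
# Unquoted delimiter: $name IS expanded at write time.
cat > /tmp/unquoted.txt <<EOF
hello $name
EOF
cat /tmp/quoted.txt    # hello $name
cat /tmp/unquoted.txt  # hello world
```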

Configure the OCI CLI; you need to enter the user and tenancy OCIDs and the region name:

$ oci setup config
...
...
Enter a location for your config [/home/opc/.oci/config]:
Enter a user OCID: ***
Enter a tenancy OCID: ***
Enter a region (e.g. ap-mumbai-1, ap-seoul-1, ap-sydney-1, ap-tokyo-1, ca-toronto-1, eu-frankfurt-1, eu-zurich-1, sa-saopaulo-1, uk-gov-london-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1): eu-frankfurt-1
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]:
Enter a directory for your keys to be created [/home/opc/.oci]:
Enter a name for your key [oci_api_key]:
Public key written to: /home/opc/.oci/oci_api_key_public.pem
Enter a passphrase for your private key (empty for no passphrase):
Private key written to: /home/opc/.oci/oci_api_key.pem
Fingerprint: ****4:3a:1c:05:08
Config written to /home/opc/.oci/config
...

Then add the public key in the OCI console under Identity >> Users >> User Details >> API Keys, copying and pasting the key contents:

$ cat /home/opc/.oci/oci_api_key_public.pem
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----

Verify that the OCI CLI is configured correctly:

[opc@oracle-cloud-developer ~]$ oci iam region list
{
  "data": [
    {
      "key": "BOM",
      "name": "ap-mumbai-1"
    },
    {
      "key": "FRA",
      "name": "eu-frankfurt-1"
    },
    {
      "key": "GRU",
      "name": "sa-saopaulo-1"
    },
    {
      "key": "IAD",
      "name": "us-ashburn-1"
    },
    {
      "key": "ICN",
      "name": "ap-seoul-1"
    },
    {
      "key": "LHR",
      "name": "uk-london-1"
    },
    {
      "key": "NRT",
      "name": "ap-tokyo-1"
    },
    {
      "key": "PHX",
      "name": "us-phoenix-1"
    },
    {
      "key": "SYD",
      "name": "ap-sydney-1"
    },
    {
      "key": "YYZ",
      "name": "ca-toronto-1"
    },
    {
      "key": "ZRH",
      "name": "eu-zurich-1"
    }
  ]
}
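
As an aside, since jq was installed by the setup script, the JSON above can be reduced to just the region names with a small filter (the `region_names` helper name is my own):

```shell
# region_names: read `oci iam region list` JSON on stdin and print one
# region name per line.
region_names() {
  jq -r '.data[].name'
}

# Usage:
#   oci iam region list | region_names
```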

Edit env-vars as follows:

$ cat ~/terraformtest/env-vars
export TF_VAR_tenancy_ocid=ocid1.tenancy.oc1..aaaaaaa...
export TF_VAR_user_ocid=ocid1.user.oc1..aaaaaaaa2i5q...
export TF_VAR_compartment_ocid=ocid1.compartment.oc1..aaaaaaaazto...

export TF_VAR_fingerprint=65:74:0c:80:cf:bb:f5:ed:f8:b9:1b:...

export TF_VAR_private_key_path=~/.oci/oci_api_key.pem

export TF_VAR_ssh_public_key=$(cat ~/.ssh/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat ~/.ssh/id_rsa)

export TF_VAR_region=eu-frankfurt-1

Generate the missing id_rsa and id_rsa.pub. These two files have nothing to do with the OCI client; they will be used as the key pair when instances are created later, so a throwaway pair can be generated like this:

yes "y" | ssh-keygen -N "" -f ~/.ssh/id_rsa

Verify the setup:

$ source ./env-vars
[opc@oracle-cloud-developer terraformtest]$ export | grep TF
declare -x LANG="en_US.UTF-8"
declare -x TF_VAR_compartment_ocid="ocid1.compartment.oc1..aaaaaaaaztosn..."
declare -x TF_VAR_fingerprint="65:74:0c:80:cf:bb:f5:ed:f8:b9****"
declare -x TF_VAR_private_key_path="/home/opc/.oci/oci_api_key.pem"
declare -x TF_VAR_region="eu-frankfurt-1"
declare -x TF_VAR_ssh_private_key="-----BEGIN RSA PRIVATE KEY-----
declare -x TF_VAR_ssh_public_key="-----BEGIN PUBLIC KEY-----
declare -x TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaavnk4luui..."
declare -x TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaaa2i5qb3fidndq5ap447kuwd..."
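
To catch a missing export before Terraform complains, the environment can be checked against the variable names declared in provider.tf; the `check_tf_vars` helper below is my own sketch:

```shell
# check_tf_vars: print the name of any required TF_VAR_* variable (matching
# the variables declared in provider.tf) that is unset or empty.
check_tf_vars() {
  for v in tenancy_ocid user_ocid fingerprint private_key_path \
           compartment_ocid ssh_public_key region; do
    eval "val=\${TF_VAR_$v:-}"
    [ -n "$val" ] || echo "missing TF_VAR_$v"
  done
}

# Usage: run after `source ./env-vars`; no output means all variables are set.
check_tf_vars
```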

Confirm that Terraform runs successfully. The following three commands are the standard routine: initialize, generate an execution plan, and apply. See the Terraform help for details:

$ terraform init
$ terraform plan
$ terraform apply 

The output is as follows:

$ terraform init

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.oci: version = "~> 3.51"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
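
Following the suggestion in the init output, the provider version can be pinned so a future major release does not break the configuration; a sketch of the addition to provider.tf:

```hcl
provider "oci" {
  version = "~> 3.51"   # constraint suggested by `terraform init`

  # ...existing arguments (tenancy_ocid, user_ocid, etc.) unchanged...
}
```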

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.oci_identity_availability_domains.ADs: Refreshing state...

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

$ terraform apply
data.oci_identity_availability_domains.ADs: Refreshing state...

Warning: Interpolation-only expressions are deprecated

  on provider.tf line 10, in provider "oci":
  10:   tenancy_ocid         = "${var.tenancy_ocid}"

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and 4 more similar warnings elsewhere)


Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

ADprint = dTZF:EU-FRANKFURT-1-AD-1

These terraform commands execute the following file, which prints the first availability domain in the region:

$ cat ~/terraformtest/provider.tf
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "compartment_ocid" {}
variable "ssh_public_key" {}
variable "region" {}

provider "oci" {
  tenancy_ocid         = "${var.tenancy_ocid}"
  user_ocid            = "${var.user_ocid}"
  fingerprint          = "${var.fingerprint}"
  private_key_path     = "${var.private_key_path}"
  region               = "${var.region}"
  disable_auto_retries = "true"
}

data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

output "ADprint" {
  value = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
}
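
The deprecation warnings seen during apply can be silenced by dropping the interpolation-only "${...}" wrappers, which Terraform 0.12 no longer requires. In the newer syntax the same blocks read:

```hcl
provider "oci" {
  tenancy_ocid         = var.tenancy_ocid
  user_ocid            = var.user_ocid
  fingerprint          = var.fingerprint
  private_key_path     = var.private_key_path
  region               = var.region
  disable_auto_retries = true
}

output "ADprint" {
  value = data.oci_identity_availability_domains.ADs.availability_domains[0].name
}
```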

Now for something bigger. First clone the project:

$ git clone https://github.com/jamalarif/oraclecodelab18.git
$ cd oraclecodelab18
$ ls -l
total 12
-rw-rw-r--. 1 opc opc   25 Dec 24 01:11 _config.yml
drwxrwxr-x. 2 opc opc   95 Dec 24 01:11 oke-cluster-exercise
-rw-rw-r--. 1 opc opc 1287 Dec 24 01:11 README.md
drwxrwxr-x. 3 opc opc 4096 Dec 24 01:41 webserver-exercise

Terraform Web Server Lab

The project contains two labs; start with the first. It builds the network, creates two instances, and installs a web server on each:

$ cd webserver-exercise

terraform init reported errors at first. These labs were written for OOW 2018, and the syntax has changed in newer Terraform versions, so upgrade the configuration first:

$ terraform 0.12upgrade

Then run the same three commands as before, remembering to copy over the earlier environment variable file:

$ cp ~/terraformtest/env-vars .
$ source env-vars
$ terraform init
$ terraform plan
$ terraform apply 
...
Outputs:

Webserver-AD1 = [
  "150.136.*.*",
]
Webserver-AD2 = [
  "150.136.*.*",
]

Then open the two public IPs from the output in a browser to see the web page.

Some errors occurred during the actual run; after fixing them, running terraform apply again was enough, because Terraform keeps its state in terraform.tfstate.
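
As a side note, the outputs can also be read straight from that state file with jq; the `list_outputs` helper below is my own sketch and assumes the Terraform 0.12 local state layout:

```shell
# list_outputs: print each output in a local Terraform state file as
# name=value (assumes the Terraform 0.12 state format, where outputs
# live under .outputs.<name>.value).
list_outputs() {
  jq -r '.outputs | to_entries[] | "\(.key)=\(.value.value)"' "$1"
}

# Usage, from a directory with local state:
#   list_outputs terraform.tfstate
```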

Next, a quick tour of the basic Terraform commands:

$ terraform providers
.
└── provider.oci >= 3.0.0

$ terraform output
Webserver-AD1 = [
  "150.136.161.230",
]
Webserver-AD2 = [
  "150.136.182.174",
]

$ terraform state list
data.oci_identity_availability_domains.ashburn
oci_core_instance.Webserver-AD1
oci_core_instance.Webserver-AD2
oci_core_internet_gateway.igw
oci_core_route_table.rt1
oci_core_security_list.sl_w
oci_core_subnet.subnet1
oci_core_subnet.subnet2
oci_core_virtual_network.vcn_w

$ terraform state show oci_core_instance.Webserver-AD1
# oci_core_instance.Webserver-AD1:
resource "oci_core_instance" "Webserver-AD1" {
    availability_domain = "dTZF:US-ASHBURN-AD-1"
    boot_volume_id      = "ocid1.bootvolume.oc1.iad.abuwcljtscfmspxikaklbzwasgk5d3s..."
    compartment_id      = "ocid1.compartment.oc1..aaaaaaaaztosn7e..."
    defined_tags        = {}
    display_name        = "Webserver-ASHBURN_AD1"
    fault_domain        = "FAULT-DOMAIN-1"
    freeform_tags       = {}
    id                  = "ocid1.instance.oc1.iad.anuwcljtcuco5yqcx..."
    image               = "ocid1.image.oc1.iad.aaaaaaaai..."
    launch_mode         = "NATIVE"
    metadata            = {
        "ssh_authorized_keys" = "ssh-rsa AAAAB... opc@oracle-cloud-developer"
        "user_data"           = "...g=="
    }
    private_ip          = "10.0.10.2"
    public_ip           = "150.136.161.230"
    region              = "iad"
    shape               = "VM.Standard1.2"
    state               = "RUNNING"
    subnet_id           = "ocid1.subnet.oc1.iad.aaaaaaaa..."
    system_tags         = {}
    time_created        = "2019-12-24 01:39:55.637 +0000 UTC"

    agent_config {
        is_monitoring_disabled = false
    }

    create_vnic_details {
        assign_public_ip       = "true"
        defined_tags           = {}
        display_name           = "primaryvnic"
        freeform_tags          = {}
        private_ip             = "10.0.10.2"
        skip_source_dest_check = false
        subnet_id              = "ocid1.subnet.oc1.iad.aaaaaaaaolffsmm..."
    }

    launch_options {
        boot_volume_type                    = "PARAVIRTUALIZED"
        firmware                            = "UEFI_64"
        is_consistent_volume_naming_enabled = false
        is_pv_encryption_in_transit_enabled = false
        network_type                        = "VFIO"
        remote_data_volume_type             = "PARAVIRTUALIZED"
    }

    source_details {
        boot_volume_size_in_gbs = "47"
        source_id               = "ocid1.image.oc1.iad.aaaaaaaaiu73xa6..."
        source_type             = "image"
    }

    timeouts {
        create = "60m"
    }
}

$ terraform graph
digraph {
        compound = "true"
        newrank = "true"
        subgraph "root" {
                "[root] data.oci_identity_availability_domains.ashburn" [label = "data.oci_identity_availability_domains.ashburn", shape = "box"]
                "[root] oci_core_instance.Webserver-AD1" [label = "oci_core_instance.Webserver-AD1", shape = "box"]
                "[root] oci_core_instance.Webserver-AD2" [label = "oci_core_instance.Webserver-AD2", shape = "box"]
                "[root] oci_core_internet_gateway.igw" [label = "oci_core_internet_gateway.igw", shape = "box"]
                "[root] oci_core_route_table.rt1" [label = "oci_core_route_table.rt1", shape = "box"]
                "[root] oci_core_security_list.sl_w" [label = "oci_core_security_list.sl_w", shape = "box"]
                "[root] oci_core_subnet.subnet1" [label = "oci_core_subnet.subnet1", shape = "box"]
                "[root] oci_core_subnet.subnet2" [label = "oci_core_subnet.subnet2", shape = "box"]
                "[root] oci_core_virtual_network.vcn_w" [label = "oci_core_virtual_network.vcn_w", shape = "box"]
                "[root] output.Webserver-AD1" [label = "output.Webserver-AD1", shape = "note"]
                "[root] output.Webserver-AD2" [label = "output.Webserver-AD2", shape = "note"]
                "[root] provider.oci" [label = "provider.oci", shape = "diamond"]
                "[root] var.compartment_ocid" [label = "var.compartment_ocid", shape = "note"]
                "[root] var.dns_label_vcn" [label = "var.dns_label_vcn", shape = "note"]
                "[root] var.fingerprint" [label = "var.fingerprint", shape = "note"]
                "[root] var.instance_shape" [label = "var.instance_shape", shape = "note"]
                "[root] var.private_key_path" [label = "var.private_key_path", shape = "note"]
                "[root] var.region" [label = "var.region", shape = "note"]
                "[root] var.ssh_private_key" [label = "var.ssh_private_key", shape = "note"]
                "[root] var.ssh_public_key" [label = "var.ssh_public_key", shape = "note"]
                "[root] var.subnet_cidr_w1" [label = "var.subnet_cidr_w1", shape = "note"]
                "[root] var.subnet_cidr_w2" [label = "var.subnet_cidr_w2", shape = "note"]
                "[root] var.tenancy_ocid" [label = "var.tenancy_ocid", shape = "note"]
                "[root] var.user-data" [label = "var.user-data", shape = "note"]
                "[root] var.user_ocid" [label = "var.user_ocid", shape = "note"]
                "[root] var.vcn_cidr_block" [label = "var.vcn_cidr_block", shape = "note"]
                "[root] data.oci_identity_availability_domains.ashburn" -> "[root] provider.oci"
                "[root] meta.count-boundary (EachMode fixup)" -> "[root] output.Webserver-AD1"
                "[root] meta.count-boundary (EachMode fixup)" -> "[root] output.Webserver-AD2"
                "[root] meta.count-boundary (EachMode fixup)" -> "[root] var.dns_label_vcn"
                "[root] meta.count-boundary (EachMode fixup)" -> "[root] var.region"
                "[root] meta.count-boundary (EachMode fixup)" -> "[root] var.ssh_private_key"
                "[root] oci_core_instance.Webserver-AD1" -> "[root] oci_core_subnet.subnet1"
                "[root] oci_core_instance.Webserver-AD1" -> "[root] var.instance_shape"
                "[root] oci_core_instance.Webserver-AD1" -> "[root] var.ssh_public_key"
                "[root] oci_core_instance.Webserver-AD1" -> "[root] var.user-data"
                "[root] oci_core_instance.Webserver-AD2" -> "[root] oci_core_subnet.subnet2"
                "[root] oci_core_instance.Webserver-AD2" -> "[root] var.instance_shape"
                "[root] oci_core_instance.Webserver-AD2" -> "[root] var.ssh_public_key"
                "[root] oci_core_instance.Webserver-AD2" -> "[root] var.user-data"
                "[root] oci_core_internet_gateway.igw" -> "[root] oci_core_virtual_network.vcn_w"
                "[root] oci_core_route_table.rt1" -> "[root] oci_core_internet_gateway.igw"
                "[root] oci_core_security_list.sl_w" -> "[root] oci_core_virtual_network.vcn_w"
                "[root] oci_core_subnet.subnet1" -> "[root] data.oci_identity_availability_domains.ashburn"
                "[root] oci_core_subnet.subnet1" -> "[root] oci_core_route_table.rt1"
                "[root] oci_core_subnet.subnet1" -> "[root] oci_core_security_list.sl_w"
                "[root] oci_core_subnet.subnet1" -> "[root] var.subnet_cidr_w1"
                "[root] oci_core_subnet.subnet2" -> "[root] data.oci_identity_availability_domains.ashburn"
                "[root] oci_core_subnet.subnet2" -> "[root] oci_core_route_table.rt1"
                "[root] oci_core_subnet.subnet2" -> "[root] oci_core_security_list.sl_w"
                "[root] oci_core_subnet.subnet2" -> "[root] var.subnet_cidr_w2"
                "[root] oci_core_virtual_network.vcn_w" -> "[root] provider.oci"
                "[root] oci_core_virtual_network.vcn_w" -> "[root] provisioner.local-exec"
                "[root] oci_core_virtual_network.vcn_w" -> "[root] var.compartment_ocid"
                "[root] oci_core_virtual_network.vcn_w" -> "[root] var.vcn_cidr_block"
                "[root] output.Webserver-AD1" -> "[root] oci_core_instance.Webserver-AD1"
                "[root] output.Webserver-AD2" -> "[root] oci_core_instance.Webserver-AD2"
                "[root] provider.oci (close)" -> "[root] oci_core_instance.Webserver-AD1"
                "[root] provider.oci (close)" -> "[root] oci_core_instance.Webserver-AD2"
                "[root] provider.oci" -> "[root] var.fingerprint"
                "[root] provider.oci" -> "[root] var.private_key_path"
                "[root] provider.oci" -> "[root] var.tenancy_ocid"
                "[root] provider.oci" -> "[root] var.user_ocid"
                "[root] provisioner.local-exec (close)" -> "[root] oci_core_subnet.subnet1"
                "[root] provisioner.local-exec (close)" -> "[root] oci_core_subnet.subnet2"
                "[root] root" -> "[root] meta.count-boundary (EachMode fixup)"
                "[root] root" -> "[root] provider.oci (close)"
                "[root] root" -> "[root] provisioner.local-exec (close)"
        }
}
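
The DOT text is most useful rendered as an image (e.g. `terraform graph | dot -Tsvg -o graph.svg`, assuming Graphviz is installed), but it can also be grepped for a quick list of resource nodes; `summarize_graph` is my own helper:

```shell
# summarize_graph: read `terraform graph` DOT text on stdin and list the
# unique OCI resource nodes mentioned in it.
summarize_graph() {
  grep -o '\[root\] oci_[A-Za-z0-9_.-]*' | sort -u | sed 's/^\[root\] //'
}

# Usage:
#   terraform graph | summarize_graph
```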

Finally, clean up:

$ terraform destroy
...
Destroy complete! Resources: 8 destroyed.

Terraform Kubernetes Lab

First, create the following policy in the tenancy's root compartment:

allow service OKE to manage all-resources in tenancy

Before running it, check the OKE cluster limits for your tenancy:

The maximum is 3 clusters, but that applies to the whole tenancy, not 3 per region. So I hit the following error during execution:

oci_containerengine_cluster.oke_cluster: Creating...

Error: Service error:LimitExceeded. The cluster limit for this tenancy has been exceeded.. http status code: 400.

The cause is that the code hardcodes the region as us-ashburn-1, which had no cluster quota left. Changing it to eu-frankfurt-1 made it work. Run the usual three commands:

$ cd ~/oraclecodelab18/oke-cluster-exercise/
$ cp ~/terraformtest/env-vars .
$ source ./env-vars
$ terraform init
$ terraform plan
$ terraform apply
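
The region change mentioned above can be sketched as follows (hypothetical: the exact file and variable layout depend on the lab's .tf sources):

```hcl
variable "region" {
  # was hardcoded to "us-ashburn-1"; changed to a region with
  # spare cluster quota
  default = "eu-frankfurt-1"
}
```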

An error occurred during execution; it can be ignored and does not affect the subsequent tests:

...
Error: Service error:InvalidParameter. invalid token version.. http status code: 400. Opc request id: 62772a00e50b825439154bb159100dad/BD53D6C0418B4C278A7CB273147E26F7/763D08E014DE4BD7B5CD2EB419B90BFB

  on kubeconfig.tf line 9, in data "oci_containerengine_cluster_kube_config" "test_cluster_kube_config":
   9: data "oci_containerengine_cluster_kube_config" "test_cluster_kube_config" {
...

Next, try kubectl. First click the Access Kubeconfig button, then run the commands shown in the three boxes to set up the kubectl environment.

Run some kubectl commands:

$ kubectl cluster-info
Kubernetes master is running at https://c3gcmrxgu3d.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443
KubeDNS is running at https://c3gcmrxgu3d.eu-frankfurt-1.clusters.oci.oraclecloud.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes
NAME       STATUS   ROLES   AGE   VERSION
10.0.1.2   Ready    node    22m   v1.12.7
10.0.2.2   Ready    node    22m   v1.12.7

Create an application in the Kubernetes cluster:

$ kubectl run nginx --image=nginx --port=80 --replicas=3
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
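
As the warning notes, generator-based `kubectl run` is deprecated. The equivalent Deployment can instead be written as a manifest and created with `kubectl apply -f nginx.yaml`; a sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```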

$ kubectl get deployments
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   3         3         3            3           41s

$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP         NODE       NOMINATED NODE
nginx-cdb6b5b95-2ttrf   1/1     Running   0          55s   10.1.1.3   10.0.1.2   <none>
nginx-cdb6b5b95-cjppk   1/1     Running   0          55s   10.1.0.6   10.0.2.2   <none>
nginx-cdb6b5b95-wj2wv   1/1     Running   0          55s   10.1.1.4   10.0.1.2   <none>

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed

$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.2.0.1       <none>           443/TCP        40m
nginx        LoadBalancer   10.2.222.248   132.145.227.47   80:32244/TCP   34s

In the output of the last command above, the EXTERNAL-IP on the nginx line is the load balancer's address, which can also be seen in the console's networking section.
Opening this address in a browser shows the nginx output.
Delete the service and deployment:

$ kubectl delete service nginx
service "nginx" deleted
$ kubectl delete deployments nginx
deployment.extensions "nginx" deleted

Finally, clean up:

$ cd ~/oraclecodelab18/oke-cluster-exercise/
$ terraform destroy
...
Error: Service error:InvalidParameter. invalid token version.. http status code: 400. Opc request id: 3dc1131f19bb3db87324cf737304545e/C998ED8300534FCCA43576830C4434E9/678211503D4643368781131B4CA85CFC

  on kubeconfig.tf line 9, in data "oci_containerengine_cluster_kube_config" "test_cluster_kube_config":
   9: data "oci_containerengine_cluster_kube_config" "test_cluster_kube_config" {

It is the same token error, so I had no choice but to delete the Node Pool and then the Cluster manually in the OCI Console.
I opened an issue on GitHub.

References

  1. https://blogs.oracle.com/developers/infrastructure-as-code-using-terraform-on-oracle-developer-cloud

Reposted from blog.csdn.net/stevensxiao/article/details/103667945