Deleting individual elements

Destroys: this can be tricky!

In this section we conclude the lab by covering a key concept that often confuses new Terraform users: deleting items that have already been created. As we have explained, Terraform keeps state for everything it deploys, with the few exceptions discussed previously. For this reason it behaves differently from stateless automation tools like Ansible.

Let's take the following example. You create 100 instances in AWS, run into a problem, and decide to delete instance 82. You go to the AWS Console and terminate instance 82. Then you run Terraform to re-create instance 82, using the following command:

terraform plan -var 'instance=["inst82"]' -out "mynewplan.plan"

You then run terraform apply with the plan file and walk away proud. Your boss suddenly calls you because there is a massive outage. When you go back and look, you see that Terraform deleted the other 99 instances and left you with just instance 82.

You are probably saying right now... wait, what?! Yes, that is exactly what would happen, because you just told Terraform that you wanted instance 82. You never said that you wanted the other 99, so Terraform removed them from the desired state and destroyed them. For this reason we want to leave you with some lessons learned; we forgot this important concept ourselves and hit various pain points while building this lab.
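To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of configuration behind that story. The variable name instance, the resource name, and the AMI are assumptions for illustration only; the point is that the variable defines the complete set of instances Terraform should manage, so any name you leave out of it is scheduled for destruction.

    variable "instance" {
      description = "Complete list of instance names Terraform should manage"
      type        = list(string)
      default     = ["inst01", "inst02"]   # ...through inst100 in the story
    }

    resource "aws_instance" "lab" {
      # One instance per name in var.instance; names removed from the list
      # disappear from the desired state and are destroyed on apply.
      for_each      = toset(var.instance)
      ami           = "ami-0abcdef1234567890"   # placeholder AMI
      instance_type = "t3.micro"

      tags = {
        Name = each.key
      }
    }

With a configuration like this, passing -var 'instance=["inst82"]' tells Terraform the desired state contains exactly one instance, so the plan creates inst82 and destroys the other 99. This is why reading the plan output before applying is so important.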

Let's use the previous module example to show how you can delete specific components and easily re-create them with Terraform.
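As a reminder, the resource addresses you will see below come from module calls along these lines. This is only a rough sketch; the module sources, variable names, and outputs are assumptions, and the files you wrote in the previous section are the authoritative version.

    module "my_tenant" {
      source      = "./modules/tenant"
      tenant_name = "mod_pod03"
    }

    module "my_bd_app" {
      source    = "./modules/bd"
      tenant_dn = module.my_tenant.tenant_dn   # assumes the tenant module exposes this output
      bd_name   = "pod03_app"
      subnet_ip = "5.1.1.1/24"
    }

    module "my_bd_web" {
      source    = "./modules/bd"
      tenant_dn = module.my_tenant.tenant_dn
      bd_name   = "pod03_web"
      subnet_ip = "6.1.1.1/24"
    }

    # module "my_vrf" omitted for brevity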

Step 1 - Terraform Show Command

The terraform show command is a very useful command. It prints a human-readable view of the state or plan file, which you can use to understand what Terraform currently believes is deployed.

For example, execute the command terraform show:


terraform show

 labuser@terra-vm-pod03:~# terraform show

# module.my_bd_web.aci_bridge_domain.bd:
resource "aci_bridge_domain" "bd" {
    arp_flood = "yes"
    bridge_domain_type = "regular"
    ep_clear = "no"
    host_based_routing = "no"
    id = "uni/tn-mod_pod03/BD-pod03_web"
    intersite_bum_traffic_allow = "no"
    intersite_l2_stretch = "no"
    ip_learning = "yes"
    ipv6_mcast_allow = "no"
    limit_ip_learn_to_subnets = "yes"
    ll_addr = "::"
    mac = "00:22:BD:F8:19:FF"
    mcast_allow = "no"
    multi_dst_pkt_act = "bd-flood"
    name = "pod03_web"
    optimize_wan_bandwidth = "no"
    relation_fv_rs_ctx = "uni/tn-mod_pod03/ctx-vrf_pn_03"
    tenant_dn = "uni/tn-mod_pod03"
    unicast_route = "yes"
    unk_mac_ucast_act = "flood"
    unk_mcast_act = "flood"
    v6unk_mcast_act = "flood"
    vmac = "not-applicable"
}

# module.my_bd_web.aci_subnet.bd_subnet:
resource "aci_subnet" "bd_subnet" {
    bridge_domain_dn = "uni/tn-mod_pod03/BD-pod03_web"
    ctrl = "nd"
    id = "uni/tn-mod_pod03/BD-pod03_web/subnet-[6.1.1.1/24]"
    ip = "6.1.1.1/24"
    preferred = "no"
    scope = "private"
    virtual = "no"
}


# module.my_bd_app.aci_bridge_domain.bd:
resource "aci_bridge_domain" "bd" {
    arp_flood = "no"
    bridge_domain_type = "regular"
    ep_clear = "no"
    host_based_routing = "no"
    id = "uni/tn-mod_pod03/BD-pod03_app"
    intersite_bum_traffic_allow = "no"
    intersite_l2_stretch = "no"
    ip_learning = "yes"
    ipv6_mcast_allow = "no"
    limit_ip_learn_to_subnets = "yes"
    ll_addr = "::"
    mac = "00:22:BD:F8:19:FF"
    mcast_allow = "no"
    multi_dst_pkt_act = "bd-flood"
    name = "pod03_app"
    optimize_wan_bandwidth = "no"
    relation_fv_rs_ctx = "uni/tn-mod_pod03/ctx-vrf_pn_03"
    tenant_dn = "uni/tn-mod_pod03"
    unicast_route = "yes"
    unk_mac_ucast_act = "proxy"
    unk_mcast_act = "flood"
    v6unk_mcast_act = "flood"
    vmac = "not-applicable"
}

# module.my_bd_app.aci_subnet.bd_subnet:
resource "aci_subnet" "bd_subnet" {
    bridge_domain_dn = "uni/tn-mod_pod03/BD-pod03_app"
    ctrl = "nd"
    id = "uni/tn-mod_pod03/BD-pod03_app/subnet-[5.1.1.1/24]"
    ip = "5.1.1.1/24"
    preferred = "no"
    scope = "private"
    virtual = "no"
}


# module.my_tenant.aci_tenant.tenant:
resource "aci_tenant" "tenant" {
id = "uni/tn-mod_pod03"
name = "mod_pod03"
}


# module.my_vrf.aci_vrf.vrf:
resource "aci_vrf" "vrf" {
    bd_enforced_enable = "no"
    id = "uni/tn-mod_pod03/ctx-vrf_pn_03"
    ip_data_plane_learning = "enabled"
    knw_mcast_act = "permit"
    name = "vrf_pn_03"
    pc_enf_dir = "ingress"
    pc_enf_pref = "enforced"
    tenant_dn = "uni/tn-mod_pod03"
}

If you inspect the output of the terraform show command more closely, you will notice every resource that has been created. Here is the list of the newly created resources:

  1. module.my_tenant.aci_tenant.tenant
  2. module.my_vrf.aci_vrf.vrf
  3. module.my_bd_web.aci_bridge_domain.bd
  4. module.my_bd_web.aci_subnet.bd_subnet
  5. module.my_bd_app.aci_bridge_domain.bd
  6. module.my_bd_app.aci_subnet.bd_subnet

These resource addresses play a major role in understanding exactly what needs to be deleted.
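If all you need are these addresses, without the full attribute dump, the terraform state list command prints one resource address per line. In this lab the output would look roughly like this:

    labuser@terra-vm-pod03:~# terraform state list
    module.my_bd_app.aci_bridge_domain.bd
    module.my_bd_app.aci_subnet.bd_subnet
    module.my_bd_web.aci_bridge_domain.bd
    module.my_bd_web.aci_subnet.bd_subnet
    module.my_tenant.aci_tenant.tenant
    module.my_vrf.aci_vrf.vrf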

Step 2 - Terraform Destroy Command

Now, let's assume you want to delete just module.my_bd_app.aci_bridge_domain.bd without impacting anything else in the ACI fabric. The way to do this is to execute the terraform destroy command with the -target option.


    terraform destroy -target "module.my_bd_app.aci_bridge_domain.bd"

labuser@terra-vm-pod03:~# terraform destroy -target "module.my_bd_app.aci_bridge_domain.bd"
module.my_tenant.aci_tenant.tenant: Refreshing state... [id=uni/tn-mod_pod03]
module.my_vrf.aci_vrf.vrf: Refreshing state... [id=uni/tn-mod_pod03/ctx-vrf_pn_03]
module.my_bd_app.aci_bridge_domain.bd: Refreshing state... [id=uni/tn-mod_pod03/BD-pod03_app]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy

Terraform will perform the following actions:

# module.my_bd_app.aci_bridge_domain.bd will be destroyed
- resource "aci_bridge_domain" "bd" {
    - arp_flood = "no" -> null
    - bridge_domain_type = "regular" -> null
    - ep_clear = "no" -> null
    - host_based_routing = "no" -> null
    - id = "uni/tn-mod_pod03/BD-pod03_app" -> null
    - intersite_bum_traffic_allow = "no" -> null
    - intersite_l2_stretch = "no" -> null
    - ip_learning = "yes" -> null
    - ipv6_mcast_allow = "no" -> null
    - limit_ip_learn_to_subnets = "yes" -> null
    - ll_addr = "::" -> null
    - mac = "00:22:BD:F8:19:FF" -> null
    - mcast_allow = "no" -> null
    - multi_dst_pkt_act = "bd-flood" -> null
    - name = "pod03_app" -> null
    - optimize_wan_bandwidth = "no" -> null
    - relation_fv_rs_ctx = "uni/tn-mod_pod03/ctx-vrf_pn_03" -> null
    - tenant_dn = "uni/tn-mod_pod03" -> null
    - unicast_route = "yes" -> null
    - unk_mac_ucast_act = "proxy" -> null
    - unk_mcast_act = "flood" -> null
    - v6unk_mcast_act = "flood" -> null
    - vmac = "not-applicable" -> null
}

# module.my_bd_app.aci_subnet.bd_subnet will be destroyed
- resource "aci_subnet" "bd_subnet" {
    - bridge_domain_dn = "uni/tn-mod_pod03/BD-pod03_app" -> null
    - ctrl = "nd" -> null
    - id = "uni/tn-mod_pod03/BD-pod03_app/subnet-[5.1.1.1/24]" -> null
    - ip = "5.1.1.1/24" -> null
    - preferred = "no" -> null
    - scope = "private" -> null
    - virtual = "no" -> null
}

Destroy complete! Resources: 2 destroyed.

It is important to note that even though we were only "destroying" module.my_bd_app.aci_bridge_domain.bd, Terraform also destroyed module.my_bd_app.aci_subnet.bd_subnet. This is because module.my_bd_app.aci_subnet.bd_subnet depends on module.my_bd_app.aci_bridge_domain.bd. Therefore, when you are destroying/deleting something in Terraform, make sure you are aware of the resources and their dependencies.
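To see where that dependency comes from, here is a simplified sketch of how the two resources are wired together inside the bd module. The variable names are assumptions; the attribute values mirror the state output above.

    resource "aci_bridge_domain" "bd" {
      tenant_dn = var.tenant_dn   # "uni/tn-mod_pod03"
      name      = var.bd_name     # "pod03_app"
    }

    # bridge_domain_dn references the bridge domain's id, so Terraform cannot
    # keep the subnet once its parent bridge domain is destroyed.
    resource "aci_subnet" "bd_subnet" {
      bridge_domain_dn = aci_bridge_domain.bd.id
      ip               = var.subnet_ip   # "5.1.1.1/24"
    }

If your intent really is to remove both resources, you can also target the whole module instead: terraform destroy -target "module.my_bd_app".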

What is cool about all this is that you can easily tell Terraform to rebuild what you destroyed. Since you still have the configuration, running the plan and apply again leads Terraform to notice the missing resources and rebuild them, returning the deployment to its desired state.

This is a big advantage of Terraform over stateless automation tools: it knows what has been deployed, and based on that knowledge it can determine exactly what needs to be done, which accelerates the process.

Step 3 - Rebuild the configuration

Using the terraform plan and apply commands, rebuild the configuration.


terraform plan -out main.plan
terraform apply "main.plan"

You will see that Terraform focuses purely on the resources that were deleted. The benefit might be even clearer if you had deployed thousands of instances in AWS and one of them had become corrupted and needed to be re-initialized: with Terraform you could simply destroy that one resource and plan its re-creation.
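As with the destroy, the rebuild can be scoped if you prefer not to evaluate the entire configuration, since terraform plan also accepts the -target option. This is optional for the lab; just a sketch of the alternative:

    terraform plan -target "module.my_bd_app" -out main.plan
    terraform apply "main.plan"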

Finished!