Infrastructure as Code: Managing Okta, GCP, and Cloudflare with Terraform
Yesterday I automated employee onboarding with Okta and Airflow. The day before that, I built the entire platform from scratch for $10/month.
Today I asked a different question: what happens when I need to rebuild all of it?
Without Infrastructure as Code, the answer is: hours of clicking through dashboards, hoping you remember every setting, every DNS record, every Okta app configuration. With Terraform, the answer is: terraform apply.
This is the story of how I took everything I built and turned it into code.
Why Terraform
I've been managing Okta, Cloudflare, and GCP through their respective UIs. It works — until it doesn't.
The problems with manual infrastructure management are subtle at first. A DNS record gets changed and nobody remembers why. An Okta app's redirect URI gets updated during a migration and the old value is lost. A firewall rule exists but nobody can explain when it was added or what it's for.
Terraform solves all of this. Every resource is defined in a .tf file, committed to Git, and applied through a controlled workflow. The state of your infrastructure becomes a fact, not a memory.
The Stack
By the end of today, three providers are fully managed as code:
terraform-inguva/
├── gcp/ ← VM, static IP, firewall rules
├── okta/ ← SSO app, groups, users, assignments
└── cloudflare/ ← A, CNAME, TXT, DMARC, SPF records
State for all three is stored in Terraform Cloud (free tier, up to 500 resources). Every plan and apply runs remotely with a full audit log.
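Wiring a configuration to Terraform Cloud is a small block at the top of each project. A minimal sketch — the organization and workspace names here are placeholders, not the repo's actual values:

```hcl
terraform {
  # Remote state + remote plan/apply runs in Terraform Cloud.
  cloud {
    organization = "example-org" # placeholder — your TFC organization

    workspaces {
      name = "gcp" # one workspace per provider directory
    }
  }

  required_version = ">= 1.5.0"
}
```

With this in place, `terraform plan` and `terraform apply` execute remotely and every run is logged against the workspace.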
Phase 1: GCP
The GCP setup was straightforward. One VM, one static IP, two firewall rules. The interesting part was importing existing resources rather than creating new ones.
```hcl
resource "google_compute_instance" "airflow_server" {
  name         = "airflow-server"
  machine_type = "e2-medium"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2204-lts"
      size  = 20
    }
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
The lifecycle.prevent_destroy = true block is worth highlighting. It's a safety net — Terraform will refuse to destroy this resource even if you accidentally write code that would do so. For a production VM running Airflow, that's non-negotiable.
Importing existing resources is done with terraform import:
terraform import \
google_compute_instance.airflow_server \
<GCP_PROJECT_ID>/us-central1-a/airflow-server
One command, and Terraform now knows about a resource that's been running for weeks.
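The sanity check after any import is a plan — if the resource block matches what actually exists, Terraform should report no drift:

```shell
terraform plan

# When the code matches the imported resource, Terraform reports:
#   No changes. Your infrastructure matches the configuration.
```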
Phase 2: Okta
This is where it gets interesting for IAM engineers.
The Okta Terraform provider is officially maintained by Okta and covers nearly everything: apps, groups, users, policies, authorization servers, and more. For this setup, the key resources are:
```hcl
# OIDC app for Airflow SSO
resource "okta_app_oauth" "airflow_sso" {
  label          = "Airflow SSO"
  type           = "web"
  grant_types    = ["authorization_code"]
  response_types = ["code"]

  redirect_uris = [
    "https://<your-airflow-domain>/oauth-authorized/okta"
  ]

  lifecycle {
    prevent_destroy = true
    ignore_changes  = [consent_method, hide_web, issuer_mode, login_mode]
  }
}
```
```hcl
# Groups for role-based access
resource "okta_group" "airflow_admins" {
  name        = "airflow-admins"
  description = "Airflow administrators — mapped to Admin role"
}

# Assign groups to the app
resource "okta_app_group_assignment" "airflow_admins" {
  app_id   = okta_app_oauth.airflow_sso.id
  group_id = okta_group.airflow_admins.id
}
```
The ignore_changes lifecycle block deserves explanation. Some Okta app attributes get set by Okta itself after creation and differ from what you'd specify in code. Without ignore_changes, every terraform plan would show a diff for those attributes even though nothing meaningful has changed. This is a common pattern when importing existing resources into Terraform state.
The most powerful thing about managing Okta with Terraform is the dependency graph. When you write:
group_id = okta_group.airflow_admins.id
Terraform automatically knows to create the group before the assignment. You never have to think about order of operations.
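That ordering is inspectable, too — `terraform graph` emits the dependency graph in DOT format, which Graphviz can render:

```shell
# Render the resource dependency graph; requires Graphviz's `dot` on PATH.
terraform graph | dot -Tpng > graph.png
```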
Phase 3: Cloudflare
Every DNS record for my domain is now code:
```hcl
resource "cloudflare_record" "airflow" {
  zone_id         = var.zone_id
  name            = "airflow"
  content         = "<VM_IP>"
  type            = "A"
  proxied         = false
  allow_overwrite = false
}
```
The import process revealed something interesting: Cloudflare's MX records and DKIM records for Email Routing are marked read_only and cannot be managed via API. Terraform returned a clear error:
Error: This record is managed by Email Routing.
Disable Email Routing to modify/remove this record. (1046)
The right response wasn't to fight it — it was to remove those records from state and document them as comments. Not everything needs to be in Terraform. The goal is to manage what you can, document what you can't, and never let the perfect be the enemy of the good.
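Dropping a resource from state without touching the live record is what `terraform state rm` does. Something like this (the resource address is a placeholder for whatever name the MX record was imported under):

```shell
# Forget the Email Routing MX record in Terraform state only —
# the actual record in Cloudflare is untouched.
terraform state rm cloudflare_record.mx_route1
```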
The Import Pattern
The most underrated Terraform skill is importing existing infrastructure. Most Terraform tutorials start from scratch. Real-world IAM engineering never does.
The workflow is:
1. Write the resource block in .tf to match what exists
2. Run terraform import &lt;resource&gt; &lt;id&gt;
3. Run terraform plan — if you see no changes, your code matches reality
4. If you see changes, adjust ignore_changes or fix the values
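As a concrete run-through, here is the pattern applied to one Cloudflare record — the IDs are placeholders, and the Cloudflare provider expects imports in `<zone_id>/<record_id>` form:

```shell
# 1. Write a cloudflare_record "airflow" block matching the live record

# 2. Import it into state
terraform import cloudflare_record.airflow <ZONE_ID>/<RECORD_ID>

# 3. Confirm the code matches reality — expect no planned changes
terraform plan
```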
This is exactly how you'd onboard an existing Okta org, an existing GCP project, or an existing DNS setup into Terraform management. It's one of the most valuable practical skills for a Senior IAM Engineer or IT Platform Engineer.
What's Next
The Terraform foundation is in place. Three logical next steps:
Modules — the current code has duplication. A reusable okta-app module that takes an app name and redirect URI as inputs would make adding new SSO apps a 5-line operation.
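A sketch of what that module's call site could look like — the okta-app module, its path, and its inputs are hypothetical, not yet in the repo:

```hcl
module "grafana_sso" {
  source = "./modules/okta-app" # hypothetical module path

  # Hypothetical inputs: app label, OIDC redirect URI, assigned groups
  label        = "Grafana SSO"
  redirect_uri = "https://grafana.example.com/login/okta"
  groups       = [okta_group.airflow_admins.id]
}
```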
for_each — the Cloudflare A records are nearly identical. Refactoring them into a single for_each block would be cleaner and easier to maintain.
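The refactor could look roughly like this — one resource, one map of records, where the names and values are illustrative:

```hcl
locals {
  # Illustrative — same shape as the existing A records
  a_records = {
    airflow = "<VM_IP>"
    www     = "<VM_IP>"
  }
}

resource "cloudflare_record" "a" {
  for_each = local.a_records

  zone_id = var.zone_id
  name    = each.key
  content = each.value
  type    = "A"
  proxied = false
}
```

Adding a record then becomes one line in the map rather than a new resource block.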
CI/CD — right now Terraform runs from my Mac. The next step is a GitHub Actions workflow that runs terraform plan on every PR and terraform apply on merge to main. Automated, auditable, and safe.
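A minimal sketch of that workflow, assuming Terraform Cloud holds the provider credentials — the file path and the TF_API_TOKEN secret name are assumptions:

```yaml
# .github/workflows/terraform.yml — plan on PRs, apply on merge to main
name: terraform
on:
  pull_request:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      # Terraform CLI credential for app.terraform.io (Terraform Cloud)
      TF_TOKEN_app_terraform_io: ${{ secrets.TF_API_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan
        if: github.event_name == 'pull_request'
      - run: terraform apply -auto-approve
        if: github.ref == 'refs/heads/main'
```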
The code is at github.com/chanderinguva/terraform-inguva if you want to see the full implementation.
The Bigger Picture
Managing identity infrastructure manually doesn't scale. As soon as you have more than a handful of Okta apps, more than one engineer touching DNS, or more than one environment to maintain, the lack of version control becomes a liability.
Terraform changes the conversation from "what did we change?" to "what does our infrastructure look like, and here's the commit that shows why."
For IAM engineers specifically, this is the difference between being the person who clicks through the Okta admin console and being the person who owns the identity platform as code.