Consider this use case: as part of our test framework, we have to deploy some resources and then execute a script before we can start using those resources for testing. A typical example is the AirView RDS module. The RDS is often provisioned with the flyway module, which has an SSM document for creating the DB. What we had been doing is call the RDS module and the flyway module and apply them in a Terraform workspace. Once they are successfully deployed (i.e. applied), a human has to go through the AWS console and execute the script that creates the NGCS database (as an example). After that, it is ready to be used for testing. I would like to find a way to avoid this human-interaction step. So the order of creation and actions should be:

  1. Provision DB cluster
  2. Provision utility EC2 instance (where the flyway script can run)
  3. Execute flyway

How can that be done in an automated way? Further, if I have a few other resources that need a similar setup (maybe not flyway, but some kind of script), how can I control the sequence of activities, from creating the resources to running scripts on them? A rough skeleton of the shape I have in mind is below.
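
For context, this is roughly the structure I'm picturing; the module names and source paths are placeholders, not our real modules, and module-level depends_on requires Terraform 0.13 or newer:

# Placeholder skeleton: names and paths are not our real modules.
module "rds_cluster" {
  source = "./modules/airview-rds"      # 1. provision the DB cluster
}

module "util_instance" {
  source     = "./modules/utility-ec2"  # 2. provision the utility EC2 instance
  depends_on = [module.rds_cluster]
}

module "flyway" {
  source     = "./modules/flyway"       # 3. execute flyway once both exist
  depends_on = [module.util_instance]
}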


Best Answer


Try using Terraform provisioners. The aws_instance resource, which I suppose you are using, fully supports this feature. With a provisioner, you can run any command you want right after instance creation.

Don't forget to configure the connection settings; you can read more in the Terraform docs on provisioners and connections.

Finally, you should end up with something close to this:

resource "aws_instance" "my_instance" {ami = "${var.instance_ami}"instance_type = "${var.instance_type}"subnet_id = "${aws_subnet.my_subnet.id}"vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]key_name = "${aws_key_pair.ec2key.key_name}"provisioner "remote-exec" {inline = ["my commands",]}connection {type = "ssh"user = "ec2-user"password = ""private_key = "${file("~/.ssh/id_rsa")}"}}

You need to remember that provisioners are a last resort.

From the docs, have you tried user_data?
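
If user_data fits your case, a minimal sketch (again, the flyway invocation and the cluster reference are assumptions on my part) bakes the script into the instance itself, so it runs on first boot with no SSH connection or provisioner at all:

resource "aws_instance" "my_instance" {
  ami           = "${var.instance_ami}"
  instance_type = "${var.instance_type}"

  # Runs once on first boot; Terraform never needs to connect to the instance.
  user_data = <<EOF
#!/bin/bash
flyway -url=jdbc:mysql://${aws_rds_cluster.airview.endpoint}/ngcs migrate
EOF
}

Referencing the cluster endpoint inside user_data also gives Terraform the implicit dependency, so the instance is only created after the cluster is up.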