Deploying a Linux EC2 Instance with HashiCorp Terraform and Vault to AWS and Connecting to Pure Cloud Block Store


Another update to my “Terraforming AWS” series is the ability to deploy a Linux-based instance and connect it to an existing data volume sitting on Pure Cloud Block Store. This post will cover how to set up your environment, then deploy and bootstrap your Linux instance to connect via iSCSI.

Overview

If you are not familiar with Pure Cloud Block Store, it is a purpose-built block storage system that currently runs in AWS. There are many benefits and use cases, which you can read more about here.

Once their data is in Pure Cloud Block Store, customers often ask how they can rapidly stand up a system and gain access to it. This blog covers a piece of automation I am now using to stand up an EC2 instance, configure it for iSCSI, and get access to my data.

Pre-Requisites

Hashicorp Vault

  • This is used to store the AWS access and secret key securely.

Hashicorp Terraform

  • This is used to automate the provisioning using a Terraform .TF file.

Amazon Web Services Account

  • This is the infrastructure to run the EC2 virtual machines.

Setup and Addition of AWS Secrets to Vault

Since I am running this on macOS, I used brew to install Vault:

brew tap hashicorp/tap
brew install hashicorp/tap/vault
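
If the install succeeded, the binary should be on your PATH; a quick sanity check:

vault version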

Once Vault is installed, you can run the server locally. This will print the environment variable to set along with the unseal key and root token (the root token is what Terraform will use).

Since I am using this for a lab, I am using the built in vault dev server. This should not be used for production!

vault server -dev
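
In a second terminal you can confirm the dev server is reachable before adding any secrets (the address below is the dev server's default listen address):

export VAULT_ADDR='http://127.0.0.1:8200'
vault status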

To add your AWS secret key and access key to the vault, run the following commands:

export VAULT_ADDR='http://127.0.0.1:8200'
vault kv put secret/<secretname> secret_key=<secretkey> access_key=<accesskey>
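
To confirm the secret was written, you can read it back (the same <secretname> placeholder applies):

vault kv get secret/<secretname>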

Terraform Manifest Configuration

Download the sample manifest from GitHub and update the variables for your environment. This includes the Vault token and secret name, as well as the AWS region, AMI, instance type, VPC security groups, subnet ID, key pair, and instance name.

 1provider "vault" {
 2    address = "http://localhost:8200"
 3    token = "<unsealtokenfromvault>"
 4}
 5
 6data "vault_generic_secret" "aws_auth" {
 7    path = "secret/<keyname>"
 8}
 9
10provider "aws" {
11    region = "us-west-2"
12    access_key = data.vault_generic_secret.aws_auth.data["access_key"]
13    secret_key = data.vault_generic_secret.aws_auth.data["secret_key"]
14}
15
16data "aws_ami" "linux" {
17    owners      = ["amazon"]
18    most_recent = true
19    filter {
20        name   = "name"
21        values = ["amzn2-ami-hvm-2.0*"]
22    }
23    filter {
24        name   = "architecture"
25        values = ["x86_64"]
26    }
27}
28
29resource "aws_instance" "linux" {
30    ami           = data.aws_ami.linux.image_id
31    instance_type = "t2.micro"
32    vpc_security_group_ids = ["sg-id1","sg-id2","sg-id3"]
33    subnet_id = "subnet-id"
34    key_name = "keypair"
35    tags = {
36        Name = "instance_name"
37    }
38    user_data = <<EOF
39        #!/bin/bash
40        yum update -y
41        yum -y install iscsi-initiator-utils
42        yum -y install lsscsi
43        yum -y install device-mapper-multipath
44        service iscsid start
45        amazon-linux-extras install epel -y
46        yum install sshpass -y
47        iqn=`awk -F= '{ print $2 }' /etc/iscsi/initiatorname.iscsi`
48        sshpass -p pureuser ssh  -oStrictHostKeyChecking=no pureuser@<ctmgmt-vip>> purehost create <hostnameforpure> --iqnlist $iqn
49        sshpass -p pureuser ssh  -oStrictHostKeyChecking=no pureuser@<ctmgmt-vip> purehost connect --vol <purevolname> <hostnameforpure>
50        iscsiadm -m iface -I iscsi0 -o new
51        iscsiadm -m iface -I iscsi1 -o new
52        iscsiadm -m iface -I iscsi2 -o new
53        iscsiadm -m iface -I iscsi3 -o new
54        iscsiadm -m discovery -t st -p <ct0-iscsi-ip>:3260
55        iscsiadm -m node -p <ct0-iscsi-ip> --login
56        iscsiadm -m node -p <ct1-iscsi-ip> --login
57        iscsiadm -m node -L automatic
58        mpathconf --enable --with_multipathd y
59        service multipathd restart
60        mkdir /mnt/cbsvol
61        disk=`multipath -ll|awk '{print $1;exit}'`
62        mount /dev/mapper/$disk /mnt/cbsvol
63        EOF
64}
65
66output "public_dns" {
67    value = aws_instance.linux.*.public_dns
68}
69output "public_ip" {
70    value = aws_instance.linux.*.public_ip
71}
72output "name" {
73    value = aws_instance.linux.*.tags.Name
74}

Run the Terraform Manifest

Run terraform init to install any needed providers, terraform plan to make sure all of the connectivity is working, and then terraform apply to deploy!

terraform init
terraform plan
terraform apply
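
Once the apply completes, the outputs defined at the bottom of the manifest are printed, and you can retrieve them again later with terraform output:

terraform output public_ip
terraform output public_dns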

If everything is successful, your EC2 instance should be deployed in roughly two minutes, and after a reboot or two it will be fully configured and running!

Viewing your Pure Cloud Block Store data

In my example I already had a volume provisioned with the data on my Pure Cloud Block Store. Within the script I create a new Pure host object, connect it to the existing volume, configure iSCSI, and then mount the disk. This just shows what type of automation is possible when managing or deploying workloads in the cloud!
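
If you want to verify the bootstrap from inside the instance, SSH in with your key pair and check the devices and the mount; a quick sketch using the tools the script installs (the /mnt/cbsvol mount point comes from the user_data above):

lsscsi               # the Cloud Block Store LUN should appear here
multipath -ll        # the multipath device assembled from the iSCSI paths
df -h /mnt/cbsvol    # the volume mounted by the bootstrap script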

Closing

Hopefully this helped you get started with automating EC2 instance deployment with Terraform!

Any questions or comments? Leave them below.

