
Provision servers with Terraform on vSphere


My last article showed how to build a server template with Packer.

Now we want to use this template to create some servers on VMware vSphere. DNS records will be registered manually and all IP addresses are statically defined in the config files.

This code is tested with Terraform 0.12.

tl;dr

Terraform provides a good way to implement an immutable server life cycle. Immutable means that servers aren’t changed in place; they are destroyed and created again when you change something.

Immutable servers have the big advantage that configuration drift cannot take place. I like the fact that you can trust servers to behave identically when you use the same configuration over and over again.

So we can deploy test and production, deploy patches and still be sure that tests can be trusted.

With mutable deployments it can happen that servers act differently. Think of deploying fix1, fix2 and fix3 on a test server, but in production deploying just fix3 (which is cumulative and includes fixes 1 and 2). The servers should then act identically, but we often see differences during production deployments.

I think the time of manually deployed systems is over. The amount of time for installing, documenting and patching is just too high.

Automation Fear

Most of the time we fear running automated tasks on our servers because we’re unsure whether changes were applied manually that aren’t covered in our scripts. You need to see the advantages:

  • Documentation automated (most of the time our definition files are documentation enough)

  • Reliable server deployments

  • Automated tests can help deploy patches and updates to all systems, not only a few

Data should be stored on separate disks or storage systems, so that it is independent of your servers.

Often containerization is presented as the solution for immutable deployments. That’s right, and I love working with containers, but how do you deploy the servers running Kubernetes or Docker? I prefer generating virtual machines automatically instead of using very large containers. So when I can’t limit a container to a single microservice, I create a virtual machine most of the time.

Terraform installation

Just download the zipped binary from https://www.terraform.io/downloads.html and put it into your PATH.

Download needed plugins

The binary checks all files in your current folder and then downloads the plugins needed for your deployment.

Just run

terraform init
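To make the plugin download reproducible, you can pin the provider version in the provider configuration; with Terraform 0.12 the constraint goes directly into the provider block. A sketch (the constraint value is an assumption, and the remaining provider settings stay unchanged):

```hcl
provider "vsphere" {
  # Pins the plugin that `terraform init` downloads;
  # "~> 1.11" is just an example constraint.
  version = "~> 1.11"
}
```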

Build the first server with Terraform

build.tf

provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

variable "project_folder" {
  default = "stoeps-example"
}

data "vsphere_datacenter" "dc" {
  name = var.vsphere_datacenter
}

data "vsphere_datastore" "datastore" {
  name          = var.vsphere_datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_compute_cluster" "cluster" {
  name          = var.vsphere_cluster
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = var.vsphere_resource_pool
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network" {
  name          = var.vsphere_network
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "template" {
  # Name of Template
  name          = var.template
  datacenter_id = data.vsphere_datacenter.dc.id
}

resource "vsphere_folder" "folder" {
  path          = "${var.my_devops_folder}/${var.project_folder}"
  type          = "vm"
  datacenter_id = data.vsphere_datacenter.dc.id
}

vars.tf

variable "vsphere_server" {
  default = "khnum.example.com"
}

variable "vsphere_user" {
  default = "cstoettner@example.com"
}

variable "vsphere_password" {
  description = "vsphere server password for the environment"
  default     = ""
}

variable "vsphere_datacenter" {
  default = "HVIE"
}

variable "vsphere_cluster" {
  default = "HVIE PWR HOSTS"
}

variable "vsphere_datastore" {
  default = "devops-01_sas_7.2k_raid10"
}

variable "vsphere_network" {
  default = "vm-net-devops"
}

variable "vsphere_resource_pool" {
  default = "rp_hvie_devops"
}

variable "my_devops_folder" {
  default = "devops"
}

variable "vsphere_domain" {
  default = "devops.example.com"
}

variable "vsphere_dns_servers" {
  type    = list(string)
  default = ["10.10.85.5"]
}

variable "admin_user" {
  default = "root"
}

variable "admin_password" {
  default = "password"
}

variable "template" {
  default = "centos-76-base"
}

variable "ssh-pub-key" {
  # Do not store with your code in git, provide on the commandline!
  default = ""
}
The variables admin_user and admin_password define the user and password for the first SSH login.
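Instead of editing vars.tf, overrides can also go into a terraform.tfvars file, which Terraform loads automatically on plan and apply. A sketch (the values shown are examples from this article):

```hcl
# terraform.tfvars -- picked up automatically by plan/apply
vsphere_server = "khnum.example.com"
vsphere_user   = "cstoettner@example.com"
template       = "stoeps-centos-20190604"
```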

During provisioning, I run some shell commands to disable password login for root and put an SSH key into /root/.ssh/authorized_keys.

server1.tf

resource "vsphere_virtual_machine" "example-server1" {
  name             = "example-server1"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 4
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type
  network_interface {
    network_id   = data.vsphere_network.network.id
    adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
  }
  folder = "${var.my_devops_folder}/${var.project_folder}"
  disk {
    label            = "disk0"
    size             = 100
    eagerly_scrub    = "false"
    thin_provisioned = "true"
  }
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
    customize {
      linux_options {
        host_name = "example-server1"
        domain    = var.vsphere_domain
      }
      network_interface {
        ipv4_address = "10.10.85.95"
        ipv4_netmask = 24
      }
      ipv4_gateway    = "10.10.85.1"
      dns_server_list = var.vsphere_dns_servers
      dns_suffix_list = [var.vsphere_domain]
    }
  }
  provisioner "remote-exec" {
    inline = [
      "systemd-machine-id-setup",
      "mkdir -p /root/.ssh",
      "touch /root/.ssh/authorized_keys",
      "echo '${var.ssh-pub-key}' >> /root/.ssh/authorized_keys",
      "chown -R root:root /root/.ssh",
      "chmod 700 /root/.ssh",
      "chmod 600 /root/.ssh/authorized_keys",
      "passwd --lock root"
    ]
  }
  connection {
    host     = self.default_ip_address
    type     = "ssh"
    user     = var.admin_user
    password = var.admin_password
  }
}
The last command locks the root user’s password, so only key-based authentication is possible.

When you want to create multiple servers, you can just duplicate the definition file of server1 and change name and IP addresses, or you use loops and counters.
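The loop variant can be sketched with count; the names and the IP range below are assumptions, and the omitted arguments stay the same as in server1.tf:

```hcl
# Sketch: three servers from one definition. count.index runs 0..2,
# so names become example-server1..3 and addresses 10.10.85.95..97.
resource "vsphere_virtual_machine" "example" {
  count            = 3
  name             = "example-server${count.index + 1}"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  # ... remaining arguments as in server1.tf ...
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
    customize {
      linux_options {
        host_name = "example-server${count.index + 1}"
        domain    = var.vsphere_domain
      }
      network_interface {
        ipv4_address = "10.10.85.${95 + count.index}"
        ipv4_netmask = 24
      }
      ipv4_gateway = "10.10.85.1"
    }
  }
}
```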

Now let’s build the first deployment plan:

terraform plan -var "vsphere_password=my-funky-password" \
               -var "template=stoeps-centos-20190604" \
               -var "ssh-pub-key=$(cat ~/.ssh/stoeps_rsa.pub)" \
               -out server1.terraform
  • Set your vSphere password here

  • Name of the template to use for provisioning

  • cat the SSH public key into the ssh-pub-key variable

  • Write the plan to a deployment file

Providing the password and key on the command line has a big advantage: the password isn’t stored in your version control system. On the other hand, it will appear in your shell history. Bash doesn’t store commands starting with a space in its history (when HISTCONTROL includes ignorespace), or you can set a temporary environment variable to hold the password. The SSH key is read directly from the public key file.
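Another way to keep the secret out of the history: Terraform reads environment variables with the TF_VAR_ prefix and maps them to the variable of the same name, so the password can be exported once per session. A minimal sketch (the literal value is only an illustration; in practice read it from a prompt or a secret store):

```shell
# Terraform maps TF_VAR_vsphere_password to var.vsphere_password,
# so no -var flag is needed for the secret on plan/apply/destroy.
export TF_VAR_vsphere_password='my-funky-password'

# Sanity check without printing the secret itself:
echo "password set: ${#TF_VAR_vsphere_password} characters"
```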

Terraform will show you all resources and information which will be created, changed or destroyed. Don’t forget that Terraform will work with all files of the current folder.

Changing a variable can trigger a recreation of the whole environment of the current folder, so check carefully what will happen on applying.

Apply the change

terraform apply server1.terraform

Recreate a single server

Sometimes destroying or changing all servers is too much. With Terraform you have the option to taint a server; a tainted server will be destroyed and created again on the next terraform plan and terraform apply.

Mark server for recreation (taint)

terraform taint vsphere_virtual_machine.example-server2
The complete resource address is needed here.

Destroy servers

When you want to delete all servers, you can run


terraform destroy -var "vsphere_password=$TF" \
                  -var "template=stoeps-centos-20190604" \
                  -var "ssh-pub-key=$(cat ~/.ssh/stoeps_rsa.pub)"

Next Ansible

The next step is further provisioning with Ansible. This can run separately, or as a post-provisioning task from Terraform.
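As a sketch of the Terraform-triggered variant: a local-exec provisioner can call ansible-playbook once the machine is up. The playbook name and the inline inventory format are assumptions here:

```hcl
resource "vsphere_virtual_machine" "example-server1" {
  # ... existing configuration ...

  # Runs on the machine executing Terraform after the VM is created;
  # 'site.yml' is a placeholder playbook name, the trailing comma
  # makes the address an ad-hoc inventory.
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.default_ip_address},' -u root site.yml"
  }
}
```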

Author
Christoph Stoettner
I work at Vegard IT GmbH as a senior consultant, focusing on collaboration software, Kubernetes, security, and automation. I primarily work with HCL Connections, WebSphere Application Server, Kubernetes, Ansible, Terraform, and Linux. My daily work occasionally leads to technical talks and blog articles, which I share here more or less regularly.
