Template Machine

📸 Summary

This workflow deploys a packer image bootstrapped using cloud-init. The workflow requires packer to be installed on the target node. These VM templates follow the Instances conventions outlined here.

Virtual machines have a different deployment mechanism than any of the other automations. I originally had packer installed on my localhost and would have terraform execute packer code over the network to build the virtual machine on the datacenter. However, packer hosts a web service while the automation is running, and the instance on proxmox waits to read its cloud-init data from that service. That was a bit temperamental, so I offloaded the work to the node. The Datacenter workflow pre-installs and configures packer and the firewall on each node.

This workflow does a remote exec: it copies the packer code to the target node and executes packer there, so the node builds its own virtual machine template. The automation takes around 35 minutes to go from an ISO to a finalized template, depending on your network and disk speed. Running it against multiple nodes simultaneously can put a noticeable load on the system, so be aware of that.

🗃️ Repo

git clone git@gitlab.com:loganmancuso_public/infrastructure/proxmox/template-machine.git

📜 Dependencies

📃 Sample TFvars

config = {
  env = "env"
}


image = {
  os          = "ubuntu"
  name        = "noble-2404"
  version     = "latest"
  description = "# Ubuntu Server Template\n## Noble Image 24.04 with docker pre-installed"
  id          = XXXXXXX
  network     = "network"
  tags        = ["docker"]
}

⚙️ Deployment Instructions

🛑 Pre-Deployment

In order for this deployment to work, the node will host a packer web server on an incrementing port starting at 8800. The CIDR will be the network of the hosted instance.

Packer is preinstalled on each proxmox node by the Datacenter workflow bootstrap. The tofu workflow has a resource with a remote-exec provisioner that copies the transformed packer files to the target node for deployment. Packer then runs locally on the node, hosting the cloud-init data for the instance template it is provisioning. Offloading the work to the proxmox node keeps the network traffic isolated between the node and the instance.
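
In tofu terms, the pattern looks roughly like the sketch below. This is illustrative only: the resource name matches the one referenced under Notes, but the variables, paths, and inline commands are assumptions rather than the workflow's actual code.

resource "terraform_data" "packer_deploy" {
  connection {
    type        = "ssh"
    host        = var.node_address          # hypothetical variable
    user        = "root"
    private_key = file(var.ssh_private_key) # hypothetical variable
  }

  # Copy the transformed packer files to the target node.
  provisioner "file" {
    source      = "${path.module}/packer/"
    destination = "/tmp/packer-template"
  }

  # Run packer locally on the node; the instance pulls its cloud-init
  # from the packer web server on the node, not from the dev machine.
  provisioner "remote-exec" {
    inline = [
      "cd /tmp/packer-template",
      "packer init .",
      "packer build .",
    ]
  }
}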

(deprecated functionality) This is in contrast to the original method, which ran packer from the development machine, meaning each machine being bootstrapped pulled its cloud-init from the local dev machine. That was difficult to scale: three or more machines all pulling from one host would overwork my laptop. Offloading also avoids a long-running bootstrap crashing due to an issue on the development machine. If you still need the old behavior, this is how to permit firewall rules between the development machine and the proxmox node and instance:

sudo ufw allow from XXX.XXX.XXX.XXX/YY to any port 8800:88XX proto tcp

🟢 Deployment

To deploy this workflow, link the environment folder to the root directory.

The file path for this symlink has a specific structure, and it helps avoid having to use tofu workspaces. Gitlab does not yet support remote state files with workspaces, so side-by-side deployments need to live under different state files. The env is the datacenter env this template will belong to, the os is the os common name (ubuntu, debian), and the name references the os version (noble-2404, bookworm-1200); in my case I use 4 digits to help distinguish version numbers. Under that is the revision, either latest or stable.
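
As an example, an environment tree for this template might look like the following (directory names are illustrative):

env/
└── prod/                # {env}: datacenter environment
    └── ubuntu/          # {os}: os common name
        └── noble-2404/  # {name}: os version name
            ├── latest/  # {version}: revision
            └── stable/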

ln -s env/{env}/{os}/{name}/{version}/* .
tofu init
tofu plan
tofu apply

🏁 Post-Deployment


📝 Notes

  • An important note about this deployment: the workflow will iterate over each node available in that environment, and for each node it will generate a new template id as well as spin up a new packer web server to host the cloud-init files. To tear down just the packer build step (for example, to force a rebuild), target it directly:
tofu destroy --target terraform_data.packer_deploy --auto-approve
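
A minimal sketch of that per-node fan-out, assuming the node names arrive as a list (all variable names here are hypothetical):

locals {
  # One build per node: each gets its own template id and packer port.
  node_builds = {
    for idx, node in var.nodes : node => {
      template_id = var.image.id + idx # unique template id per node
      http_port   = 8800 + idx         # incrementing packer web server port
    }
  }
}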

📅 Tasks

👎 Known Issues

  • There is a bug in the packer deploy when deploying to a vm version on proxmox: the boot command runs garbled. This is the boot command; the IP and port could change, but that is it:
    linux /casper/vmlinuz -- autoinstall ds='nocloud-net;s=http://192.168.1.20:8805/'
    initrd /casper/initrd
    boot
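
For context, the boot command is sent to the VM console as keystrokes from the packer source block. A hedged fragment of what that configuration can look like for a proxmox-iso source (the timings and keystrokes are assumptions; the actual template may differ):

source "proxmox-iso" "ubuntu" {
  # ... other required settings omitted ...
  boot_command = [
    "<esc><wait>",
    "linux /casper/vmlinuz -- autoinstall ds='nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/'<enter><wait>",
    "initrd /casper/initrd<enter><wait>",
    "boot<enter>"
  ]
  # {{ .HTTPIP }}/{{ .HTTPPort }} resolve to packer's built-in HTTP
  # server, which serves the cloud-init autoinstall files.
}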
    
