This morning I was asked by a friend if I could share any Ansible roles we use at $WORK for our Red Hat Atomic Host servers. It was a relatively easy task to review and sanitize our configs – Atomic Hosts are so minimal, there’s almost nothing we have to do to configure them.
When our Atomic hosts are initially created, they're minimally configured via cloud-init to set up networking and add a root user ssh key. (We have a VMware environment, so we use the RHEL Atomic .ova provided by Red Hat, and mount an ISO with the cloud-init 'user-data' and 'meta-data' files to be read by cloud-init.) Once that's done, we run Ansible tasks from a central server to set up the rest of the Atomic host.
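The user-data we feed in that way is tiny. As a rough sketch (the hostname and key below are placeholders, not our real values), it looks something like:

```yaml
#cloud-config
# Hypothetical minimal user-data: just enough for first boot;
# everything else is left to Ansible.
hostname: atomic01
ssh_authorized_keys:
  - ssh-rsa AAAA...example root@central-server
# The accompanying 'meta-data' file on the same ISO can be as
# small as a single line, e.g.:
#   instance-id: atomic01
```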
Below is a snippet of most of the playbook.
I think the variables are self-explanatory. Some notes are added to explain why we're doing a particular thing. The disk partitioning is explained in more detail in a previous post of mine.
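For reference, here's a sketch of how those variables might be defined in a vars file. The names match the playbook, but every value here is an illustrative placeholder rather than our real config:

```yaml
# Illustrative values only -- adjust for your environment.
root_password: "{{ vault_root_password }}"  # ideally pulled from ansible-vault
ssh_users:
  - name: alice
    key: "ssh-rsa AAAA...example alice@example.com"
volume_group: atomicos
default_pvs:
  - /dev/sda2
  - /dev/sdb
root_lv: root
root_device: /dev/mapper/atomicos-root
srv_lv: srv
srv_device: /dev/mapper/atomicos-srv
srv_partition: /srv
swapfile: /dev/atomicos/swap
```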
---
# Set w/Ansible because cloud-init is plain text
- name: Access | set root password
  user:
    name: root
    password: "{{ root_password }}"

- name: Access | add ssh user keys
  authorized_key:
    user: "{{ item.name }}"
    key: "{{ item.key }}"
  with_items: "{{ ssh_users }}"

- name: Access | root access to cron
  lineinfile:
    dest: /etc/security/access.conf
    line: "+:root:cron crond"

- name: Access | fail closed
  lineinfile:
    dest: /etc/security/access.conf
    line: "-:ALL:ALL"
# docker-storage-setup service re-configures LVM
# EVERY TIME Docker service starts, eventually
# filling up disk with millions of tiny files
- name: Disks | disable lvm archives
  copy:
    src: lvm.conf
    dest: /etc/lvm/lvm.conf
  notify:
    - restart lvm2-lvmetad

- name: Disks | expand vg with extra disks
  lvg:
    vg: '{{ volume_group }}'
    pvs: '{{ default_pvs }}'

- name: Disks | expand the lvm
  lvol:
    vg: '{{ volume_group }}'
    lv: '{{ root_lv }}'
    size: 15g

- name: Disks | grow fs for root
  filesystem:
    fstype: xfs
    dev: '{{ root_device }}'
    resizefs: yes

- name: Disks | create srv lvm
  lvol:
    vg: '{{ volume_group }}'
    lv: '{{ srv_lv }}'
    size: 15g

- name: Disks | format fs for srv
  filesystem:
    fstype: xfs
    dev: '{{ srv_device }}'
    resizefs: no

- name: Disks | mount srv
  mount:
    name: '{{ srv_partition }}'
    src: '{{ srv_device }}'
    fstype: xfs
    state: mounted
    opts: 'defaults'

## This is a workaround for an XFS bug (only grows if mounted)
- name: Disks | grow fs for srv
  filesystem:
    fstype: xfs
    dev: '{{ srv_device }}'
    resizefs: yes

## Always check this, or it will try to do it each time
- name: Disks | check if swap exists
  stat:
    path: '{{ swapfile }}'
    get_checksum: no
    get_md5: no
  register: swap

- debug: var=swap.stat.exists
- name: Disks | create swap lvm
  ## Shrink not supported until 2.2
  #lvol: vg=atomicos lv=swap size=2g shrink=no
  lvol:
    vg: atomicos
    lv: swap
    size: 2g

- name: Disks | make swap file
  command: mkswap '{{ swapfile }}'
  when:
    - not swap.stat.exists

- name: Disks | add swap to fstab
  lineinfile:
    dest: /etc/fstab
    regexp: "^{{ swapfile }}"
    line: "{{ swapfile }} none swap sw 0 0"

- name: Disks | swapon
  command: swapon '{{ swapfile }}'
  when: ansible_swaptotal_mb < 1
- name: Docker | setup docker-storage-setup
  lineinfile:
    dest: /etc/sysconfig/docker-storage-setup
    regexp: ^ROOT_SIZE=
    line: "ROOT_SIZE=15G"
  register: docker_storage_setup

- name: Docker | setup docker-network
  lineinfile:
    dest: /etc/sysconfig/docker-network
    regexp: ^DOCKER_NETWORK_OPTIONS=
    line: >-
      DOCKER_NETWORK_OPTIONS='-H unix:///var/run/docker.sock
      -H tcp://0.0.0.0:2376
      --tlsverify
      --tlscacert=/etc/pki/tls/certs/ca.crt
      --tlscert=/etc/pki/tls/certs/host.crt
      --tlskey=/etc/pki/tls/private/host.key'
- name: add CA certificate
  copy:
    src: ca.crt
    dest: /etc/pki/tls/certs/ca.crt
    owner: root
    group: root
    mode: 0644

- name: Admin Helpers | thinpool wiper script
  copy:
    src: wipe_docker_thinpool.sh
    dest: /usr/local/bin/wipe_docker_thinpool.sh
    mode: 0755

- name: Journalctl | set journal sizes
  copy:
    src: journald.conf
    dest: /etc/systemd/journald.conf
    mode: 0644
  notify:
    - restart systemd-journald

- name: Random Atomic Bugfixes | add lastlog
  file:
    path: /var/log/lastlog
    state: touch

- name: Random Atomic Bugfixes | add root bashrc for prompt
  copy:
    src: root-bashrc
    dest: /root/.bashrc
    mode: 0644

- name: Random Atomic Bugfixes | add root bash_profile for .bashrc
  copy:
    src: root-bash_profile
    dest: /root/.bash_profile
    mode: 0644
### Disable Cloud Init ###
## The systemd module is new in Ansible 2.2, which we don't
## have yet, so ignore failures from these tasks
- name: stop cloud-config
  systemd: name=cloud-config state=stopped enabled=no masked=yes
  ignore_errors: yes

- name: stop cloud-init
  systemd: name=cloud-init state=stopped enabled=no masked=yes
  ignore_errors: yes

- name: stop cloud-init-local
  systemd: name=cloud-init-local state=stopped enabled=no masked=yes
  ignore_errors: yes

- name: stop cloud-final
  systemd: name=cloud-final state=stopped enabled=no masked=yes
  ignore_errors: yes

- name: Remove old cloud-init files if they exist
  shell: rm -f /etc/init/cloud-*
  ignore_errors: yes
The only other tasks we run are related to $WORK-specific stuff (security office scanning, a patching user account for automated updates, etc.).
One of the beneficial side-effects of mixing cloud-init and Ansible is that the cloud-init is only used for the initial setup (networking and root access), so it ends up being under the size limit imposed by Amazon Web Services on their user-data files. This allows us to create and maintain RHEL Atomic hosts in AWS using the exact same cloud-init user-data file and Ansible roles.
Photo by Heather Gill on Unsplash