Changes are inevitable, and no one knows when a restore will be needed. Today, backup and restore processes are standard, required, and part of nearly every basic deployment strategy. With that in mind, I was tasked with backing up an environment of 32 hosts; running NRT manually was not an option, so I set out to automate it.
The requirement: regular backups from all of the NetWitness hosts throughout the environment, ready in the event of a problem, an emergency, or an upgrade.
NRT is a manual process out of the box and requires a significant amount of time: run the script on each host, then relocate the backup files.
Through some trial and error I built a script to automate the process, controlled by a cron job. The issue was not NRT per se; it was pulling the backup files to a central location. I initially thought Chef might be able to pull these items back, but unfortunately pulling files is not something Chef does well. I experimented with a few Chef-based solutions, but none worked the way I wanted. So I went back to old-school public/private keys (since scp is interactive without them) to move the backups to a secondary ESA, used for storage only. The next issue I came across with NRT was that it is device specific (e.g., --category Decoder), which complicated the automation.
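Once everything is in place, the whole chain can be driven by a single cron entry on the backup server. The script path, schedule, and log location below are illustrative:

```
# m h dom mon dow  command
0 2 * * * /root/run-nrt-backups.sh >> /var/log/nrt-backup.log 2>&1
```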
Get the keys situated (SSH's interactive password prompts make this step necessary)
On the backup host (where all the backups will reside)
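A minimal sketch of the one-time key setup on the backup host. The empty passphrase (-N "") is what lets cron-driven ssh/scp run unattended; KEYDIR is illustrative, and on the real host this would be /root/.ssh:

```shell
# Generate a key pair with an empty passphrase for unattended use.
# KEYDIR is a scratch location for this sketch; use /root/.ssh in practice.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N "" -f "$KEYDIR/id_rsa"

# The public key then goes into /root/.ssh/authorized_keys on every
# NetWitness host -- I pushed it out with salt from the SA Server.
cat "$KEYDIR/id_rsa.pub"
```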
Create a script for each host type in your environment
Put the scripts on the SA Server and distribute them to the environment using the salt-cp command.
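A sketch of the distribution step from the SA Server: push each per-type script out with salt-cp, then mark it executable. The script names and minion targets are assumptions that will differ per environment, and the loop is guarded so it is a no-op on a machine without salt installed:

```shell
# Hypothetical host types; each has its own edited copy of the script.
TYPES="broker decoder concentrator"
for t in $TYPES; do
    # Guard: only meaningful on the SA Server, where salt-cp exists.
    if command -v salt-cp >/dev/null 2>&1; then
        salt-cp "${t}*" "/root/nw-backup-${t}.sh" /root/nw-backup.sh
        salt "${t}*" cmd.run 'chmod +x /root/nw-backup.sh'
    fi
done
```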
Example script for a broker
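A sketch of the broker variant, assuming the NetWitness 11.x nw-recovery-tool CLI; the dump and output paths are illustrative. It is written out via a heredoc here so it can be dropped into place and distributed as-is:

```shell
cat > /tmp/nw-backup-broker.sh <<'EOF'
#!/bin/bash
set -e
DUMP=/var/netwitness/nrt-dump    # scratch area for the NRT export
OUT=/var/netwitness/backup       # where the backup server pulls from
mkdir -p "$DUMP" "$OUT"

# --category is the one line that changes per host type.
nw-recovery-tool --export --dump-dir "$DUMP" --category Broker

# Bundle the export into a single zip named host-date.
zip -rj "$OUT/$(hostname -s)-$(date +%Y%m%d).zip" "$DUMP"
EOF
chmod +x /tmp/nw-backup-broker.sh
```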
Notice the --category option; this is what changes from script to script. The zip file created contains the host name and date.
Create a script for the backup server (in my case, a secondary ESA)
Example of the manual backup
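What a manual export looks like on a single host, run as root; the flags and path follow the NetWitness 11.x recovery-tool documentation as best I recall:

```shell
nw-recovery-tool --export --dump-dir /var/netwitness/backup --category Decoder
```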
Iterate through the hosts using your script.
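A sketch of the driver on the backup server: ssh to each host, run its backup script, then scp the resulting zip back. The host names and paths are assumptions; it is written to a file via a heredoc so cron can call it:

```shell
cat > /tmp/run-nrt-backups.sh <<'EOF'
#!/bin/bash
set -e
HOSTS="broker1 decoder1 concentrator1"   # hypothetical host names
DEST=/var/netwitness/central-backups     # central landing spot
mkdir -p "$DEST"

for h in $HOSTS; do
    # The keys from the setup step make both calls non-interactive.
    ssh "root@$h" /root/nw-backup.sh
    scp "root@$h:/var/netwitness/backup/*.zip" "$DEST/"
done
EOF
chmod +x /tmp/run-nrt-backups.sh
```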
Generated a public/private key pair for the backup host
Distributed the public key via salt on the SA Server
Validated non-interactive SSH connectivity to the hosts from the backup server
Created a backup script for each host type (broker, decoder, etc.)
Distributed the host scripts via salt on the SA Server
Created a script on the backup server to run the host scripts and then scp the backup files back to the backup server.
In the end, my solution comprised eight scripts (the same base script, edited per device type for service and category). The goal was to get all the backup files into one place so I could copy them somewhere safe. Please keep in mind, this is just one of many ways to centralize backups, and it is the solution I chose based on the environment I was working in.