FinanceServersMigrationPlan

From Newroco Tech Docs

Revision as of 11:26, 14 June 2017

Overview

This page explains the steps needed to fully update the timesheets, finance, HR and tenders VMs. These VMs communicate with each other and with other VMs through direct PostgreSQL connections and through RabbitMQ AMQP messages. They are high-profile services and this procedure requires down time for users, so it needs proper preparation before starting to limit the down time and to make sure everything works correctly after the migration, with minimal impact on users.

Steps before down time

  • Create the VMs according to their needs
  • Install all packages required
  • Update session.gc_maxlifetime = 48480 in /etc/php/7.0/apache2/php.ini
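The php.ini change can be scripted; a minimal sketch, using a temporary file as a stand-in for /etc/php/7.0/apache2/php.ini (on the real VM, run the sed against that path with sudo):

```shell
# Stand-in file; on the VM the target is /etc/php/7.0/apache2/php.ini
ini=$(mktemp)
printf 'session.gc_maxlifetime = 1440\n' > "$ini"

# Raise the session lifetime to 48480 seconds as required by this plan
sed -i 's/^session\.gc_maxlifetime *=.*/session.gc_maxlifetime = 48480/' "$ini"

grep '^session.gc_maxlifetime' "$ini"
```

After editing the real file, reload Apache (`sudo systemctl reload apache2`) so the new value takes effect.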
  • Copy all scripts on finance and install the same cron jobs
  • Update IPs and passwords in scripts
  • Add sudoers command aliases on finance:
# Cmnd alias specification
Cmnd_Alias FINANCESCRIPT= /home/finance/synchCommand.sh . /home/finance/midnightScripts.sh /home/finance/.financeLock
Cmnd_Alias PULLTSSCRIPT= /home/finance/synchCommand.sh . /home/finance/pullTimeSheets.sh * /home/finance/.financeLock
Cmnd_Alias SLIPEXPSCRIPT= /home/finance/synchCommand.sh . /home/finance/slipExpAndDatedProjects.sh /home/finance/.financeLock

# User www-data needs to run the above command as finance
www-data ALL=(finance) NOPASSWD:FINANCESCRIPT
www-data ALL=(finance) NOPASSWD:PULLTSSCRIPT
www-data ALL=(finance) NOPASSWD:SLIPEXPSCRIPT
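Before relying on these entries it is worth validating the sudoers syntax, since a broken sudoers file can lock out sudo entirely. The fragment path below is an assumption - adjust it to wherever the aliases were actually added:

```shell
# Validate the sudoers fragment before logging out (path is an assumption)
sudo visudo -c -f /etc/sudoers.d/finance-scripts

# List what www-data may run; the three aliases should appear as (finance) NOPASSWD entries
sudo -l -U www-data
```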

  • Take full DB dumps from the old VMs
  • Restore dumps to new VMs
    • create user admin with password '<admin password>';
    • create database admin owner admin;
    • create user financedbadministrator with password '<financedbadministrator password>';
    • create database finance owner financedbadministrator;
    • create user timesheetadministrator with password '<timesheetadministrator password>';
    • create database timesheetsystem owner timesheetadministrator;
    • create user humanresourcedbadministrator with password '<humanresourcedbadministrator password>';
    • create database humanresource owner humanresourcedbadministrator;
    • create user tenderdbadmin with password '<tenderdbadmin password>';
    • create database tenders owner tenderdbadmin;
    • pg_restore -Fc -d <database name> <database name>-backup.sql
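A sketch of the dump/restore round trip for one of the databases, assuming the custom (-Fc) format implied by the pg_restore command above; run as the postgres user and repeat per database:

```shell
# On the OLD VM: custom-format dump of the finance database
pg_dump -Fc -f finance-backup.sql finance

# On the NEW VM, after the create user / create database steps above
pg_restore -Fc -d finance finance-backup.sql
```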
  • Take schema only dump from old VMs and update them:
    • update passwords where necessary
    • update the amqp_XXX functions to use different routing_keys so messages from this VM are not transmitted to the LIVE archives DB: use ['database.administration.staff.test','database.administration.projects.test'] instead of ['database.administration.staff','database.administration.projects']
    • in amqp.broker update host to rabbitmq.thehumanjourney.net instead of 10.0.10.107
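If the broker configuration lives in the pg_amqp extension's amqp.broker table (as the bullet above suggests), the host can be switched with a single UPDATE; the database name here is illustrative:

```shell
# Assumes pg_amqp's amqp.broker table; adjust the database name per VM
psql -d finance -c "UPDATE amqp.broker SET host = 'rabbitmq.thehumanjourney.net' WHERE host = '10.0.10.107';"
```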
    • update all functions to use domain names instead of IPs when accessing remote DBs. Former IPs and what they should become:
10.0.10.48 finance.goo.thehumanjourney.net
10.0.10.23 timesheets.goo.thehumanjourney.net
10.0.10.42 humanresources.goo.thehumanjourney.net
10.0.10.63 tenders.goo.thehumanjourney.net
10.0.10.130 archives-db.goo.thehumanjourney.net
192.168.98.24 mapdata.goo.thehumanjourney.net
    • update the hr_update_script user and use hrupdatescript instead
  • Edit /etc/hosts and add names and IPs to all VMs involved so they can be addressed by DNS rather than IPs:
10.0.10.49 finance.goo.thehumanjourney.net
192.168.98.51 timesheets.goo.thehumanjourney.net
10.0.10.43 humanresources.goo.thehumanjourney.net
10.0.10.64 tenders.goo.thehumanjourney.net
10.0.10.130 archives-db.goo.thehumanjourney.net
192.168.98.24 mapdata.goo.thehumanjourney.net
10.0.10.107 rabbitmq.goo.thehumanjourney.net
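Once /etc/hosts is updated, a quick loop confirms on each VM that every name resolves to the expected address:

```shell
# Should print one line per host, matching the IPs listed above
for h in finance timesheets humanresources tenders archives-db mapdata rabbitmq; do
  getent hosts "$h.goo.thehumanjourney.net"
done
```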
  • Copy and update pg_hba.conf and postgresql.conf files on each VM
  • Update IPs in pg_hba.conf; replace specific machine IPs with their goo domain names
  • Update the firewall to allow finance and timesheets to send e-mails from their allocated .thehumanjourney.net domains
  • Copy vhost declaration(s)
  • Copy code from OLD to NEW
  • Update paths in config.php
  • Update IPs to DNS names and also passwords in all config.php files on any of the VMs
  • Comment out phpCAS::setDebug(); so we don't keep huge login logs anymore
  • Set proper permissions for all code
  • Open port 5432 on the firewall (IP restricted)
    • for timesheets to access finance
    • for HRDB to access timesheets
    • for tenders to access mapdata
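If the VMs use ufw (an assumption - substitute the equivalent rules for whatever firewall is actually in place), the three IP-restricted openings look like this, using the new-VM IPs from the hosts list above:

```shell
# On finance: allow timesheets (192.168.98.51)
sudo ufw allow from 192.168.98.51 to any port 5432 proto tcp
# On timesheets: allow HRDB (10.0.10.43)
sudo ufw allow from 10.0.10.43 to any port 5432 proto tcp
# On mapdata: allow tenders (10.0.10.64)
sudo ufw allow from 10.0.10.64 to any port 5432 proto tcp
```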
  • Install and configure RabbitMQ client on Finance
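Before testing the amqp_XXX functions, a basic connectivity check from finance to the broker is useful (5672 is the default AMQP port; the host name follows the /etc/hosts list above):

```shell
# Succeeds if finance can reach the broker's AMQP port
nc -zv rabbitmq.goo.thehumanjourney.net 5672
```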
  • Test everything works well:
    • Test database backup procedures worked correctly
    • Test "pull time sheets" script on PMDB interface - this will test both ways finance - timesheets connection
    • Test making changes to staff on PMDB interface - this will test HRDB connection to timesheets, admin, finance and also RabbitMQ sending messages from Finance to archives
    • Test making changes to projects on PMDB interface - this will test finance RabbitMQ sending messages to archives
    • Test making changes to an episode or tender on Tenders interface and see if the changes were transmitted to mapdata into tenders table on oasites DB
    • Test making changes to a tender on Tenders interface and see if the changes were transmitted to timesheets
    • Test sending e-mails from timesheets and from finance
  • When everything tests OK:
    • Change domain names for new VMs by removing "replacement" from the temporary name
    • Update e-mail domains as well by removing "replacement" from the temporary name
    • Move VMs to production hosts

Steps during down time

  • Stop apache on old VMs
  • Take DB dumps for all old VMs
  • Restore only the data to the new VMs - note: a data-only restore does not work; instead do a full restore and then repeat the function updates
  • Change domain names for old VMs by adding "old"
  • Stop old VMs, park them
  • Update DNS records to point to new VMs
    • Proxy
    • DNS
    • OAN proxy and DNS
    • external proxy
  • update the amqp_XXX functions to use the LIVE routing_keys so messages from finance can now be transmitted to the LIVE archives DB: use ['database.administration.staff','database.administration.projects'] instead of ['database.administration.staff.test','database.administration.projects.test']

Steps after down time

  • Test all interfaces work well
  • Test special functions that might no longer work
    • Make staff changes in HRDB and check if the change was propagated to TRS, Finance, Admin DB on Finance VM, Tenders, Archives
    • Make project changes on Finance and see if the change was propagated to Archives - this checks if RabbitMQ worked well
  • Monitor any support tickets related to these services and check first whether the migration caused the issue.
  • Check alerts are flowing normally from finance and TRS
  • Inform Simon of the domain names to use with his ODBC connections