Reproducible RT Configuration management

Hi,

I’ve had a look through the list archives and seen a couple of mentions
of this but nothing recent and thought I’d ask again in case there is
something new out there.

What are people doing to manage reproducible deployments of RT, other
than just dumping the database of a production machine and loading it
onto a development one?

I am currently using Puppet to deploy RT.

Puppet does a good job of getting RT installed and running.

I am struggling with how to manage the RT configuration itself: the
things that are done from within the web interface or loaded from
initialdata files using rt-setup-database.

We use vagrant for the development environment and the ideal situation
is that running “vagrant up” will bring up a copy of RT running the
latest config.

I want all changes on the production machines made not through the web
interface but in some sort of reproducible way.

What I have so far is a hacked-up solution: a custom script that calls
rt-setup-database with my own custom initialdata fragments.

The main issue here is that I wanted it to be idempotent, so that no
harm is done if Puppet calls it after the change has already been made.

So far I’m doing ugly things like using the @Init section to check
whether a particular change already exists in the database and skipping
it if it does. This also prevents duplicate entries when the code is run
multiple times.
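
A minimal sketch of that kind of guard in an initialdata fragment,
assuming a made-up “Helpdesk” queue and that RT is already initialised
when rt-setup-database evaluates the fragment (illustrative only, not my
actual code), might look like:

# Illustrative only: skip the entry if it already exists, so re-running
# rt-setup-database --action insert does not create duplicates.
my $existing = RT::Queue->new( RT->SystemUser );
$existing->Load('Helpdesk');

push @Queues, {
    Name        => 'Helpdesk',
    Description => 'Primary support queue',
} unless $existing->Id;

The same guard pattern can be applied to @Groups, @CustomFields and the
other initialdata arrays.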

My solution is working although it feels clunky.

I guess one better option would be a full Puppet implementation modelling all of
RT’s configuration. That just felt like far too big a job to tackle :(.

Does anyone have any suggestions or stories of how they are managing
this situation?

Kind regards
Bart

Bart Bunting - URSYS
PH: 02 87452811
Mbl: 0409560005


We script the base install, including the setup of the Perl/CPAN stack and the installation of the base RT system using the typical make + make install, and then we run scripts to populate the RT database. We have chosen not to alter the stock initialdata file because we don’t want to have to keep track of any changes that may happen from version to version.

Since the database is provided by our database group, we initialize it in the following way:

/opt/rt4/sbin/rt-setup-database --dba-password=$RT_DB_PASS --action init --skip-create

This creates the database shell with just the default content to get a functioning RT install. We then enable full-text searching using:

/opt/rt4/sbin/rt-setup-fulltext-index --dba ${RT_DB_USER} --dba-password ${RT_DB_PASS} --table=AttachmentsIndex --column=ContentIndex --index-type=GIN

followed by installing any plugins we need using a git checkout + make + make install + make initdb (if required). We version-control all the configuration files and drop them into place when needed.

So far this has allowed us to get a fully reproducible base installation of RT. We then apply our scripts to add custom fields, populate their values, and make changes to initial-data configuration such as default templates and scrips.
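
As a rough illustration (not our actual script; the field name, queue name and lib path are placeholders), one of those custom-field scripts can be as simple as creating the field through the RT Perl API only when it does not already exist:

#!/usr/bin/env perl
# Illustrative sketch: idempotently create a ticket custom field through the
# RT Perl API and apply it to one queue. All names are placeholders.
use strict;
use warnings;

use lib '/opt/rt4/lib';

use RT;
RT::LoadConfig();
RT::Init();

my $cf = RT::CustomField->new( RT->SystemUser );
$cf->Load('Customer ID');    # loads by name if the field already exists

unless ( $cf->Id ) {
    my ( $ok, $msg ) = $cf->Create(
        Name        => 'Customer ID',
        Type        => 'Freeform',
        MaxValues   => 1,
        LookupType  => 'RT::Queue-RT::Ticket',
        Description => 'External customer reference',
    );
    die "Could not create custom field: $msg" unless $ok;

    # Apply the new field to a single queue rather than globally.
    my $queue = RT::Queue->new( RT->SystemUser );
    $queue->Load('General');
    $cf->AddToObject($queue) if $queue->Id;
}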

This makes for an easy way to create the base RT with all our customizations. We only run this if we’re running tests to ensure that starting from scratch still works as expected; otherwise we make a backup of the database and just restore that, because it’s so much faster.

James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone : 604-365-6432
Fax : 778-782-3045
E-Mail : jpeltier@sfu.ca
Website : Information Systems at SFU - Simon Fraser University
Twitter : @sfu_rcg
Powering Engagement Through Technology

We’ve built a set of Ansible playbooks around this role:

The role doesn’t depend on any other roles - it sets up
MariaDB and Apache itself. It assumes CentOS 7.x and can configure
SELinux and install multiple versions of RT 4.2.x. I’m going to add support
for installing different Perl versions; right now it just installs
5.10.1 through perlbrew.

Ansible best practices seem to assume that “one-shot” functions like
upgrading RT, exporting configuration, importing configuration, and
installing extras like RT extensions belong in playbooks. I would like
to find a way of shoehorning those functions into this role.

Brian


Brian [he/him/his]
Composed in Emacs

Hi,

Sorry for the slow reply.

That is more or less what I have using puppet scripts.

The question I am struggling with is that if we have to write scripts
every time a change is required (e.g. an admin cc is added to a queue),
it becomes a time-consuming process, and I am questioning the effort
involved.

I had hoped that rt-dump-metadata would be a useful tool for seeing
which changes had been made by hand, so the output could then be used to
update the initialdata or to create scripts that keep the database
consistent.

It appears that there are bugs in rt-dump-metadata.

When anything to do with a custom field is changed in the UI,
rt-dump-metadata subsequently bombs out:

./sbin/rt-dump-metadata
[3047] [Sat Jun 18 01:34:29 2016] [critical]: RT::CustomField::AppliedTo Unimplemented in main. (./sbin/rt-dump-metadata line 187) (/opt/rt4/sbin/…/lib/RT.pm:390)
RT::CustomField::AppliedTo Unimplemented in main. (./sbin/rt-dump-metadata line 187)

I guess this is a question for the Best Practical folks: is
rt-dump-metadata going to be supported going forward, or is it something
that isn’t really best practice to use any more?

The git log suggests it hasn’t been worked on much recently, but then
again that could just mean there haven’t been any recent issues found.

Any advice most welcome.

Kind regards

Bart


Bart Bunting - URSYS
PH: 02 87452811
Mbl: 0409560005

Hi All, Hi Ruslan,

I have a similar issue: we develop plugins (aka extensions), and to make those installable we have to create “initialdata” files.
This is the doc: Initialdata - RT 4.4.1 Documentation - Best Practical. As it says:
"Summary of initialdata files

It’s often useful to be able to test configuration/database changes and then apply the same changes in production without manually clicking around. It’s also helpful if you’re developing customizations or extensions to be able to get a fresh database back to the state you want for testing/development.

This documentation applies to careful and thorough sysadmins as well as extension authors who need to make database changes easily and repeatably for new installs or upgrades."

This article is very useful. On the other hand, one important thing is missing: there is only a manual way of creating the initialdata files. (Or at least I cannot find any way to create an initialdata file from RT by choosing RT objects (queue, CF, scrip) to export in that format.)
This is an uncomfortable barrier to effective RT plugin development: when we create a new plugin, it is a huge, boring and error-prone task to write the initialdata files that create the CFs, Queues and other RT objects.
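
For reference, a minimal hand-written etc/initialdata for an extension might look something like this sketch (the queue and field names are invented; the exact keys supported depend on the RT version, see the Initialdata docs):

# Illustrative sketch of a minimal extension initialdata file, loaded with
# "make initdb" or rt-setup-database --action insert --datafile etc/initialdata.
@Queues = (
    { Name => 'Plugin Demo', Description => 'Queue created by the extension' },
);

@CustomFields = (
    {
        Name        => 'Customer ID',
        Type        => 'FreeformSingle',        # Freeform, single value
        LookupType  => 'RT::Queue-RT::Ticket',  # a ticket custom field
        Description => 'External customer reference',
        ApplyTo     => 'Plugin Demo',           # apply to the queue above
    },
);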

For CFs I found this tool (by Ruslan) as a solution: rt-custom-fields-initialdata - export CFs in a format suitable for rt-setup-database - metacpan.org, but I don’t know much about it.

Ruslan, could you please tell me whether it works on RT 4.4? Is there any similar tool for the other RT objects: scrips, queues, groups, etc.?

Thanks,

Akos

Hi Akos,

A certain amount of this should be pretty easy. For example, here is a
quick script I knocked together to fetch the queues and output them in
initialdata format.

#!/usr/bin/env perl

use strict;
use warnings;

use Data::Dumper::Names;

# Adjust these paths to your RT install.
use lib qw(/home/wheldonm/Maestro/rt4-test-modules/local/lib
           /home/wheldonm/Maestro/rt4-test-modules/lib);

use RT;
RT::LoadConfig();
RT::Init();

# Turn the dumped "@Queues = ( ... );" into "@Queues, ( ... );" so the
# printf below produces a "push @Queues, ..." statement suitable for an
# initialdata file.
sub massage_output
{
     my $code = shift;

     $code =~ s/\s+=/,/xms;
     $code =~ s/}$/},/xms;
     return $code;
}

my $right = 'ShowTicket';
my $Qs = RT::Queues->new( RT->SystemUser );
$Qs->UnLimit;

# Only keep queues the current user can see (when $right is set).
my @Queues = grep { $right ? $_->CurrentUserHasRight($right) : 1 }
    @{ $Qs->ItemsArrayRef };

# Reduce each queue object to the fields we want in the initialdata file.
@Queues = map {
     +{ Name                 => $_->Name,
        Description          => $_->Description || '',
        Lifecycle            => $_->Lifecycle,
#       CorrespondAddress    => $_->CorrespondAddress,
#       CommentAddress       => $_->CommentAddress,
     }
} grep { defined } @Queues;

printf( "push %s\n", massage_output( Dumper( \@Queues ) ) );

1;

Hope it helps

Best Regards

Martin

I just found this thread while looking for a way to set up a test instance in an automated way, either with a configuration management tool like Puppet or by exporting and importing an existing config without the actual ticket data.

I stumbled across rt-serializer and rt-importer (in RT 4.4.2).

I haven’t fully tested it yet, but I’ll definitely give it a chance.

This could be the right command to get a copy of all the data without ticket-related content:
/opt/rt/sbin/rt-serializer --no-tickets --scrips --acls --directory /root/test